Unix::Sysadmin Setup - Configuration of Unix::Sysadmin


  while (!$done) {
        do lots(work);
  }


Unix::Sysadmin is an object-oriented Perl framework for Unix system administration. Its main features are platform independence (at least among FreeBSD, Linux and Solaris), secure transport via ssh, and a peer-to-peer management model that is in tune with many Unix networks we've seen.

This document covers setting up the framework to run on your network. See the Unix::Sysadmin manpage in section 8 of the manual for a high-level overview of the entire framework. The usasetup program mentioned in this document is documented in detail in the usasetup manpage, also in section 8. You should read that manpage after reading this document.

Initial Setup

This framework does much of what NIS does, but across platforms and using a secure transport based on SSH. The trust model is therefore closely patterned on SSH1. (SSH2 is still less common than SSH1, so the framework doesn't support SSH2 yet.) In a basic setup, a managing host is selected. This host holds both parts of a special SSH key pair. Hosts that are managed have the public half of this key placed in ~root/.ssh/authorized_keys. This gives the managing host (or hosts) access to the managed machine as root. The first step to setting up the framework is to generate an access key on the host that will be driving the framework.

To do this, issue the following command:

 # ssh-keygen
 Initializing random number generator...
 Generating p:  ......++ (distance 108)
 Generating q:  ............++ (distance 198)
 Computing the keys...
 Testing the keys...
 Key generation complete.
 Enter file in which to save the key (/root/.ssh/identity): /root/.ssh/access
 Enter passphrase: **press return here**
 Enter the same passphrase again: **press return here**
 Your identification has been saved in /root/.ssh/access.
 Your public key is:
 **shows public key**
 Your public key has been saved in /root/.ssh/access.pub

The key produced in this way has no passphrase so that the framework can run unattended. Be sure to guard the private half of the key carefully, since anyone who has the private half of the key will be able to access your clients as root. If you followed the instructions above, the private half of the SSH key will be in the file ~root/.ssh/access. The public half of the key will be in the file ~root/.ssh/access.pub. This half will be distributed to all the hosts you want to manage with the framework, as described in the next section.

Client Setup

To setup a client to be managed using the framework, perform the following steps:

  1. Distribute the key
    Using scp, rcp or ftp, place the public portion of the key (~root/.ssh/access.pub on the managing host) in the ~root/.ssh/authorized_keys file on each of the hosts you want to manage. If the root user has no .ssh directory, create it and the authorized_keys file like so:
     client# mkdir ~root/.ssh
     client# chmod 700 ~root/.ssh
     client# cp access.pub ~root/.ssh/authorized_keys
     client# chmod 600 ~root/.ssh/authorized_keys

    If the ~root/.ssh directory already exists, check whether it already contains an authorized_keys file. If it does, use an editor to append the new key to the file.
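The create-or-append step can be sketched as a small shell function. The function name and the assumption that the public key has already been copied to the client (here as access.pub) are mine, not part of the framework:

```sh
# add_key KEYFILE AUTHFILE: append a public key to authorized_keys,
# creating the file with safe permissions if needed and skipping
# keys that are already present.
add_key() {
    key=$1
    auth=$2
    touch "$auth" && chmod 600 "$auth"
    grep -qxF -- "$(cat "$key")" "$auth" || cat "$key" >> "$auth"
}
```

Because duplicate keys are skipped, `add_key access.pub ~root/.ssh/authorized_keys` is safe to run more than once.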

  2. Make sure sshd is running on the client.
    Check to see if sshd is running. On BSD or Linux do this:
     bsd-or-linux-client# ps auxww | grep sshd
     root       731  0.0  0.1  1604  368 ?        S    Sep11   0:02 sshd

    On Solaris, do this:

     solaris-client# ps -elf | grep sshd
     8 S     root   769   348  0  41 20        ?    249        ? 13:20:48 ?        0:00 /usr/local/sbin/sshd

    If sshd is NOT running, you need to make it run in order to install this framework. If you don't have SSH installed at all, obtain either SSH or OpenSSH and install it before proceeding.
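Both ps invocations above can feed one check. This sketch (the function is my illustration) reads ps output on stdin, so it works with either flavor; piping live ps output through a separate function also avoids the classic problem of grep matching its own command line:

```sh
# sshd_listed: succeed if the ps output on stdin shows an sshd process.
# Feed it `ps auxww` on BSD/Linux or `ps -elf` on Solaris.
sshd_listed() {
    grep 'sshd' > /dev/null
}
```

Usage: `ps auxww | sshd_listed && echo "sshd is running"`.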

  3. Check the Client SSH Configuration
    SSH needs to allow root to have access to the client. By default, both SSH and OpenSSH are configured to NOT allow such access. You need to find the sshd_config file for the client's SSH configuration and change this default. The file may be in various locations depending on your SSH version and local configuration. For the commercial SSH package, this file is usually found either in /etc/ or in /usr/local/etc/. For OpenSSH, this file is in /etc/ssh/. In any case, you need to find the line in sshd_config that looks like this:
     PermitRootLogin no

    and change it to

     PermitRootLogin yes

    There is another line in the default sshd_config file which I recommend you change:

    The line that looks like:

     PasswordAuthentication yes

    Should be

     PasswordAuthentication no

    Changing this may annoy your users because they will no longer be able to use their password to access the machine with ssh. They will have to generate a key pair with ssh-keygen and place the public half of their key in their ~/.ssh/authorized_keys file. Then they will have to have the private portion of the key available to them when they log in. The reason this inconvenience is worth the trouble is that anyone who can reach port 22 on your box can try to guess passwords at the login prompt using ssh. Many organizations let connections bound for port 22 through their Internet firewalls. If this is true for you, then you have a script kiddie problem with the default SSH1 configuration. If you allow password access, permitting root logins makes this problem slightly worse. I say ``slightly'' because once an intruder has access to a user account on your box, there are so many ways to break root that they may as well have root to start with.
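The two edits can be scripted. This sed-based sketch assumes the stock, uncommented forms of both lines shown above; on a real sshd_config, keep a backup and check the result by hand:

```sh
# harden_sshd_config FILE: set PermitRootLogin to yes (so the
# framework's access key works) and PasswordAuthentication to no,
# as recommended above.
harden_sshd_config() {
    sed -e 's/^PermitRootLogin no$/PermitRootLogin yes/' \
        -e 's/^PasswordAuthentication yes$/PasswordAuthentication no/' \
        "$1" > "$1.new" && mv "$1.new" "$1"
}
```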

  4. Restart SSHD
    If you have made configuration changes in sshd_config, kill and restart sshd to make them effective. Make sure you aren't logged on to the client using ssh when you do this!

  5. Test Access From Managing Host
    Back on the machine you have selected to be the managing host, run the following command:
      ssh -lroot -i ~root/.ssh/access client

    You should be logged in to the machine 'client' as root. If this works, you have succeeded in setting up the client for use with Unix::Sysadmin.

Managing Host Setup

The usasetup script takes care of setting up the Unix::Sysadmin framework on the managing host. The following sections give an overview of the process. See the usasetup manpage for more details.

Framework Storage Area

Unix::Sysadmin stores its database in a specific area on the managing host's filesystem. (The area could be on an NFS mounted share, but then your hashed passwords would fly across the network in the clear, so don't do that.) The usasetup script asks where you want this area to live. The default is /usr/local/pusa.

Canonical Master

The easiest way to initialize the framework's data files is to use an existing host on the network as a ``canonical master.'' During framework bootstrap, the user, group and automount databases on this host will be imported into the framework's databases. After bootstrap the canonical master's data can be used to update the framework's data. This allows password changes, account deletions and additions and so forth to migrate into the framework automatically. This is particularly useful since the framework does not yet provide commands to perform these functions itself. An important thing to consider is the OS of the canonical master compared to the clients managed by the framework. If the canonical master runs an OS similar to most of the clients, it will reduce the number of hoops you will jump through resolving collisions in user and group names and IDs later. usasetup prompts for the name of this host.

Initial Sanity Checks

usasetup now performs a series of checks on the values you just entered. If the database area you chose already exists, the script checks the permissions on the files stored there and sets them to restrictive values. Otherwise it creates the directory.


The framework needs information about the host it is running on. usasetup now checks to see if this information has already been collected. If so, it gives you the choice of keeping the old data, entering completely new information, or using the old data as the default for its questions. If no information has been previously entered, usasetup probes the host it is running on to obtain defaults for a series of configuration questions. One 'key' question is the location of the SSH access key you created earlier.

Canonical Master Sanity Check

Next usasetup checks out the canonical master host you entered previously. It tries to ping the host. If that's successful, it tries to access the system using SSH and the access key you entered earlier. If all goes well, the script stores some information about the canonical master and moves on.

Database Setup

The databases for users, groups and automount entries are now created and/or updated. If the databases already exist, usasetup gives you the choice of keeping them intact, initializing new ones from the canonical master, or merging the canonical master's files into your databases. If the databases do not exist, usasetup creates them and initializes their contents from the canonical master.

Client Machine Setup

The final step in usasetup is to set up the database of client machines that will be managed by the framework. The script searches for a preexisting host database and offers to keep, reinitialize or merge it. It then offers a list of ways to specify the names of your clients. For each client you specify, usasetup checks the manageability of the host and adds a record to the host database.

Testing and Tweaking

The final steps in the setup of Unix::Sysadmin are testing the configuration and tweaking it to do what you want. A test script called usatest (see the usatest manpage) generates password, group and automount files for each client in your database and copies the existing files to the managing host so you can compare the output of the framework with the clients' current setups. The generated files will probably differ significantly from the existing ones, particularly for clients with OSen that differ from the canonical master's. The following sections describe how to interpret some of the differences you may find and how to tweak the framework to eliminate the ones you don't want.

Running usatest

usatest takes no input from the user. It first creates an output directory called test under the framework's configuration area. If a directory of that name already exists, the script renames it to test.yyyymmddhhmmss. Several of these directories may be created during the process of tweaking the framework. After you are satisfied with your tweaks, you may remove these directories. After creating the test directory, usatest loops through the host list, creating subdirectories for each host under test called *hostname*/etc. The script generates password, group and automounter files in these subdirectories, then copies the originals from the hosts, placing them in the same subdirectory with a .orig extension added to the end of the filename.
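The rename-and-recreate behavior just described can be sketched like so (the function name is mine, not usatest's):

```sh
# rotate_test_dir DIR: if DIR already exists, rename it to
# DIR.yyyymmddhhmmss, then create a fresh empty DIR.
rotate_test_dir() {
    [ -d "$1" ] && mv "$1" "$1.$(date +%Y%m%d%H%M%S)"
    mkdir -p "$1"
}
```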

Comparing the Files

After usatest finishes, you should peruse the directories and run diff on the files created. Some things to look for are collisions in user and group names and IDs, the order in which groups and users with identical IDs are handled, and conflicts in the automount entries.
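One way to walk a host's subdirectory and flag the files worth diffing (a sketch of the perusal step; usatest itself only generates the files):

```sh
# diff_pairs DIR: print each file under DIR that differs from the
# .orig copy pulled from the client.
diff_pairs() {
    for f in "$1"/*; do
        case $f in *.orig) continue ;; esac
        [ -f "$f.orig" ] || continue
        cmp -s "$f" "$f.orig" || echo "$f"
    done
}
```

Something like `for h in test/*/etc; do diff_pairs "$h"; done` lists the candidates; run diff by hand on anything it prints.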

User and Group Name and ID Collisions.

Different operating systems have different standard users and groups. Sometimes the standard names are the same, but the IDs differ. Sometimes the same ID can stand for different users or groups. Also, in a large network, conflicting name and/or ID values may have been chosen by different administrators of different machines over time. In order to manage these collisions effectively from a central framework, the conflicts must be resolved in some way. When non-standard names or IDs conflict, it might make sense to rename and/or renumber around such conflicts. This is particularly true if the users and/or groups need to be used throughout the network. Sometimes this isn't possible or desirable for technical or political reasons. Technical reasons also sometimes bar this sort of solution when OS-mandated conflicts are present.

The strategy used by Unix::Sysadmin is twofold. First, the framework provides a way to preserve arbitrary groups and users based on name or ID. Second, a mechanism is provided to map group and user names and IDs from the values stored in the framework to some other arbitrary value.

upreserve and gpreserve, umap and gmap

Records in the User and Group list files may have a (u|g)preserve= value. This is a colon-separated list of names and/or IDs that the framework will preserve when it encounters them in the [master.]passwd/shadow/group files it is about to replace. (See the Unix::Sysadmin::Host.list manpage for more details on these parameters.) This mechanism works well for protecting system users or groups from being overwritten by different values in the framework's database. For example, the standard list of groups for FreeBSD contains these entries:

 wheel:*:0:root
 daemon:*:1:daemon
 kmem:*:2:root
 sys:*:3:root
 tty:*:4:root
 operator:*:5:root
 mail:*:6:
 bin:*:7:
 news:*:8:
 man:*:9:
 games:*:13:
 staff:*:20:root
 guest:*:31:root
 uucp:*:66:
 nogroup:*:65533:
 nobody:*:65534:

Contrast this to the standard /etc/group file for Solaris 8:

 root::0:root
 other::1:
 bin::2:root,bin,daemon
 sys::3:root,bin,sys,adm
 adm::4:root,adm,daemon
 uucp::5:root,uucp
 mail::6:root
 tty::7:root,adm
 lp::8:root,lp,adm
 nuucp::9:root,nuucp
 staff::10:
 daemon::12:root,daemon
 sysadmin::14:
 nobody::60001:
 noaccess::60002:
 nogroup::65534:

These don't match up well at all! In fact, only two groups, sys and mail have the same name and gid in both lists. Let's suppose that the canonical master is running FreeBSD. The Group.list file created by usasetup therefore contains entries corresponding to the first list above. How do we distribute this database to a Solaris 8 client without completely mangling its configuration?

Glad you asked, since I had to solve this exact problem where I work. In my Host.list file, under the entries for Solaris clients, I added the following lines:

 gpreserve=1:2:4:5:7:8:9:12:14:60001:60002:other:bin:adm:uucp:tty:lp:nuucp:daemon:sysadmin:noaccess
 groupmap=wheel=root:65533=65534

The first line is a list of gids and group names to preserve. I have included all the names and gids that collide between the two lists. This means that these groups will not be replaced by values from the framework when an /etc/group file is produced for this host. Another approach to this problem might be to add the groups in the second list that don't conflict, such as sysadmin, nogroup and noaccess, to the Group.list file. This means they would propagate to other hosts, where they presumably would do no harm. I was too distracted writing this framework at the time to think of that, however. And besides, there is still an irreducible minimum of groups that need to be preserved.

The second line is a bit more interesting. It defines a group mapping for two groups, one by name and one by gid. The first mapping replaces the root group on the Solaris client with the wheel group from the framework. This leaves Solaris users with the comfortable group named root, while propagating the wheel group's membership list from the framework to the client's root group. (While gid 0 doesn't have the same function on SysV as on BSD, group file permission semantics are propagated by this method.) The second mapping dumps the framework's nogroup on top of the Solaris client's nogroup. The upreserve and umap parameters work similarly.
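To make the mapping mechanics concrete, here is a rough shell rendering of what a wheel=root:65533=65534 map does to a generated group file. This is my illustration of the behavior described above, not the framework's actual code:

```sh
# apply_groupmap MAP FILE: MAP is a colon-separated list of old=new
# pairs; all-numeric entries remap the gid field, anything else
# remaps the group name field.
apply_groupmap() {
    map=$1 file=$2
    oldifs=$IFS; IFS=:
    for pair in $map; do
        src=${pair%%=*} dst=${pair##*=}
        case $src in
            *[!0-9]*)  # map by group name (first field)
                sed "s/^$src:/$dst:/" "$file" > "$file.tmp" ;;
            *)         # map by gid (third field)
                sed "s/^\([^:]*:[^:]*:\)$src:/\1$dst:/" "$file" > "$file.tmp" ;;
        esac
        mv "$file.tmp" "$file"
    done
    IFS=$oldifs
}
```

Running `apply_groupmap 'wheel=root:65533=65534' group` renames the wheel entry to root (keeping its membership list) and renumbers gid 65533 to 65534.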

Deliberate (Desirable) Collisions

Sometimes two users are deliberately given the same UID. For instance, the BSDs have a toor user with uid 0 to provide a backup mechanism for accessing root privileges. Users will sometimes have more than one name for the same account to aid in things like mail processing. A problem arises in the framework when considering the order such accounts should have in the password and group files. If toor comes before root in the passwd database, things can get very confusing. It's not enough to preserve the original order in the password file, since that order can change in unexpected ways through remapping and such. The solution to this problem chosen for the Unix::Sysadmin framework is to give each user and group entry a weight parameter. For users, this parameter is named uidtie. For groups, it's gidtie. Lower numbers mean higher priority, except that records that have no (u|g)idtie parameter implicitly have the highest tie value possible. This makes it more convenient to bring one of many possible records to the head of the list simply by giving it a low-numbered (but non-zero) tie value. As before, the Unix::Sysadmin::Host.list manpage has more detail on these parameters.
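The tie-breaking rule can be sketched in awk/sort terms. The simplified name:uid[:uidtie] record format here is my stand-in for the framework's real file layout:

```sh
# order_by_tie FILE: sort records of the form name:uid[:uidtie] by
# uid, breaking ties with uidtie; records with no tie value get a
# huge implicit tie, so they sort last among equal uids.
order_by_tie() {
    awk -F: '{ tie = ($3 == "") ? 999999999 : $3
               printf "%s:%s:%s\n", $2, tie, $1 }' "$1" |
        sort -t: -k1,1n -k2,2n |
        cut -d: -f3
}
```

Given toor with uid 0 and no tie, and root with uid 0 and uidtie 1, root comes out first, as the text above requires.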


Man(3) pages (programmer's docs):

the Unix::Sysadmin::Host manpage, the Unix::Sysadmin::User manpage, the Unix::Sysadmin::Automount manpage, the Unix::Sysadmin::Group manpage, the Unix::Sysadmin::Netgroup manpage, the Unix::Sysadmin::List manpage, the Unix::Sysadmin::Cmds manpage, the Unix::Sysadmin::Files manpage, the Unix::Sysadmin::Utility manpage, the Unix::Sysadmin::Config manpage, the Unix::Sysadmin::Scoped manpage

Man(5) pages (file formats):

the Unix::Sysadmin::Host.list manpage, the Unix::Sysadmin::User.list manpage, the Unix::Sysadmin::Automount.list manpage, the Unix::Sysadmin::Group.list manpage, the Unix::Sysadmin::Netgroup.list manpage

Man(8) pages (manager's docs):

the Unix::Sysadmin manpage, the Unix::Sysadmin::Setup manpage, the usasetup manpage, the usatest manpage, the usabackup manpage, the usaupdate manpage, the usapush manpage


Howard Owen <>