Software Secret Weapons™

The Sysadmin's Secret Weapons
by Pavel Simakov on 2006-10-14 18:30:56 under Smoke & Mirrors

Presentation by Rick Moen at the August 2001 LinuxWorld Expo. (This write-up is also available online.)

This talk will concern several underappreciated, powerful software tools for system administration.

  • SSH: Invented in 1995 by Tatu Ylönen of Finland, for arbitrary secure connections across insecure networks. It replaces the Berkeley r-commands, supplants telnet, partially replaces ftp, and can secure many other protocols.

    To put SSH in context: Just two blocks south of Moscone Center is the site of my home in the early '90s: the CoffeeNet building on Harrison Street. The CoffeeNet was a 100% Linux-based Internet cafe, with about ten free-of-charge workstations and plug-in access for laptop users -- a network on which anyone could sample and log all traffic moving in and out of the building, including traffic from my machines in my apartment upstairs.

    This meant that I lived in a 24-hour security nightmare, where I had to assume that my local network was hostile and would use any exposed information to attack my machines. We would in fact frequently find people downstairs in the cafe running password-sniffing programs on their laptop computers, and trying to break into various of the upstairs machines. That might seem an extreme situation, but it was a fair preview of the network-based security threats most of us face today, to various degrees.

    In the more-trusting world of the 1980s, the TCP/IP-using computing world came to rely on some horribly insecure protocols: not just the fairly obscure r-commands, but also POP3 e-mail, ftp file-transfer, telnet remote login, the X Window System's spotty security model, and others. SSH can fix that, by transporting otherwise insecure network traffic in its encrypted tunnel. Along with its cousin SSL (also known as TLS) it's a sort of "armouring" for network traffic.

    The assumption that you cannot trust the network surrounding your machines, but that you can use SSH and its kin to transport data securely across insecure networks, is unfamiliar to many, but an increasingly healthy attitude to take.

    Some history:

    1995: Tatu Ylönen made early SSH versions available from Helsinki University of Technology, under a you-may-do-anything-with-it licence, and issued a series of bug-fix versions through 1.2.12.

    Late 1995: Ylönen signed a commercial-distribution agreement with Data Fellows, Ltd. (now F-Secure Corporation) for a parallel 2.0 series implementing a new, slightly better version of the SSH protocol. Possibly as part of the agreement, SSH versions 1.2.1 through 1.2.12 were removed from the original distribution site and all official mirror sites, and all new 1.2 releases starting with 1.2.13 were free of charge for non-commercial use only. Ylönen eventually ended the Data Fellows contract and set up his own firm, SSH Communications Security, Ltd., to sell both the 2.x and 1.x branches commercially. He also filed draft RFC documents with the IETF to define the SSH v. 1.5 and 2.0 protocols formally, which has helped keep the protocols standard despite the flowering of third-party implementations on a dozen-plus operating systems since then.

    The world was already adopting SSH for many purposes starting with remote server administration, despite patent-infringement problems and the hostility of many governments to the public using strong cryptographic software. System administrators would not let little problems like governments keep them from using an essential tool: SSH 1.2.x was in near-universal usage among sysadmins despite licensing and legal problems, all through the 1990s.

    1999: Björn Grönvall of Sweden unearthed a copy of the deliberately buried SSH v. 1.2.12, forked off his own version as the "ossh" project, and started bringing it up to date. Soon thereafter, the OpenBSD Foundation noticed Grönvall's work, and set a team of programmers working on a further fork, dubbed OpenSSH, placing all new code under the BSD licence. In less than a year, it gained support for the SSH v. 2.0 protocols, and the "portable" (non-OpenBSD-only) version of OpenSSH has de facto replaced Ylönen's project as the standard SSH implementation.

    • Implementations: Two main server implementations, SSH Communications Security's and OpenSSH. They interoperate with almost negligible exceptions.

    • Client implementations for PalmOS, MS-DOS, all Unixes, WinCE, Amiga OS, Cisco IOS, Microsoft Win32 (Win9x/WinME/WinNT/Win2k), Microsoft Win16, BeOS, Java, Macintosh OS, VMS/OpenVMS, OS/2. A comprehensive list is available online.

    • Forwarding: Most other network protocols, such as X11 and VNC, can be forwarded over the SSH channel. Infeasible cases: ftp, (current) NFS/NIS, ranges of ports, UDP-based traffic, and dynamic ports. ftp can't run over SSH because it uses two communications channels rather than one: you could run the control channel over SSH, but that would leave the file-transfer channel out in the open. The matter mainly isn't pursued, though, because the SSH protocol suite has a partial substitute in its "scp" (secure copy) function -- partial in that it doesn't furnish an ftp-style view of the remote directory tree -- e.g.:

      scp secretstuff user@remotehost:/tmp/   # hostname and path are placeholders

      There are GUI front-ends to scp on numerous operating systems, including some recent ones that give the user a ws_ftp-style display of the remote system.

      NFS/NIS cannot in practice be tunnelled over SSH -- not even the TCP-based variant of NFS -- because that would require mapping the RPC portmapper service (on which NFS/NIS are based) over SSH, and RPC services use unpredictable port numbers. But NFS, at least, would probably be unbearably slow even if you could. ("Nightmare File System", indeed.)

    • One can use SSH tunnelling in combination with other tools to automate interhost tasks with minimal trust (rsync mirroring, tape backup, etc.).

      Here's a simple example of remote backup over SSH, using Andrew Tridgell's indispensable rsync tool:

      export RSYNC_RSH=ssh   # You might want to put this in /etc/profile
      rsync -avz /home remotehost:/backups/   # hostname and destination path are placeholders

      That works interactively, but one would also want to be able to automate such backups without creating a security hole. We'll cover that shortly.

    • Advanced SSH topics: Key/passphrase management. Automation of sessions and limited-role applications. Integration with cron. Trust model. Automated no-hassle forwarding of X11. Session compression.

    • The first step towards automating a process like the rsync example, above, is to understand keypairs: SSH keys come in a matched set of a public key and a private key, with the private key optionally protected by further encryption under a "passphrase" (used to derive a 3DES symmetric-cipher key). You generate a keypair using the "ssh-keygen" utility:

      As root:
      ssh-keygen -f bkup-keyfile   # results in bkup-keyfile and bkup-keyfile.pub

      Add the contents of bkup-keyfile.pub to /root/.ssh/authorized_keys on the server. This lists the public key as belonging to a trusted keypair. But now we restrict the key to carrying out just that one command: prefix the key you just added on the server as follows:

      from="client.example.com",command="the exact rsync server command, as recorded in the server's ssh log",no-port-forwarding,no-X11-forwarding,no-agent-forwarding
      (Here "client.example.com" stands for the client host allowed to use the key.)

      Now, create a backup script on the client that runs the rsync command:

      #!/bin/sh
      ssh-add /root/.ssh/bkup-keyfile
      rsync -avz /home remotehost:/backups/   # hostname and path are placeholders

      and then create a cron job that has /usr/bin/ssh-agent run the script.

      Up until recent versions of OpenSSH, rsync would sometimes encounter a deadlock on the select() call with the SSH process, and the file transfer would freeze, always at the same point. After long rounds of finger-pointing, it seems that this problem was probably resolved in OpenSSH 2.9.x, which switched to non-blocking I/O calls. This now makes it practical to automate file transfers (such as backups) over OpenSSH.

    • X11 forwarding: One of the creature-comforts of the SSH protocol is that it can automatically take care of all the messiest details of running the X11 protocol in client-server mode between two machines. That is, it takes care of the "DISPLAY" variable and MIT "magic cookie" authentication issues that otherwise will sop up your time and effort. Also, it handles the "cookie" over the secure SSH channel, making that alone a major improvement over xauth and similar approaches. Additionally, the SSH tunnel has a built-in compression function.

      You will probably need to include an "-X" option with ssh to enable X11 forwarding, which is often disabled by default because of the greater security risk it creates.
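      As a sketch: the command below (shown commented out; "user" and "remotehost" are placeholders) would start a remote X client whose window appears on the local display, tunnelled over SSH. The "ssh -G" line is a way to confirm, without connecting anywhere, what "-X" turns on.

```shell
# Run a remote X client with its display tunnelled home over SSH.
# "user" and "remotehost" are placeholders -- substitute your own:
#
#     ssh -X user@remotehost xterm
#
# To inspect what -X enables without making any connection, ask ssh
# to print its effective configuration for a (here fictitious) host:
ssh -G -X remotehost | grep -i forwardx11
```

      The grep shows "forwardx11 yes" in the effective configuration, confirming the option took effect before you ever open a real session.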

    • Forwarding of arbitrary ports: Here's an example of redirecting the POP3 port (110/tcp) and the SMTP port (25/tcp) over an SSH tunnel:

      ssh -L 110:localhost:110 -L 25:localhost:25 mailhost   # "mailhost" is a placeholder
      You would then configure your mail client to talk to localhost for both inbound and outbound mail. This assumes that the far end is running an SSH daemon alongside its POP3 and SMTP daemons.

    • Cryptography legal/export issues.

      Originally, SSH v. 1.x, like PGP, had the twin problems of hostile governments and patent infringement. The latter was because it used the RSA public-key cipher (USA patent expired Sept. 2000) and the IDEA conventional cipher (patented in most of the world until 2010-2011). Before Sept. 2000, most users in the USA could not legally use RSA without a licence -- making it about the most widely violated patent ever. One of the advantages of the v. 2.x SSH protocols is that they substitute unpatented ciphers.

      The government-hostility issues are slowly fading. A series of court victories by USA programmer Daniel J. Bernstein led to the USA Department of Commerce allowing unrestricted export of open-source cryptographic source code in mid-2000, followed by a ruling allowing unrestricted export of binaries generated from that source code, late last year. There are still countries where effective cryptography is banned or severely limited, but such restrictions are gradually vanishing.

      Add-ons: SSH can be extended without much effort to support Kerberos v. 4 or v. 5 authentication, SecurID keys, and OPIE / S/KEY one-time passwords.

  • Screen: Runs console sessions remotely on your behalf, keeps them open after you disconnect, with easy reconnection and resuming all running applications the way you left them. Natural combination with SSH.

    The GNU Project's screen utility is one of my mainstays, letting me keep my most-used applications open and able to be resumed exactly as I left them, all the time. I normally leave several copies of my preferred e-mail program, mutt, running, plus the lynx Web browser and slrn for reading Usenet newsgroups. Screen as a "session multiplexer" will keep my place in each such session. I reach my machine from wherever I am using SSH remote login, and then do "screen -r" to reattach all the running screen sessions to my current terminal.

    Screen has a built-in cut-and-paste mechanism reminiscent of the old Quarterdeck DESQview one. It also offers automated session logging, screenshots, configurable window titles, and a screen-lock feature. Type "ctrl-a ?" to see a command quick-reference.

    Screen's default configuration file is (naturally) /etc/screenrc, which individual users can override using ~/.screenrc .
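    A minimal ~/.screenrc might look like this (the option names are standard screen commands; the startup windows are just one possible layout):

```
startup_message off              # skip the copyright splash screen
defscrollback 5000               # lines of scrollback kept per window
hardstatus alwayslastline "%w"   # show the window list on the bottom line
screen -t mail mutt              # window 0: mail reader
screen -t web  lynx              # window 1: web browser
```

    With that in place, "screen" at login drops you straight into your usual working set, and "screen -r" after a disconnection puts it all back.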

    For the truly devoted, there's an entire screen-type window manager, called "ratpoison", with the slogan "Say goodbye to the rodent".

  • kibitz/xkibitz: Kibitz is a well-debugged script in the Expect scripting language that permits two people to interact inside a single session, and as such is perfect for remote expert-to-novice tutoring and assistance, multiple-author document editing, and technical support. Don Libes at NIST created it and put it into the public domain.

    Rather like screen, kibitz includes the ability to scroll back sessions, save the entire session to a file, and even edit the session log while it's being recorded.

    (Kibitz also makes possible multi-player text-type games, but don't tell the boss that.)

    User alice kibitzes with user bob by typing:
    kibitz bob
    Alice now sees a new shell for her joint session with Bob, and Bob gets prompted as to how he can join the conversation (if he wishes). Either can exit by typing Ctrl-D or "exit".

    Alice can kibitz with Bob even if he's on another host, by saying "kibitz bob@otherhost".
    Unfortunately, kibitz uses rsh (one of the insecure Berkeley r-commands) for this, but you can trick it into running over SSH using a command alias.

    xkibitz (expanded-kibitz) is an enhanced version that better supports multiple users coming and going from an ongoing kibitz session.

    • Accumulates session log, which may be selectively played back.

    • Facilitates real-time collaboration, even for document editing. "Type over the shoulder" of users, to help them.

    • Kibitz is written in the "expect" scripting language, which must be present on both hosts, as must Tcl (on which Expect is built).

  • rsync: The perfect tool for copying directory trees within a host and between hosts. Useful for mirroring and some types of backup. Developed/maintained by Andrew Tridgell and others of Samba fame.

    • Simple syntax:
      rsync -avz source destination

    • On-the-fly compression (the -z option)

    • Preserves file attributes (the -a option -- more reliable than most versions of tar, cpio, or GNU "cp -a")

    • Very economical of bandwidth -- does incremental file transfer.

    • Anonymous rsync option: increasingly the preferred way of offering files over the Net -- scales better than most methods, and has been adopted wholesale by the Debian Project for mirroring.

    • Rsync over SSH -- optional (default is rsh transport)

    • Lets you automate a "safety-net" backup, as I described in the SSH discussion.

    • Minor drawback: Uses a considerable amount of RAM to do its work, and can be quite CPU-intensive if you have compression enabled. Watch out for the possibility of select() deadlocks.

    • Other applications: using rsync for DNS zone-file distribution, in place of POP/IMAP for mail pickup, etc. Author Andrew Tridgell seems to put rsync to an unbelievable variety of uses. Next up: an rsync filesystem? HTML over rsync?

  • sudo: (pronounced su-do). Lets sysadmins grant small portions of privileged-user access as needed to individual users or groups of users. An alternative to the peon-or-god model. This program originated among student admins at the State University of New York at Buffalo and the University of Colorado at Boulder.

    • sudo as tool for groups of sysadmins: At one firm where I was chief sysadmin, we started a regime whereby even I didn't have the root password for the company's servers: Instead, I would start a sudo session to wield administrative access, and then yield it. Even though, in that case, I was wielding root-level authority, this had the automatic advantage of at least logging exactly what was done with that authority, and by whom.

      Users and groups of users are described, and their powers enumerated, in the /etc/sudoers file, optionally with a password per user or group. Sudo will then require that password to be given at set intervals, five minutes by default (a mechanism called "ticketing").

    • Sudo as a record-keeping tool (logging): The logging feature I mentioned above is always available and handy. Obviously, it cannot be relied upon for system forensics after a security-compromise -- unless you log to a separate log-host, e.g., via serial cable. But something is better than nothing.

    • Security limitations: I've heard a pretty decent case made for the position that installing and using sudo is (or at least can be) a positive threat to system security. For one thing, it's yet another root-authority utility added to the list of main security-attack targets. But, more subtly, it blurs the definition of who is a "trusted user", and makes it more difficult to watch for likely paths of attack to gain system privilege. The pertinent quotation would be from Mark Twain's Pudd'nhead Wilson: "Put all your eggs in one basket -- and watch that basket!" I.e., concentrate all the superuser powers just in one account, and watch it carefully -- and possibly limit access to it using a wheel group.

      Additionally, you need to watch carefully for situations where you thought you limited a user (or group) to just one privileged utility with limited capabilities, but it turns out that the user found ways to subvert your control using shell escapes (letting him get to a root shell).

    • The sudoers-lint checker utility: Because the /etc/sudoers file's syntax is fairly elaborate, it's useful to check it with the sudoers-lint utility. Note also the sudolog usage-analysis tool from the same site.

    • Edit sudoers only with the "visudo" utility, which locks the file against simultaneous edits and does basic syntax checking. Example /etc/sudoers file:

         User_Alias      WEBMASTERS = will, wendy, wim
         Runas_Alias     OP = root, operator
         Host_Alias      SPARC = bigtime, eclipse, moet, anchor
         Cmnd_Alias      SHUTDOWN = /usr/sbin/shutdown
         root            ALL = (ALL) ALL
         %wheel          ALL = (ALL) ALL
         # users in the WEBMASTERS User_Alias (will, wendy, and wim)
         # may run any command as user www (which owns the web pages)
         # or simply su to www.
         WEBMASTERS      www = (www) ALL, (root) /usr/bin/su www
         # bob may run anything on the sparc machines as any user
         # listed in the Runas_Alias "OP" (ie: root and operator)
         bob             SPARC = (OP) ALL
         alice           ALL = SHUTDOWN

      Optionally, set the EDITOR or VISUAL environment variable to point at another editor if you just can't abide vi; visudo will honour it when the sudoers "env_editor" flag is enabled.

    • For us Bastard Operator from Hell types: There's an option in sudo (disabled by default) to issue insults to users who fail sudo's password prompts. These include HAL 9000 (2001: A Space Odyssey) insults, insults from the old British "Goon Show" radio comedy, a group of "classic" insults, and a set of finely honed ones from the CU Operations group.

      With or without insults enabled, unauthorised attempts to use sudo get logged and selected administrators can be automatically notified by e-mail.

    • sudo-like alternatives: super, runas, priv, calife, osh, ssu, su1, op, suSub, Power Broker.

  • enscript: Markku Rossi's GNU Project replacement for Adobe's enscript utility. Sometimes called "genscript". Converts ASCII to PostScript for printing or other handling, and facilitates a number of useful formatting options.

    • N-up printing.

    • Multiple columns.

    • Fancy ("Gaudy") headers.

    • Adobe Font Metrics support.

    • Selective printing of specific page ranges.

    • Indenting, margins.

    • Formatting to various paper sizes (Letter, Legal, A3, A4, A5, A4BigMargin)

    • Landscape/portrait.

    • Automatic colour support.

    • Automated highlighting for source code.

    I won't even try, here, to chronicle all the neat ways this tool can be used. Ordinarily, I barely scratch the surface of its feature set, by pretty-printing files and sending them to specific printers, like this: "enscript -G -P myfavoriteprinter foo.txt" (meaning use fancy headers with page numbering, and send foo.txt to printer myfavoriteprinter).

    Some variations:

    enscript -2 foo.txt  #Print using two columns.
    enscript -2r foo.txt  #Print using two columns, rotated 90 degrees.
    enscript -DDuplex:true foo.txt  #Print two-sided, printer permitting.
    enscript -G2rE -U2 foo.c  #Print with gaudy header, two columns,
                              #landscape, code highlighting, 2-up style.

  • netcat: General-purpose tool for reading and writing data over TCP- or UDP-based IP connections. Originally by "Hobbit"; rewritten by Weld Pond of L0pht Heavy Industries.

    • Print directly to printers, bypassing spools. Note that you can reach PostScript printers on port 9100, thus:

      lynx -dump http://example.com/page.html | enscript -G -p - | nc printerhost 9100   # URL and printer hostname are placeholders

    • Pull down Web page "raw", to debug http transactions.

    • Can replace telnet for checking out connections.

    • Listen to local network on specified port numbers.

    • Debug network problems; send data directly to ports.

    • Receive data, raw.

    • Do hex dump of transmitted or received data.

    "nc host port" creates a TCP connection to the given port on the given target host. Your stdin is sent to the remote host, and anything it responds is dumped to your stdout, continuing until the connection shuts down. This can actually be either UDP or TCP. Unlike with telnet, no EOF problems, no problems processing binary data streams, no mixing of error messages in the output. Netcat is also much smaller and faster. It can optionally do line-at-a-time mode, one line every N seconds.

    For security scanning, netcat can also do port-scanning of network hosts, with a built-in randomiser option. In this mode, it's similar to nmap. Example: "echo QUIT | nc -v -w 5 target 20-250 500-600 5990-7000"

    You can also do rsync-like data transfer between two hosts, using a pair of simultaneous commands like this:

    nc -l -p 1234 | tar xvfp -   #receiving
    tar cfp - /some/dir | nc -w 3 othermachine 1234  #sending

  • CVS (Concurrent Versions System): The standard repository for versioned data of all sorts.

    • Use for DNS zone files, collaborative papers, ASCII project data of all sorts. Most things you change over time and/or collaborate with others on can/should be checked into CVS. If you maintain DNS zone files or other versioned data and are not yet using CVS or an equivalent, you need to -- if only because it keeps a record of who made which changes, in what order and when, with a relatively easy means of reverting to any stage of the chain of changes.

      Keep complete records of a changing project (new versions stored as diffs). Collaborate with others, keeping people's changes distinct. Scales better than emacs/vim-style locking, is more reliable. Stores everything in flat files.

      The simplest use of CVS (anonymous checkout):

         cvs -d :pserver:anoncvs@cvs.example.org:/cvsroot login
          # password: anoncvs
         cvs -d :pserver:anoncvs@cvs.example.org:/cvsroot checkout arla
          # The username/password and the repository host/path differ
          # from site to site; those above are placeholders.

      This retrieves the latest arla source code (into directory ./arla) so that you can build it. CVS does not build anything, nor administer projects. That part is up to you.

    • History: CVS was developed as an enhancement to an earlier and more primitive version-control system, RCS. Accordingly, CVS has crufty bits of RCS showing through it.

      1986: Dick Grune posts shell scripts to comp.sources.unix.
      1989: Brian Berliner & Jeff Polk re-code it in C, built as an extension to RCS. It basically hasn't improved much since then.

    • CVS over SSH and various other transports:

      Overall syntax is:

      cvs -d :method:user@host:/path/to/repository command

      (The hostnames and repository paths below are placeholders.)

      export CVS_RSH=ssh
      cvs -d :ext:user@cvshost:/cvsroot checkout foo
      # Note that the default transport for "ext" access is rsh;
      # setting CVS_RSH=ssh substitutes SSH.

      cvs -d :pserver:user@cvshost:/cvsroot checkout someproj
      # pserver = password server, runs out of inetd.
      # pserver does the evil cleartext-password thing. Avoid if possible.

      cvs -d :gserver:cvshost:/cvsroot checkout foo
      # This is "GSSAPI" (e.g. Kerberos) authentication. Rare.

    • CVSweb is a simple Web front-end for browsing CVS repositories.

    • Problems: Binary data handling. Repeated merges handled badly. Symlinks, multiple hard links, etc. aren't handled properly. Directory changes, file renames, and permission and other meta-data changes aren't handled well. Commits aren't atomic. Locks are overly strict. The next-generation replacement from the Subversion Project, coming soon, promises to fix all of that.

    • Current alternatives: Perforce, BitKeeper, SourceSafe, ClearCase, others.

  • find: Incredibly flexible tool for locating files that match specified characteristics and taking specified actions upon them. Little-understood because of its very long manpage and the sheer amount of functionality crammed into a single utility.

    • Search directory tree to find file or group of files.

    • Document suid/sgid files on your system; security-check your system.

    • Combining the xargs command with "find".

    • Using find / xargs with tar or cpio, for backups.

    A couple of modest examples, to get you started:

    find . -name '*.cpp' -exec grep expression {} \;
       One grep process per file is inefficient, and there are various
       ways around that problem.

    find . -name '*.cpp' -type f | xargs grep expression /dev/null
       (The /dev/null argument forces grep to print filenames, even
       when xargs hands it only a single file.)

    Sysadmins argue endlessly over what is the correct variation.
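    By way of illustration, the suid-audit and backup bullets above might look like the following, run here against a scratch tree rather than / and /home:

```shell
cd "$(mktemp -d)"               # scratch area standing in for /
mkdir -p tree
echo 'data' > tree/recent.txt

# Inventory setuid files -- in real use: find / -type f -perm -4000
find tree -type f -perm -4000 -print

# Archive files modified in the last day; -print0/-0 (GNU find/xargs)
# keep filenames containing spaces or newlines intact.
find tree -type f -mtime -1 -print0 | xargs -0 tar rvf incremental.tar
tar tf incremental.tar
```

    The -print0/-0 pairing is the usual answer to the "endless argument" above: it gets xargs's batching efficiency without being tripped up by odd filenames.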


  Copyright © 2004-2014 by Pavel Simakov
Any conclusions, recommendations, ideas, thoughts, or source code presented on this site are my own and do not reflect an official opinion of my current or past employers, partners, or clients.