<h2>SaltyCrane: ssh</h2>
<h3>Notes on Fabric 2 and Python 3</h3>
<p>2021-02-07 | <a href="https://www.saltycrane.com/blog/2021/02/notes-fabric-2-and-python-3/">permalink</a></p>
<p>
<a href="https://docs.fabfile.org/en/2.5/index.html">Fabric 2</a> is a Python
package used for running commands on remote machines via SSH. Fabric 2
supports Python 3 and is a rewrite of the Fabric I used
<a href="/blog/2009/10/notes-python-fabric-09b1/">years ago</a>. Here are my
notes on using Fabric 2 and Python 3.
</p>
<h4>Set up SSH config and SSH agent</h4>
<ul>
<li>
Create or edit your <code>~/.ssh/config</code> file to contain your remote
host parameters
<pre>
Host myhost
User myusername
HostName myhost.com
IdentityFile ~/.ssh/id_rsa
</pre>
</li>
<li>
Add your private key to your SSH agent
<pre class="console">
$ ssh-add ~/.ssh/id_rsa
</pre>
</li>
</ul>
<h4>Create a project, create a virtualenv, and install fabric2</h4>
<pre class="console">
$ mkdir -p /tmp/my-project
$ cd /tmp/my-project
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install fabric2
</pre>
<h4>Create a fabfile.py script</h4>
<p>
Create a file <code>/tmp/my-project/fabfile.py</code> with the following
contents. Note: "myhost" is the same name used in
<code>~/.ssh/config</code> described above.
</p>
<pre class="python">
from fabric2 import task
hosts = ["myhost"]
@task(hosts=hosts)
def mytask(c):
print("Starting mytask...")
with c.cd("/var"):
c.run("ls -l")
print("Done.")
</pre>
<h4>Run the fabric script</h4>
<p>
In <code>/tmp/my-project</code>, with the virtualenv activated, run the fabric
task to list the contents of <code>/var</code> on the remote host.
</p>
<pre class="console">$ fab2 mytask </pre>
<p>Output:</p>
<pre>
Starting mytask...
total 48
drwxr-xr-x 2 root root 4096 backups
drwxr-xr-x 9 root root 4096 cache
drwxrwxrwt 2 root root 4096 crash
drwxr-xr-x 38 root root 4096 lib
drwxrwsr-x 2 root root 4096 local
drwxrwxrwt 2 root root 4096 lock
drwxrwxr-x 14 root root 4096 log
drwxrwsr-x 2 root root 4096 mail
drwxr-xr-x 2 root root 4096 opt
drwxr-xr-x 5 root root 4096 spool
drwxrwxrwt 2 root root 4096 tmp
drwxr-xr-x 3 root root 4096 www
Done.
</pre>
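<p>Note that <code>c.cd()</code> does not keep a persistent shell open on the remote host; it prefixes subsequent <code>run()</code> commands with <code>cd /var &amp;&amp; </code>. A rough pure-Python illustration of that idea (this is a sketch of the concept, not Fabric's actual implementation):</p>

```python
from contextlib import contextmanager

class FakeConnection:
    """Illustration only: mimics how Fabric's cd() prefixes commands."""
    def __init__(self):
        self._prefixes = []

    @contextmanager
    def cd(self, path):
        self._prefixes.append(path)
        try:
            yield
        finally:
            self._prefixes.pop()

    def command_for(self, cmd):
        # Render the single shell command that run() would send over SSH
        for path in reversed(self._prefixes):
            cmd = "cd {} && {}".format(path, cmd)
        return cmd

c = FakeConnection()
with c.cd("/var"):
    print(c.command_for("ls -l"))  # cd /var && ls -l
```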
<h4>See also / References</h4>
<ul>
<li>
<a href="http://www.fabfile.org/upgrading.html#the-whole-thing"
>Example fabfile (Upgrading from 1.x docs)</a
>
</li>
<li>
<a
href="http://docs.paramiko.org/en/stable/api/client.html#paramiko.client.SSHClient"
>
paramiko SSHClient API reference
</a>
</li>
</ul>
<h3>How to expose a Flask local development server to the public using SSH remote port forwarding</h3>
<p>2013-02-12 | <a href="https://www.saltycrane.com/blog/2013/02/how-expose-flask-local-development-server-public-using-ssh-remote-port-forwarding/">permalink</a></p>
<p>Here is how to run a Flask local development server on your local machine and expose it to the public via a remote server you have control over. This uses SSH remote port forwarding, which is the converse of the local port forwarding described here: <a href="/blog/2012/10/how-run-django-local-development-server-remote-machine-and-access-it-your-browser-your-local-machine-using-ssh-port-forwarding/">How to run a Django local development server on a remote machine and access it in your browser on your local machine using SSH port forwarding</a></p>
<ol>
<li>On the remote host, edit the sshd_config file (mine was located at /etc/ssh/sshd_config)
to allow remote hosts to connect to ports forwarded for the client:
<pre>GatewayPorts yes</pre>
</li>
<li>On the remote host, restart the SSH server:
<pre class="console">$ sudo service sshd restart </pre>
</li>
<li>On the local host, SSH to the remote host:
<pre class="console">$ ssh -v -R 50051:localhost:5000 eliot@my.remotehost.com </pre>
</li>
<li>On the local host, run the Flask dev server:
<pre class="console">$ python runserver.py localhost 5000 </pre>
</li>
<li>Go to <a href="http://my.remotehost.com:50051">http://my.remotehost.com:50051</a> in the browser</li>
</ol>
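<p>The forwarding spec in step 3 (<code>remote_port:local_host:local_port</code>) is easy to get backwards. A small helper that assembles the argv for the command above (the user, hostname, and ports are the example's, not defaults):</p>

```python
def remote_forward_argv(user, host, remote_port, local_port, local_host="localhost"):
    """Build argv for: ssh -R remote_port:local_host:local_port user@host"""
    spec = "{}:{}:{}".format(remote_port, local_host, local_port)
    return ["ssh", "-R", spec, "{}@{}".format(user, host)]

print(" ".join(remote_forward_argv("eliot", "my.remotehost.com", 50051, 5000)))
# ssh -R 50051:localhost:5000 eliot@my.remotehost.com
```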
<h4>Using RemoteForward in your ~/.ssh/config</h4>
<p>You can also achieve the same results by using the <code>RemoteForward</code> in your <code>~/.ssh/config</code> file:</p>
<pre>Host myremote
User eliot
HostName my.remotehost.com
RemoteForward 50051 localhost:5000</pre>
<h4>References</h4>
<ul>
<li><a href="http://www.hackinglinuxexposed.com/articles/20030309.html">
http://www.hackinglinuxexposed.com/articles/20030309.html</a></li>
<li><a href="http://serverfault.com/questions/285616/how-to-allow-remote-connections-from-non-localhost-clients-with-ssh-remote-port">
http://serverfault.com/questions/285616/how-to-allow-remote-connections-from-non-localhost-clients-with-ssh-remote-port</a></li>
<li><a href="http://linux.die.net/man/5/sshd_config">
http://linux.die.net/man/5/sshd_config</a></li>
</ul>
<h4>See also</h4>
<p><a href="http://progrium.com/localtunnel/">localtunnel</a> by Jeff Lindsay exposes your local development server without requiring a public remote server.</p>
<h3>How to run a Django local development server on a remote machine and access it in your browser on your local machine using SSH port forwarding</h3>
<p>2012-10-23 | <a href="https://www.saltycrane.com/blog/2012/10/how-run-django-local-development-server-remote-machine-and-access-it-your-browser-your-local-machine-using-ssh-port-forwarding/">permalink</a></p>
<p>Here is how to run a Django local development server on a remote machine and access it in your browser on your local machine using SSH port forwarding. (This is useful if a firewall blocks access to the port of your Django local dev server, port 8000.)</p>
<ol>
<li>On the local host, SSH to the remote host:
<pre class="console">$ ssh -v -L 9000:localhost:8000 eliot@my.remotehost.com </pre>
</li>
<li>On the remote host, run the Django dev server:
<pre class="console">eliot@my.remotehost.com:/path/to/my/django/project$ python manage.py runserver 0.0.0.0:8000 </pre>
</li>
<li>On the local host, go to <a href="http://localhost:9000">http://localhost:9000</a> in the browser</li>
</ol>
<p>Note: The local port and the remote port can be the same (i.e. you can use 8000 instead of 9000). I just made them different to show which port is which.</p>
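<p>If the local port you pick (9000 here) happens to be in use already, you can ask the OS for a free one instead. A small stdlib sketch:</p>

```python
import socket

def free_local_port():
    """Bind to port 0 so the OS picks an unused TCP port, and return it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("localhost", 0))
        return s.getsockname()[1]

port = free_local_port()
print("forward with: ssh -L {}:localhost:8000 eliot@my.remotehost.com".format(port))
```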
<h4>Using LocalForward in your ~/.ssh/config</h4>
<p>You can also achieve the same results by using the <code>LocalForward</code> in your <code>~/.ssh/config</code> file:</p>
<pre>Host myremote
User eliot
HostName my.remotehost.com
LocalForward 9000 localhost:8000</pre>
<h4>Reference</h4>
<p><a href="http://magazine.redhat.com/2007/11/06/ssh-port-forwarding/">http://magazine.redhat.com/2007/11/06/ssh-port-forwarding/</a></p>
<h3>Notes on debugging ssh connection problems</h3>
<p>2011-08-31 | <a href="https://www.saltycrane.com/blog/2011/08/notes-debugging-ssh-connection-problems/">permalink</a></p>
<ul>
<li>Run the ssh client in verbose mode
<pre class="console">$ ssh -vvv user@host </pre>
</li>
<li>On the server, check auth.log for errors
<pre class="console">$ sudo tail -f /var/log/auth.log </pre>
<p>On Red Hat, it's <code>/var/log/secure</code></p>
</li>
<li>For more debugging info, (assuming you have control of the ssh server)
run the sshd server in debug mode on another port
<pre class="console">$ sudo /usr/sbin/sshd -ddd -p 33333 </pre>
Then specify the port, <code>-p 33333</code> with the ssh client. e.g.
<pre class="console">$ ssh -vvv -p 33333 user@host </pre>
</li>
</ul>
<p>Commands run on Ubuntu 10.04</p>
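<p>When scanning auth.log, failed password attempts are usually the lines of interest. A quick sketch that pulls the user and IP out of a typical sshd failure line (the sample log line below is fabricated for illustration):</p>

```python
import re

# Pattern for a common sshd "Failed password" line; formats vary by version
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

line = ("Aug 31 16:12:43 myhost sshd[1234]: "
        "Failed password for invalid user bob from 10.0.0.5 port 2222 ssh2")
m = FAILED.search(line)
if m:
    print("user={} ip={}".format(m.group(1), m.group(2)))  # user=bob ip=10.0.0.5
```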
<h4 id="message-too-long">sftp error: <code>Received message too long 170160758</code></h4>
<p>Problem was in the .bashrc. See <a href="http://www.snailbook.com/faq/sftp-corruption.auto.html">http://www.snailbook.com/faq/sftp-corruption.auto.html</a></p>
<h3>Fabric post-run processing Python decorator</h3>
<p>2010-11-06 | <a href="https://www.saltycrane.com/blog/2010/11/fabric-post-run-processing-python-decorator/">permalink</a></p>
<pre class="python">import traceback
from functools import wraps
from fabric.api import env
# global variable for add_hooks()
parent_task_name = ''
def add_post_run_hook(hook, *args, **kwargs):
'''Run hook after Fabric tasks have completed on all hosts
Example usage:
@add_post_run_hook(postrunfunc, 'arg1', 'arg2')
def mytask():
# ...
'''
def true_decorator(f):
return add_hooks(post=hook, post_args=args, post_kwargs=kwargs)(f)
return true_decorator
def add_hooks(pre=None, pre_args=(), pre_kwargs={},
post=None, post_args=(), post_kwargs={}):
'''
Function decorator to be used with Fabric tasks. Adds pre-run
and/or post-run hooks to a Fabric task. Uses env.all_hosts to
determine when to run the post hook. Uses the global variable,
parent_task_name, to check if the task is a subtask (i.e. a
decorated task called by another decorated task). If it is a
subtask, do not perform pre or post processing.
pre: callable to be run before starting Fabric tasks
pre_args: a tuple of arguments to be passed to "pre"
pre_kwargs: a dict of keyword arguments to be passed to "pre"
post: callable to be run after Fabric tasks have completed on all hosts
post_args: a tuple of arguments to be passed to "post"
post_kwargs: a dict of keyword arguments to be passed to "post"
'''
# create a namespace to save state across hosts and tasks
class NS(object):
run_counter = 0
def true_decorator(f):
@wraps(f)
def f_wrapper(*args, **kwargs):
# set state variables
global parent_task_name
if not parent_task_name:
parent_task_name = f.__name__
NS.run_counter += 1
print 'parent_task_name: %s' % parent_task_name
print 'count/N_hosts: %d/%d' % (NS.run_counter, len(env.all_hosts))
# pre-run processing
if f.__name__ == parent_task_name and NS.run_counter == 1:
if pre:
print 'Pre-run processing...'
pre(*pre_args, **pre_kwargs)
# run the task
r = None
try:
r = f(*args, **kwargs)
except SystemExit:
pass
except:
print traceback.format_exc()
# post-run processing
if (f.__name__ == parent_task_name and
NS.run_counter >= len(env.all_hosts)):
if post:
print 'Post-run processing...'
post(*post_args, **post_kwargs)
return r
return f_wrapper
return true_decorator</pre>
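<p>The essence of the decorator above is a run counter compared against the number of hosts: the post hook fires only after the wrapped task has run once per host. A stripped-down Python 3 sketch of just that mechanism, with a plain list standing in for <code>env.all_hosts</code>:</p>

```python
from functools import wraps

def post_run(hook, all_hosts):
    """Run `hook` once, after the task has executed for every host (sketch only)."""
    def decorator(f):
        state = {"count": 0}
        @wraps(f)
        def wrapper(*args, **kwargs):
            result = f(*args, **kwargs)
            state["count"] += 1
            # fire the hook only on the final per-host invocation
            if state["count"] >= len(all_hosts):
                hook()
            return result
        return wrapper
    return decorator

log = []
hosts = ["host1", "host2"]

@post_run(lambda: log.append("post"), hosts)
def mytask(host):
    log.append(host)

for h in hosts:
    mytask(h)
print(log)  # ['host1', 'host2', 'post']
```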
<h3>Class-based Fabric scripts via a Python metaprogramming hack</h3>
<p>2010-09-23 | <a href="https://www.saltycrane.com/blog/2010/09/class-based-fabric-scripts-metaprogramming-hack/">permalink</a></p>
<p>This is a hack to enable the definition of
<a href="http://fabfile.org/">Fabric</a> tasks as methods in a class
instead of just as module level functions.
This class-based approach provides the benefits of inheritance and method overriding.
</p>
<p>I have <a href="http://www.saltycrane.com/blog/2007/02/how-to-share-non-global-c-data/">a</a>
<a href="http://www.saltycrane.com/blog/2007/04/data-hiding-in-c-object-oriented/">history</a>
of using object-oriented techniques in places they weren't meant to be used. This one was
not all my idea, so may <a href="http://www.thirstymind.org/">Andrew</a> get any blame he
deserves. Here's the story:</p>
<p>
We had several Fabric scripts which violated
<a href="http://en.wikipedia.org/wiki/Don%27t_repeat_yourself">DRY</a>.
Andrew wished for a class-based Fabric script.
We discussed ideas.
<a href="http://stackoverflow.com/questions/1911281/how-do-you-get-list-of-methods-in-a-python-class">Stackoverflow</a>
<a href="http://stackoverflow.com/questions/1621350/dynamically-adding-functions-to-a-python-module">answered</a>
<a href="http://stackoverflow.com/questions/2933470/how-do-i-call-setattr-on-the-current-module">my</a>
<a href="http://stackoverflow.com/questions/3061/calling-a-function-from-a-string-with-the-functions-name-in-python">questions</a>.
I hacked.
<a href="http://stackoverflow.com/questions/3664302/how-to-dynamically-create-module-level-functions-from-methods-in-a-class">
Stackoverflow fixed it for me.
</a>
I made one more tweak and here it is:</p>
<p><b><code>util.py</code>:</b></p>
<pre class="python">import inspect
import sys
def add_class_methods_as_module_level_functions_for_fabric(instance, module_name):
'''
Utility to take the methods of the instance of a class, instance,
and add them as functions to a module, module_name, so that Fabric
can find and call them. Call this at the bottom of a module after
the class definition.
'''
# get the module as an object
module_obj = sys.modules[module_name]
# Iterate over the methods of the class and dynamically create a function
# for each method that calls the method and add it to the current module
for method in inspect.getmembers(instance, predicate=inspect.ismethod):
method_name, method_obj = method
if not method_name.startswith('_'):
# get the bound method
func = getattr(instance, method_name)
# add the function to the current module
setattr(module_obj, method_name, func)</pre>
<p>As the docstring says, this function takes the methods of a class instance and adds them as functions
to the module (fabfile.py) so Fabric can find and call them. Here is an example.
</p>
<p><b><code>base.py</code>:</b></p>
<pre class="python">from fabric import api as fab
class Deployment(object):
name = ''
local_file = ''
remote_file = ''
def base_task1(self):
'base task 1'
fab.run('svn export /path/to/{self.name}'.format(self=self))
def base_task2(self):
'base task 2'
fab.put(self.local_file, self.remote_file)</pre>
<p><b><code>fabfile.py</code>:</b></p>
<pre class="python">import base
import util
from fabric import api as fab
class _MyWebsiteDeployment(base.Deployment):
name = 'my_website'
local_file = '/local/path/to/my_website/file'
remote_file = '/remote/path/to/my_website/file'
def my_website_task(self):
'my website task'
fab.run('echo "I am special"')
instance = _MyWebsiteDeployment()
util.add_class_methods_as_module_level_functions_for_fabric(instance, __name__)</pre>
<p>Running <code>fab -l</code> gives:</p>
<pre class="console">$ fab -l
Available commands:
base_task1 base task 1
base_task2 base task 2
my_website_task my website task</pre>
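<p>The same export trick works with any module object, not just a fabfile. A self-contained Python 3 sketch of the mechanism (the class, method names, and the throwaway module here are invented for illustration; the Fabric hack passes <code>sys.modules[__name__]</code> instead):</p>

```python
import inspect
import types

class _Tasks:
    def greet(self):
        return "hello"
    def _private(self):
        return "hidden"

def export_methods(instance, module_obj):
    """Attach an instance's public bound methods to a module as functions."""
    for name, method in inspect.getmembers(instance, predicate=inspect.ismethod):
        if not name.startswith("_"):
            setattr(module_obj, name, method)

# A throwaway module object stands in for the real fabfile module
fake_fabfile = types.ModuleType("fake_fabfile")
export_methods(_Tasks(), fake_fabfile)
print(fake_fabfile.greet())  # hello
```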
<h3>Notes on sshfs on Ubuntu</h3>
<p>2010-04-05 | <a href="https://www.saltycrane.com/blog/2010/04/notes-sshfs-ubuntu/">permalink</a></p>
<p><a href="http://fuse.sourceforge.net/sshfs.html">sshfs</a>
is an easy way to mount a remote filesystem using
<a href="http://en.wikipedia.org/wiki/Secure_Shell">ssh</a>
and <a href="http://en.wikipedia.org/wiki/Filesystem_in_Userspace">FUSE</a>.
If your remote server is already running a ssh server that supports
<a href="http://en.wikipedia.org/wiki/SSH_file_transfer_protocol">sftp</a>
(Ubuntu's ssh server does),
there is nothing to set up on the remote server, and setup on the
client is relatively easy.</p>
<p>Other options for mounting a remote filesystem are
<a href="http://www.webdav.org/">WebDAV</a>,
<a href="http://www.samba.org/">Samba</a>, and
<a href="http://en.wikipedia.org/wiki/Network_File_System_%28protocol%29">NFS</a>.
I'm no expert, but from what I've gathered, sshfs is faster than
WebDAV and slower than Samba and NFS. However, Samba and NFS are typically
more difficult to set up than sshfs. Here are my notes for
setting up sshfs. I am running on Ubuntu Hardy.
</p>
<h4 id="cmdline">OPTION 1: Use sshfs from the command line</h4>
<ul>
<li>Install sshfs
<pre class="console">$ apt-get update
$ apt-get install sshfs</pre>
</li>
<li>Create a mount point
<pre class="console">$ mkdir -p /var/www/remote_files</pre>
</li>
<li>Mount the remote filesystem
<pre class="console">$ sshfs root@10.232.139.234:/mnt/files /var/www/remote_files \
> -o IdentityFile=/path/to/my_ssh_keyfile \
> -o ServerAliveInterval=60 -o allow_other</pre>
where:
<ul>
<li><code>root</code> is the ssh username</li>
<li><code>10.232.139.234</code> is the remote host</li>
<li><code>/mnt/files</code> is the remote path</li>
<li><code>/var/www/remote_files</code> is the local path</li>
<li><code>/path/to/my_ssh_keyfile</code> is the ssh keyfile</li>
<li>The <code>ServerAliveInterval</code> option will keep your connection
from timing out.</li>
<li>The <code>allow_other</code> option allows other users to access
the filesystem</li>
</ul>
</li>
</ul>
<h4 id="fstab">OPTION 2: Use sshfs with /etc/fstab</h4>
<ul>
<li>Install sshfs as above</li>
<li>Edit /etc/fstab:
<pre>sshfs#root@10.232.139.234:/mnt/files /var/www/remote_files fuse allow_other,IdentityFile=/path/to/my_ssh_keyfile,ServerAliveInterval=60 0 0</pre>
where the options are explained above.
</li>
<li>Create a mount point
<pre class="console">$ mkdir -p /var/www/remote_files</pre>
</li>
<li>Mount
<pre class="console">$ mount /var/www/remote_files</pre>
</li>
</ul>
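<p>Since the fstab entry packs the same pieces as the command line, it can help to generate it from one set of values. A small sketch using the example's values (the function name is my own):</p>

```python
def sshfs_fstab_line(user, host, remote_path, mount_point, options):
    """Render an /etc/fstab sshfs entry from its parts (illustrative sketch)."""
    # boolean-True options are bare flags; everything else is key=value
    opts = ",".join("{}={}".format(k, v) if v is not True else k
                    for k, v in options.items())
    return "sshfs#{}@{}:{} {} fuse {} 0 0".format(
        user, host, remote_path, mount_point, opts)

line = sshfs_fstab_line(
    "root", "10.232.139.234", "/mnt/files", "/var/www/remote_files",
    {"allow_other": True, "IdentityFile": "/path/to/my_ssh_keyfile",
     "ServerAliveInterval": 60})
print(line)
```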
<h4 id="help">For more help, try <code>sshfs --help</code></h4>
<pre>usage: sshfs [user@]host:[dir] mountpoint [options]
general options:
-o opt,[opt...] mount options
-h --help print help
-V --version print version
SSHFS options:
-p PORT equivalent to '-o port=PORT'
-C equivalent to '-o compression=yes'
-1 equivalent to '-o ssh_protocol=1'
-o reconnect reconnect to server
-o sshfs_sync synchronous writes
-o no_readahead synchronous reads (no speculative readahead)
-o sshfs_debug print some debugging information
-o cache=YESNO enable caching {yes,no} (default: yes)
-o cache_timeout=N sets timeout for caches in seconds (default: 20)
-o cache_X_timeout=N sets timeout for {stat,dir,link} cache
-o workaround=LIST colon separated list of workarounds
none no workarounds enabled
all all workarounds enabled
[no]rename fix renaming to existing file (default: off)
[no]nodelay set nodelay tcp flag in ssh (default: on)
[no]nodelaysrv set nodelay tcp flag in sshd (default: off)
[no]truncate fix truncate for old servers (default: off)
[no]buflimit fix buffer fillup bug in server (default: on)
-o idmap=TYPE user/group ID mapping, possible types are:
none no translation of the ID space (default)
user only translate UID of connecting user
-o ssh_command=CMD execute CMD instead of 'ssh'
-o ssh_protocol=N ssh protocol to use (default: 2)
-o sftp_server=SERV path to sftp server or subsystem (default: sftp)
-o directport=PORT directly connect to PORT bypassing ssh
-o transform_symlinks transform absolute symlinks to relative
-o follow_symlinks follow symlinks on the server
-o no_check_root don't check for existence of 'dir' on server
-o SSHOPT=VAL ssh options (see man ssh_config)
FUSE options:
-d -o debug enable debug output (implies -f)
-f foreground operation
-s disable multi-threaded operation
-o allow_other allow access to other users
-o allow_root allow access to root
-o nonempty allow mounts over non-empty file/dir
-o default_permissions enable permission checking by kernel
-o fsname=NAME set filesystem name
-o subtype=NAME set filesystem type
-o large_read issue large read requests (2.4 only)
-o max_read=N set maximum size of read requests
-o hard_remove immediate removal (don't hide files)
-o use_ino let filesystem set inode numbers
-o readdir_ino try to fill in d_ino in readdir
-o direct_io use direct I/O
-o kernel_cache cache files in kernel
-o [no]auto_cache enable caching based on modification times
-o umask=M set file permissions (octal)
-o uid=N set file owner
-o gid=N set file group
-o entry_timeout=T cache timeout for names (1.0s)
-o negative_timeout=T cache timeout for deleted names (0.0s)
-o attr_timeout=T cache timeout for attributes (1.0s)
-o ac_attr_timeout=T auto cache timeout for attributes (attr_timeout)
-o intr allow requests to be interrupted
-o intr_signal=NUM signal to send on interrupt (10)
-o modules=M1[:M2...] names of modules to push onto filesystem stack
-o max_write=N set maximum size of write requests
-o max_readahead=N set maximum readahead
-o async_read perform reads asynchronously (default)
-o sync_read perform reads synchronously
Module options:
[subdir]
-o subdir=DIR prepend this directory to all paths (mandatory)
-o [no]rellinks transform absolute symlinks to relative
[iconv]
-o from_code=CHARSET original encoding of file names (default: UTF-8)
-o to_code=CHARSET new encoding of the file names (default: ANSI_X3.4-1968)</pre>
<h4 id="references">References</h4>
<ul>
<li><a href="http://sysblogd.wordpress.com/2007/08/23/ubuntu-mounting-remote-filesystem-using-sshfs-fuse/">
Ubuntu: Mounting remote filesystem using sshfs (FUSE)</a></li>
<li><a href="http://www.debuntu.org/2006/04/27/39-mounting-a-fuse-filesystem-form-etcfstab">
Mounting a fuse Filesystem from /etc/fstab</a></li>
<li><a href="https://help.ubuntu.com/community/SSHFS">
SSHFS - Community Ubuntu Documentation</a></li>
<li><a href="http://sourceforge.net/apps/mediawiki/fuse/index.php?title=SshfsFaq">
sshfs FAQ</a></li>
<li><a href="http://www.linuxforums.org/forum/misc/159035-sshfs-passwordless-login.html">
sshfs passwordless login - Linux Forums</a></li>
<li><a href="http://blog.brianhartsock.com/2007/02/18/sshfs/">
Brian Hartsock's Blog - SSHFS</a></li>
</ul>
<h4 id="webdav-vs-sshfs">Webdav vs. sshfs</h4>
<ul>
<li><a href="http://jamiedubs.com/macfuse-sshfs-vs-webdav">
MacFUSE sshfs vs WebDAV benchmarks / Jamie Wilkinson</a></li>
<li><a href="http://gioorgi.com/2009/webdav-versus-sshfs/">
WebDAV versus Sshfs | Gioorgi.com</a></li>
</ul>
<h3>Python paramiko notes</h3>
<p>2010-02-24 | <a href="https://www.saltycrane.com/blog/2010/02/python-paramiko-notes/">permalink</a></p>
<p><a href="http://www.lag.net/paramiko/">Paramiko</a>
is a Python <a href="http://en.wikipedia.org/wiki/Secure_Shell">ssh</a>
package. The following is an example that makes use of my
<a href="/blog/2008/11/creating-remote-server-nicknames-sshconfig/">ssh
config file</a>, creates a ssh
client, runs a command on a remote server, and reads a remote file using
<a href="http://en.wikipedia.org/wiki/SSH_file_transfer_protocol">sftp</a>.
Paramiko is released under the
<a href="http://en.wikipedia.org/wiki/GNU_Lesser_General_Public_License">
GNU LGPL</a>.
</p>
<h4 id="install">Install paramiko</h4>
<ul>
<li><a href="http://www.saltycrane.com/blog/2010/02/how-install-pip-ubuntu/">
Install pip</a>
</li>
<li>Install paramiko
<pre>sudo pip install paramiko</pre>
</li>
</ul>
<h4 id="example">Example</h4>
<pre class="python">from paramiko import SSHClient, SSHConfig
# ssh config file
config = SSHConfig()
config.parse(open('/home/eliot/.ssh/config'))
o = config.lookup('testapa')
# ssh client
ssh_client = SSHClient()
ssh_client.load_system_host_keys()
ssh_client.connect(o['hostname'], username=o['user'], key_filename=o['identityfile'])
# run a command
print "\nRun a command"
cmd = 'ps aux'
stdin, stdout, stderr = ssh_client.exec_command(cmd)
for i, line in enumerate(stdout):
line = line.rstrip()
print "%d: %s" % (i, line)
if i >= 9:
break
# open a remote file
print "\nOpen a remote file"
sftp_client = ssh_client.open_sftp()
sftp_file = sftp_client.open('/var/log/messages')
for i, line in enumerate(sftp_file):
print "%d: %s" % (i, line[:15])
if i >= 9:
break
sftp_file.close()
sftp_client.close()
# close ssh client
ssh_client.close()</pre>
<p>Results:</p>
<pre>Run a command
0: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1: root 1 0.0 0.0 1920 536 ? S 2009 0:00 /sbin/init
2: root 2 0.0 0.0 0 0 ? S 2009 0:00 [migration/0]
3: root 3 0.0 0.0 0 0 ? SN 2009 0:00 [ksoftirqd/0]
4: root 4 0.0 0.0 0 0 ? S 2009 0:00 [watchdog/0]
5: root 5 0.0 0.0 0 0 ? S< 2009 0:00 [events/0]
6: root 6 0.0 0.0 0 0 ? S< 2009 0:00 [khelper]
7: root 7 0.0 0.0 0 0 ? S< 2009 0:00 [kthread]
8: root 8 0.0 0.0 0 0 ? S< 2009 0:00 [xenwatch]
9: root 9 0.0 0.0 0 0 ? S< 2009 0:00 [xenbus]
Open a remote file
0: Feb 21 06:47:03
1: Feb 21 07:14:03
2: Feb 21 07:34:03
3: Feb 21 07:54:04
4: Feb 21 08:14:04
5: Feb 21 08:34:05
6: Feb 21 08:54:05
7: Feb 21 09:14:05
8: Feb 21 09:34:06
9: Feb 21 09:54:06</pre>
<h4 id="sftp-helper-code">Some SFTP helper code</h4>
<p><em>Added 2011-09-15</em></p>
<pre class="python">import errno
import os.path
import paramiko
class SFTPHelper(object):
def connect(self, hostname, **ssh_kwargs):
"""Create a ssh client and a sftp client
**ssh_kwargs are passed directly to paramiko.SSHClient.connect()
"""
self.sshclient = paramiko.SSHClient()
self.sshclient.load_system_host_keys()
self.sshclient.set_missing_host_key_policy(paramiko.AutoAddPolicy())
self.sshclient.connect(hostname, **ssh_kwargs)
self.sftpclient = self.sshclient.open_sftp()
def remove_directory(self, path):
"""Remove remote directory that may contain files.
It does not support directories that contain subdirectories
"""
if self.exists(path):
for filename in self.sftpclient.listdir(path):
filepath = os.path.join(path, filename)
self.sftpclient.remove(filepath)
self.sftpclient.rmdir(path)
def put_directory(self, localdir, remotedir):
"""Put a directory of files on the remote server
Create the remote directory if it does not exist
Does not support directories that contain subdirectories
Return the number of files transferred
"""
if not self.exists(remotedir):
self.sftpclient.mkdir(remotedir)
count = 0
for filename in os.listdir(localdir):
self.sftpclient.put(
os.path.join(localdir, filename),
os.path.join(remotedir, filename))
count += 1
return count
def exists(self, path):
"""Return True if the remote path exists
"""
try:
self.sftpclient.stat(path)
except IOError, e:
if e.errno == errno.ENOENT:
return False
raise
else:
return True</pre>
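<p>The errno check in <code>exists()</code> is a general pattern, not specific to sftp; the same logic works against the local filesystem. A stdlib-only Python 3 sketch of the identical idea using <code>os.stat</code>:</p>

```python
import errno
import os

def local_exists(path):
    """Same ENOENT pattern as SFTPHelper.exists, but with a local os.stat."""
    try:
        os.stat(path)
    except OSError as e:
        # only "no such file" means False; re-raise anything else
        if e.errno == errno.ENOENT:
            return False
        raise
    else:
        return True

print(local_exists("/"))  # True
```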
<h4 id="see-also">See also</h4>
<ul>
<li><a href="http://www.lag.net/paramiko/docs/">Paramiko API documentation</a></li>
<li><a href="http://github.com/robey/paramiko/">Paramiko source code on github</a></li>
</ul>
<h3>A hack to copy files between two remote hosts using Python</h3>
<p>2010-02-08 | <a href="https://www.saltycrane.com/blog/2010/02/hack-copy-files-between-two-remote-hosts-using-python/">permalink</a></p>
<p>I sometimes need to copy a file (such as a database dump) between two
remote hosts on EC2. Normally this involves a few steps: scp'ing the ssh keyfile
to Host 1, ssh'ing to Host 1, looking up the address for Host 2, then scp'ing the desired
file from Host 1 to Host 2.</p>
<p>I was excited to read in the man page that scp can copy files between two
remote hosts directly. However, it didn't work for me.
<a href="http://fixunix.com/ssh/258554-should-i-able-scp-between-two-remote-hosts.html#post700563">Apparently</a>,
running <code>scp host1:myfile host2:</code> is like running
<code>ssh host1 scp myfile host2:</code> so I still need the address of host2
and my ssh keyfile on host1.
</p>
<p>My inablility to let go of this small efficiency increaser, led me to
(what else?) write a Python script.
I know this is a hack so if you know of a better way of doing this, let
me know.
</p>
<p>The script parses my <code>~/.ssh/config</code> file to find
the ssh keyfile and address for host 2, uses scp to copy the ssh keyfile
to host 1, then runs the <code>ssh host1 scp ...</code> command with the
appropriate options filled in. The script captures all of the ssh
options for host 2 and passes them on the command line to <code>scp</code>
via the <code>-o</code> command-line option. Note: I only tested this
with the <code>User</code> option; I don't know if all ssh options
will work.
</p>
<p><em>Warning: the script disables the StrictHostKeyChecking SSH option,
so you are more vulnerable to a man-in-the-middle attack.</em></p>
<p><em>Update 2010-02-16:</em> I've found there is already a SSH config
file parser in the <a href="http://www.lag.net/paramiko/">paramiko</a>
library. The source can be viewed
<a href="http://github.com/robey/paramiko/blob/master/paramiko/config.py">
on github</a>.
</p>
<p><em>Update 2010-05-04:</em> I modified my code to use the paramiko
library and also allow command line options to be passed directly to
the scp command.
The latest code is available in my github repository
<a href="http://github.com/saltycrane/remote-tools">remote-tools</a>.
</p>
<pre class="python">import itertools
import os
import re
import sys
SSH_CONFIG_FILE = '/home/eliot/.ssh/config'
def main():
host1, path1 = sys.argv[1].split(':', 1)
host2, path2 = sys.argv[2].split(':', 1)
o = get_ssh_options(host2)
keyfile_remote = '/tmp/%s' % os.path.basename(o['identityfile'])
ssh_options = ' -o'.join(['='.join([k, v]) for k, v in o.iteritems()
if k != 'hostname' and k != 'identityfile'])
run('scp %s %s:%s' % (o['identityfile'], host1, keyfile_remote))
run('ssh %s scp -p -i %s -oStrictHostKeyChecking=no -o%s %s %s:%s' % (
host1, keyfile_remote, ssh_options, path1, o['hostname'], path2))
def get_ssh_options(host):
"""Parse ~/.ssh/config file and return a dict of ssh options for host
Note: dict keys are all lowercase
"""
def remove_comment(line):
return re.sub(r'#.*$', '', line)
def get_value(line, key_arg):
m = re.search(r'^\s*%s\s+(.+)\s*$' % key_arg, line, re.I)
if m:
return m.group(1)
else:
return ''
def not_the_host(line):
return get_value(line, 'Host') != host
def not_a_host(line):
return get_value(line, 'Host') == ''
lines = [line.strip() for line in file(SSH_CONFIG_FILE)]
comments_removed = [remove_comment(line) for line in lines]
blanks_removed = [line for line in comments_removed if line]
top_removed = list(itertools.dropwhile(not_the_host, blanks_removed))[1:]
goodpart = itertools.takewhile(not_a_host, top_removed)
return dict([line.lower().split(None, 1) for line in goodpart])
def run(cmd):
print cmd
os.system(cmd)
if __name__ == '__main__':
main()</pre>
<p>Here is an example <code>~/.ssh/config</code> file:</p>
<pre>Host testhost1
User root
Hostname 48.879.24.567
IdentityFile /home/eliot/.ssh/test_keyfile
Host testhost2
User root
Hostname 56.384.58.212
IdentityFile /home/eliot/.ssh/test_keyfile</pre>
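<p>For comparison, here is a Python 3 re-sketch of the script's <code>get_ssh_options</code> that parses config text passed in as a string, so it can be tried against a sample like the one above without touching the filesystem:</p>

```python
import itertools
import re

def get_ssh_options(config_text, host):
    """Return a dict of lowercase-keyed options for `host` (illustrative sketch)."""
    # strip comments and blank lines
    lines = [re.sub(r"#.*$", "", line).strip() for line in config_text.splitlines()]
    lines = [line for line in lines if line]

    def is_host_line(line, name=None):
        m = re.match(r"(?i)^host\s+(.+)$", line)
        if not m:
            return False
        return True if name is None else m.group(1).strip() == name

    # drop everything before our Host line, then take lines until the next Host
    after = itertools.dropwhile(lambda l: not is_host_line(l, host), lines)
    body = itertools.takewhile(lambda l: not is_host_line(l), list(after)[1:])
    return dict(line.lower().split(None, 1) for line in body)

sample = """\
Host testhost2
    User root
    Hostname 56.384.58.212
    IdentityFile /home/eliot/.ssh/test_keyfile
"""
print(get_ssh_options(sample, "testhost2")["hostname"])  # 56.384.58.212
```

Like the original, this lowercases the whole line, so option values (such as the keyfile path) come back lowercased too.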
<p>Here is an example run. It copies <code>/tmp/testfile</code> from
<code>testhost1</code> to the same path on <code>testhost2</code>.</p>
<pre>python scp_r2r.py testhost1:/tmp/testfile testhost2:/tmp/testfile</pre>
<p>Here is the console output:</p>
<pre style="overflow: auto">scp /home/eliot/.ssh/test_keyfile testhost1:/tmp/test_keyfile
test_keyfile 100% 1674 1.6KB/s 00:00
ssh testhost1 scp -p -i /tmp/test_keyfile -oStrictHostKeyChecking=no -ouser=root /tmp/testfile 56.384.58.212:/tmp/testfile</pre>
<p>One inconvenience is that it doesn't show the progress for the main transfer.
If anyone knows how I can fix this, please let me know.</p>
<h3>Wmii Python script to monitor remote machines</h3>
<p>2009-12-22 | <a href="https://www.saltycrane.com/blog/2009/12/wmii-python-script-monitor-remote-machines/">permalink</a></p>
<p>I like to monitor our web servers by ssh'ing into the remote machine and watching
"top", tailing log files, etc. Normally, I open a terminal, ssh into the
remote machine, run the monitoring command (e.g. "top"), then repeat for the rest
of the remote machines. Then I adjust the window sizes so I can see everything
at once.</p>
<p>My window manager, <a href="http://wmii.suckless.org/">wmii</a>, is great for
tiling a bunch of windows at once. It is also scriptable with Python, so I wrote a
Python script to create my web server monitoring view. Below is my script. I also put a
<a href="http://www.youtube.com/watch?v=jSes6lJdj5Y">video on YouTube</a>.
</p>
<pre class="python">#!/usr/bin/env python
import os
import time

NGINX_MONITOR_CMD = "tail --follow=name /var/log/nginx/cache.log | grep --color -E '(HIT|MISS|EXPIRED|STALE|UPDATING|\*\*\*)'"
APACHE_MONITOR_CMD = "top"
MYSQL_MONITOR_CMD = "mysqladmin extended -i10 -r | grep -i 'questions\|aborted_clients\|opened_tables\|slow_queries\|threads_created' "

CMDS_COL1 = ['urxvt -title "Nginx 1" -e ssh -t us-ng1 "%s" &' % NGINX_MONITOR_CMD,
             'urxvt -title "Nginx 2" -e ssh -t us-ng2 "%s" &' % NGINX_MONITOR_CMD,
             ]
CMDS_COL2 = ['urxvt -title "Apache 1" -e ssh -t us-med1 "%s" &' % APACHE_MONITOR_CMD,
             'urxvt -title "Apache 2" -e ssh -t us-med2 "%s" &' % APACHE_MONITOR_CMD,
             'urxvt -title "Apache 3" -e ssh -t us-med3 "%s" &' % APACHE_MONITOR_CMD,
             ]
CMDS_COL3 = ['urxvt -title "MySQL 1" -e ssh -t us-my1 "%s" &' % MYSQL_MONITOR_CMD,
             'urxvt -title "MySQL 2" -e ssh -t us-my2 "%s" &' % MYSQL_MONITOR_CMD,
             ]
COLUMNS = [CMDS_COL1, CMDS_COL2, CMDS_COL3]

def create_windows():
    for i, col in enumerate(COLUMNS):
        cindex = str(i+1)
        for cmd in col:
            os.system(cmd)
            time.sleep(1)
            os.system('wmiir xwrite /tag/sel/ctl send sel %s' % cindex)
        os.system('wmiir xwrite /tag/sel/ctl colmode %s default-max' % cindex)
    os.system('wmii.py 45.5 31.5 23')

if __name__ == '__main__':
    create_windows()</pre>
<p><em>Note 1:</em> The script above uses another
<a href="http://www.saltycrane.com/blog/2009/04/scripting-wmii-column-widths-python/">
script I wrote previously, <code>wmii.py</code>,</a> to set the column widths.
</p>
<p><em>Note 2:</em> The remote server addresses are specified by the nicknames
<em>us-ng1</em>, <em>us-ng2</em>, <em>us-med1</em>, etc. configured in my
<code>~/.ssh/config</code> file as described
<a href="http://www.saltycrane.com/blog/2008/11/creating-remote-server-nicknames-sshconfig/">here</a>.
</p>
<p><em>Note 3 (on using ssh and top):</em> I first tried doing <code>ssh host top</code>,
but this gave me a <em><code>TERM environment variable not set.</code></em> error.
I then tried <code>ssh host "export TERM=rxvt-unicode; top"</code>, but this gave me
a <em><code>top: failed tty get</code></em> error. The solution that worked for me was
to use the <em><code>-t</code></em> option with <code>ssh</code>. E.g.
<code>ssh -t host top</code>. This is what I used in the script above.
</p>
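<p>The command strings above are built with <code>%</code> formatting. A small helper (hypothetical, not part of the original script; it assumes Python 3 for <code>shlex.quote</code>) makes the pattern explicit:</p>

```python
import shlex

def monitor_command(title, host, remote_cmd):
    # Build the urxvt + ssh invocation used in the script above.
    # `ssh -t` forces pseudo-tty allocation so full-screen programs
    # like `top` work remotely (see Note 3); shlex.quote keeps the
    # remote command from breaking the outer shell string.
    return "urxvt -title {} -e ssh -t {} {} &".format(
        shlex.quote(title), shlex.quote(host), shlex.quote(remote_cmd))

print(monitor_command("Apache 1", "us-med1", "top"))
# urxvt -title 'Apache 1' -e ssh -t us-med1 top &
```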
<p><em>Note 4 (added 2010-03-05):</em> I used "tail --follow=name" instead of "tail -f"
so that tail will follow the log file even after it has been rotated. For more
information, see the
<a href="http://manpages.debian.net/cgi-bin/man.cgi?query=tail&apropos=0&sektion=0&manpath=Debian+5.0+lenny&format=html&locale=en">man
page for tail</a>.</p>
<p><em>Note 5 (added 2010-03-05):</em> To prevent your ssh session from timing out,
add the following 2 lines to your <code>~/.ssh/config</code> file
(<a href="http://ocaoimh.ie/how-to-fix-ssh-timeout-problems/">via</a>):</p>
<pre>Host *
    ServerAliveInterval 60</pre>
Notes on Python Fabric 0.9b1
2009-10-04T22:36:28-07:00https://www.saltycrane.com/blog/2009/10/notes-python-fabric-09b1/<p><a href="http://fabfile.org">Fabric</a> is a Python package used for
deploying websites or generally running commands on a remote server.
I first used Fabric about a <a href="/blog/2008/09/notes-python-deployment-using-fabric/">
year ago</a> and thought it was great. Since then, Fabric has gained a
<a href="http://lists.gnu.org/archive/html/fab-user/2009-04/msg00041.html">
new maintainer</a>, a <a href="http://www.nongnu.org/fab/">new domain</a>,
and a few <a href="http://git.fabfile.org/cgit.cgi/fabric/refs/">new revisions</a>.
</p>
<p>Here are my notes on installing the latest stable version (0.9b1) on Ubuntu Jaunty and
running a simple example.</p>
<h4>Install Fabric 0.9b1</h4>
<ul>
<li>Install Easy Install & pip
<pre>sudo apt-get install python-setuptools python-dev build-essential</pre>
<pre>sudo easy_install -U pip</pre>
</li>
<li><p>Install Fabric</p>
<p>Note: According to the Fabric website, the latest version of the prerequisite Python library,
<a href="http://www.lag.net/paramiko/">Paramiko</a>, has a bug, so it is recommended
to install the previous version, 1.7.4, instead. This can be accomplished by creating
a requirements file for pip:</p>
<pre>http://www.lag.net/paramiko/download/paramiko-1.7.4.tar.gz
http://git.fabfile.org/cgit.cgi/fabric/snapshot/fabric-0.9b1.tar.gz</pre>
<p>To install, use the <code>pip install</code> command with the <code>-r</code> option
and the path to your requirements file. For convenience, you can install Fabric
using my requirements file:
</p>
<pre>sudo pip install -r http://www.saltycrane.com/site_media/code/fabric-requirements.txt</pre>
</li>
</ul>
<h4>Using Fabric</h4>
<ul>
<li>Create a file called fabfile.py in ~/myproject:
<pre class="python">from __future__ import with_statement  # needed for python 2.5
from fabric.api import env, run

def ec2():
    env.hosts = ['ec2-65-234-55-183.compute-1.amazonaws.com']
    env.user = 'saltycrane'
    env.key_filename = '/path/to/my/id_ssh_keyfile'

def ps_apache():
    run('ps -e -O rss,pcpu | grep apache')</pre>
</li>
<li>Run it
<pre>cd ~/myproject
fab ec2 ps_apache</pre>
<p>Results:</p>
<pre>[ec2-65-234-55-183.compute-1.amazonaws.com] run: ps -e -O rss,pcpu | grep apache
[ec2-65-234-55-183.compute-1.amazonaws.com] err: stdin: is not a tty
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 3571 10996 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 5047 28352 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 5048 27756 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 5049 23752 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 5050 27344 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 5055 27344 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 5166 28404 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 5167 27900 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 9365 1208 0.0 S ? 00:00:00 /bin/bash -l -c ps -e -O rss,pcpu | grep apache
Done.
Disconnecting from ec2-65-234-55-183.compute-1.amazonaws.com... done.</pre>
</li>
</ul>
<h4>List of available <code>env</code> options</h4>
<p>I extracted this list from
<a href="http://git.fabfile.org/cgit.cgi/fabric/tree/fabric/state.py?id=aabdc44f73063fe3787b00ce2119ba43886cc007"><code>state.py</code> (0.9b1)</a>.
Or view the <a href="http://git.fabfile.org/cgit.cgi/fabric/tree/fabric/state.py">tip version</a>.
</p>
<pre class="python">env.reject_unknown_hosts = True # reject unknown hosts
env.disable_known_hosts = True # do not load user known_hosts file
env.user = 'username' # username to use when connecting to remote hosts
env.password = 'mypassword' # password for use with authentication and/or sudo
env.hosts = ['host1.com', 'host2.com'] # comma-separated list of hosts to operate on
env.roles = ['web'] # comma-separated list of roles to operate on
env.key_filename = 'id_rsa' # path to SSH private key file. May be repeated.
env.fabfile = '../myfabfile.py' # name of fabfile to load, e.g. 'fabfile.py' or '../other.py'
env.warn_only = True # warn, instead of abort, when commands fail
env.shell = '/bin/sh' # specify a new shell, defaults to '/bin/bash -l -c'
env.rcfile = 'myfabconfig' # specify location of config file to use
env.hide = ['everything'] # comma-separated list of output levels to hide
env.show = ['debug'] # comma-separated list of output levels to show
env.version = '1.0'
env.sudo_prompt = 'sudo password:'
env.use_shell = False
env.roledefs = {'web': ['www1', 'www2', 'www3'],
                'dns': ['ns1', 'ns2'],
                }
env.cwd = 'mydir'</pre>
<h4>How to check the status code of a command</h4>
<p>To check the return code of your command, set the <code>env.warn_only</code> option to <code>True</code> and check the
<code>return_code</code> attribute of the object returned by <code>run()</code>.
For example:
</p>
<pre class="python">def ec2():
    env.hosts = ['ec2-65-234-55-183.compute-1.amazonaws.com']
    env.user = 'saltycrane'
    env.key_filename = '/path/to/my/id_ssh_keyfile'
    env.warn_only = True

def getstatus():
    output = run('ls non_existent_file')
    print 'output:', output
    print 'failed:', output.failed
    print 'return_code:', output.return_code</pre>
<pre>fab ec2 getstatus</pre>
<pre>[ec2-65-234-55-183.compute-1.amazonaws.com] run: ls non_existent_file
[ec2-65-234-55-183.compute-1.amazonaws.com] err: ls: cannot access non_existent_file: No such file or directory
Warning: run() encountered an error (return code 2) while executing 'ls non_existent_file'
output:
failed: True
return_code: 2
Done.
Disconnecting from ec2-65-234-55-183.compute-1.amazonaws.com... done.</pre>
<h4>Links</h4>
<ul>
<li><a href="http://git.fabfile.org/cgit.cgi/fabric/tree/">Browse Fabric source code</a></li>
<li><a href="http://lists.gnu.org/archive/html/fab-user/">Fabric mailing list</a></li>
<li><a href="http://docs.fabfile.org/0.9/usage.html">Using Fabric documentation</a></li>
</ul>
<ul>
<li><a href="http://morethanseven.net/2009/07/27/fabric-django-git-apache-mod_wsgi-virtualenv-and-p/">
Fabric, Django, Git, Apache, mod_wsgi, virtualenv and pip deployment</a> <em>(uses
a previous version of Fabric)</em>
</li>
<li><a href="http://gist.github.com/158177">Fabfile.py from the above article
updated for Fabric 0.9/1.0</a>
</li>
</ul>
<h4>Other notes</h4>
<ul>
<li><p>Error message: <code>paramiko.SSHException: Channel closed.</code></p>
<p>Try using Paramiko version 1.7.4 instead of 1.7.5.
See <a href="http://www.mail-archive.com/fab-user@nongnu.org/msg00844.html">
http://www.mail-archive.com/fab-user@nongnu.org/msg00844.html</a>.
</p>
</li>
<li>How to check the version of Paramiko:
<pre>$ python
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko
>>> paramiko.__version__
'1.7.5 (Ernest)'</pre>
</li>
<li><p>Error message: <code>Fatal error: No existing session</code></p>
<p>This occurred when I used the wrong username.</p>
</li>
</ul>
Creating remote server nicknames with .ssh/config
2008-11-20T15:01:35-08:00https://www.saltycrane.com/blog/2008/11/creating-remote-server-nicknames-sshconfig/<p>Using the <code>~/.ssh/config</code> file is an easy way to give your
remote machines nicknames and reduce the number of keystrokes needed
to log in with <code>ssh</code>, <code>rsync</code>,
<code>hg push</code>/<code>pull</code>/<code>clone</code>, access
files via <a href="http://www.gnu.org/software/tramp/">Emacs Tramp</a>
(Transparent Remote (file) Access, Multiple Protocol),
or use any other SSH-based tool. You can also set other ssh options such
as <code>IdentityFile</code>, <code>Port</code>, or <code>CompressionLevel</code>.
For more information and a full list of options, check out the man page for <code>ssh_config</code> or
<a href="http://kimmo.suominen.com/docs/ssh/#config">this article</a>
by Kimmo Suominen.
</p>
<p>Here is part of my <code>~/.ssh/config</code> file. It defines the
nicknames turk, tyran, tuna, and tally for some EC2 servers I've been
working with.</p>
<pre>Host turk
    User root
    HostName ec2-67-202-21-122.compute-1.amazonaws.com

Host tuna
    User root
    HostName ec2-75-101-178-62.compute-1.amazonaws.com

Host tyran
    User root
    HostName ec2-67-202-43-207.compute-1.amazonaws.com

Host tally
    User root
    HostName ec2-67-202-59-207.compute-1.amazonaws.com</pre>
<p>Now, wherever I would normally have typed
<code>root@ec2-67-202-21-122.compute-1.amazonaws.com</code>,
I can just type <code>turk</code>. Here are some examples.</p>
<h4>SSH login</h4>
<p>Old way:</p>
<pre>ssh root@ec2-67-202-21-122.compute-1.amazonaws.com</pre>
<p>New way:</p>
<pre>ssh turk</pre>
<h4>rsync</h4>
<p>Old way:</p>
<pre>rsync -avz myproject root@ec2-67-202-21-122.compute-1.amazonaws.com:/srv</pre>
<p>New way:</p>
<pre>rsync -avz myproject turk:/srv</pre>
<h4>Mercurial</h4>
<p>Old way:</p>
<pre>hg push ssh://root@ec2-67-202-21-122.compute-1.amazonaws.com//srv/myproject</pre>
<p>New way:</p>
<pre>hg push ssh://turk//srv/myproject</pre>
<h4>Emacs Tramp</h4>
<p>To use your <code>~/.ssh/config</code> with Emacs Tramp, you will need
something like the following in your <code>.emacs</code>:</p>
<pre>(tramp-set-completion-function "ssh"
  '((tramp-parse-sconfig "/etc/ssh_config")
    (tramp-parse-sconfig "~/.ssh/config")))</pre>
<p>Old way:</p>
<pre>C-x C-f /root@ec2-67-202-21-122.compute-1.amazonaws.com:/srv/myproject/myfile.py</pre>
<p>New way:</p>
<pre>C-x C-f /turk:/srv/myproject/myfile.py</pre>
<h4>scp</h4>
<p>Old way:</p>
<pre>scp etc/.screenrc root@ec2-67-202-21-122.compute-1.amazonaws.com:/root</pre>
<p>New way:</p>
<pre>scp etc/.screenrc turk:/root</pre>
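<p>The nickname lookup that <code>ssh</code> performs against <code>~/.ssh/config</code> can be sketched as a minimal parser (illustrative only; real <code>ssh_config</code> parsing also handles wildcards, <code>Match</code> blocks, and many more options):</p>

```python
def parse_ssh_config(text):
    # Minimal sketch: map each Host alias to a dict of its options.
    hosts, current = {}, None
    for line in text.splitlines():
        parts = line.split(None, 1)
        if len(parts) < 2:
            continue  # skip blank/malformed lines
        key, value = parts[0], parts[1].strip()
        if key == "Host":
            current = hosts.setdefault(value, {})
        elif current is not None:
            current[key] = value
    return hosts

config = """\
Host turk
    User root
    HostName ec2-67-202-21-122.compute-1.amazonaws.com
"""
print(parse_ssh_config(config)["turk"]["HostName"])
# ec2-67-202-21-122.compute-1.amazonaws.com
```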
Notes on Python deployment using Fabric
2008-09-28T00:24:21-07:00https://www.saltycrane.com/blog/2008/09/notes-python-deployment-using-fabric/<p>I found out about
<a href="http://www.nongnu.org/fab/">Fabric</a> via Armin Ronacher's article
<a href="http://lucumr.pocoo.org/cogitations/2008/07/17/deploying-python-web-applications/">
Deploying Python Web Applications</a>.
Fabric is a
<a href="http://www.capify.org/">Capistrano</a>-inspired
deployment tool for the Python community. It is very simple
to use. There are four main commands: <code>local</code> runs a
command on the local machine, much like <code>os.system</code>;
<code>run</code> and <code>sudo</code> run a command on a remote
machine, as a normal user or as root, respectively; and
<code>put</code> transfers a file to a remote
machine.</p>
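<p><code>local</code> is close to <code>os.system</code>; a rough local-only analogue (a sketch, not Fabric's actual implementation) can be written with the modern <code>subprocess</code> module:</p>

```python
import subprocess

def local(command):
    # Rough analogue of Fabric's `local`: run a shell command on the
    # local machine, returning its stripped stdout and exit status.
    # Fabric's `run`/`sudo` do the same over an SSH channel instead.
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    return result.stdout.strip(), result.returncode

out, status = local("echo hello")
print(out, status)  # hello 0
```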
<p>Here is a sample setup which displays information about
the Apache processes on my remote EC2 instance.
</p>
<ul>
<li><a href="http://www.saltycrane.com/blog/2007/01/how-to-install-easy-install-for-python/">
Install Easy Install</a></li>
<li>Install Fabric
<pre>$ sudo easy_install Fabric</pre></li>
<li>Create a file called <code>fabfile.py</code> located at <code>~/myproject</code>
<pre class="python">def ec2():
    set(fab_hosts = ['ec2-65-234-55-183.compute-1.amazonaws.com'],
        fab_user = 'sofeng',
        fab_password = 'mypassword',)

def ps_apache():
    run("ps -e -O rss,pcpu | grep apache")</pre>
Note: for security reasons, you can remove the password from the fabfile and
Fabric will prompt for it interactively. Per
<a href="http://www.nongnu.org/fab/user_guide.html">the documentation</a>,
Fabric also supports key-based authentication.<br><br>
</li>
<li>Run it
<pre>$ cd ~/myproject
$ fab ec2 ps_apache</pre>
Results:
<pre> Fabric v. 0.0.9, Copyright (C) 2008 Christian Vest Hansen.
Fabric comes with ABSOLUTELY NO WARRANTY; for details type `fab warranty'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `fab license' for details.
Running ec2...
Running ps_apache...
Logging into the following hosts as sofeng:
ec2-65-234-55-183.compute-1.amazonaws.com
[ec2-65-234-55-183.compute-1.amazonaws.com] run: ps -e -O rss,pcpu | grep apache
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2163 5504 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2520 15812 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2521 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2522 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2523 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2524 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2619 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2629 1204 0.0 R ? 00:00:00 /bin/bash -l -c ps -e -O rss,pcpu | grep apache
Done.</pre>
</li>
</ul>