SaltyCrane Blog — Notes on Python and web development on Ubuntu Linux

When is the try-finally block used in Python?

The finally block is used to define clean-up actions. Why is the finally block needed? Why can't the clean-up actions simply be put after the try/except/else block? That works in some cases, but if there is a return, break, or continue statement, or an unhandled exception, inside the try, except, or else clauses, the code after the block will never be executed. The finally block executes even in these cases.

try:
    print 'Inside try'
    raise Exception
finally:
    print 'Inside finally'
print 'Never get here'

Results:

Inside try
Inside finally
Traceback (most recent call last):
  File "tmp.py", line 13, in 
    raise Exception
Exception
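
Here's another minimal sketch (not from the original post) showing that finally also runs when the try block returns:

def divide(x, y):
    try:
        return x / y
    finally:
        print 'Inside finally'

print divide(10, 2)

Results:

Inside finally
5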

Reference: http://docs.python.org/2/tutorial/errors.html#defining-clean-up-actions

Using Python's gzip and StringIO to compress data in memory

I needed to gzip some data in memory that would eventually end up saved to disk as a .gz file. I thought, "That's easy, just use Python's built-in gzip module."

However, I needed to pass the data to pycurl as a file-like object. I didn't want to write the data to disk and then read it again just to pass to pycurl. I thought, "That's easy also -- just use Python's cStringIO module."

The solution did end up being simple, but figuring out the solution was a lot harder than I thought. Below is my roundabout process of finding the simple solution.

Here is my setup/test code. I am running Python 2.7.3 on Ubuntu 12.04.

import cStringIO
import gzip


STUFF_TO_GZIP = """Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui dolorem eum fugiat quo voluptas nulla pariatur?"""
FILENAME = 'myfile.json.gz'


def pycurl_simulator(fileobj):

    # Get the file size
    fileobj.seek(0, 2)
    filesize = fileobj.tell()
    fileobj.seek(0, 0)

    # Read the file data
    fout = open(FILENAME, 'wb')
    fout.write(fileobj.read())
    fout.close()

    return filesize

Try 1: seek from the end fails

Here is my first attempt using cStringIO with the gzip module.

def try1_seek_from_end_fails():

    ftemp = cStringIO.StringIO()
    fgzipped = gzip.GzipFile(
        filename=FILENAME, mode='wb', fileobj=ftemp)
    fgzipped.write(STUFF_TO_GZIP)
    filesize = pycurl_simulator(fgzipped)
    print filesize

I got this exception:

        Traceback (most recent call last):
          File "tmp.py", line 232, in <module>
            try1_seek_from_end_fails()
          File "tmp.py", line 83, in try1_seek_from_end_fails
            filesize = pycurl_simulator(fgzipped)
          File "tmp.py", line 25, in pycurl_simulator
            fileobj.seek(0, 2)
          File "/usr/lib/python2.7/gzip.py", line 415, in seek
            raise ValueError('Seek from end not supported')
        ValueError: Seek from end not supported

It turns out the gzip object doesn't support seeking from the end. See this thread on the Python mailing list: http://mail.python.org/pipermail/python-list/2009-January/519398.html

Try 2: data is not compressed

What if we don't seek() from the end and just tell() where we are? (It should be at the end after doing a write(), right?) Unfortunately, this gave me the uncompressed size.

Reading from the GzipFile object also gave me an error saying that I couldn't read from a writable object.

def try2_data_is_not_compressed():

    ftemp = cStringIO.StringIO()
    fgzipped = gzip.GzipFile(
        filename=FILENAME, mode='wb', fileobj=ftemp)
    fgzipped.write(STUFF_TO_GZIP)
    filesize = fgzipped.tell()
    print filesize

Try 5: file much too small

I googled, then looked at the source code for gzip.py. I found that the compressed data was in the StringIO object. So I performed my file operations on it instead of the GzipFile object. Now I was able to write the data out to a file. However, the size of the file was much too small.

def try5_file_much_too_small():

    fgz = cStringIO.StringIO()
    gzip_obj = gzip.GzipFile(
        filename=FILENAME, mode='wb', fileobj=fgz)
    gzip_obj.write(STUFF_TO_GZIP)
    filesize = pycurl_simulator(fgz)
    print filesize

Try 6: unexpected end of file

I saw there was a flush() method in the source code, so I added a call to flush(). This time I got a reasonable file size; however, when trying to gunzip the file from the command line, I got the following error:

        gzip: myfile.json.gz: unexpected end of file

def try6_unexpected_end_of_file():

    fgz = cStringIO.StringIO()
    gzip_obj = gzip.GzipFile(
        filename=FILENAME, mode='wb', fileobj=fgz)
    gzip_obj.write(STUFF_TO_GZIP)
    gzip_obj.flush()
    filesize = pycurl_simulator(fgz)
    print filesize

Try 7: got it working

I knew that GzipFile worked properly when writing files directly as opposed to reading from the StringIO object. It turns out the difference was that there was code in the close() method of GzipFile which wrote some extra required data. Now stuff was working.

def try7_got_it_working():

    fgz = cStringIO.StringIO()
    gzip_obj = gzip.GzipFile(
        filename=FILENAME, mode='wb', fileobj=fgz)
    gzip_obj.write(STUFF_TO_GZIP)
    gzip_obj.flush()

    # Do stuff that GzipFile.close() does
    gzip_obj.fileobj.write(gzip_obj.compress.flush())
    gzip.write32u(gzip_obj.fileobj, gzip_obj.crc)
    gzip.write32u(gzip_obj.fileobj, gzip_obj.size & 0xffffffffL)

    filesize = pycurl_simulator(fgz)
    print filesize

Try 8: (not really) final version

Here's the (not really) final version using a subclass of GzipFile that adds a method to write the extra data at the end. It also overrides close() so that the data isn't written twice in case you need to call close(). Also, the separate flush() call is no longer needed.

def try8_not_really_final_version():

    class MemoryGzipFile(gzip.GzipFile):
        """
        A GzipFile subclass designed to be used with in-memory file-like
        objects, i.e. StringIO objects.
        """

        def write_crc_and_filesize(self):
            """
            Flush and write the CRC and filesize. Normally this is done
            in the close() method. However, for in memory file objects,
            doing this in close() is too late.
            """
            self.fileobj.write(self.compress.flush())
            gzip.write32u(self.fileobj, self.crc)
            # self.size may exceed 2GB, or even 4GB
            gzip.write32u(self.fileobj, self.size & 0xffffffffL)

        def close(self):
            if self.fileobj is None:
                return
            self.fileobj = None
            if self.myfileobj:
                self.myfileobj.close()
                self.myfileobj = None

    fgz = cStringIO.StringIO()
    gzip_obj = MemoryGzipFile(
        filename=FILENAME, mode='wb', fileobj=fgz)
    gzip_obj.write(STUFF_TO_GZIP)
    gzip_obj.write_crc_and_filesize()

    filesize = pycurl_simulator(fgz)
    print filesize

Try 9: didn't need to do that (final version)

It turns out I can close the GzipFile object and the StringIO object remains available. So that MemoryGzipFile class above is completely unnecessary. I am dumb. Here is the final iteration:

def try9_didnt_need_to_do_that():

    fgz = cStringIO.StringIO()
    gzip_obj = gzip.GzipFile(
        filename=FILENAME, mode='wb', fileobj=fgz)
    gzip_obj.write(STUFF_TO_GZIP)
    gzip_obj.close()

    filesize = pycurl_simulator(fgz)
    print filesize
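
To sanity-check the result, here is a small sketch (not part of the original post) that rewinds the StringIO buffer, decompresses it, and compares it to the original data:

def verify_roundtrip():
    fgz = cStringIO.StringIO()
    gzip_obj = gzip.GzipFile(
        filename=FILENAME, mode='wb', fileobj=fgz)
    gzip_obj.write(STUFF_TO_GZIP)
    gzip_obj.close()

    # Rewind the in-memory buffer and decompress it
    fgz.seek(0)
    restored = gzip.GzipFile(fileobj=fgz, mode='rb').read()
    assert restored == STUFF_TO_GZIP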


How to start a long-running process in screen and detach from it

Here's how to start a long-running process in screen, detach from it, and reattach to it later.

Start a long running process in screen and detach

  • Ssh to the remote host, myremote:
    eliot@mylocal:~$ ssh myremote 
    
  • Start a new screen session
    eliot@myremote:~$ screen 
    
  • Start a long running process, "sleep 3600":
    eliot@myremote:~$ sleep 3600 
    
  • Detach from the screen session:
    eliot@myremote:~$ CTRL-A : detach ENTER 
    
    (Hit [CTRL-A], then type a colon character, then type "detach", then hit [ENTER])
  • Exit your remote SSH session:
    eliot@myremote:~$ exit 
    

Reattach to the existing screen session

  • Ssh to the remote host again:
    eliot@mylocal:~$ ssh myremote 
    
  • List your active screen sessions:
    eliot@myremote:~$ screen -ls 
    There is a screen on:
    	11518.pts-1.myremote	(Detached)
    1 Socket in /var/run/screen/S-eliot.
    
  • Reattach to your screen session:
    eliot@myremote:~$ screen -RD 
    
    Note: you don't actually have to use the -RD option. You could use -rD or -r. But I just use -RD all the time. If there is more than one screen session active you will have to say: screen -RD 11518.pts-1.myremote or whichever screen session you want to attach to.
  • It will show you the "sleep 3600" command running. To exit, CTRL-C the sleep process, type "exit" to exit the screen session, and "exit" again to exit the SSH session.

How to use pip with crate.io

Here's how to use pip with crate.io (in case pypi.python.org goes down):
$ pip install --index-url=https://simple.crate.io yolk 
Or with logging to see what's happening:
$ pip install --log=my-pip-debug.log --index-url=https://simple.crate.io yolk 
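
If you want crate.io to be the default index without typing --index-url every time, you can also set it in pip's config file (a sketch, assuming pip reads ~/.pip/pip.conf on Ubuntu):

# ~/.pip/pip.conf
[global]
index-url = https://simple.crate.io

Then a plain "pip install yolk" will use crate.io.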

How to run a Django local development server on a remote machine and access it in your browser on your local machine using SSH port forwarding

Here is how to run a Django local development server on a remote machine and access it in your browser on your local machine using SSH port forwarding. (This is useful if there is a firewall blocking access to the port of your Django local dev server, port 8000.)

  1. On the local host, SSH to the remote host:
    $ ssh -v -L 9000:localhost:8000 eliot@my.remotehost.com 
    
  2. On the remote host, run the Django dev server:
    eliot@my.remotehost.com:/path/to/my/django/project$ python manage.py runserver 0.0.0.0:8000 
    
  3. On the local host, go to http://localhost:9000 in the browser

Note: The local port and the remote port can be the same (i.e. you can use 8000 instead of 9000). I just made them different to show which port is which.

Using LocalForward in your ~/.ssh/config

You can also achieve the same results by using the LocalForward option in your ~/.ssh/config file:

Host myremote
  User eliot
  HostName my.remotehost.com
  LocalForward 9000 localhost:8000

Reference

http://magazine.redhat.com/2007/11/06/ssh-port-forwarding/

Testing HTTPS w/ Flask's development server using stunnel on Ubuntu

Our website is served over HTTPS. To more easily test certain issues (e.g. mixed mode content warnings, or Mapquest SSL tile servers), I wanted to access my Flask local development server over HTTPS. These two articles describe how to do this using stunnel: Testing HTTPS with Django's Development Server, Django Development Server with HTTPS. Using stunnel, you can hit pages on your Django/Flask local dev server over HTTPS instead of HTTP. Here is how I installed it on Ubuntu Precise 12.04:

  • Install SSL development files
    $ sudo apt-get install libssl-dev 
    
  • Go to https://www.stunnel.org/downloads.html and download stunnel-4.54.tar.gz
  • Unpack, compile, install.
    $ tar xvf stunnel-4.54.tar.gz 
    $ cd stunnel-4.54 
    $ ./configure --prefix=/home/saltycrane/lib/stunnel-4.54 
    $ make 
    $ make install 
    NOTE: the make install step asked me a number of questions and created a certificate file at /home/saltycrane/lib/stunnel-4.54/etc/stunnel/stunnel.pem. Accept all the defaults for the certificate information (accurate certificate information isn't needed for this application).
  • Create a stunnel configuration file, /home/saltycrane/lib/stunnel-4.54/etc/stunnel/dev_https:
    pid =
    cert = /home/saltycrane/lib/stunnel-4.54/etc/stunnel/stunnel.pem
    debug = 7
    foreground = yes
    
    [https]
    accept = 7000
    connect = 5000
  • Start stunnel:
    $ /home/saltycrane/lib/stunnel-4.54/bin/stunnel /home/saltycrane/lib/stunnel-4.54/etc/stunnel/dev_https
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Clients allowed=500
    2012.10.17 17:40:52 LOG5[12468:140357811214080]: stunnel 4.54 on x86_64-unknown-linux-gnu platform
    2012.10.17 17:40:52 LOG5[12468:140357811214080]: Compiled/running with OpenSSL 1.0.1 14 Mar 2012
    2012.10.17 17:40:52 LOG5[12468:140357811214080]: Threading:PTHREAD SSL:+ENGINE+OCSP Auth:none Sockets:POLL+IPv6
    2012.10.17 17:40:52 LOG5[12468:140357811214080]: Reading configuration from file /home/saltycrane/lib/stunnel-4.54/etc/stunnel/dev_https
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Compression not enabled
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Snagged 64 random bytes from /home/saltycrane/.rnd
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Wrote 1024 new random bytes to /home/saltycrane/.rnd
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: PRNG seeded successfully
    2012.10.17 17:40:52 LOG6[12468:140357811214080]: Initializing service [https]
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Certificate: /home/saltycrane/lib/stunnel-4.54/etc/stunnel/stunnel.pem
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Certificate loaded
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Key file: /home/saltycrane/lib/stunnel-4.54/etc/stunnel/stunnel.pem
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Private key loaded
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Using DH parameters from /home/saltycrane/lib/stunnel-4.54/etc/stunnel/stunnel.pem
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: DH initialized with 1024-bit key
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: ECDH initialized with curve prime256v1
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: SSL options set: 0x00000004
    2012.10.17 17:40:52 LOG5[12468:140357811214080]: Configuration successful
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: Service [https] (FD=7) bound to 0.0.0.0:7000
    2012.10.17 17:40:52 LOG7[12468:140357811214080]: No pid file being created
    
  • Start the python dev server (a minimal Flask sketch of such a script is shown after this list):
    $ HTTPS=1 python bin/runserver.py 0.0.0.0 5000 
  • Go to https://localhost:7000 in your browser
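
The bin/runserver.py above (and its HTTPS=1 environment variable) is specific to my project. A minimal Flask equivalent, just a sketch and not the actual script, that listens on the host and port given on the command line might look like this:

import sys

from flask import Flask

app = Flask(__name__)


@app.route('/')
def index():
    return 'Hello over stunnel'


if __name__ == '__main__':
    # e.g. python runserver.py 0.0.0.0 5000
    host = sys.argv[1]
    port = int(sys.argv[2])
    app.run(host=host, port=port)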

Setting up a Linux DVR w/ MythTV, Ubuntu 12.04, and a Hauppauge WinTV-HVR 1250 TV tuner card

Setting up MythTV involves a little pain, but once it's set up, it's pretty great. And you don't have to spend lots of money on a DVR from the cable company. With my modest hardware specs, playback is smooth and clear; however, Picture in Picture is too jittery to be useful. Here's what I did to get my MythTV DVR running on my Ubuntu machine.

Install the Hauppauge WinTV-HVR 1250 TV tuner card

Put the card in the computer. Connect the TV antenna to the card.

Check the TV tuner card is recognized

Ubuntu 12.04 includes drivers for the Hauppauge 1250 TV tuner card, so I did not need to install any drivers.

$ cat /var/log/dmesg
[   15.211985] cx23885 driver version 0.0.3 loaded
[   15.214279] cx23885 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
[   15.214492] CORE cx23885[0]: subsystem: 0070:2259, board: Hauppauge WinTV-HVR1255 [card=20,autodetected]
[   15.214600] IR NEC protocol handler initialized
[   15.230936] IR RC5(x) protocol handler initialized
[   15.235576] MCE: In-kernel MCE decoding enabled.
[   15.237132] IR RC6 protocol handler initialized
[   15.237703] EDAC MC: Ver: 2.1.0
[   15.238256] AMD64 EDAC driver v3.4.0
[   15.242493] IR JVC protocol handler initialized
[   15.246743] IR Sony protocol handler initialized
[   15.250908] IR MCE Keyboard/mouse protocol handler initialized
[   15.256862] lirc_dev: IR Remote Control driver registered, major 250 
[   15.257125] IR LIRC bridge handler initialized
[   15.284735] lp0: using parport0 (interrupt-driven).
[   15.361892] tveeprom 0-0050: Hauppauge model 22111, rev E2F5, serial# 8323201
[   15.361895] tveeprom 0-0050: MAC address is 00:0d:fe:7f:00:81
[   15.361897] tveeprom 0-0050: tuner model is NXP 18271C2 (idx 155, type 54)
[   15.361899] tveeprom 0-0050: TV standards NTSC(M) ATSC/DVB Digital (eeprom 0x88)
[   15.361901] tveeprom 0-0050: audio processor is CX23888 (idx 40)
[   15.361903] tveeprom 0-0050: decoder processor is CX23888 (idx 34)
[   15.361904] tveeprom 0-0050: has no radio, has IR receiver, has no IR transmitter
[   15.361906] cx23885[0]: hauppauge eeprom: model=22111
[   15.361909] cx23885_dvb_register() allocating 1 frontend(s)

Install MythTV

$ sudo apt-get install mythtv

Set up the MythTV backend

Run mythtv-setup to select your TV tuner card and scan for channels.

$ mythtv-setup

Click "Yes" to add your user to the "mythtv" group.

Click "Yes" to restart your login session.

Change the following options:

  • 2. Capture cards -> (New capture card) -> Card type: DVB DTV capture card (v3.x) -> Finish
  • 4. Video sources -> (New video source) -> Video source name: FOOBAR, Listings grabber: North America (SchedulesDirect.org) (Internal), User ID: blank, Pass: blank
  • 5. Input connections -> [DVB: /dev/dvb/adapter0/frontend0] -> Video source: FOOBAR -> Scan for channels

After running mythtv-setup, it will ask you if you want to start the backend and if you want to run mythfilldatabase. Select yes to both. Running mythfilldatabase may take a while.

Ensure the mythtv backend is running

After running mythtv-setup, the mythtv backend should start running.

To check that the backend is running, run:

$ ps -ef | grep myth

If the mythtv backend is not running, start it using the following command:

$ sudo service mythtv-backend start

Troubleshooting mythbackend

If mythbackend doesn't stay running, there may be some configuration that is broken. Check /var/log/syslog. If that does not have enough information, run the backend with the --verbose option:

$ mythbackend --verbose

Run the MythTV frontend

$ mythfrontend

Some keyboard shortcuts

  • P - pause/play
  • SPACE - set/clear bookmark
  • LEFT/RIGHT ARROW - skip back/forward
  • M - menu
  • D - delete

Other stuff

  • You may want to change the theme. I chose the TintedGlass 2.43 theme.
  • To get schedule information, I ended up signing up for a membership at www.schedulesdirect.org. It is $25/year (or ~$2/month). It seems to be the recommended way to get schedule information.

How to run mythfrontend on another Ubuntu laptop connected to your LAN (Added 2013-06-07)


Since MythTV has a flexible client/server architecture, you can run the MythTV backend server on one machine and access it from multiple other machines running a MythTV frontend. These steps assume the remote frontend is running on a laptop with Ubuntu 12.04 that is connected to your local network (LAN), not over the internet (though that is possible).

UPDATE: Playing 1080p HD content over my $30 Belkin G wireless router (rated at 54 Mbps) had occasional stalls in the playback. Repositioning my router helped, but after a couple days, I decided to order a Netgear N600 Wireless-N Dual Band Router. Hopefully this will solve my problem.

On the Mythtv backend server configured above:

  • Determine the IP address of the Mythtv backend server by running ifconfig
    $ ifconfig 
    For me, it is 192.168.2.2. This will be used in the steps below.
  • Follow the instructions here: http://www.mythtv.org/wiki/Mythfrontend
    • Edit /etc/mysql/my.cnf so that the bind-address line is commented out:
      #bind-address 127.0.0.1
    • Allow remote users access to the database. Note: replace "mypassword" with the value found in ~/.mythtv/mysql.txt.
      $ mysql -u root
      mysql> grant all on mythconverg.* to 'mythtv'@'%' identified by 'mypassword';
      mysql> flush privileges;
      mysql> exit
    • Restart mysql server:
      $ sudo service mysql restart 
  • Ensure mythbackend is not using 127.0.0.1.
    • Run mythtv-setup:
      $ mythtv-setup 
    • Change the IP address from 127.0.0.1 to 192.168.2.2 (or the IP address you determined above).

On the laptop:

  • $ sudo apt-get install mythtv-frontend 
  • click "yes" to be added to the mythtv group
  • click "yes" to restart your session
  • click "OK" to the msg about logging out of your session
  • logout and login again
  • Run mythfrontend
    $ mythfrontend
  • For the hostname: enter the IP address of the Mythtv server. For me it is 192.168.2.2.
  • Enter the MySQL password. This can be found in ~/.mythtv/mysql.txt on the MythTV server machine. Or you can check the settings of the mythfrontend running on the server machine.

How to watch your recorded videos on your Android phone over the internet


  • This method uses the MythTV Services API
  • PC: Set up a SSH server on your MythTV backend server
  • PC: Get the external IP address of your MythTV backend server
    $ curl http://ifconfig.me
    111.222.333.444 
  • Android: Install Connectbot on your Android phone and enable port forwarding of 6544. For more info see: http://parker1.co.uk/mythtv_ssh.php
    • Android: Using Connectbot, connect to your MythTV server using the IP address from above (111.222.333.444)
    • Android: Menu -> Port Forwards -> Menu -> Add port forward:
      • Nickname: mythtv
      • Type: local
      • Source port: 6544
      • Destination: localhost:6544
    • Android: Disconnect and reconnect
  • Android: Install and set up MythTV Android Frontend
    • Android: Touch the settings icon -> Away Profiles
      • Name: Away
      • MythTV Master Backend Address: http://localhost:6544/
      Save
    • Android: Away -> Recordings -> Select a show to watch -> watch it


How to control your DVR from your Android phone

  • Configure your MythTv Frontend on your PC:
    • Setup -> General -> Hit "Next" 6 times -> Check "Enable Network Remote Control"
    • Setup -> Appearance -> Hit "Next" 3 times and
      • Check "Enable LCD device"
      • Check "Display time"
      • Check "Display menus"
      • Check "Display music arstist and title"
      • Check "Display channel information"
  • Install MythDroid on your Android phone
  • Install MDD on your PC
    • Install libimlib2
      $ sudo apt-get install libimlib2-dev 
      
    • Download MDD
      $ wget http://mythdroid.googlecode.com/files/mdd-0.6.2.tgz 
      
    • Install MDD
      $ tar xvf mdd-0.6.2.tgz 
      $ cd mdd 
      $ perl Build.PL 
      
      Type "y" because you are running this on the PC that runs your MythTv frontend
      $ ./Build installdeps 
      
      Hit ENTER to accept all the defaults
      $ ./Build test 
      $ sudo ./Build install 
      
      Type "y" to stop mythfrontend. Then start it again
      $ mythfrontend 
      

How to prevent nose (unittest) from using the docstring when verbosity >= 2

Some of our Python unit tests have docstrings. I find it annoying that, when using a verbosity level >= 2, nose prints the docstring instead of the class name and method name. Here's a hack to prevent it from doing that: Add a shortDescription() method to the test case class that returns None.

Here is an example of normal behavior:

import unittest

class MyTestCase(unittest.TestCase):
    def test_with_docstring(self):
        """Test that something does something
        """

    def test_without_docstring(self):
        pass

$ nosetests --verbosity=2 tmp.py
Test that something does something ... ok
test_without_docstring (tmp.MyTestCase) ... ok

Here is an example with the hack to prevent printing the docstring:

import unittest

class MyTestCase(unittest.TestCase):
    def shortDescription(self):
        return None

    def test_with_docstring(self):
        """Test that something does something
        """

    def test_without_docstring(self):
        pass

$ nosetests --verbosity=2 tmp.py
test_with_docstring (tmp.MyTestCase) ... ok
test_without_docstring (tmp.MyTestCase) ... ok

Hack to share & sync Google contacts between Android phones

I want to share and sync (in real time) Google (Gmail) contacts with my wife on our Android 2.3.6 Gingerbread phones. Google does not make this easy to do. Here's the best solution I could come up with (ref whitenack on androidcentral). (Note: these are not our real email addresses.)

  • This contact list resides only on the eliot@gmail.com account.
  • Contacts are removed from the "My Contacts" group and instead stored in groups called "Angela" and/or "Eliot". For shared contacts, the contact is in both groups. (Contact groups are like tags. A contact can be in multiple groups at the same time.)
  • Contacts in the "Angela" group show up on Angela's phone and contacts in the "Eliot" group show up on Eliot's phone. Contacts in both groups show up in both phones.
  • On Angela's phone, add the eliot@gmail.com account and check the box for syncing Contacts and uncheck the box for syncing Contacts from the angela@gmail.com account.
  • On Angela's phone, check the box for displaying the groups "My Contacts" and "Angela" under the eliot@gmail.com account and uncheck all the boxes for displaying contacts on the angela@gmail.com account.
  • On Eliot's phone, check the box for displaying the groups "My Contacts" and "Eliot" under the eliot@gmail.com account
  • On both phones, set the account used for creating new contacts to eliot@gmail.com: Contacts -> More -> Settings -> Contact storage -> Select the eliot@gmail.com account
  • When a *new* contact is added on either of the phones, it will be added to the "My Contacts" group on the eliot@gmail.com account. These contacts later need to be moved to the "Angela" and/or "Eliot" groups from the browser while signed in to the eliot@gmail.com account.
  • The angela@gmail.com account will not be able to view, add, or edit contacts from the browser (Gmail).

We are able to share and sync contacts in real time; however, there are annoyances. The main problem is that the contact list lives under one account, so it is not available to the secondary user (my wife) when she is using Gmail or wants to manage contacts in the browser. A second minor annoyance is that our Android phones don't allow us to assign a contact to a group, so all new contacts added from our phones will be added to the generic "My Contacts" group and need to be categorized later from the browser.

I also tried the free Google Apps because it has Contact sharing. However, I could not figure out how to get shared contacts to show up in our phones.

Will upgrading to Android 4.0 ICS help?

Test coverage with nose and coverage.py

It's fun to use nose + coverage.py to show my progress as I write tests. Seeing the bar next to my code change from red to green makes me happy. 100% test coverage does not mean tests are complete. For example, a boolean OR'ed conditional expression may not test all conditions even though the line is marked as covered. Other limitations are discussed here: Flaws in coverage measurement. However, good test coverage is at least a step towards having a good test suite.
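
For example, the following (hypothetical) test gives 100% line coverage of is_allowed(), even though the second operand of the "or" is never evaluated:

def is_allowed(a, b):
    return a or b


def test_is_allowed():
    # This single test marks the return line as covered,
    # but b is never evaluated because a is truthy.
    assert is_allowed(True, False)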

Install nose and coverage.py

Activate your virtualenv and pip install nose and coverage.

$ pip install nose 
$ pip install coverage 

Run it

Here is the command line I use to run the tests. --with-coverage enables the nose-coverage plugin to check test coverage. --cover-erase erases coverage test results from a previous run. --cover-package specifies which Python package to analyze. Specify the package as you would using an import (e.g. dp.blueprints.info.views). If --cover-package is not specified, it will analyze everything. --cover-html enables pretty HTML coverage reports. This example is for the flask-encryptedsession tests.

$ nosetests --with-coverage --cover-erase --cover-package=flask_encryptedsession --cover-html
..........
Name                                      Stmts   Miss  Cover   Missing
-----------------------------------------------------------------------
flask_encryptedsession                        0      0   100%   
flask_encryptedsession.encryptedcookie       41      1    98%   176
flask_encryptedsession.encryptedsession      35      1    97%   75
-----------------------------------------------------------------------
TOTAL                                        76      2    97%   
----------------------------------------------------------------------
Ran 10 tests in 0.188s

OK

Display the HTML report

$ firefox cover/index.html 

Get branch coverage

Branch coverage is useful for checking "if" statements without an explicit "else" in the code. I had to install the development version of nose to use this feature; as of nose 1.2.0, it is available in a released version.

$ pip install https://github.com/nose-devs/nose/tarball/master 
$ nosetests --cover-branches --with-coverage --cover-erase --cover-package=flask_encryptedsession --cover-html 
..........
Name                                      Stmts   Miss Branch BrPart  Cover   Missing
-------------------------------------------------------------------------------------
flask_encryptedsession                        0      0      0      0   100%   
flask_encryptedsession.encryptedcookie       41      1     12      1    96%   176
flask_encryptedsession.encryptedsession      35      1      4      1    95%   75
-------------------------------------------------------------------------------------
TOTAL                                        76      2     16      2    96%   
----------------------------------------------------------------------
Ran 10 tests in 0.234s

OK
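
As an illustration of the kind of gap branch coverage catches (a hypothetical example, not from the flask-encryptedsession tests), an "if" without an "else" can show 100% line coverage even though the false branch is never taken:

def apply_discount(price, is_member):
    if is_member:
        price = price - 10
    return price


def test_apply_discount():
    # Line coverage is 100% with this single test, but branch coverage
    # reports the untested path where is_member is False.
    assert apply_discount(100, True) == 90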