
Simple Git repository publication

Item posted by Brian Gough <bjg> on Sun 26 Oct 2008 08:52:22 PM UTC.

If you are working on changes to a project using Git, such as GSL, it can be useful to make your repository public rather than sending patches.

The usual way to export a git branch involves running git-daemon, webdav or having git installed on the web server. However, it is also possible to export your repository simply by copying the .git directory to a web server. There are some limitations---your whole repository is exported, not just a specific branch, and you must run git-update-server-info each time before copying the files.

Directory index pages need to be enabled on the web server, because when a git client accesses your repository over http it needs to get a list of the files in each directory. With Apache this requires enabling directory indexes for the published directory.
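For example, something like the following directive should work (a sketch only; the path /var/www/repo.git is a placeholder for wherever you publish the repository):

# In the Apache configuration, or an equivalent .htaccess file
<Directory /var/www/repo.git>
    Options +Indexes
</Directory>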

You can then copy your repository to the web server:
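For example, assuming the repository is to appear as repo.git under /var/www on a host called example.org (both names are placeholders), the two commands would be something like:

git update-server-info
rsync -avz .git/ example.org:/var/www/repo.git/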

Run these commands each time you want to export the repository, e.g. after making some changes and committing them. Note the trailing slash on .git/ to copy the contents of the directory rather than the directory itself. Both commands are needed; you can put them together in a script, or add the command git-update-server-info to .git/hooks/post-commit and make it executable.
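A minimal post-commit hook of that kind might look like the following sketch (remember to make the hook file executable):

#!/bin/sh
# .git/hooks/post-commit
# Regenerate the info/refs and objects/info/packs files that the
# dumb http transport relies on
exec git update-server-info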

The repository should now be accessible remotely over http:
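For example, using the hypothetical names above, you can check the published repository with:

git ls-remote http://example.org/repo.git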

If you want to work against a remote repository you can copy it as usual to your local machine:
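For example (again using the hypothetical URL from above):

git clone http://example.org/repo.git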

but to save space it is also possible to make a shallow copy:
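A shallow copy is made with the --depth option, for example (some git versions may refuse --depth over the dumb http transport, in which case a git:// or ssh URL can be used instead):

git clone --depth 1 http://example.org/repo.git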

The git documentation suggests that fetch will not work against a shallow copy, but it seems to work ok in this context.

Comments:

dumb transports (http), rsync, and packing (posted by Jason Riedy, Mon 27 Oct 2008 01:11:51 PM UTC)

Packing your repository every so often is a good idea. Packs save a great deal of space. However, a dumb transport like http does not handle packs terribly efficiently. Also, a dumb push method like rsync needs a bit of help to handle packs while not stomping on clients.

Consider using something like the following config settings:
# Don't gc until there are 1000 loose objects
git config gc.auto 1000
# Limit the total number of pack files
git config gc.autopacklimit 10

Then every day or week, depending on the frequency of check-ins, run
git gc

This will stuff loose objects into a pack file if there are more than 1000, and also combine older packs if there are too many pack files in total. Then http will work well if users update frequently, and moderately well if users update occasionally, while smart transports will also benefit from the pack files. The command will also pack tags and branches into a more efficient format, maintain reference logs, etc.

To synchronize a packed repo, rsync as usual:
rsync -avz foo/.git/ server:/path/to/repo.git/
Wait a sufficient grace period for existing users to fetch the packs they need, then
rsync -avz --delete foo/.git/ server:/path/to/repo.git/
to remove old and pruned objects. The paranoid among us check a --dry-run version of the latter before actually using it.
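That dry-run check would look something like:

rsync -avz --delete --dry-run foo/.git/ server:/path/to/repo.git/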
