Hello. Hi everyone.
Jan 12, 2015
git repository hosting
git repository hosting.
That’s what we wanted to do: give us and our friends a place to share git repositories.
I had seen Torvalds’ talk on YouTube about git.
But it wasn’t really about git - it was more about distributed version control.
It answered many of my questions and clarified DVCS ideas.
I still wasn’t sold on the whole idea, and I had no idea what it was good for.
Right after I had seen the Torvalds video, the god project was posted up on repo.or.cz
I was interested in the project so I finally got a chance to try it out with some other people.
I managed to make a few contributions to god before realizing that repo.or.cz was no different.
git hosting there was no different.
Just more of the same: centralized, inflexible code hosting.
This is what I always imagined.
No rules. Project belongs to you, not the site. Share, fork, change - do what you want.
Give people tools and get out of their way. Less ceremony.
What’s special about GitHub is that people use the site in spite of git.
Many git haters use the site because of what it is: more than a place to host git repositories, it’s a place to share code with others.
a brief history
So that’s how it all started.
Now I want to (briefly) cover some milestones and events.
2008 january
We launched the beta in January at Steff’s on 2nd street in San Francisco’s SOMA district.
The first non-github user was wycats, and the first project was merb-core.
They wanted to use the site for their refactoring and 0.9 branch.
2009 april
Then in April we were featured as some of the best young tech entrepreneurs in BusinessWeek.
(Finally something to show mom)
2009 june
Our Firewall Install, something we’d been talking about since practically day one, was launched in June of 2009.
2009 september
And in September we moved to Rackspace, our current hosting provider.
(Which some of you may have noticed.)
github.com
That’s where we’re at today.
So let’s talk about the technical details of the website: github.com
.com as opposed to FI, which I’m not going to get into today.
You’ll have to invite PJ out if you want to hear about that.
the web app
As everyone knows, a web “site” is really a bunch of different components.
Some of them generate and deliver HTML to you, but most of them don’t.
Either way, let’s start with the HTMLy parts.
rails
We use Ruby on Rails 2.2.2 as our web framework.
It’s kept up to date with all the security patches and includes custom patches we’ve added ourselves, as well as patches we’ve cherry-picked from more recent versions of Rails.
We found out Rails was moving to GitHub in March 2008, after we had reached out to them and they had turned us down.
So it was a bit of a surprise.
rails
But there are entire presentations on Rails, so I’m not going to get further into it here.
As for whether it scales or not, we’ll let you know when we find out. Because so far it hasn’t come close to presenting a problem.
We badly wanted Rack, but didn’t want to invest the time upgrading Rails.
So, using a few open source libraries, we’ve wrapped our Rails 2.2.2 instance in Rack.
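To give a feel for why that matters: once the Rails instance is a Rack app, any middleware can sit in front of it. Here is a minimal, hypothetical middleware sketch (the header and class names are made up, not something we actually run):

```ruby
require 'rack'

# Illustrative middleware: stamps every response with an extra header.
class StampHeader
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    headers['X-Served-By'] = 'rack'   # purely for demonstration
    [status, headers, body]
  end
end

# In config.ru, the wrapped Rails instance is just another Rack app:
#   use StampHeader
#   run WrappedRailsApp.new   # hypothetical name for the wrapped Rails 2.2.2 app
```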
In fact, the Coderack competition is about to open voting to the public this week.
Coders created and submitted dozens of Rack middleware for the competition.
I was a judge so I got to see the submissions already. Some of my favorites were
unicorn
- 0 downtime deploys
- protects against bad rails startup
- migrations handled the old-fashioned way
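A hedged sketch of the kind of unicorn config that gets you there (paths and numbers are illustrative, not our production setup):

```ruby
# config/unicorn.rb (illustrative)
worker_processes 4
preload_app true    # a broken Rails boot fails in the new master,
                    # before any old worker has been killed
timeout 30
pid "/tmp/unicorn.pid"

before_fork do |server, worker|
  # during a USR2 re-exec, quietly retire the old master once the new one is up
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid) && server.pid != old_pid
    begin
      Process.kill(:QUIT, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end
end

after_fork do |server, worker|
  # each worker gets its own database connection
  ActiveRecord::Base.establish_connection
end
```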
nginx
For serving static content and slow clients, we use nginx
nginx is pretty much the greatest http server ever
it’s simple, fast, and has a great module system
mojombo / grit
you can get it here
it originally shelled out to git and just parsed the responses.
which worked well for a long time.
One of the first things Scott worked on was rewriting the core parts of Grit to be pure Ruby
Basically a Ruby implementation of Git
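If you haven’t played with Grit, the API looks roughly like this (the repo path is whatever you point it at; the output is made up):

```ruby
require 'grit'

repo = Grit::Repo.new("/path/to/any/repo")

# walk the last few commits on master
repo.commits("master", 3).each do |commit|
  puts "#{commit.id[0, 7]} #{commit.message.split("\n").first}"
end

# trees and blobs come back as plain Ruby objects too
repo.tree.contents.each { |entry| puts entry.name }
```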
smoke
Kinda.
Eventually we needed to move all of our git repositories off of our web servers
Today our HTTP servers are distinct from our git servers. The two communicate using smoke
smoke
“Grit in the cloud”
Instead of reading and writing from the disk, Grit makes Smoke calls
The reading and writing then happens on our file servers
bert-rpc
bert : erlang :: json : javascript
BERT is an erlang-based protocol
BERT-RPC is really great at dealing with large binaries, which is a lot of what we do
bert-rpc
we have four file servers, each running bert-rpc servers
our front ends and job queue make RPC calls to the backend servers
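From Ruby, making one of those calls is tiny with the bertrpc gem; in this sketch the host, port, and exposed module/function names are illustrative, not our real Smoke API:

```ruby
require 'bertrpc'

# point a client at one of the file servers
svc = BERTRPC::Service.new('fs1.example.com', 8149)

# call a function exposed by the BERT-RPC server on that machine;
# this is how Grit's "disk" operations become remote calls
commit_shas = svc.call.store.rev_list('mojombo/god.git', 'master')
```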
chimney
All user routes are kept in Redis
Chimney is how our BERT-RPC clients know which server to hit
It falls back to a local cache and auto-detection if Redis is down
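Conceptually the lookup is small; a hedged sketch using the redis gem (the key scheme, fallback host, and class are made up, not Chimney’s actual code):

```ruby
require 'redis'

class RouteLookup
  FALLBACK = 'fs1'   # illustrative default backend

  def initialize(redis = Redis.new)
    @redis = redis
    @cache = {}      # local cache used when Redis is unreachable
  end

  # which file server holds this user's repositories?
  def route(user)
    host = @redis.get("chimney:route:#{user}")
    @cache[user] = host if host
    host || @cache[user] || FALLBACK
  rescue StandardError
    @cache[user] || FALLBACK   # Redis is down: fall back to the local cache
  end
end
```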
chimney
It can also be told a backend is down.
It was optimized for connection refused, but in reality that wasn’t the real problem.
proxymachine
All anonymous git clones hit the front end machines
the git-daemon connects to proxymachine, which uses chimney to proxy your connection between the front end machine and the back end machine (which holds the actual git repository)
very fast, transparent to you
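A proxymachine router is just a block of Ruby; going from memory of its README, the config looks something like this (the regex and backend host are illustrative, and in reality the host comes from a chimney lookup):

```ruby
# proxymachine router (sketch): peek at the first packet of the git
# protocol and proxy the connection to the backend that owns the repo.
proxy do |data|
  if data =~ %r{git-upload-pack /([^/]+)/}
    # in reality the backend host comes from a chimney lookup on $1;
    # hardcoded here to keep the sketch self-contained
    { :remote => "fs1.example.com:9418" }
  else
    { :noop => true }   # not enough data yet, wait for more bytes
  end
end
```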
ssh
Sometimes you need to access a repository over ssh
In those instances, you ssh to an fe and we tunnel your connection to the appropriate backend
To figure that out we use chimney
resque
- dealing with pushes
- web hooks
- creating events in the database
- generating GitHub Pages
- clearing & warming caches
- search indexing
queues
In Resque, a queue is used as both a priority and a localization technique
By localization I mean, “where your workers live”
queues
critical, high, low
these three run on our front end servers
Resque processes them in this order
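A quick sketch of what that looks like in Resque (the job class and payload are illustrative):

```ruby
require 'resque'

# a job declares the queue it lives on; the queue name doubles as a
# priority and as a hint about which machines will run it
class WebHookJob
  @queue = :high

  def self.perform(url, payload)
    # deliver the payload to the subscriber's endpoint...
  end
end

# enqueueing from the web app:
Resque.enqueue(WebHookJob, 'http://example.com/hook', 'id' => 42)

# a front end worker draining the three queues in priority order:
#   $ QUEUE=critical,high,low rake resque:work
```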
queues
archive
And tarball and zip downloads are created on the fly using the `archive` queue on our archiving machines
solr
Solr is basically an HTTP interface on top of Lucene. This makes it pretty simple to use in your code.
We use solr because of its ability to incrementally add documents to an index.
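Incremental adds are just HTTP POSTs to the update handler; a bare-bones sketch with net/http (the Solr URL and field names are illustrative):

```ruby
require 'net/http'
require 'uri'

update = URI.parse('http://localhost:8983/solr/update')

# add (or replace) a single document without rebuilding the index
doc = <<-XML
<add>
  <doc>
    <field name="id">repo:mojombo/god</field>
    <field name="text">god: a process monitoring framework in Ruby</field>
  </doc>
</add>
XML

Net::HTTP.start(update.host, update.port) do |http|
  http.post(update.path, doc, 'Content-Type' => 'text/xml')
  http.post(update.path, '<commit/>', 'Content-Type' => 'text/xml')
end
```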
solr
We’ve had some problems making it stable but luckily the guys at Pivotal have given us some tips
Like bumping the Java heap size.
Whatever that means
fragments
Formerly we invalidated most of our fragments using a generation scheme, where you put a number into a bunch of related keys and increment it when you want all those caches to be missed (thus creating new cache entries with fresh data)
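The scheme looked roughly like this (sketch only; the key names are made up):

```ruby
# every fragment for a repository shares a generation number; bumping
# the number orphans all of those fragments at once, and they simply
# stop being read until memcached evicts them
def repo_generation(repo_id)
  Rails.cache.fetch("repo:#{repo_id}:gen") { 1 }
end

def fragment_key(repo_id, name)
  "repo:#{repo_id}:gen:#{repo_generation(repo_id)}:#{name}"
end

def expire_repo_fragments!(repo_id)
  Rails.cache.write("repo:#{repo_id}:gen", repo_generation(repo_id) + 1)
end
```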
fragments
But we had high cache eviction due to low ram and hardware constraints, and found that scheme did more harm than good.
We also noticed that some cached data we wanted to keep forever was being evicted, because the slabs holding generational keys filled up fast
page
We cache entire pages using nginx’s memcached module
Lots of HTML, but also other data which gets hit a lot and changes rarely:
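nginx’s memcached module only ever reads, so the app has to put the finished page into memcached itself. A hedged sketch of that half (the key has to match whatever $memcached_key nginx is configured with, and the filter shown here is illustrative):

```ruby
class ApplicationController < ActionController::Base
  after_filter :cache_whole_page

  private

  # store the rendered HTML under the request path so nginx can answer
  # the next hit straight from memcached; in reality this only runs for
  # a handful of hot, rarely-changing pages
  def cache_whole_page
    return unless request.get?
    Rails.cache.write(request.path, response.body)
  end
end
```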
object
We do basic object caching of ActiveRecord objects such as repositories and users all over the place
Caches are invalidated whenever the objects are saved
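In practice that’s a cache fetch keyed on class and id, plus a save callback that clears it (sketch; the key format is illustrative):

```ruby
class Repository < ActiveRecord::Base
  after_save :expire_cache

  # read-through cache for a single record
  def self.fetch(id)
    Rails.cache.fetch("repository:#{id}") { find(id) }
  end

  private

  def expire_cache
    Rails.cache.delete("repository:#{id}")
  end
end
```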
associations
We also cache associations as arrays of IDs
Grab the array, then do a get_multi on its contents to get a list of objects
That way we don’t have to worry about caching stale objects
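A sketch of the id-list trick with the memcache-client gem (key names are made up):

```ruby
require 'memcache'

CACHE = MemCache.new('localhost:11211')

# the association itself is cached as nothing but an array of ids
ids = CACHE.get('user:42:repository_ids') || []

# the objects come from their own per-object keys in one round trip
keys  = ids.map { |id| "repository:#{id}" }
found = CACHE.get_multi(*keys)             # => { key => object, ... }
repos = keys.map { |k| found[k] }.compact

# because each object's entry is expired when it's saved, the id list
# can never hand back a stale repository
```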
walker
It originally walked trees and cached them when someone pushed
But now it caches everything related to git:
walker
For most big apps, you need to write a caching layer that knows your business domain
Generic, catch-all caching libraries probably won’t do
sha asset id
Instead of using timestamps for asset ids, which may end up hitting the disk multiple times on each request, we set the asset id to be the sha of the last commit which modified a javascript or css file
sha asset id
/css/bundle.css?197d742e9fdec3f7
/js/bundle.js?197d742e9fdec3f7
Now simple code changes won’t force everyone to re-download the css or js bundles
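In Rails 2.x the hook for this is RAILS_ASSET_ID, which replaces the mtime-based query string on asset URLs; a sketch of setting it at boot (the git command and paths are illustrative):

```ruby
# config/environment.rb (or an initializer): use the last commit that
# touched the bundles as the asset id instead of per-file timestamps
sha = `git log -1 --pretty=format:%h -- public/javascripts public/stylesheets`.strip
ENV['RAILS_ASSET_ID'] = sha unless sha.empty?

# the asset helpers now emit bundle URLs ending in ?<sha> instead of ?<timestamp>
```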
bundling
google’s closure compiler for javascript
we don’t use the most aggressive setting because it means changing your javascript to appease the compression gods, which we haven’t committed to yet
scripty 301
Again, for most of these tricks you need to really pay attention to your app.
One example is scriptaculous’ wiki
scripty 301
When we changed our wiki URL structure, we set up dynamic 301 redirects for the old urls.
Scriptaculous’ old wiki was getting hit so much we put the redirect into nginx itself - this took strain off our web app and made the redirects happen almost instantly
ajax loading
We also load data in via ajax in many places.
Sometimes a piece of information will just take too long to retrieve
In those instances, we usually load it in with ajax
If Walker sees that it doesn’t have all the information it needs, it kicks off a job to stick that information in memcached.
We then periodically hit a URL which checks if the information is in memcached or not. If it is, we get it and rewrite the page with the new information.
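The URL the page polls can be as dumb as a cache check; a hypothetical action, not our real one:

```ruby
class CommitsController < ApplicationController
  # polled via ajax: return the expensive fragment once the background
  # job has put it in memcached, otherwise tell the client to retry
  def impact_data
    if html = Rails.cache.read("impact:#{params[:id]}")   # made-up key
      render :text => html
    else
      render :nothing => true, :status => 202             # not ready yet
    end
  end
end
```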
nagios
Our support team monitors the health of our machines and core services using nagios.
I don’t really touch the thing.
test unit
We mostly use Ruby’s test/unit.
We’ve experimented with other libraries including test/spec, shoulda, and RSpec, but in the end we keep coming back to test/unit
git fixtures
As many of our fixtures are git repositories, we specify in the test what sha we expect to be the HEAD of that fixture.
This means we can completely delete a git repository in one test, then have it back in pristine state in another. We plan to move all our fixtures to a similar git-based system in the future.
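A sketch of what one of those tests looks like in test/unit (the fixture path, sha, and assertion are illustrative):

```ruby
require 'test/unit'
require 'grit'

class GitFixtureTest < Test::Unit::TestCase
  FIXTURE  = File.expand_path('../fixtures/simple', __FILE__)
  EXPECTED = '197d742e9fdec3f7'   # sha we expect at the fixture's HEAD

  def setup
    # hard-reset the fixture so an earlier test that rewrote or deleted
    # history can't leak into this one
    Dir.chdir(FIXTURE) { `git reset --hard #{EXPECTED}` }
    @repo = Grit::Repo.new(FIXTURE)
  end

  def test_head_is_pristine
    assert_equal EXPECTED, @repo.commits.first.id[0, EXPECTED.length]
  end
end
```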
ci joe
We use ci joe, a continuous integration server, to run our tests after each push.
He then notifies us if the tests fail.
staging
We also always deploy the current branch to staging
This means you can be working on your branch, someone else can be working on theirs, and you don’t need to worry about reconciling the two to test out a feature
One of the best parts of Git
we get weekly emails to our security email (that people find on the security page)
and people are always grateful when we can reassure them or answer their question
consultant
if you can, find a security consultant to poke your site for XSS vulnerabilities
having your target audience be developers helps, too
backups
backups are incredibly important
don’t just make backups: ensure you can restore them, as well