Plone Conference 2011, San Francisco
Who here uses buildout? And how many don't?
I've been subcontracting as a consultant for many of the famous Plone companies, such as Jarn, Jazkarta, Sixfeetup and Hexagon IT. And each of them has had a new tool or technique to add to the buildout. So I'm going to mention the ones I think are the best and most useful. "Best practices", if you wish.
When I worked at Nuxeo I was often thrown in to fix projects that were being launched, or had just launched, or were so delayed that the other programmers had to leave to start on new projects, or those would get delayed as well.
Whether they did this because I was good enough to pull it off, or because they hated me and wanted to make my life miserable, I don't know.
The process for starting on a project was that I read through one, or often several, README files that outlined what I needed to install, and then followed those instructions and installed all the software that was needed.
It took one day if you were lucky. If you were unlucky, the instructions were crap, or you got version conflicts with some library somewhere, or something else went wrong, and it ended up taking you two days instead.
Buildout was originally made to solve this, and enable you to check out an old project and get up and running with it quickly.
Buildout can download and install all the dependencies your project has. This means MySQL, nginx, Varnish, libraries and so on.
For simple projects the buildout process can take just minutes. But for the type of project I mentioned before, with several big dependencies that need to be downloaded and compiled, and, most time-consuming of all, hundreds of Python eggs (which is what you get when installing Plone), it can take an hour. I have encountered a project where the first buildout run took over two hours.
Of course, it's common that you get an error, especially if your operating system is different from the one the buildout is written for. And yes, OS X, I'm looking at you.
The natural thing to do next is to use this for production setups as well, to simplify not only the setup for developers, but for servers. This has the benefit that you can set things up to easily add more instances for the load balancer, or get up and running quickly from a backup.
Buildout uses a configuration file that describes your configuration in sections.
[buildout]
parts = python test
eggs = zope.event

[python]
recipe = zc.recipe.egg
eggs = ${buildout:eggs}
interpreter = python
scripts = python

[test]
recipe = zc.recipe.testrunner
eggs = ${buildout:eggs}
Each section is controlled by a recipe that takes the configuration of that section and does something with it.
Each recipe is a Python module. A rough estimate is that there are around 200 recipes on the Cheeseshop at the moment.
So we can use buildout in both development and production modes, but the requirements of development and production are quite different. That's easily fixed by having multiple buildout config files and using the extends feature of buildout.
[buildout]
extends = buildout1.cfg
parts += zopeskel

[zopeskel]
recipe = zc.recipe.egg
eggs = PasteScript

[python]
interpreter = devpython
The extends feature lets a buildout configuration "subclass" another configuration, so to speak. We can both add sections and override parts of sections. There are also loads of features for getting variables from other sections and so on; that's all in the buildout documentation.
This enables you to set up a base.cfg with most configuration, and then add development, staging and production setups that extends this base setup.
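As a sketch, such a layout could look like the following (the file names are just a convention, nothing in buildout requires them):

```ini
# base.cfg -- everything shared between environments
[buildout]
parts = instance

[instance]
recipe = plone.recipe.zope2instance
user = admin:admin

# development.cfg -- adds debugging tools on top of the base
[buildout]
extends = base.cfg
parts += test

# production.cfg -- adds caching, load balancing, process control
[buildout]
extends = base.cfg
parts += varnish supervisor
```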
So this is the typical setup.
Note that none of them is called buildout.cfg. That name is the default for buildout, so if you run buildout without specifying which configuration you want, it will use buildout.cfg. If that's not the one you wanted, it might install loads of things you don't want, and then you have to re-run buildout with the correct cfg file, which will take a long time again.
[buildout]
extends = development.cfg
Instead you create a buildout.cfg that specifies which config file you want to run by default.
Don't check this into the version control system! Always create it locally, this is local configuration!
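One way to enforce that, as a sketch: tell your version control system to ignore the file. For Subversion (which the rest of this talk uses) that would be `svn propset svn:ignore 'buildout.cfg' .`; for git you add it to .gitignore:

```shell
# Keep the local buildout.cfg out of version control (git variant).
echo 'buildout.cfg' >> .gitignore
cat .gitignore
```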
Let's take a look at the development buildout, and some tools to help us blast away the bugs.
[buildout]
parts = instance
extends = http://dist.plone.org/release/4.1.2/versions.cfg
find-links =
    http://dist.plone.org/release/4.1.2
    http://dist.plone.org/thirdparty
eggs =
    PIL
    collective.blog.star

[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
eggs =
    Zope2
    Plone
    ${buildout:eggs}
This is, more or less, a minimal Plone buildout. I'm going to use it as a base for a long series of buildout configs here. That doesn't mean I recommend making loads of small buildouts that each add one feature to your buildout. In fact, I recommend that you don't do that. I've done it in this talk because it lets each slide be a working buildout, so I can test that the configurations I show here actually work.
[buildout]
extends = base.cfg
eggs +=
    Products.PDBDebugMode
    Products.ZMIntrospection
    Products.PrintingMailHost
    Products.Clouseau
    Products.DocFinderTab
    plone.reload
I personally don't find DocFinderTab or plone.reload very useful, but they are popular and don't hurt, so you should try them out.
[buildout]
extends = devessentials.cfg
parts += zopeskel i18ndude

[zopeskel]
recipe = zc.recipe.egg
eggs =
    ZopeSkel
    ${instance:eggs}

[i18ndude]
recipe = zc.recipe.egg
eggs = i18ndude
The scaffolding templates that exist in ZopeSkel really help in starting up your typical Plone projects. And you might even have used it to create your basic buildout.
But ZopeSkel doesn't always work if it doesn't have access to the eggs of the project, because it looks for entry points and similar. So the solution is to install ZopeSkel on a per-project basis as well.
i18ndude is a useful tool when you have multilingual sites.
[buildout]
extends = zopeskel.cfg
parts += zopepy

[zopepy]
recipe = zc.recipe.egg
eggs = ${buildout:eggs}
interpreter = zopepy
scripts = zopepy
Having a Python prompt that has access to all the products you install can be handy. It is often installed under the name zopepy. This section creates it, and you can get a Python prompt by running the resulting bin/zopepy script.
[buildout]
extends = zopeskel.cfg
parts += python

[python]
recipe = zc.recipe.egg
eggs = ${buildout:eggs}
interpreter = python
scripts = python
Personally, though, I don't like the name zopepy, and usually install it under the name "python". It's a matter of taste.
[buildout]
extends = python.cfg
parts += omelette

[omelette]
recipe = collective.recipe.omelette
eggs = ${buildout:eggs}
packages = ./
Omelette is also a popular tool. It creates a directory structure of all the code in all the eggs.
This is useful if your editor doesn't have a feature to go to the definition of identifiers that works across files and supports eggs. Mine does, so I don't use Omelette, but most people do.
One development tool that is highly useful is mr.developer. It enables you to switch packages to "develop mode".
[buildout]
extends = omelette.cfg
extensions = mr.developer

[sources]
collective.blog.star = svn https://svn.plone.org/svn/collective/collective.blog.star/trunk
collective.blog.view = svn https://svn.plone.org/svn/collective/collective.blog.view/trunk
collective.blog.portlets = svn https://svn.plone.org/svn/collective/collective.blog.portlets/trunk
collective.blog.feeds = svn https://svn.plone.org/svn/collective/collective.blog.feeds/trunk
activate, a       Add packages to the list of development packages.
checkout, co      Checkout packages
deactivate, d     Remove packages from the list of development packages.
help, h           Show help
info              Lists informations about packages.
list, ls          Lists tracked packages.
rebuild, rb       Run buildout with the last used arguments.
reset             Resets the packages develop status.
status, stat, st  Shows the status of tracked packages.
update, up        Updates all known packages currently checked out.
$ bin/develop checkout collective.blog.star
INFO: Queued 'collective.blog.star' for checkout.
INFO: Checked out 'collective.blog.star' with svn.
INFO: Activated 'collective.blog.star'.
WARNING: Don't forget to run buildout again, so the checked out packages are used as develop eggs.
$ bin/develop rebuild
Last used buildout arguments: -c autocheckout.cfg -N
INFO: Running buildout.
Develop: '../src/collective.blog.star'
Updating _mr.developer.
Updating instance.
Updating zopeskel.
Updating python.
Updating omelette.
[buildout]
extends = mrdeveloper.cfg
auto-checkout =
    collective.blog.star
    collective.blog.view
    collective.blog.feeds
    collective.blog.portlets
[buildout]
extends = autocheckout.cfg
parts += zest.releaser
find-links += http://dist.colliberty.com/

[zest.releaser]
recipe = zc.recipe.egg
eggs = zest.releaser
zest.releaser gives you a set of commands; the most important one, "fullrelease", helps you make a release of your products, with svn tags etc., and upload it to the Cheeseshop.
I've also added a private repository as a find-link.
[buildout]
extends = release.cfg
extensions += lovely.buildouthttp
This extension allows you to define passwords in a .httpauth config file, both per buildout and per user.
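As a sketch (going from memory, so check the lovely.buildouthttp documentation for the exact format; the realm, host and credentials here are made up), the file contains one credential per line:

```ini
# .httpauth -- one line per credential: realm, URL prefix, user, password.
# Place it next to buildout.cfg, or in ~/.buildout/ for per-user use.
Private dists,http://dist.colliberty.com,myuser,mypassword
```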
It also supports private GitHub repositories via the global github configuration, but I've never used that.
[buildout]
extends = private.cfg
extensions += buildout.threatlevel
[buildout]
extends = threatlevel.cfg
parts += test

[test]
recipe = zc.recipe.testrunner
eggs = ${buildout:auto-checkout}
[buildout]
extends = testing.cfg
parts += coverage-test coverage-report

[coverage-test]
recipe = zc.recipe.testrunner
eggs = ${test:eggs}
defaults = ['--coverage', '../../coverage']

[coverage-report]
recipe = zc.recipe.egg
eggs = z3c.coverage
arguments = ('coverage', 'coverage/report')
Buildout will by default choose the latest version it can find of any package you install. That's not very repeatable; to make a buildout repeatable you have to make sure it installs specific versions. This is known as pinning the versions.
[buildout]
extends = coverage.cfg
extensions = buildout.dumppickedversions
The first step in pinning is to install the dumppickedversions extension. It will print out a list of all installed packages that don't have their versions pinned.
This is how the output looks; it comes at the end of the buildout run.
[versions]
Cheetah = 2.2.1
Products.Clouseau = 1.0
Products.DocFinderTab = 1.0.5
Products.PDBDebugMode = 1.3.1
Products.PrintingMailHost = 0.7
Products.ZMIntrospection = 0.3.0
ZopeSkel = 3.0a1
collective.blog.star = 1.0
#Required by:
#templer.plone 1.0a1
#ZopeSkel 3.0a1
templer.zope = 1.0a2
And you dump that into a file called versions.cfg.
[buildout]
extends =
    dumppickedversions.cfg
    versions.cfg
And you make your buildout extend that versions.cfg.
[buildout]
parts = instance
extends = http://dist.plone.org/release/4.1.2/versions.cfg
find-links =
    http://dist.plone.org/release/4.1.2
    http://dist.plone.org/thirdparty
eggs =
    PIL
    collective.blog.star

[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
eggs =
    Zope2
    Plone
    ${buildout:eggs}
You can see that this extends a versions.cfg for Plone. That file is just a version-pin file, like our own versions.cfg, so it would make sense to merge them.
[buildout]
extends = http://dist.plone.org/release/4.1.2/versions.cfg
find-links =
    http://dist.plone.org/release/4.1.2
    http://dist.plone.org/thirdparty

[sources]
collective.blog.star = svn https://svn.plone.org/svn/...
collective.blog.view = svn https://svn.plone.org/svn/...
collective.blog.portlets = svn https://svn.plone.org/...
collective.blog.feeds = svn https://svn.plone.org/svn...

[versions]
Cheetah = 2.2.1
Products.Clouseau = 1.0
Products.DocFinderTab = 1.0.5
Products.PDBDebugMode = 1.3.1
Products.PrintingMailHost = 0.7
So let's move these version pin parts from base.cfg into versions.cfg.
[buildout]
extends = plone-4.1.2.cfg
[buildout]
extends =
    zopeapp-1.0.4.cfg
    zope-2.13.10.cfg

[versions]
etc...
Remember that these versions files may in turn have links to others. All of them need to be copied locally, and the links in all of them must be changed.
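A rough sketch of the link rewriting, assuming the remote versions files have already been downloaded next to your buildout (the file name and sed pattern here are hypothetical; adjust them to the URLs your files actually contain):

```shell
# Create a sample cfg with a remote extends line, standing in for a
# downloaded versions file that still points at the network.
printf '[buildout]\nextends = http://dist.plone.org/release/4.1.2/versions.cfg\n' > example.cfg

# Rewrite absolute extends URLs to bare local file names.
sed -e 's|extends = http://.*/\([^/]*\)$|extends = \1|' example.cfg
```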
[buildout]
extends = versions.cfg
extensions =
    buildout.dumppickedversions
    buildout.threatlevel
    lovely.buildouthttp
parts = instance
eggs =
    PIL
    collective.blog.star

[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
eggs =
    Zope2
    Plone
    ${buildout:eggs}
[buildout]
extends = base.cfg
parts = zeo instance1

[zeo]
recipe = plone.recipe.zeoserver
zeo-address = 8000
eggs = ${buildout:eggs}

[instance1]
recipe = plone.recipe.zope2instance
user = admin:admin
zeo-client = True
zeo-address = ${zeo:zeo-address}
shared-blob = True
http-address = 8081
eggs =
    Zope2
    Plone
    ${buildout:eggs}
zserver-threads = 2
zodb-cache-size = 10000
[buildout]
extends = zeo.cfg
parts += instance2 instance3 instance4

[instance2]
<= instance1
http-address = 8082

[instance3]
<= instance1
http-address = 8083

[instance4]
<= instance1
http-address = 8084
[buildout]
extends = instances.cfg
parts += debug-instance

[debug-instance]
<= instance1
zserver-threads = 1
http-address = 8080
[buildout]
extends = debug-instance.cfg
parts += haproxy-build haproxy-conf

[haproxy-build]
recipe = plone.recipe.haproxy
target = linux26
pcre = 1

[haproxy-conf]
recipe = collective.recipe.template
input = ${buildout:directory}/templates/haproxy.conf
output = ${buildout:directory}/etc/haproxy.conf
maxconn = 12000
ulimit-n = 65536
bind = 0.0.0.0:8180
global
  log 0.0.0.0 local6
  maxconn ${haproxy-conf:maxconn}
  nbproc 1
  ulimit-n ${haproxy-conf:ulimit-n}
[...]

frontend zopecluster
  bind ${haproxy-conf:bind}
  default_backend zope

# Load balancing over the zope instances
backend zope
  [...]
  server plone0101 127.0.0.1:${instance1:http-address} ...
  server plone0102 127.0.0.1:${instance2:http-address} ...
  [...]
[buildout]
extends = haproxy.cfg
parts += varnish-build varnish

[varnish-build]
recipe = zc.recipe.cmmi
url = ${varnish:download-url}

[varnish]
recipe = plone.recipe.varnish
daemon = ${buildout:parts-directory}/varnish-build/sbin/varnishd
bind = 127.0.0.1:8280
backends = 127.0.0.1:8180
cache-size = 128M
mode = foreground
This section isn't called varnish-conf, because the script created to run varnish gets the same name as the section, and a script called bin/varnish-conf would be weird.
[buildout]
extends = varnish.cfg
parts += supervisor supervisor-conf

[supervisor]
recipe = zc.recipe.egg
eggs = supervisor

[supervisor-conf]
recipe = collective.recipe.template
input = ${buildout:directory}/templates/supervisord.conf
output = ${buildout:directory}/etc/supervisord.conf
[program:1]
command = ${buildout:directory}/bin/instance1 console
redirect_stderr = true
autostart = true
autorestart = true
directory = ${buildout:directory}
stdout_logfile = ${buildout:directory}/var/log/instance1-stdout.log
stderr_logfile = ${buildout:directory}/var/log/instance1-stderr.log

[group:instance]
programs = 1,2,3,4
$ bin/supervisorctl stop instance:4
$ bin/supervisorctl restart instance:*
[buildout]
extends = supervisord.cfg
parts += supervisor-crontab packcronjob

[supervisor-crontab]
recipe = z3c.recipe.usercrontab
times = @reboot
command = ${buildout:bin-directory}/supervisord -c ${supervisor-conf:output}

[packcronjob]
recipe = z3c.recipe.usercrontab
times = 0 1 * * 7
command = ${buildout:directory}/bin/zeopack
[buildout]
extends = crontab.cfg
parts += backup backupcronjob

[backup]
recipe = collective.recipe.backup

[backupcronjob]
recipe = z3c.recipe.usercrontab
times = 0 12 * * *
command = ${buildout:directory}/bin/backup
[buildout]
extends = backup.cfg
parts += logrotate-conf logrotate

[logrotate-conf]
recipe = collective.recipe.template
input = ${buildout:directory}/templates/logrotate.conf
output = ${buildout:directory}/etc/logrotate.conf

[logrotate]
recipe = z3c.recipe.usercrontab
times = 0 6 * * *
status = ${buildout:directory}/var/logrotate.status
command = /usr/sbin/logrotate --state ${logrotate:status} ${logrotate-conf:output}
Lennart Regebro