From Wed Aug 27 21:45:30 2003 Date: Wed, 27 Aug 2003 16:15:15 -0400 (EDT) From: R P Herrold <> Reply-To: rhel-rebuild list <> To: rhel-rebuild list <> Subject: [rhel-r] Re: build systems

On Wed, 27 Aug 2003, Michael Redinger wrote:

>Also, as of the autobuild system: is it available for
>download or did anybody ask them about it?

It is in the netwinder CVS and freely available. Nothing succeeds like success, and the netwinder autobuilder is a running system which produces real installable results with a little help 'out of band'. The product of autobuilder code is generally reflected at:
and the progress table:

If a person or packaging collective were to give it a front end of SRPMs and hints (more on hints later), or CVS checkouts of .spec files, sources, and patches, some more neat stuff could happen.

Ralph at seems to be a truly nice person (I called him a couple months ago looking for advice on picking up a used netwinder box for devel work. I am up to a Cobalt, 2 netwinders and an old magnum at this point - the computers are taking over here ;) ) -- CVS access is restricted due to bandwidth concerns, but if someone gets a consent, I can mirror in less limited space easily. I have spoken offlist with one of the netwinder team, and the request is in process.

I have similarly proposed a build instruction hinting mechanism underlying the .spec file, to some criticism. Determinism in building rpms is one of the great fictions in this area. Really and truly, BuildRequires alone cannot answer build order dependency and build environment content questions (as it was not designed to, and lacks the information to do so correctly). [Build dependencies *can* completely check a build system, but specifying completeness has never been attempted, hence the weaker "hints". Also, inter- and intra-distribution package re-naming gets in the way of portability.]

Some developers approach the build environment by pre-installing everything, and getting really fat and dependency-laden binaries out of it. I think the principle of having a defined build environment, one that meets the desired function criteria and does NOT carry in other stray 'features' which autoconf finds, is important to keeping a distribution manageable.

I mentioned this at item S-2 and S-5 of:

> Regarding beehive - I think it not even worth asking Red Hat to make
> it available (for now) ...

I guess I do not understand why not. The worst that can happen is you will get a Not Yet. To my understanding it is highly site specific, and it is rather ... fluid in form over time, and demanding of care to produce results.

The competing Florian system is freely available but has disappeared from his people.redhat tree: he now points to Mach.

The hard drive with my copy of Florian's code is offline. I can resurrect it and hang Florian's code (it carried a GPL, I think) at the Owl River FTP site if people are interested. Lemme know off list if so.

(later revision): I poked around a bit: it is in - it may be obtained thus:

cd cvsroot
mkdir rpmutils
cvs -d login        # no password -- just tap 'enter' here
cvs -d get rpmutils
and the bits will be in ./rpmutils

Having done a couple of runs at automated build systems (the last time, I turned one of mine loose against Raw Hide on my Aurora box), I got something more than 70% of the packages built unattended, with NO intervention.

[as an aside: What are the design goals of a build system? at least two extremes of use come to mind]

The side issue of a 'completeness' analysis appears -- is the packageset (and indeed the build environment itself) 'strong' enough to be able to completely rebuild itself? (Shades of the E.E. 'Doc' Smith "Skylark" science fiction series -- beautiful damsel and strong, smart young man marooned on an alien planet -- can he build everything from 'first principles' while fighting off bug-eyed monsters? -- recommended)

I'll tie this to trpm in a bit
[end aside]
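That completeness question can be sketched mechanically: a package set is self-hosting only if every BuildRequires of every member is provided from within the set. A minimal illustration in Python (the package names and capability strings are invented for the example, not taken from any real distribution):

```python
def is_self_hosting(packages):
    """Check whether a package set can rebuild itself: every
    BuildRequires of every member must be provided by a member.
    `packages` maps name -> {"provides": set, "buildrequires": set}."""
    provided = set()
    for meta in packages.values():
        provided |= meta["provides"]
    missing = {}
    for name, meta in packages.items():
        gaps = meta["buildrequires"] - provided
        if gaps:
            missing[name] = gaps
    return missing  # empty dict means the set is 'strong' enough

# Toy example: 'gcc' needs 'make', which this set does not provide.
pkgs = {
    "glibc": {"provides": {"glibc", "libc.so"}, "buildrequires": {"gcc"}},
    "gcc":   {"provides": {"gcc"},              "buildrequires": {"glibc", "make"}},
}
```

An empty result means the set is closed; anything returned is exactly what must be bootstrapped in from outside.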

The 'special sauce' of build order, and of bootstrapping in new build requirements, is one challenge. Normal people with a few packages to port over have it easy: they start on the (easy) side of the build system design goals and can avoid the pain of bootstrapping.

Another challenge (somewhat an artifact of doing package-based builds, rather than the 'make world' builds the BSDs have done for years) is the circular build dependency problem -- 'make' cannot solve the build order issue.

I believe from observation that RH beehive probably avoids this in most cases with the 'rpmbuild --nodeps' option, to sidestep missing versioned BuildRequires. Alternatively, an occasional 'make world' to 'prime' the buildsystem after major changes is a sane and easy workaround.
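As a concrete sketch of why plain 'make'-style ordering breaks down here, a topological sort of the BuildRequires graph surfaces the blocking cycle directly. This uses Python's stdlib graphlib purely for illustration, and the krb5/openssl pair is a made-up example of such a cycle, not a claim about those packages:

```python
from graphlib import TopologicalSorter, CycleError

def build_order(buildrequires):
    """Return (order, None) for a workable build order over the graph
    {pkg: set of pkgs that must be built first}, or (None, cycle)
    naming the circular set that blocks any such ordering."""
    try:
        return list(TopologicalSorter(buildrequires).static_order()), None
    except CycleError as err:
        return None, err.args[1]  # the offending cycle of packages

# Invented circular pair: each wants the other built first.
deps = {"openssl": {"krb5"}, "krb5": {"openssl"}, "zlib": set()}
order, cycle = build_order(deps)
```

When a cycle comes back, no ordering exists and one of the bootstrap tricks discussed below has to break it.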

The 'proper' next step for a package-based rebuilder that has completed a '--nodeps' build, which I follow, is to use that intermediate product to satisfy a blocking package; once it is building, remove the --nodeps and do a round or two more of builds of those 'bootstrap' packages in the NEW environment, to make sure there are no hidden effects.
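That sequence -- one impure --nodeps pass on a blocking package, then repeated clean rebuilds of the whole circular set -- can be written down as a schedule. This is only a sketch of the process as described above; the function name and step labels are invented:

```python
def bootstrap_plan(cycle, rounds=2):
    """Sketch the --nodeps bootstrap schedule: force-build one member
    of the circular set, install it, then rebuild every member
    normally for `rounds` passes to flush hidden effects of the
    impure first pass."""
    seed = cycle[0]  # arbitrary choice of the blocking package
    steps = [("rpmbuild --nodeps", seed), ("install", seed)]
    for _ in range(rounds):
        for pkg in cycle:
            steps.append(("rpmbuild", pkg))
            steps.append(("install", pkg))
    return steps
```

The repeated rounds are the point: a package built against the --nodeps product may differ from one built in the fully populated environment, so each member gets rebuilt until the results settle.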

This means several reinstalls in the bootstrap process -- In my infrastructure, I use a more site specific variant of my outline at:
several times a day on a lot of boxes to control my build environment. Yum has also greatly simplified my life by freeing me of autorpm and some really gnarly in house update scripts. Thanks, seth.

Another approach to circular build dependencies is to bootstrap the temporary build environment from tarballs of the blocking items, and build a differing element of the circular dependency. At the end of that process, wiping and reinstalling the build environment is still important.

I was talking with another developer at lunch earlier this week on this topic, and I know he has some thoughts on this matter as well. Also, this is an area which the cAos Linux variant will address.

Building as root is strongly disfavored, for security reasons. Even so, I understand there was an interesting patch for remounting a chroot into a build filesystem over top of /, to avoid the false security of vserver and UML build pools; my lunch partner was enthusiastic about that patch. There is, of course, the ability for root to evade chroot -- see man 2 chroot, and contemplate the section containing: "super-user can escape". There was an LJ article on this a couple of years back.

From man 8 mount:
Since Linux 2.4.0 it is possible to remount part of the file hierarchy somewhere else.
By using "mount --bind" to establish a secure jail, we can avoid the problems with chroot (i.e., as noted above, if root, chroot can be escaped with relative paths). As I understand this approach, almost all the pieces are in current kernels, such that a small change to the clone system call can and will provide a mechanism better than chroot, essentially
mount --bind /path/to/chroot/ /
When done as a side effect of clone(2) [clone(2) == fork(2) in an NPTL-aware, O(1) scheduler variant], such an overmount would then be inherited by all children, and one could safely build as root.

The present three canonical builder approaches (and a couple others) are to:

(1) ptrace(2) intercepts -- run the builder in an InDependence-like wrapper, and note the dependencies (InDependence has not aged well, and needs its dependency parsing cleaned up),

(2) LD_PRELOAD intercepts (like fakeroot) -- hook in a library-call watcher for each open and inventory what is called,

(3) open(2) intercepts -- Poldek mixes this with 'mount --bind' into a build environment essentially trapped in a loopback mount,

(4, 5) chrooted and vserver-ish variants; this approach is to trust, and to build in a polite environment (no potentially hostile content, and no coding or build artifacts where something like autoconf or some other tool uses relative ../ paths to reach outside the chroot): build as root in a clean chroot image, or build as root in a vserver/UML and post-process (diff) the post-build image, noting changed atimes for functional dependency determination;

[It seems to me that Thomas VS's 'mach' works this way -- but 'mach' requires an admin to trust sudo more than I do, and the mach system assumes that an 'rpm -e' is a complete inverse operation to installing a temporary build requirement. I know that this is not a safe assumption, for it depends on the skill of the upstream packager in the uninstall post-script area -- not a well tested or defined area.]
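The post-build diff in approach (4, 5) amounts to snapshotting file access times before and after the build and reporting what moved. A rough sketch in Python (real builds would need atime updates enabled on the filesystem, and the helper names here are mine):

```python
import os

def snapshot_atimes(root):
    """Record the access time of every file under `root`."""
    times = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            times[path] = os.stat(path).st_atime_ns
    return times

def touched_files(before, after):
    """Files whose atime moved between snapshots: a rough
    functional-dependency trace of what the build actually read."""
    return sorted(p for p, t in after.items() if before.get(p) != t)
```

Snapshot the clean chroot, run the build, snapshot again; the diff is the list of headers, libraries, and tools the build genuinely touched.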

Caveat, amplified from a reviewer's comment: again, just having reliable build dependencies is not necessarily a sign of a reliable build system -- the bloat issue and, more importantly, performance issues raise their heads.

The timings report comparing Gentoo vs. RH, Debian, and SuSE was most clear. Anyone can compile (build) a distribution; making it work, and work stably, and perform well, are completely different issues from compiling with local processor optimizations.

I am certain (well, I hope) the Gentoo folks will turn (or have already turned) to library ordering and optimization for speed. The Linux breed will benefit. I respect their willingness to re-examine assumptions, and their enthusiasm.

> An interesting mail from Russ Herrold regarding "reproduceable builds":
> (What is trpm? I found that it is in /usr/lib/rpm/ but did not yet have
> time to look at this script to find out what it actually does ...)

trpm is JBJ's testing shim to set up a chrooted test environment with defined package sets present, for testing item-X in a given release/version -- I have some notes, but it is really an Emacs-ish thought macro for doing defined tasks well, quickly, and repeatably.

I had corresponded privately with him on trpm back in February, and indeed had generated my own documentation on trpm, for using it with more facility.

One of JBJ's comments on it was:

> Hmmm, the only redeeming feature of trpm is the globs that
> match packages in sub-sets that are known to be closed w.r.t.
> dependencies.

And of course, this is really a different problem than using a build system. In response, I had written a tool to reverse these out by walking the Requires/Provides tree. See, e.g.,
for a copy of Leonard den Ottolander's <leonardjo> LGPL snippet as well. I manually solved the set for the early cAos ISO at:
for the script. The cAos goals mesh well with those of this list; come join the fun at:
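The Requires/Provides walk behind such a tool is a simple transitive closure: start from seed packages and keep pulling in whichever package provides each unsatisfied Require until nothing new appears. A toy sketch, simplified to one provider per capability; the package and capability names are invented:

```python
def dependency_closure(seeds, requires, provides):
    """Walk the Requires/Provides tree from `seeds` until the set is
    closed w.r.t. dependencies.
    requires: {pkg: set of capability names it Requires}
    provides: {capability: pkg}  (simplified: one provider each)"""
    closed, frontier = set(), list(seeds)
    while frontier:
        pkg = frontier.pop()
        if pkg in closed:
            continue
        closed.add(pkg)
        for cap in requires.get(pkg, ()):
            provider = provides.get(cap)
            if provider and provider not in closed:
                frontier.append(provider)
    return closed
```

The result is exactly the kind of dependency-closed sub-set JBJ's globs pick out by hand, computed instead of curated.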

Several folks on this list are already there. Like the guy whose comment I quote in a moment ;)

> > > skvidal observed:
> > > The build system that yellowdog uses. But I think it,
> > > like beehive, is considered too important to release
> > > sort of thing so it might not ever see the light of day.
> > >
> > > no harm in asking, though.

As I said before, I think the reason is more benign. The reason that many large-scale builders are shy about releasing their buildfarm code is that it so resembles the process of making sausage.

I mentioned earlier in this post the importance of setting design goals for a build system implementation; the Gentoo timings demonstrate that just building is not enough.

As the old saw goes, the ISO stack really has 9 layers -- and 8 and 9 are Finance and Politics. That is part of the reason for this list; cAos and seth's univ-linux are responses to the rational decision by RH that it cannot subsidize the whole world forever -- with boxed sets and an overly long support 'tail' -- and still meet its fiduciary duties to its shareholders, keep the lights on, and keep the staff showing up for work.

Are they right? Who knows? A healthy and mature ecology of build systems (I did not yet even mention Dag's and Conectiva's and Mandrake's) seems to me to be a 'Good Thing' and so I went on, perhaps too long, with this piece.

There is, and can be, no one size fits all, all singing, all dancing, build system, OSS or not, basically because the problem space is fluidly defined.

-- Russ Herrold

(large parts of this are culled from my development notes, and mailing list participation, and some private correspondence -- any remaining error is solely my own poor re-expression - RPH)

rhel-rebuild mailing list
Hosted at the University of Innsbruck, Austria