dependencies of a port:
neededlibs.sh
Extract direct library dependencies (filenames) from binaries.
resolveportsfromlibs.sh
Print the name(s) of the port(s) given a library filename,
suitable for direct use (copy&paste) in LIB_DEPENDS.
Example usage is included in the scripts. The following combined usage may
be helpful for further porting/testing automation:
resolveportsfromlibs.sh -b /usr/local $(neededlibs.sh /test/bin/*)
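For illustration, the extraction side can be approximated with a few lines
of sh (a sketch only, assuming ELF binaries and objdump(1); the real
neededlibs.sh may differ):

  # Print the unique NEEDED shared-library filenames of the given binaries.
  # Hypothetical sketch; not the actual neededlibs.sh code.
  for bin in "$@"; do
      objdump -p "$bin" 2>/dev/null | awk '/NEEDED/ { print $2 }'
  done | sort -u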
Requested by: kris, lofi (sort of)
bsd.commands.mk and can be easily reused within the infrastructure.
- Revert old DESTDIR implementation.
- Add a new, fully chrooted DESTDIR implementation as bsd.destdir.mk.
Sponsored by: Google Summer of Code 2007
Approved by: portmgr (pav)
zfs:
* Enabled by use_zfs=1 in portbuild.conf
* Populate build chroots by cloning a zfs snapshot instead of maintaining
many duplicate copies. In principle this is very efficient since
everything is copy-on-write and zfs snapshot creation is almost
instantaneous. There might be additional overhead from building on zfs,
though. Currently the snapshot base is hard-wired to y/${branch}@base
but should be parameterized. It must also be populated beforehand, e.g.
during machine startup (see the sketch below).
* Clean build chroots by just destroying the snapshot.
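For example, populating and cleaning such a chroot might look roughly like
this (a sketch only; the clone name and variables are assumptions):

  # Create a per-build clone from the pre-populated base snapshot ...
  zfs clone y/${branch}@base y/${branch}-${buildid}
  # ... and throw the whole clone away once the build is finished.
  zfs destroy y/${branch}-${buildid}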
tmpfs:
* Enabled by use_tmpfs=1 and tmpfs_size in portbuild.conf
* The previous md strategy of mounting in used/, populating and then
remounting (to avoid possible races from multiple builds claiming the
same chroot) doesn't work here because tmpfs instances are destroyed at
umount. I am not entirely sure the simpler approach is free from races.
order to run certain host binaries that were kernel-dependent. We
now seem to be able to rely on the /rescue versions (and killall(1)
seems to be unused).
* Allow for ccache directories to be shared over NFS via the ccache_dir_nfs
portbuild.conf boolean
* Populate BSD.local.dist from ${PORTSDIR}/Templates and remove population
of BSD.x11-4.dist and support for XFree86 3.x
machine with the lowest number of running jobs. This worked when the
clients were all roughly equivalent, but it schedules poorly when there
are some that are much more powerful (e.g. 8-core machines vs. UP machines).
* We now compute the ratio of running jobs to maximum jobs and schedule on
the machine with the lowest occupation fraction. This fills the machines
to equal fractions of their capacity.
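A minimal sketch of the selection logic (variable and file names here are
illustrative, not the ones actually used by the dispatch scripts):

  # Pick the machine with the lowest running/maximum job ratio.
  best=""
  bestfrac=1000
  for mach in ${machines}; do
      running=$(cat ${scratchdir}/${mach}/running 2>/dev/null || echo 0)
      maxjobs=$(cat ${scratchdir}/${mach}/maxjobs)
      # Scale by 100 to stay in integer arithmetic.
      frac=$(( running * 100 / maxjobs ))
      if [ ${frac} -lt ${bestfrac} ]; then
          bestfrac=${frac}
          best=${mach}
      fi
  done
  echo "dispatching to ${best} (occupation ${bestfrac}%)"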
* Only hardlink the old log files, not everything else that might be
in the directories
* Add comment that old logfiles should be removed as well as packages, to
avoid duplicate versions of the same port log
do it in portbuild from outside the jail these days
* Ignore /var/db/fontconfig which does not get restored to pristine state
* Save copies of master.passwd and groups and check them after the build
for changes, to look for user/group additions that may not be correctly
registered in UIDs/GIDs. Future work will hopefully automatically
check against those files and make unregistered IDs a fatal condition
* Correct a logic mistake that kept distfiles for collection when
the checksum did not match
with very long arguments (>400000 characters).
The problem shows up, for example, when
/usr/ports/Tools/scripts/rmport -d print/ghostscript-gnu
is executed - it runs
printf "%s\n" "... 451109 chars ..."
Spotted by: rafan
packages due to packages being trimmed by RESTRICTED.
While here, note that the 'missing' column will be off by the number of
duplicates in the other columns. This happens when partial builds are
restarted.
on a machine that has use_md_swap=1, allow for the possibility of reusing
an md between builds if md_persistent=1. This requires a patch from pjd
to support BIO_DELETE in md devices, but it is a big optimization when
it can be used.
There is no change in any of the individual terms; this is merely a
rearrangement.
This change undoes what I was trying to do back in 2004: breaking up
each individual test into its own grep, for readability. The performance of
the script has continued to suffer as new greps were added over time,
to the point where this is now a bad tradeoff.
directories, but a 5% loss on smaller ones.
No code changes (yet) except for the deletion of one duplicate
("fetch: transfer timed out" -> "fetch_timeout".
and continue with removal anyway. Requested by miwi@
* Pipe dependency information (if any) through a PAGER, because INDEX lines
are very long and hard to read when wrapped
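Roughly (a sketch; ${deps} is an illustrative variable name):

  # Long INDEX-style dependency lines read much better in a pager.
  [ -n "${deps}" ] && echo "${deps}" | ${PAGER:-more}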
This script can sometimes take several hours to run on a builder,
which can lead to confusion about why it still reports an error
that was fixed in cvs some time ago. Including the time when
the ports tree was updated should reduce some of this confusion.
to be removed, possibly with the expiration date and deprecation reason
* If the port is not marked for expiration, then put "Removed" in the
ports/MOVED entry instead of "Has expired"
* Implement -a option to remove all expired ports
* Ask if the cvs diff output should be recreated/reviewed again, thus
giving the committer a chance to edit files by hand and view the diff
results afterwards
* Cosmetic changes
* Add more XXX comments for future work
* Only record a cvsdone timestamp if we updated cvs
* When building with -trybroken, it's safe (and desirable) to run the
prunefailure script
* Reorganise a few things for better parallelism
* Instead of keeping a duplicate copy of the previous logs and errors
under bak/, just store a symlink to the archival location
* When doing an incremental build, also cycle out the old logs to avoid
broken links on the website (the logs from the previous build are
removed until the packages are rebuilt). Use cpio to create
hardlinked copies of the previous logs (see the sketch after this
list). XXX when these are bzipped by cron to save space the links
will be broken and it might actually take more space.
* Don't bother bunzipping old logs, now that the processlogs scripts
can handle it. This was a waste of time anyway since they'd all be
rebzipped by the next nightly cron job.
* When the build is complete, stash a copy of the restricted ports in
bak/restricted/ before deleting them from packages/, and restore from
here when doing an incremental build to avoid needlessly rebuilding
them each time.
* Increase sparc64 build timeout to 24 hours (we have so few build
machines that we cannot afford to tie them up for longer)
* Increase other arch build timeout to 100 hours (hello openoffice!)
* If we successfully build a formerly broken package, touch errors/.force
which will kick off a rebuild of the html files
* Use a generation number for the bindist tarballs, with a compatibility
symlink. Eventually we'll use this to avoid building in a "stale"
chroot (i.e. one populated by an old world).
* Don't bother running ldconfig on i386; it is evidently not needed since
the other arches work fine without it
* Don't try to mount/umount procfs; it won't work when we build inside a
jail.
* Report the uname -mr of the build environment, to reduce confusion for
people reading the error logs by mail.
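The hardlinked log copies mentioned above can be created with cpio in
pass-through mode, roughly as follows (paths are illustrative):

  # Preserve the previous build's logs as hard links rather than copies.
  cd ${builddir}/logs && find . | cpio -dumpl ${builddir}/bak/logs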
This commit should largely be a NOOP as it only adds support
for DESTDIR undefined. This does allow us to start testing
ports with DESTDIR set, but that is not yet supported.
Although this has been extensively tested on pointyhat, this
is a very intrusive change and some cases may have been
overlooked. Please contact Gabor and me if you find any.
PR: 100555
Submitted by: gabor
Sponsored by: Google Summer of Code 2006
${FILESDIR} which look like patches be treated as binary files. This
prevents RCS tags in patch fragments from causing problems for CVS.
Approved by: garga (maintainer),
ahze (mentor, implicit)
- Remove obsolete explanations which are no longer seen, for speed:
ELF, MOTIF, MOTIFLIB, X_manpage, awk, bison, ffs_conflict, forbidden,
getopt, getopt.h, imake, lc_r, malloc.h, pod2man, sed, stl, soundcard.h,
texinfo, union_wait, values.h
- Add more cases to: arch, bad_c++, compiler_error, depend_object,
install_error, linker_error, mtree, perl5
These changes reduce many dozens of false positives; add a few dozen
true positives; and, for certain directories, improve the speed by about 10%
(a few drop by 15%).
It turns out that the performance issues are mainly due to the multiple
greps. If performance is an issue we need to go back to the moderately-
unreadable, everything-on-one-line paradigm. Before that happens, I would
like to experiment with some refactoring, so that the patterns are built up
in the shell line-by-line and would thus still be readable.
Tested on: pointyhat
Hat: portmgr
When copying INDEX to the server, copy it first to a staging area and
only then to the real location. The copying can take long enough for
users to get a truncated file when downloading during the upload.
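A sketch of the staged copy (host and path names are made up for
illustration):

  # Upload under a temporary name, then rename into place so downloaders
  # never see a partially written INDEX.
  scp INDEX ${wwwhost}:${wwwdir}/INDEX.new && \
      ssh ${wwwhost} mv ${wwwdir}/INDEX.new ${wwwdir}/INDEX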
time to add a new module. If you still want to use the old way, just use
the "-M freefall.FreeBSD.org" option
- Take addport maintainership
- When modulesupdate fails, ask the user to retry
- Change modulesupdate to work properly with addport
Approved by: will (maintainer)
When removing category/port, look whether other ports' Makefiles contain
`/port' rather than `category/port', since the latter misses things
like `${.CURDIR}/../port'
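In other words, search along these lines (a sketch; the real logic in the
script may differ):

  # Catch references like ${.CURDIR}/../port as well as category/port.
  cd ${PORTSDIR} && find . -name Makefile | xargs grep -l "/${port}"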
script, i.e. so they can be moved back into place before starting the next
incremental build, so they won't be needlessly rebuilt every time (jdk, I'm
looking at you). It is a bit of a hack since it relies on assumptions
about the structure of that shell script, but for now it's the best we
can do.
server. Error conditions are flagged by other processes by creating
a named dotfile in ${scratchdir}. If these files are found, report the
error status instead of the number of running jobs. Currently report "ERR"
for all error conditions; I will probably change this to a per-condition
message.
Currently only "squid not running" and "disk space low" conditions are
reported.
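A sketch of how the check might look on the reportload side (the dotfile
names are assumptions):

  # If another process has flagged an error condition, report that
  # instead of the current job count.
  if [ -e ${scratchdir}/.squid_error -o -e ${scratchdir}/.disk_error ]; then
      echo "ERR"
  else
      echo ${numjobs}
  fi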
If the package copy fails, bail out immediately instead of later on when
we try to pkg_add it. Also trap signals and bail out.
Both conditions will cause a retry of the package build.
If portbuild bailed out unexpectedly, mail the log to ${mailto}.
Add some XXX comments about improving robustness of this script.
Sleep for 2 minutes before retrying builds, to avoid spamming ${mailto}
with a high rate of failure logs. In future we might be smarter about
attempting to automatically correct common failure modes.
* Test whether squid is running. If not, try to kick off
the rc script in the background in case it can be restarted
cleanly.
* Test for at least 100MB of free space on the scratch partition.
If either condition fails, set an exception flag and bail out. This
will be reported back to the server via reportload.
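Roughly (a sketch only; the flag file names and the squid rc script path
are assumptions):

  # Flag an exception if squid is down or scratch space is low.
  if ! pgrep squid > /dev/null; then
      /usr/local/etc/rc.d/squid start &
      touch ${scratchdir}/.squid_error
      exit 1
  fi
  # Available space on the scratch partition, in megabytes.
  free=$(df -m ${scratchdir} | awk 'NR == 2 { print $4 }')
  if [ ${free} -lt 100 ]; then
      touch ${scratchdir}/.disk_error
      exit 1
  fi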
cause is because it was specified in the list twice)
* Don't panic when the list of packages to delete becomes empty
* When unexpected filesystem changes are detected, bail immediately
instead of proceeding and hiding the error in the middle of
the log
with all the errors from broken pkg_delete scripts
* As threatened in the previous commit, move the pristine mtree spec
generation to phase 1, and avoid having to delete and re-add the
FETCH_DEPENDS. We still have to keep them installed until after
'make extract', though
arguments (cosmetic)
* Detect if a chroot was used to run a jailed build, and first attempt
to gracefully shut it down by killing everything within using pgrep(1).
This has a much higher chance of succeeding than relying on fstat to
identify processes that might interfere with our attempts to clean up
mountpoints, which is fragile (libkvm-dependent), and inherently
unreliable at best.
in portbuild.conf (or per-machine .conf), then construct a 127.0.0.0/8
IP address based on the build directory ID (i.e. unique for each
build instance). This is bound to the lo0 interface for the duration
of the 'phase 2' build.
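For example (a sketch; the mapping of build ID to address and the variable
names are illustrative):

  # Derive a loopback address from the numeric build directory ID and
  # attach it to lo0 for the duration of the phase 2 build.
  ip="127.0.0.$(( builddirid % 254 + 1 ))"
  ifconfig lo0 alias ${ip} netmask 255.255.255.255
  # ... run the jailed phase 2 build bound to ${ip} ...
  ifconfig lo0 -alias ${ip}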
We cannot build 'phase 1' in a jail since 'make fetch' doesn't always
work through a proxy (e.g. squid sometimes mangles files fetched through
FTP, I think by performing CR/LF translation in FTP ASCII mode).
Pass in the HTTP_PROXY variable to the jail, if set. This allows FTP/HTTP
access from within the jail if the proxy is suitably configured (some ports
legitimately need to fetch additional files during the build, e.g. if they
have a BUILD_DEPENDS=...:configure target that needs to fetch additional
distfiles).
Not all ports can be built in jails (most notably the linux_base ports
since they want to mount/umount linprocfs), so we will need to come up
with a way to deal with this.
Some ports require SYSV IPC, so security.jail.sysvipc_allowed=1 might be
required. Some other ports attempt to perform DNS lookups, ping, or
outbound TCP connections during the build.
When it works, this provides better compartmentalization of package builds,
e.g. easier termination of builds without the possibility of daemonized
processes staying active; no possibility of accidental interference
between jails, etc. It also allows for admin monitoring using jls(1).
* Remove old logs and possible compressed logs before attempting the build
Requested by: lofi [1]
Submitted by: linimon [1]
No more accidental portbuild spam: kris and krion [1]
* Only keep distfiles if the port passes 'make fetch', so we don't
accidentally keep files with invalid checksums
* Use cleanup() instead of directly exiting in some error conditions
* When cleanup() is called indicating an unexpected error (possibly
leaving the filesystem in an inconsistent state), mark the chroot
as dirty so it will not be reused by another build
* Remove packages in dependency order instead of with pkg_delete -f in
possibly incorrect order. This paves the way for focusing on errors
generated by pkg_delete (e.g. @dirrm that should be @dirrmtry) in the
future. [1]
* Detect when packages were left behind because they were still in use
by other packages, indicating an incorrect or incomplete port
dependency list
* Partial support for ccache builds (not yet complete)
* Support non-standard LOCALBASE/X11BASE settings
* Delete FETCH_DEPENDS after the 'make fetch' stage. We have to add
them again before 'make extract' since, due to a lack of a 'fetch
cookie', 'make extract' actually *always* runs 'make fetch' again,
even when distfiles have already been fetched. We need to delete
them in order to:
* Record an mtree spec of the 'pristine' filesystem state, for later
comparison (see the mtree sketch after this list).
# XXX Perhaps this can be done in stage 1 before the
# 'make fetch', removing the need to delete-and-readd.
* Also record an mtree spec of the filesystem state prior to the
build phase. Compare this to the state of the filesystem
immediately before running the install phase, to detect files
that were inappropriately installed during the build phase.
Doing so is a fatal error.
* Prior to installing, try to run a 'regression-test' port makefile
target, if it exists. This allows ports to hook their internal
regression suites into the package build. This needs further
infrastructure support, e.g. a default NOP target in bsd.port.mk.
For now this is run with 'make -k', so regression failures will
not yet actually cause package build failures.
* Separate the 'make install' from 'make package' phases rather than
let the latter implicitly do the install.
* After the newly packaged port has been deleted, compare the state
of the filesystem to the state before 'make install'.
* After removing BUILD and RUN dependencies, compare the filesystem
state to the pristine state before the start of the build. This
also detects package dependencies that did not clean themselves up
properly when deinstalling. It also detects dependencies that were
'missing' from the port INDEX: these were not pkg_added into place,
so the package build had to compile them from scratch (a big waste
of time and effort); this is now also a fatal error.
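The mtree comparisons described above boil down to something like the
following (a sketch; the keyword list and file names are illustrative):

  # Record the pristine state of the chroot ...
  mtree -c -k uid,gid,mode,size,md5digest -p ${chroot} \
      > ${tmpdir}/pristine.mtree
  # ... and later report anything that appeared, changed or vanished.
  mtree -f ${tmpdir}/pristine.mtree -p ${chroot}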
PR: ports/85746 (inspired by) [1]
Submitted by: Boris B. Samorodov <bsam@ipt.ru> [1]