Discussion:
New FreeBSD package system (a.k.a. Daemon Package System (dps))
David Naylor
2007-05-10 19:05:21 UTC
Permalink
Dear Jordan

Recently I stumbled across a document you wrote in 2001, entitled "FreeBSD
installation and package tools, past, present and future". I find FreeBSD
appealing and I would like to contribute to its success, and as your article
describes, the installation and packaging system is lacking. Since the
installation system is being tackled under a SoC project I am hoping to give
the packaging system a go.

I was hoping you could help me with an update about the situation with pkg. I
have searched the FreeBSD mailing lists and have found little information on
the package system. Once I have a (much more) complete understanding of the
packaging system (and providing there is work to be done) I would like to
write up a proposal to solve the problems, and perhaps provide some
innovative new capabilities.

After that I will gladly contribute what I can to this (possible) project and
hopefully further and improve FreeBSD. Any assistance or information you can
give will be greatly appreciated.

I look forward to your reply.

David
Kris Kennaway
2007-05-11 19:10:57 UTC
Permalink
Post by Mike Meyer
Post by Kris Kennaway
The point is that the real problem is: "how do you arrange the bits on
disk", not "how do you wrap that in a package system". Until you
figure out a workable on-disk arrangement for the files, questions
about packaging are not relevant. And it seems that option 1 is the
only workable one in practice (unless you have some other idea), which
can easily be achieved today with a couple of hours of kernel hacking.
There are clearly other workable ideas - as I said, the linux folks
managed to make it work. But it's not an easy problem. I certainly
wouldn't suggest rebuilding the packaging system to deal with this,
except as part of a larger effort. On the other hand, since people are
working on the ports/package system (I see port/pkg database and some
ports infrastructure work in the current SoC projects list), not
keeping this goal in mind would seem to be a bit short-sighted. I
wouldn't be surprised if your option #1 could benefit from this as
well.
But you said you were interested in working on it...so what is your
idea?

Kris
Eric Anderson
2007-05-12 01:19:51 UTC
Permalink
Post by David Naylor
Dear Jordan
Recently I stumbled across a document you wrote in 2001, entitled "FreeBSD
installation and package tools, past, present and future". I find FreeBSD
appealing and I would like to contribute to its success, and as your article
describes, the installation and packaging system is lacking. Since the
installation system is being tackled under a SoC project I am hoping to give
the packaging system a go.
I've just read the document, and since I was also thinking about the same issues, here are some proposals:
- I think it's time to give up on using BDB+directory tree full of text
files for storing the installed packages database, and I propose all of
this be replaced by a single SQLite database. SQLite is public domain
(can be slurped into base system), embeddable, stores all data in a
single file, lightweight, fast, and can be used to do fancy things such
as reporting. The current pkg_info's behaviour, taking *noticeable*
*time* to generate 1 line of output per package, is horrible in this day
and age. And the upcoming X11/X.Org 7.2 with circa 400 packages (which is
in itself horrible just for an X11 system) will make it unbearable (as will
portupgrade, which already looks slower by the day).
I don't think it would be a good idea to use SQLite for this purpose.
First of all, using the file system is the Unix way of doing things. It's
efficient and easy to use; it's transparent and user friendly. You can
simply run vi to inspect a text file, but you can't do this with an
sqlite database. You have to learn sqlite to do it.
Furthermore I don't think the pkg_* tools are slow. They are quite fast
IMO. If you let pkg_info print the entire list of installed ports it's
only slow because of your line-buffered console. Just redirect the
output to a file and you'll see that it's blazing fast. If I compare it
Hmm.. Tried to stay out of this whole 'discussion', but, I just disagree
here flat out.

# time pkg_info > /dev/null

real 0m16.952s
user 0m0.751s
sys 0m0.163s

During that entire time, my hard disk was being hammered. Blazing
fast.. ? Maybe I need a 15k RPM SCSI disk, or something.

Anyway, to me, a file system *IS* a database, and if it's slow, it
probably could be used better, and get much better performance. I've
seen some very ingenious file system layouts that have several million
records easily used on slow systems, and it works pretty well.

[..snip warm fuzzies..]
I think the best way to go would be to use only folder hierarchies and
text files and write a library in C that provides portupgrade
functionality. The code under src/usr.sbin/pkg_install/lib/ would be a
good base for this. Then you could use a frontend program that makes use
of this library. This frontend could be a CLI program or a GUI based
program.
That's not a bad idea, if it isn't already available somewhere.

Eric
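
[For concreteness, here is a minimal sketch of what such a C library boundary might look like, assuming the current /var/db/pkg layout (one directory per package, with a one-line +COMMENT file inside). The names (pkgdb_foreach and friends) are made up for illustration, not an existing API:

/*
 * Hypothetical "libpkg" front door over the /var/db/pkg text tree.
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

#define PKG_DBDIR "/var/db/pkg"

typedef int (*pkgdb_cb)(const char *name, const char *comment, void *arg);

/* Walk every installed package and hand its name and one-line
 * +COMMENT to the callback; return -1 if the database is missing. */
static int
pkgdb_foreach(pkgdb_cb cb, void *arg)
{
	DIR *d = opendir(PKG_DBDIR);
	struct dirent *de;
	char path[1024], comment[256];
	FILE *f;

	if (d == NULL)
		return (-1);
	while ((de = readdir(d)) != NULL) {
		if (de->d_name[0] == '.')
			continue;
		snprintf(path, sizeof(path), "%s/%s/+COMMENT",
		    PKG_DBDIR, de->d_name);
		comment[0] = '\0';
		if ((f = fopen(path, "r")) != NULL) {
			if (fgets(comment, sizeof(comment), f) != NULL)
				comment[strcspn(comment, "\n")] = '\0';
			fclose(f);
		}
		if (cb(de->d_name, comment, arg) != 0)
			break;
	}
	closedir(d);
	return (0);
}

/* Example frontend: a stripped-down pkg_info(1). */
static int
print_pkg(const char *name, const char *comment, void *arg)
{
	(void)arg;
	printf("%-30s %s\n", name, comment);
	return (0);
}

int
main(void)
{
	return (pkgdb_foreach(print_pkg, NULL) == 0 ? 0 : 1);
}

A CLI or GUI frontend would then link against the library instead of re-reading the tree itself.]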
Mike Meyer
2007-05-12 04:55:44 UTC
Permalink
Post by Kris Kennaway
Post by Mike Meyer
There are clearly other workable ideas - as I said, the linux folks
managed to make it work. But it's not an easy problem. I certainly
wouldn't suggest rebuilding the packaging system to deal with this,
except as part of a larger effort. On the other hand, since people are
working on the ports/package system (I see port/pkg database and some
ports infrastructure work in the current SoC projects list), not
keeping this goal in mind would seem to be a bit short-sighted. I
wouldn't be surprised if your option #1 could benefit from this as
well.
But you said you were interested in working on it...so what is your
idea?
Actually, I said it's on my list of things to do. That's a rather long
list. I'm more interested in port building, but the dependencies to
that one look like:

1) Get a gcc build whose driver correctly handles "-m32" for
cross-compiling. The system gcc can do cross-compilation, but the
driver doesn't know where the various bits it needs to link the
generated objects with live, so you have to tell it about those
things by hand. Possibly one of the versions in ports does this,
but it would be nice if the one in the base system did this. I
believe it's mostly a matter of properly configuring the gcc build.

2) Rework the ports dependency/conflict detection facilities. Mostly,
figuring out the architecture of the current build vs. any
previously installed packages or ports it depends on or conflicts
with. There are also some things that could be done to make the
dependencies more reliable, like auto-detecting shared library
dependencies and adding those. Having the target platform for an
installed package in the package database would be a nice jump on
this.

3) Now we get to the interesting part: figuring out where the bits for
the cross-installation are going to land. A lot depends on how far
apart we want to keep the two worlds. I'd like them to be
integrated, as the point of being able to install 32-bit binaries
on a 64-bit system is to run applications that aren't yet 32-bit
clean, so the only collisions should be shared libraries common to
applications from both sets. A lot will depend on how badly things
collide. The other factor is that I'd like the build platform to be
transparent to the end user: you should be able to build 32-bit
packages on either 32 or 64 bit platforms, and use them on
either. This ties back to the "How do we figure out where our
libraries are" issue. I don't know if all these goals can be met.

4) Once that's done, start working on getting cross-building and
installing ports working. This is a long way from any work being
done on it, so I don't have any details. Doing step #2 would help
with fleshing out these steps.

5) Now make building packages from those ports and installing
from those packages work.

<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Tim Kientzle
2007-05-12 05:46:06 UTC
Permalink
Post by Joerg Sonnenberger
Post by Tim Kientzle
3) As DES pointed out, the package tools must be able
to read the metadata before the files.
Actually, the argument is pretty weak. Being able to extract them
in a streaming fashion and access the meta-data easily is fine. The remote access
argument is very weak as it doesn't allow e.g. signature checks.
I presume you mean that you have to scan the entire
package to verify the signature before doing installation?

I don't think you do, really. If you can roll back an
installation, then you can verify the signature during
a streaming install; if the signature fails, you roll back.
A good package installer needs to support rollback anyway
to do robust dependency handling.

I know two relatively straightforward ways to structure
the installation process to support rollback. <sigh>
So many ideas, so little time... ;-)

Tim Kientzle
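
[For illustration, here is one minimal way such a rollback could be structured, as a C sketch under simplifying assumptions (regular files only, nothing being overwritten); it is not Tim's actual design. Every path written during the streaming install is journaled, and a failed trailing signature check replays the journal backwards:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct journal {
	char	**paths;
	size_t	  n, cap;
};

static int
journal_add(struct journal *j, const char *path)
{
	if (j->n == j->cap) {
		size_t ncap = j->cap ? j->cap * 2 : 16;
		char **np = realloc(j->paths, ncap * sizeof(*np));
		if (np == NULL)
			return (-1);
		j->paths = np;
		j->cap = ncap;
	}
	if ((j->paths[j->n] = strdup(path)) == NULL)
		return (-1);
	j->n++;
	return (0);
}

/* Undo everything installed so far, newest first. */
static void
journal_rollback(struct journal *j)
{
	while (j->n > 0) {
		j->n--;
		(void)unlink(j->paths[j->n]);
		free(j->paths[j->n]);
	}
}

/* "Install" one file and remember it in the journal. */
static int
install_file(struct journal *j, const char *path, const char *data)
{
	FILE *f = fopen(path, "w");

	if (f == NULL)
		return (-1);
	fputs(data, f);
	fclose(f);
	return (journal_add(j, path));
}

int
main(void)
{
	struct journal j = { NULL, 0, 0 };
	int sig_ok = 0;	/* imagine the trailing signature turned out bad */

	if (install_file(&j, "/tmp/demo.bin", "payload\n") != 0 ||
	    install_file(&j, "/tmp/demo.conf", "key=value\n") != 0 ||
	    !sig_ok) {
		journal_rollback(&j);
		fprintf(stderr, "install rolled back\n");
		return (1);
	}
	return (0);
}

A real implementation would also have to restore files it replaced and handle directories, which is where most of the real complexity lives.]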
Philippe Laquet
2007-05-12 12:14:39 UTC
Permalink
On Fri, 11 May 2007 02:10:05 +0200
- I think it's time to give up on using BDB+directory tree full of text
files for storing the installed packages database, and I propose all of
this be replaced by a single SQLite database. SQLite is public domain
(can be slurped into base system), embeddable, stores all data in a
single file, lightweight, fast, and can be used to do fancy things such
as reporting.
What is the reason to use SQL-based database? You'll perform direct
queries to database? The packaging system is for ordinary users, not sql
geeks, so they should not have to use sql for managing packages. So a
simple set of hashes will suffice our needs. I agree with Julian that we
should have a backup of packaging database in plain text format, and
utility to rebuild it. This way we can always restore the database if
something goes wrong. Furthermore, that should not make a great impact
on performance, since we don't have to rebuild it every day.
I agree with Stan ;)

"fast and improved" package utilities uses mainly some indexed berkeley
DB combined with flat files, aren't they? I, and may be many other
FreeBSD users use light systems for efficiency and eaiser management, if
we use some database system it will require Disk Space, ressources for
the DB to run, dependencies and so on... And we also may be exposed to a
"that DB is better" war ;)
- A quick test confirms that the current bsdtar will happily ignore any
extra data at the end of a tgz/tbz archive, so package metadata can be
embedded there, thus conserving existing infrastructure and being fast
to parse. I suggest encoding this metadata in a sane and easy to parse
XML structure.
I cannot currently actively participate in implementing proposed things,
but I can give advice on sqlite, database and xml schemas if anyone
wants to...
Why use XML for that? It's a hard-to-parse and hard-to-read format, and I
personally see no benefit in using it. If you're suggesting XML, a
simple bracket-structure format (like bind's config) will fit our needs
much better (easier to parse and read, with the same benefits as XML). Also,
we might consider YAML, though I like this idea much less.
XML could be an alternative for describing packages; it can be parsed with
some limited dependencies like PERL. The userland tools to manage
packages could be based on that language: it is well known by many
users, quite simple, and required by many other packages, so the whole system
won't be much heavier. Couldn't PERL's XML parser be a good choice?

* PERL-DB for managing packages databases
* PERL-XML for parsing categories, dependencies ...

PERL also gives, in most cases, good performance.

This is solely my humble opinion ;)
--
Stanislav Sedov
ST4096-RIPE
Garrett Cooper
2007-05-15 05:03:06 UTC
Permalink
Post by Bert JW Regeer
Post by Philippe Laquet
On Fri, 11 May 2007 02:10:05 +0200
- I think it's time to give up on using BDB+directory tree full of text
files for storing the installed packages database, and I propose all of
this be replaced by a single SQLite database. SQLite is public domain
(can be slurped into base system), embeddable, stores all data in a
single file, lightweight, fast, and can be used to do fancy things such
as reporting.
What is the reason to use SQL-based database? You'll perform direct
queries to database? The packaging system is for ordinary users, not sql
geeks, so they should not have to use sql for managing packages. So a
simple set of hashes will suffice our needs. I agree with Julian that we
should have a backup of packaging database in plain text format, and
utility to rebuild it. This way we can always restore the database if
something goes wrong. Furthermore, that should not make a great impact
on performance, since we don't have to rebuild it every day.
I agree with Stan ;)
"fast and improved" package utilities uses mainly some indexed
berkeley DB combined with flat files, aren't they? I, and may be many
other FreeBSD users use light systems for efficiency and easier
management, if we use some database system it will require Disk Space,
resources for the DB to run, dependencies and so on... And we also
may be exposed to a "that DB is better" war ;)
SQLite is compiled inside a program, and as such does not require any
resources other than one file handle and some CPU time when querying.
The file is stored on disk, and requires no separate process to be
running to query. Maybe I misunderstood what you were trying to say.
SQLite will require fewer resources than flat text files, since SQLite is
opened once and then processed, instead of what currently happens:
opening and closing hundreds of files, depending on how many ports
are installed. In this regard, SQLite is like BDB, except that SQLite uses
standards-compliant SQL statements to get data.
Correct. From what I was reading, shared-memory read access and locking
are two available features of BDB databases.

The only thing is that I do agree that there should be a dumping and
importing mechanism of some kind for semi-formatted text files, for
backup, debugging, and modification purposes. That's just my personal
idea on the topic though :).
Post by Bert JW Regeer
Post by Philippe Laquet
--
Stanislav Sedov
ST4096-RIPE
I am able to understand many of the gripes with using a database, and
having to import yet another code base into the FreeBSD base; however, as
one of the young ones, and knowing sed/awk/grep and SQL, I prefer SQL
over having to process hundreds of text files using text processing
tools. It saddens me each time I run one of the pkg_* tools that needs
to parse the flat file structure since it takes so long. I have friends
running Ubuntu and their apt-get returns results much faster.
In a world where hard drives are becoming more reliable, and are
automatically relocating sectors that go bad, do we really have to worry
about database corruption as much? I feel that many of the fears that
are being put forward will do harm to a text based "storage" system as
well. If one block drops out, it can cause tools to not be able to parse
the files. Create a backup copy of the database after each successful
transaction? There are ways to battle data corruption.
True. I was thinking of backup, and recreation from scratch, considering
that the database wouldn't be more than a few megs. In-place replacement
just seems like a hairy situation sometimes.
Post by Bert JW Regeer
Using BDB is not a real option either. I cannot even count the amount
of times that the BDB database that portupgrade created has become
corrupt because I accidentally ran two portupgrades at the same time, or
even remembered that I did not want to upgrade something and hit Ctrl+C.
I'm sorry but nothing's completely solid in that respect, AFAIK. In
terms of the first problem you mentioned, Wade is working on the locking
<http://wiki.freebsd.org/WadeWesolowsky>.

In terms of transactions, maybe we should take a look at Subversion for
inspiration: <http://svn.haxx.se/dev/archive-2005-03/0301.shtml>. I'm a
firm believer that it's easier to incorporate code than it is to remove it.
Post by Bert JW Regeer
The experience I got from running SVN with BDB as the back-end database
to store my data, I say no thanks. In that case I would much rather
stick with the flat text files than go with a database.
Well, a few comments:

-Text files are bloated. Although many people are for XML, it takes much
longer to parse than binary databases.
-Custom text files require custom format capable parsers, no matter what
the format, and the less coverage a parser has, the greater the
likelihood of bugs IMO.
-In the event that features changed or were added, some required
modifications to the parser could be trivial to major. With databases
you can get away from that mentality to some degree IMHO.

-Garrett
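
[As a point of reference, the SQLite side of this is genuinely small. The sketch below uses the real sqlite3 C API but a made-up single-table schema and database path (compile with -lsqlite3); the plain-text dump/import mentioned above comes for free, since "sqlite3 /var/db/pkg.db .dump" emits SQL text that recreates the whole database:

#include <stdio.h>
#include <sqlite3.h>

static int
print_row(void *arg, int ncol, char **col, char **name)
{
	(void)arg; (void)ncol; (void)name;
	printf("%-30s %s\n", col[0] ? col[0] : "?", col[1] ? col[1] : "?");
	return (0);
}

int
main(void)
{
	sqlite3 *db;
	char *err = NULL;

	if (sqlite3_open("/var/db/pkg.db", &db) != SQLITE_OK) {
		fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
		sqlite3_close(db);
		return (1);
	}
	/* Hypothetical schema; one row per installed package. */
	sqlite3_exec(db,
	    "CREATE TABLE IF NOT EXISTS packages "
	    "(name TEXT PRIMARY KEY, version TEXT, comment TEXT)",
	    NULL, NULL, &err);
	/* The pkg_info(1) equivalent becomes a single query. */
	if (sqlite3_exec(db,
	    "SELECT name || '-' || version, comment "
	    "FROM packages ORDER BY name",
	    print_row, NULL, &err) != SQLITE_OK) {
		fprintf(stderr, "query: %s\n", err);
		sqlite3_free(err);
	}
	sqlite3_close(db);
	return (0);
}
]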
Bert JW Regeer
2007-05-15 06:34:52 UTC
Permalink
Post by Garrett Cooper
Post by Bert JW Regeer
Post by Philippe Laquet
On Fri, 11 May 2007 02:10:05 +0200
- I think it's time to give up on using BDB+directory tree full of text
files for storing the installed packages database, and I
propose all of
this be replaced by a single SQLite database. SQLite is public domain
(can be slurped into base system), embeddable, stores all data in a
single file, lightweight, fast, and can be used to do fancy things such
as reporting.
What is the reason to use SQL-based database? You'll perform direct
queries to database? The packaging system is for ordinary users, not sql
geeks, so they should not have to use sql for managing packages. So a
simple set of hashes will suffice our needs. I agree with Julian that we
should have a backup of packaging database in plain text format, and
utility to rebuild it. This way we can always restore the
database if
something goes wrong. Furthermore, that should not make a great impact
on performance, since we don't have to rebuild it every day.
I agree with Stan ;)
"fast and improved" package utilities uses mainly some indexed
berkeley DB combined with flat files, aren't they? I, and may be
many other FreeBSD users use light systems for efficiency and
easier management, if we use some database system it will require
Disk Space, resources for the DB to run, dependencies and so
on... And we also may be exposed to a "that DB is better" war ;)
SQLite is compiled inside a program, and as such does not require
any resources other than one file handle and some CPU time when
querying. The file is stored on disk, and requires no separate
process to be running to query. Maybe I misunderstood what you
were trying to say. SQLite will require fewer resources than flat
text files, since SQLite is opened once and then processed, instead
of what currently happens: opening and closing hundreds
of files, depending on how many ports are installed. In this
regard, SQLite is like BDB, except that SQLite uses standards-compliant
SQL statements to get data.
Correct. From what I was reading, shared-memory read access and
locking are two available features of BDB databases.
The only thing is that I do agree that there should be a dumping
and importing mechanism of some kind for semi-formatted text files,
for backup, debugging, and modification purposes. That's just my
personal idea on the topic though :).
Post by Bert JW Regeer
Post by Philippe Laquet
--
Stanislav Sedov
ST4096-RIPE
I am able to understand many of the gripes with using a database,
and having to import yet another code base into the FreeBSD base;
however, as one of the young ones, and knowing sed/awk/grep and
SQL, I prefer SQL over having to process hundreds of text files
using text processing tools. It saddens me each time I run one of
the pkg_* tools that needs to parse the flat file structure since
it takes so long. I have friends running Ubuntu and their apt-get
returns results much faster.
In a world where hard drives are becoming more reliable, and are
automatically relocating sectors that go bad, do we really have to
worry about database corruption as much? I feel that many of the
fears that are being put forward will do harm to a text based
"storage" system as well. If one block drops out, it can cause
tools to not be able to parse the files. Create a backup copy of
the database after each successful transaction? There are ways to
battle data corruption.
True. I was thinking of backup, and recreation from scratch,
considering that the database wouldn't be more than a few megs.
In-place replacement just seems like a hairy situation sometimes.
Post by Bert JW Regeer
Using BDB is not a real option either. I cannot even count the
amount of times that the BDB database that portupgrade created has
become corrupt because I accidentally ran two portupgrades at the
same time, or even remembered that I did not want to upgrade
something and hit Ctrl+C.
I'm sorry but nothing's completely solid in that respect, AFAIK. In
terms of the first problem you mentioned, Wade is working on the
locking <http://wiki.freebsd.org/WadeWesolowsky>.
In terms of transactions, maybe we should take a look at Subversion
for inspiration: <http://svn.haxx.se/dev/archive-2005-03/0301.shtml>.
I'm a firm believer that it's easier
to incorporate code than it is to remove it.
I am unable to see any references to transaction support for BDB
databases, maybe I am missing something. Subversion in that thread is
suggesting SQL for a totally different reason. fsfs is what most
people are using as a subversion backend to help avoid BDB
corruption. Many of the people I have talked to who used to use
Subversion with BDB have had major issues, whereas fsfs has not had
any issues at all.

Just what I have experienced myself as a Subversion repository
administrator.
Post by Garrett Cooper
Post by Bert JW Regeer
The experience I got from running SVN with BDB as the back-end
database to store my data, I say no thanks. In that case I would
much rather stick with the flat text files than go with a database.
-Text files are bloated. Although many people are for XML, it takes
much longer to parse than binary databases.
/var/db/pkg/ are all plain flat text files. I am not a supporter of
XML at all.
Post by Garrett Cooper
-Custom text files require custom format capable parsers, no matter
what the format, and the less coverage a parser has, the greater
the likelihood of bugs IMO.
We already have these in the pkg_* functions, so I'd hope they are
fairly solid!
Post by Garrett Cooper
-In the event that features changed or were added, some required
modifications to the parser could be trivial to major. With
databases you can get away from that mentality to some degree IMHO.
Changing an SQL query versus re-writing a parser for text files is a
huge difference.
Post by Garrett Cooper
-Garrett
I am not opposed to text files, other than that they can be slow. I
am against BDB because, over the years, in my experience it has
proven to be extremely unreliable and easily corrupted. If we are
going to be making changes to the way the ports/packages store the
information about what exists, it should be done in such a way that
it is scalable and at the same time extensible (is this a word?).

Bert JW Regeer
Bert JW Regeer
2007-05-12 18:50:35 UTC
Permalink
Post by Philippe Laquet
On Fri, 11 May 2007 02:10:05 +0200
- I think it's time to give up on using BDB+directory tree full
of text
files for storing the installed packages database, and I propose
all of
this be replaced by a single SQLite database. SQLite is public
domain
(can be slurped into base system), embeddable, stores all data in a
single file, lightweight, fast, and can be used to do fancy
things such
as reporting.
What is the reason to use SQL-based database? You'll perform direct
queries to database? The packaging system is for ordinary users,
not sql
geeks, so they should not have to use sql for managing packages. So a
simple set of hashes will suffice our needs. I agree with Julian
that we
should have a backup of packaging database in plain text format, and
utility to rebuild it. This way we can always restore the database if
something goes wrong. Furthermore, that should not make a great
impact
on performance, since we don't have to rebuild it every day.
I agree with Stan ;)
"fast and improved" package utilities uses mainly some indexed
berkeley DB combined with flat files, aren't they? I, and may be
many other FreeBSD users use light systems for efficiency and
eaiser management, if we use some database system it will require
Disk Space, ressources for the DB to run, dependencies and so on...
And we also may be exposed to a "that DB is better" war ;)
SQLite is compiled inside a program, and as such does not require any
resources other than one file handle and some CPU time when querying.
The file is stored on disk, and requires no separate process to be
running to query. Maybe I misunderstood what you were trying to say.
SQLite will require fewer resources than flat text files, since SQLite
is opened once and then processed, instead of what currently
happens: opening and closing hundreds of files, depending on
how many ports are installed. In this regard, SQLite is like BDB,
except that SQLite uses standards-compliant SQL statements to get data.
Post by Philippe Laquet
--
Stanislav Sedov
ST4096-RIPE
I am able to understand many of the gripes with using a database,
and having to import yet another code base into the FreeBSD base;
however, as one of the young ones, and knowing sed/awk/grep and SQL, I
prefer SQL over having to process hundreds of text files using text
processing tools. It saddens me each time I run one of the pkg_*
tools that needs to parse the flat file structure since it takes so
long. I have friends running Ubuntu and their apt-get returns results
much faster.

In a world where hard drives are becoming more reliable, and are
automatically relocating sectors that go bad, do we really have to
worry about database corruption as much? I feel that many of the
fears that are being put forward will do harm to a text based
"storage" system as well. If one block drops out, it can cause tools
to not be able to parse the files. Create a backup copy of the
database after each successful transaction? There are ways to battle
data corruption.

Using BDB is not a real option either. I cannot even count the
amount of times that the BDB database that portupgrade created has
become corrupt because I accidentally ran two portupgrades at the same
time, or even remembered that I did not want to upgrade something and
hit Ctrl+C. The experience I got from running SVN with BDB as the
back-end database to store my data, I say no thanks. In that case I
would much rather stick with the flat text files than go with a
database.

Bert JW Regeer
Garrett Cooper
2007-05-14 05:16:41 UTC
Permalink
Post by Jos Backus
[snip]
How robust is it - can a corrupt block fry the entire database?
Dunno, but "Transactions are atomic, consistent, isolated, and durable (ACID)
even after system crashes and power failures.". So it appears to try hard to
minimize the chance of corruption.
How about portability - can I move the file to a completely
different architecture and still get the data from it?
"Database files can be freely shared between machines with different byte
orders."
(Quotes taken from http://www.sqlite.org/)
Also, the code is in the public domain.
Be wary of the possible fine print:

"Portions of the documentation and some code used as part of the build
process might fall under other licenses."

From: <http://www.sqlite.org/copyright.html>.

-Garrett
Julian Elischer
2007-05-11 01:33:15 UTC
Permalink
Post by David Naylor
Dear Jordan
Recently I stumbled across a document you wrote in 2001, entitled "FreeBSD
installation and package tools, past, present and future". I find FreeBSD
appealing and I would like to contribute to its success, and as your article
describes, the installation and packaging system is lacking. Since the
installation system is being tackled under a SoC project I am hoping to give
the packaging system a go.
I've just read the document, and since I was also thinking about the same issues, here are some proposals:
- I think it's time to give up on using BDB+directory tree full of text
files for storing the installed packages database, and I propose all of
this be replaced by a single SQLite database. SQLite is public domain
(can be slurped into base system), embeddable, stores all data in a
single file, lightweight, fast, and can be used to do fancy things such
as reporting. The current pkg_info's behaviour, taking *noticeable*
*time* to generate 1 line of output per package, is horrible in this day
and age. And the upcoming X11/X.Org 7.2 with circa 400 packages (which is
in itself horrible just for an X11 system) will make it unbearable (as will
portupgrade, which already looks slower by the day).
ok, let me give you some words that come from the wisdom of having done this
for 30 years.

"Use an SQL DB file for this over my dead body"..

Now having said that, I need to modify it a bit and explain.

Explanation:
The number of times I've gone into /var/db/pkg to read the files to
figure out what the "fsck" is going on, is very large.
It is also possible to delete files and edit them.
This is very important to me in this imperfect world when the pkg
system gets things wrong (oh no, don't tell me that the new pkg system never
gets things wrong.. like including a file in 2 packages).

Modification:
Now, I have no objection to a DB file of some sort IF (and only if)
it is somehow rebuildable from a text base which is kept somewhere,
like the password database is done. But PLEASE, NO SQL in the base system!
- A quick test confirms that the current bsdtar will happily ignore any
extra data at the end of a tgz/tbz archive, so package metadata can be
embedded there, thus conserving existing infrastructure and being fast
to parse. I suggest encoding this metadata in a sane and easy to parse
XML structure.
I cannot currently actively participate in implementing proposed things,
but I can give advice on sqlite, database and xml schemas if anyone
wants to...
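
[Julian's password-database analogy can be made concrete with what is already in the base system: db(3), the same hash-file library pwd_mkdb(8) uses to turn /etc/master.passwd into its binary databases. A hypothetical pkg_mkdb would treat the /var/db/pkg text tree as the master copy and regenerate a disposable binary index from it on demand, roughly like this sketch (the index path and the choice of what to index are invented; error handling trimmed):

#include <sys/types.h>
#include <db.h>
#include <dirent.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	DB *db;
	DIR *d;
	struct dirent *de;
	DBT key, val;
	char path[PATH_MAX];

	/* The index is disposable: truncate and rebuild it every time,
	 * so the text tree stays the single source of truth. */
	db = dbopen("/var/db/pkg.hash", O_RDWR | O_CREAT | O_TRUNC,
	    0644, DB_HASH, NULL);
	if (db == NULL || (d = opendir("/var/db/pkg")) == NULL)
		return (1);
	while ((de = readdir(d)) != NULL) {
		if (de->d_name[0] == '.')
			continue;
		snprintf(path, sizeof(path), "/var/db/pkg/%s", de->d_name);
		key.data = de->d_name;
		key.size = strlen(de->d_name);
		val.data = path;
		val.size = strlen(path) + 1;
		db->put(db, &key, &val, 0);
	}
	closedir(d);
	db->close(db);
	return (0);
}

If the index is ever corrupted or deleted, rerunning the tool recreates it, which is exactly the property Julian is asking for.]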
Mike Meyer
2007-05-11 01:47:49 UTC
Permalink
I cannot currently actively participate in implementing proposed things,
but I can give advice on sqlite, database and xml schemas if anyone
wants to...
One of the things that would be nice for a replacement to do would be
to correctly install i386 packages on amd64 platforms (and similar
things). The binaries will run if we have all the libraries installed,
but getting them installed is painful. At the very least, an installed
package needs to know what platform it was built for, so that packages
being installed cross-platform have a chance of figuring out whether
or not the installed version of FOO will work for them, or they need
to get a version for their specific platform. If the new version can't
make cross-platform installs work, hopefully that goal can be kept in
mind while the work is being done.

Personally, I'd still like LOCALBASE to move out of /usr/local. Maybe
it's time to reconsider that.

<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Kris Kennaway
2007-05-11 01:51:56 UTC
Permalink
Post by Mike Meyer
I cannot currently actively participate in implementing proposed things,
but I can give advice on sqlite, database and xml schemas if anyone
wants to...
One of the things that would be nice for a replacement to do would be
to correctly install i386 packages on amd64 platforms (and similar
things).
This has nothing to do with a new packaging system and can be done
today if someone cares enough to work on it.
Post by Mike Meyer
The binaries will run if we have all the libraries installed,
but getting them installed is painful. At the very least, an installed
package needs to know what platform it was built for, so that packages
being installed cross-platform have a chance of figuring out whether
or not the installed version of FOO will work for them, or they need
to get a version for their specific platform. If the new version can't
make cross-platform installs work, hopefully that goal can be kept in
mind while the work is being done.
Personally, I'd still like LOCALBASE to move out of /usr/local. Maybe
it's time to reconsider that.
Not gonna happen as a default, but you can change it on your systems
if you like.

Kris
Kris Kennaway
2007-05-12 05:20:38 UTC
Permalink
Post by Mike Meyer
Post by Kris Kennaway
Post by Mike Meyer
There are clearly other workable ideas - as I said, the linux folks
managed to make it work. But it's not an easy problem. I certainly
wouldn't suggest rebuilding the packaging system to deal with this,
except as part of a larger effort. On the other hand, since people are
working on the ports/package system (I see port/pkg database and some
ports infrastructure work in the current SoC projects list), not
keeping this goal in mind would seem to be a bit short-sighted. I
wouldn't be surprised if your option #1 could benefit from this as
well.
But you said you were interested in working on it...so what is your
idea?
Actually, I said it's on my list of things to do.
You might have meant this, but to quote what you actually said:

--
But hey, if there's a document listing what needs to be done
somewhere and it's really relatively minor, I need this bad enough to
deal with that.
--

I replied with an extremely minor way to solve the problem (adjust
freebsd32 file lookups to check /compat/ia32 first), but apparently
you don't really need it bad enough to actually go and solve the
problem in a day when you could not-really-solve it in a year instead
:)
Post by Mike Meyer
That's a rather long
list. I'm more interested in port building, but the dependencies to
1) Get a gcc build whose driver correctly handles "-m32" for
cross-compiling. The system gcc can do cross-compilation, but the
driver doesn't know where the various bits it needs to link the
generated objects with live, so you have to tell it about those
things by hand. Possibly one of the versions in ports does this,
but it would be nice if the one in the base system did this. I
believe it's mostly a matter of properly configuring the gcc build.
Yes, this would be nice. To a first approximation it's not needed
though: you can either use the precompiled i386 packages, or build in
your /compat/ia32 chroot (cf Gabor's work to automatically install
ports inside a chroot, which would extend trivially to this case to
work automatically).
Post by Mike Meyer
2) Rework the ports dependency/conflict detection facilities. Mostly,
figuring out the architecture of the current build vs. any
previously installed packages or ports it depends on or conflicts
with. There are also some things that could be done to make the
dependencies more reliable, like auto-detecting shared library
dependencies and adding those. Having the target platform for an
installed package in the package database would be a nice jump on
this.
This was part of my "option 2".
Post by Mike Meyer
3) Now we get to the interesting part: figuring out where the bits for
the cross-installation are going to land. A lot depends on how far
apart we want to keep the two worlds. I'd like them to be
integrated, as the point of being able to install 32-bit binaries
on a 64-bit system is to run applications that aren't yet 32-bit
clean, so the only collisions should be shared libraries common to
applications from both sets. A lot will depend on how badly things
collide. The other factor is that I'd like the build platform to be
transparent to the end user: you should be able to build 32-bit
packages on either 32 or 64 bit platforms, and use them on
either. This ties back to the "How do we figure out where our
libraries are" issue. I don't know if all these goals can be met.
So was this. As I discussed, my experience suggests that it is
unlikely this can be solved without imposing severe limitations on the
packages that may be concurrently installed, so in practice I expect
it to be an insufficiently general solution and not worth pursuing, as
opposed to the trivially implemented, completely general "Option 1".

Anyway, either you're interested in doing option 1 or you're not...if
you are, I'll be happy to support your work and shepherd it into the
tree. Otherwise, best of luck with that to-do list and we'll chat
some more in a year or two :)

Kris
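
[For readers trying to picture "option 1": the intended lookup behaviour can be modelled in a few lines of userland C, although the real change would live in the kernel's freebsd32 syscall path, much like the existing /compat/linux handling. The prefix below follows Kris's /compat/ia32 example; this is an illustration of behaviour, not actual kernel code:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define COMPAT_PREFIX "/compat/ia32"

/* Resolve a path from a 32-bit binary: prefer the shadow tree if
 * the file exists there, else fall back to the native path. */
static const char *
compat32_resolve(const char *path, char *out, size_t outlen)
{
	snprintf(out, outlen, "%s%s", COMPAT_PREFIX, path);
	if (access(out, F_OK) == 0)
		return (out);		/* shadow tree wins */
	snprintf(out, outlen, "%s", path);
	return (out);			/* fall back to native */
}

int
main(void)
{
	char buf[1024];

	printf("%s\n", compat32_resolve("/usr/local/lib/libfoo.so.1",
	    buf, sizeof(buf)));
	return (0);
}
]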
Mike Meyer
2007-05-12 05:54:43 UTC
Permalink
Post by Kris Kennaway
Post by Mike Meyer
Post by Kris Kennaway
But you said you were interested in working on it...so what is your
idea?
Actually, I said it's on my list of things to do.
--
But hey, if there's a document listing what needs to be done
somewhere and it's really relatively minor, I need this bad enough to
deal with that.
--
Ah, sorry - I thought you were referring to my comment when I first
brought this up. Yeah, if there were a minor fix that gave me the
integrated environment I want, that I'd do.
Post by Kris Kennaway
I replied with an extremely minor way to solve the problem (adjust
freebsd32 file lookups to check /compat/ia32 first), but apparently
you don't really need it bad enough to actually go and solve the
problem in a day when you could not-really-solve it in a year instead
:)
There are already solutions using an emulated environment that don't
require any work on my part.
Post by Kris Kennaway
Post by Mike Meyer
That's a rather long
list. I'm more interested in port building, but the dependencies to
1) Get a gcc build whose driver correctly handles "-m32" for
cross-compiling. The system gcc can do cross-compilation, but the
driver doesn't know where the various bits it needs to link the
generated objects with live, so you have to tell it about those
things by hand. Possibly one of the versions in ports does this,
but it would be nice if the one in the base system did this. I
believe it's mostly a matter of properly configuring the gcc build.
Yes, this would be nice. To a first approximation it's not needed
though: you can either use the precompiled i386 packages, or build in
your /compat/ia32 chroot (cf Gabor's work to automatically install
ports inside a chroot, which would extend trivially to this case to
work automatically).
Actually, I need this more than I need 32 bit packages to install.
Post by Kris Kennaway
So was this. As I discussed, my experience suggests that it is
unlikely this can be solved without imposing severe limitations on the
packages that may be concurrently installed, so in practice I expect
it to be an insufficiently general solution and not worth pursuing, as
opposed to the trivially implemented, completely general "Option 1".
You may well be right. On the other hand, the linux folks seem to have
come up with something they thought was worthwhile, which to me
indicates otherwise.
Post by Kris Kennaway
Anyway, either you're interested in doing option 1 or you're not...if
you are, I'll be happy to support your work and shepherd it into the
tree. Otherwise, best of luck with that to-do list and we'll chat
some more in a year or two :)
Not interested in doing option 1. There are easier ways to get a
segregated 32-bit environment. Yeah, it'd be a little bit better
coupled, and some people might think that's worth spending a day or
two on, but I'm not one of them.

Personally, I think a year or two might be a bit optimistic.

Thanks,
<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Mike Meyer
2007-05-11 02:03:22 UTC
Permalink
Post by Kris Kennaway
Post by Mike Meyer
I cannot currently actively participate in implementing proposed things,
but I can give advice on sqlite, database and xml schemas if anyone
wants to...
One of the things that would be nice for a replacement to do would be
to correctly install i386 packages on amd64 platforms (and similar
things).
This has nothing to do with a new packaging system and can be done
today if someone cares enough to work on it.
Well, yeah - *anything* can be done if someone cares enough to work on
it - it's all just SMOP. You could definitely put enough smarts into
the package installer do this without actually changing the packaging
system. But if we're gonna change the packaging system anyway, why not
consider adding information that the package building software already
has so that the package installer software doesn't have to try and
figure it out?
Post by Kris Kennaway
Post by Mike Meyer
The binaries will run if we have all the libraries installed,
but getting them installed is painful. At the very least, an installed
package needs to know what platform it was built for, so that packages
being installed cross-platform have a chance of figuring out whether
or not the installed version of FOO will work for them, or they need
to get a version for their specific platform. If the new version can't
make cross-platform installs work, hopefully that goal can be kept in
mind while the work is being done.
Personally, I'd still like LOCALBASE to move out of /usr/local. Maybe
it's time to reconsider that.
Not gonna happen as a default, but you can change it on your systems
if you like.
Not very reliably.

<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Kris Kennaway
2007-05-11 02:12:49 UTC
Permalink
Post by Mike Meyer
Post by Kris Kennaway
Post by Mike Meyer
I cannot currently actively participate in implementing proposed things,
but I can give advice on sqlite, database and xml schemas if anyone
wants to...
One of the things that would be nice for a replacement to do would be
to correctly install i386 packages on amd64 platforms (and similar
things).
This has nothing to do with a new packaging system and can be done
today if someone cares enough to work on it.
Well, yeah - *anything* can be done if someone cares enough to work on
it - it's all just SMOP. You could definitely put enough smarts into
the package installer to do this without actually changing the packaging
system. But if we're gonna change the packaging system anyway, why not
consider adding information that the package building software already
has so that the package installer software doesn't have to try and
figure it out?
Sure, we could pile on some more features onto a 6 year old design
document that never got out of the design phase, or someone could just
go and make the relatively minor changes to support i386 packages on
amd64 now. I guess it's always more fun to build dream castles though
:)
Post by Mike Meyer
Post by Kris Kennaway
Not gonna happen as a default, but you can change it on your systems
if you like.
Not very reliably.
Best I can offer ;)

Kris
Mike Meyer
2007-05-11 03:15:37 UTC
Permalink
Post by Kris Kennaway
Post by Mike Meyer
Post by Kris Kennaway
Post by Mike Meyer
I cannot currently actively participate in implementing proposed things,
but I can give advice on sqlite, database and xml schemas if anyone
wants to...
One of the things that would be nice for a replacement to do would be
to correctly install i386 packages on amd64 platforms (and similar
things).
This has nothing to do with a new packaging system and can be done
today if someone cares enough to work on it.
Well, yeah - *anything* can be done if someone cares enough to work on
it - it's all just SMOP. You could definitely put enough smarts into
the package installer to do this without actually changing the packaging
system. But if we're gonna change the packaging system anyway, why not
consider adding information that the package building software already
has so that the package installer software doesn't have to try and
figure it out?
Sure, we could pile on some more features onto a 6 year old design
document that never got out of the design phase, or someone could just
go and make the relatively minor changes to support i386 packages on
amd64 now. I guess it's always more fun to build dream castles though
:)
Last time I looked into this, it didn't look relatively minor to
me. But hey, if there's a document listing what needs to be done
somewhere and it's really relatively minor, I need this bad enough to
deal with that. On the other hand, if no one has actually done the
work to figure out what this would really take, is wishful thinking
really enough to keep a very desirable feature (well, it's desirable
enough that most Linux platforms seem to offer it) from even being
considered?
Post by Kris Kennaway
Post by Mike Meyer
Post by Kris Kennaway
Not gonna happen as a default, but you can change it on your systems
if you like.
Not very reliably.
Best I can offer ;)
Is this the new motto for FreeBSD? Good QA practices would have the
ports built with that knob set to something other than the
default at regular intervals. Lately things have been better, but
I've found broken things in the last twelve months.

<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Garrett Cooper
2007-05-11 03:20:42 UTC
Permalink
Post by David Naylor
Dear Jordan
Recently I stumbled across a document you wrote in 2001, entitled "FreeBSD
installation and package tools, past, present and future". I find FreeBSD
appealing and I would like to contribute to its success, and as your article
describes, the installation and packaging system is lacking. Since the
installation system is being tackled under a SoC project I am hoping to give
the packaging system a go.
I was hoping you could help me with an update about the situation with pkg. I
have searched the FreeBSD mailing lists and have found little information on
the package system. Once I have a (much more) complete understanding of the
packaging system (and providing there is work to be done) I would like to
write up a proposal to solve the problems, and perhaps provide some
innovative new capabilities.
After that I will gladly contribute what I can to this (possible) project and
hopefully further and improve FreeBSD. Any assistance or information you can
give will be greatly appreciated.
I look forward to your reply.
David
Yipes. The name of the game is to get something working in the base
system, instead of dragging in multiple 3rd party packages, with
licensing schemes that may not be aligned with the BSD license.

SQL's great, SQL's wonderful for db use, but the problem is that
supporting it from my POV would cause a lot more grief and waiting than
having me wait a few months to get a BDB-compatible scheme out the door.

If only Oracle didn't make BDB 3.x non-BSD license friendly though..
that would be nice..

-Garrett
Tim Kientzle
2007-05-14 04:13:51 UTC
Permalink
Post by Duane Whitty
I'm a little out of practice, however, perhaps the routines
that manipulate the ports meta-data could be sufficiently
agnostic about how the data is being manipulated that it
would facilitate experimentation with different
back-ends at a later time....
Yes. This is an excellent idea. I wrote up some of my own
ideas in this direction a few years ago:

http://people.freebsd.org/~kientzle/libarchive/libpkg.3.txt

The basic idea was, as you say, to provide an abstract interface
that separates the data storage from what the tools require.
Unfortunately, libarchive (which started as part of a package
tools overhaul) has absorbed more time than I expected, so I've
not had a chance to get back to these ideas.

Tim Kientzle
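
[The core idea in that libpkg draft can be sketched in a few lines of C: frontends program against a backend vtable, so the on-disk format (text tree, db(3) file, SQLite, ...) can be swapped without touching the tools. The names below are invented for illustration, with a toy in-memory backend so the sketch actually runs; see the linked document for Tim's real proposal:

#include <stdio.h>
#include <string.h>

struct pkgdb;				/* opaque backend handle */

/* Frontends see only this table; backends supply the functions. */
struct pkgdb_ops {
	struct pkgdb *(*open)(const char *path);
	int	      (*lookup)(struct pkgdb *, const char *name);
	void	      (*close)(struct pkgdb *);
};

/* A frontend helper that works with any backend it is handed. */
static int
pkg_exists(const struct pkgdb_ops *ops, const char *dbpath,
    const char *name)
{
	struct pkgdb *db = ops->open(dbpath);
	int found;

	if (db == NULL)
		return (-1);
	found = ops->lookup(db, name);
	ops->close(db);
	return (found);
}

/* Toy in-memory backend so the sketch runs standalone. */
static const char *fake_pkgs[] = { "libiconv-1.9.2", "perl-5.8.8", NULL };

static struct pkgdb *
mem_open(const char *path)
{
	(void)path;
	return ((struct pkgdb *)fake_pkgs);
}

static int
mem_lookup(struct pkgdb *db, const char *name)
{
	const char **p;

	for (p = (const char **)db; *p != NULL; p++)
		if (strcmp(*p, name) == 0)
			return (1);
	return (0);
}

static void
mem_close(struct pkgdb *db)
{
	(void)db;
}

static const struct pkgdb_ops mem_ops = { mem_open, mem_lookup, mem_close };

int
main(void)
{
	printf("perl-5.8.8 installed: %d\n",
	    pkg_exists(&mem_ops, "(unused)", "perl-5.8.8"));
	return (0);
}
]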
Tim Kientzle
2007-05-14 04:37:17 UTC
Permalink
Post by David Naylor
Since the installation system is being tackled under a SoC project I am
hoping to give the packaging system a go.
Wonderful! Be careful about one point: The packaging system
as a whole is a big system; much bigger than many people
believe. A lot of people (including myself) have set out
to rebuild the package system. Few of us have gotten very
far. (For example, I built libarchive in order to rework
pkg_add. But changing pkg_add's install logic required
rethinking dependency handling, which turned out to be a
lot more complex than I thought.)

A good place to start is with the existing tools. Search
the bug reports, work up fixes to some of them and submit
those fixes as follow-ups. Start conservatively; don't
break compatibility with existing tools until you understand
the system more completely.

In particular, expect a lot of skepticism about major
format changes until you have some actual numbers to
back you up. A lot of people on this mailing list have
looked into performance issues with the package system;
the real problems may not be where you think they are.
Post by David Naylor
I have searched the FreeBSD mailing lists and have found little
information on the package system.
The people who maintain the current codebase are right on
this mailing list. Ask away! What do you want to know?

Tim Kientzle
Yoshihiro Ota
2007-05-11 07:10:48 UTC
Permalink
On Thu, 10 May 2007 22:03:22 -0400
Post by Mike Meyer
Post by Kris Kennaway
Post by Mike Meyer
Personally, I'd still like LOCALBASE to move out of /usr/local. Maybe
it's time to reconsider that.
Not gonna happen as a default, but you can change it on your systems
if you like.
Not very reliably.
<mike
I did it about a year ago.
It is working very well.
When I encounter a problem with upgrading ports, I can roll back to the old one by swapping a partition.

I put everything under /ports partition and use mount_nullfs.
Here is what it looks like:

% df | grep port
/dev/ad3s1e 5583486 3841966 1294842 75% /ports
/ports/db/pkg 5583486 3841966 1294842 75% /var/db/pkg
/ports/X11R6 5583486 3841966 1294842 75% /usr/X11R6
/ports/local 5583486 3841966 1294842 75% /usr/local
/ports/compat 5583486 3841966 1294842 75% /usr/compat
/dev/md0.uzip 886686 391416 424336 48% /usr/ports

Hiro
Dag-Erling Smørgrav
2007-05-11 08:18:19 UTC
Permalink
- A quick test confirms that the current bsdtar will happily ignore any
extra data at the end of a tgz/tbz archive, so package metadata can be
embedded there, thus conserving existing infrastructure and being fast
to parse. I suggest encoding this metadata in a sane and easy to parse
XML structure.
This has been discussed a million times before. The metadata needs to
be at the *start* of the package file, otherwise networked installs
(such as pkg_add -r) of large packages will require huge amounts of
memory and / or temporary disk storage.

DES
--
Dag-Erling Smørgrav - ***@des.no
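The writer half of that convention is cheap to enforce. A sketch with
libarchive (present-day API names and invented program details; not the
actual pkg_create source):

/*
 * Sketch only: emit +CONTENTS as the very first archive entry so a
 * streaming reader can act on the metadata before any payload arrives.
 */
#include <archive.h>
#include <archive_entry.h>
#include <string.h>

int
main(void)
{
    const char *meta = "@name demo-1.0\n@comment streaming example\n";
    struct archive *a = archive_write_new();
    struct archive_entry *e = archive_entry_new();

    archive_write_add_filter_gzip(a);
    archive_write_set_format_pax_restricted(a);
    archive_write_open_filename(a, "demo-1.0.tgz");

    archive_entry_set_pathname(e, "+CONTENTS");     /* metadata first */
    archive_entry_set_filetype(e, AE_IFREG);
    archive_entry_set_perm(e, 0644);
    archive_entry_set_size(e, strlen(meta));
    archive_write_header(a, e);
    archive_write_data(a, meta, strlen(meta));
    archive_entry_free(e);

    /* ...package payload entries would follow here... */

    archive_write_close(a);
    archive_write_free(a);
    return (0);
}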
Dag-Erling Smørgrav
2007-05-11 08:19:46 UTC
Permalink
Post by Kris Kennaway
Post by Mike Meyer
Personally, I'd still like LOCALBASE to move out of /usr/local. Maybe
it's time to reconsider that.
Not gonna happen as a default, but you can change it on your systems
if you like.
Not if you want to use pre-built packages. You made sure of that when
you decided (against my objections) to include .la files in packages.

DES
--
Dag-Erling Smørgrav - ***@des.no
Kris Kennaway
2007-05-11 08:26:57 UTC
Permalink
Dag-Erling Smørgrav
2007-05-11 08:33:29 UTC
Permalink
Post by Dag-Erling Smørgrav
Not if you want to use pre-built packages. You made sure of that when
you decided (against my objections) to include .la files in packages.
I have a suspicion you're never going to let that go, but it's not
relevant here anyway. Binaries have been hardcoding their build
location (e.g. /usr/local) since the dawn of time.
Most don't.
The best you can
do is to binary edit everything to a string of the same length, and
that works for .la files too.
The existence of .la files is a bug.

We already have a mechanism for recording dependencies between
libraries; it's built into the ELF format, and does not require
hardcoding any directories. Introducing .la files which override the
existing mechanism and *do* hardcode directories is a regression.

I don't buy the argument that "KDE won't build without them", or
whatever it was you used to justify this. There is nothing an .la file
does which can't be done more properly by adding the correct directory
to your ldconfig path.

DES
--
Dag-Erling Smørgrav - ***@des.no
Joerg Sonnenberger
2007-05-11 08:38:56 UTC
Permalink
- I think it's time to give up on using BDB+directory tree full of text
files for storing the installed packages database, and I propose all of
this be replaced by a single SQLite database.
Take a look at the pkgjam presentation during the last pkgsrcCon:
http://www.pkgsrcCon.org/2007/presentations.html

It has merits and disadvantages.

Joerg
Kris Kennaway
2007-05-11 08:40:27 UTC
Permalink
Post by Dag-Erling Smørgrav
Post by Dag-Erling Smørgrav
Not if you want to use pre-built packages. You made sure of that when
you decided (against my objections) to include .la files in packages.
I have a suspicion you're never going to let that go, but it's not
relevant here anyway. Binaries have been hardcoding their build
location (e.g. /usr/local) since the dawn of time.
Most don't.
Assertion without proof. In fact a quick survey shows that 90% of my
/usr/local/bin references /usr/local.
Post by Dag-Erling Smørgrav
The best you can
do is to binary edit everything to a string of the same length, and
that works for .la files too.
The existence of .la files is a bug.
We already have a mechanism for recording dependencies between
libraries; it's built into the ELF format, and does not require
hardcoding any directories. Introducing .la files which override the
existing mechanism and *do* hardcode directories is a regression.
I don't buy the argument that "KDE won't build without them", or
whatever it was you used to justify this.
I can't help it that you weren't paying attention.

Kris
Joerg Sonnenberger
2007-05-11 08:52:28 UTC
Permalink
Post by Dag-Erling Smørgrav
We already have a mechanism for recording dependencies between
libraries; it's built into the ELF format, and does not require
hardcoding any directories. Introducing .la files which override the
existing mechanism and *do* hardcode directories is a regression.
That is because FreeBSD doesn't use the ELF mechanisms to specify search
directories in most cases. It still depends on the ld.so.conf
mechanism inherited from a.out days.

Joerg
Peter Jeremy
2007-05-11 09:01:18 UTC
Permalink
Dag-Erling Smørgrav
2007-05-11 09:02:37 UTC
Permalink
Post by Kris Kennaway
Post by Dag-Erling Smørgrav
The existence of .la files is a bug.
We already have a mechanism for recording dependencies between
libraries; it's built into the ELF format, and does not require
hardcoding any directories. Introducing .la files which override the
existing mechanism and *do* hardcode directories is a regression.
I don't buy the argument that "KDE won't build without them", or
whatever it was you used to justify this.
I can't help it that you weren't paying attention.
My point is that the argument is bogus. I am positively certain that
the root issue was not the absence of .la files; installing them merely
served as a workaround for the actual problem, which was most likely
related to ldconfig and / or LD_LIBRARY_PATH.

This is like the (apocryphal) story of the car that wouldn't start on
the way back from the store when the owner went to buy ice cream - but
only if he bought vanilla. The correct fix is not to buy a different
flavor. Once you realize that the vanilla ice cream is right next to
the check-out register while the others are deeper within the store and
therefore take considerably longer to get, you start looking for vapor
lock.

DES
--
Dag-Erling Smørgrav - ***@des.no
Dag-Erling Smørgrav
2007-05-11 09:03:45 UTC
Permalink
Post by Kris Kennaway
Post by Dag-Erling Smørgrav
Post by Dag-Erling Smørgrav
Not if you want to use pre-built packages. You made sure of that when
you decided (against my objections) to include .la files in packages.
I have a suspicion you're never going to let that go, but it's not
relevant here anyway. Binaries have been hardcoding their build
location (e.g. /usr/local) since the dawn of time.
Most don't.
Assertion without proof. In fact a quick survey shows that 90% of my
/usr/local/bin references /usr/local.
I forgot to answer this bit: of course they do, if they were linked in
the presence of .la files. Please understand that they are the problem,
not the solution.

DES
--
Dag-Erling Smørgrav - ***@des.no
Dag-Erling Smørgrav
2007-05-11 09:34:59 UTC
Permalink
You can inspect a sqlite database with the provided utility. Unless the
database gets corrupted (which it tries to avoid by respecting ACID),
ACID is not something a database "respects", it is a set of guarantees
that it provides to the application. Avoiding database corruption is
a necessary requirement for, rather than a consequence of, ACID.

Perhaps you mean that SQLite tries to avoid database corruption by using
locks, and either scatter-gather writes or copy-on-write, and flushing
the file between transactions, to ensure consistency?

DES
--
Dag-Erling Smørgrav - ***@des.no
Dag-Erling Smørgrav
2007-05-11 09:41:53 UTC
Permalink
I don't see how this would help. The most common operation with binary
packages *over the network* is "pkg_add -r", which will need to read
it whole anyway, and it would help greatly for things such as
installers from CD media. (Querying a bunch of packages over the
network for their properties, one by one, is not a good idea, but it
is on local media).
Having the metadata in front means you don't need to store a temporary
copy of the package in memory or on disk; you extract the metadata in
memory, and the rest of the package directly in its final location.

AFAIK, pkg_create makes sure that +CONTENTS is always the first file in
the archive, precisely to make this possible. The fact that pkg_add
doesn't take advantage of it is a bug.

DES
--
Dag-Erling Smørgrav - ***@des.no
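The reader half: a minimal sketch of a streaming pkg_add in C with
libarchive (present-day names; the 2007 API spelled some of them
differently, and error handling is mostly elided). pkg_add -r would open a
network stream here instead of a file, which is exactly why the metadata
must come first:

/*
 * Sketch only: slurp +CONTENTS into memory, then extract the payload
 * straight to its final location -- no temporary copy of the archive.
 */
#include <archive.h>
#include <archive_entry.h>
#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char *argv[])
{
    struct archive *a = archive_read_new();
    struct archive_entry *e;

    if (argc != 2)
        errx(1, "usage: streamadd package.tbz");
    archive_read_support_filter_all(a);     /* gzip, bzip2, ... */
    archive_read_support_format_tar(a);
    if (archive_read_open_filename(a, argv[1], 10240) != ARCHIVE_OK)
        errx(1, "%s", archive_error_string(a));

    while (archive_read_next_header(a, &e) == ARCHIVE_OK) {
        if (strcmp(archive_entry_pathname(e), "+CONTENTS") == 0) {
            size_t len = archive_entry_size(e);
            char *buf = malloc(len + 1);

            archive_read_data(a, buf, len);
            buf[len] = '\0';
            /* ...parse the packing list, check dependencies... */
            free(buf);
        } else {
            archive_read_extract(a, e, 0);  /* payload: extract in place */
        }
    }
    archive_read_free(a);
    return (0);
}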
Dag-Erling Smørgrav
2007-05-11 09:51:48 UTC
Permalink
Post by Dag-Erling Smørgrav
You can inspect a sqlite database with the provided utility. Unless the
database gets corrupted (which it tries to avoid by respecting ACID),
ACID is not something a database "respects", it is a set of guarantees
that it provides to the application. Avoiding database corruption is
a necessary requirement for, rather than a consequence of, ACID.
I'm thinking of ACID as a set of ideas / procedures, the consequence
of which is avoiding corruption. Of course, there's a "hierarchy of
reliability" - the db relies on the file system to meet the
requirements, the file system relies on the hardware, etc. but if the
db doesn't make use of those, it's all for nothing.
The world would be a much nicer place if people would stop redefining
technical terms to mean whatever suits them.

DES
--
Dag-Erling Smørgrav - ***@des.no
Joerg Sonnenberger
2007-05-11 10:10:54 UTC
Permalink
I think you're overreacting. You say: if the database is consistent,
it's ACID ("Avoiding database corruption is a necessary requirement for,
rather than a consequence of, ACID") and I say: if the database is ACID,
it's consistent. Aren't we both right?
Database corruption violates C (Consistency) and D (Durability). So you
can't have a database providing ACID without protecting against database
corruption. Consistency is actually the second letter of ACID.

That said, there are limits. Most issues that can corrupt a sqlite
database can also corrupt the current pkgdb tree. In fact, a number of
issues are less likely to hit sqlite. It doesn't mean that either is
perfect.

Joerg
Dag-Erling Smørgrav
2007-05-11 10:37:08 UTC
Permalink
Post by Dag-Erling Smørgrav
The world would be a much nicer place if people would stop redefining
technical terms to mean whatever suits them.
I think you're overreacting. You say: if the database is consistent,
it's ACID ("Avoiding database corruption is a necessary requirement
for, rather than a consequence of, ACID") and I say: if the database
is ACID, it's consistent. Aren't we both right?
First, that's not what I'm saying. Look up "requirement" and
"consequence" in a dictionary (and read up on Leibniz if you feel like
it).

Second, ACID is much more than not corrupting the database. It also
means returning consistent results for the duration of a transaction,
for instance.

DES
--
Dag-Erling Smørgrav - ***@des.no
Mike Meyer
2007-05-11 14:35:41 UTC
Permalink
There are a few ways you can go. The simplest is to install a
complete i386 world in e.g. /compat/ia32 and have i386 packages
installed there, and change the kernel to do a "magic directory
lookup" for i386 binaries that does path munging like the linuxulator
does with /compat/linux.
Actually, the simplest is to install a virtual machine and just use
that. But none of these is quite ideal.
If you want the i386 packages to live in /usr/local alongside the
native ones, [one option is to add an architecture] directive to the
+CONTENTS file recorded in /var/db/pkg, and add some
checking to pkg_add to ensure that when you install a package,
everything it depends on was built for the same architecture.
Which is pretty much what I pointed out in my original request. Except
you don't really need everything it depends on to be built for the
same architecture. If your port needs gmake to build, or uses
ghostscript to do ps->jpeg conversion or some such, it won't really
care what architecture the executables were built for - just that they
run. It would help if requirements were flagged with whether or not
they required the same architecture.
You'd need to also add these checks to bsd.port.mk so it exits
gracefully when someone tries to natively compile a port for which
non-native dependencies are installed (e.g. it's going to be really
unhappy if it finds i386 headers or libraries). This method might be
more trouble than it's worth.
Again, it might or might not be unhappy. But now you're venturing into
ports as opposed to the package system. I have a whole other list of
things to consider there.
Post by Mike Meyer
On the other hand, if no one has actually done the
work to figure out what this would really take, is wishful thinking
really enough to keep a very desirable feature (well, it's desirable
enough that most Linux platforms seem to offer it) from even being
considered?
You misunderstand me: by all means people should work on improving our
package infrastructure -- but it's wrong to pin your hopes on a
possibly mythical future event when you can easily solve a problem
today.
Oh, I've been dealing with BSD long enough to know better than to pin
my hopes on such things. Even doing it may not be enough - you still
have to get someone to *commit* it. On the other hand, someone stepped
up and said "We're going to rework the package system." I don't think
it's too much to ask that they keep this requirement in mind.
Post by Mike Meyer
Post by Kris Kennaway
Post by Mike Meyer
Post by Kris Kennaway
Not gonna happen as a default, but you can change it on your systems
if you like.
Not very reliably.
Best I can offer ;)
Is this the new motto for FreeBSD? Good QA practices would have the
ports built with that knob set to something other than the
default at regular intervals. Lately things have been better, but
I've found broken things in the last twelve months.
You'll be delighted to know that I have been doing precisely this for
the past few years. If you're interested I'll CC you the errors from
my next build so you can help work on improving compliance.
Actually, what I'm delighted about is that I noticed things
were getting better - the last time I did a near-complete set of
upgrades, nothing broke because of LOCALBASE having a non-default
setting. I had figured this was just due to people reporting such bugs
- I know I always do. It seems that you're responsible. Thank you.

I still think we ought to quit pretending that ports/packages aren't
part of BSD, and default LOCALBASE to /usr. But if changing it is
being tested, that's a big help.

Thanks,
<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Alexander Leidinger
2007-05-11 14:12:00 UTC
Permalink
Post by Dag-Erling Smørgrav
The existence of .la files is a bug.
I fully agree.
Post by Dag-Erling Smørgrav
We already have a mechanism for recording dependencies between
libraries; it's built into the ELF format, and does not require
hardcoding any directories. Introducing .la files which override the
existing mechanism and *do* hardcode directories is a regression.
I don't buy the argument that "KDE won't build without them", or
whatever it was you used to justify this. There is nothing an .la file
does which can't be done more properly by adding the correct directory
to your ldconfig path.
Unfortunately you are addressing this to the wrong people. You need to
talk with the libtool people. Trying to use a libtool version which is
as close as possible to the version distributed by the authors is a
very sane requirement for our ports collection and the users who
expect a sane behavior of libtool when they want to create a tarball
which is supposed to be cross-platform (for an appropriate value of
cross-platform).

So you need to address the issues upstream. As soon as a new release
comes out with the improvements you suggest, we will get them in
FreeBSD. The problem is that a lot of users of libltdl try to open the
.la file instead of the lib (AFAIR libltdl tries to first open the .la
then the .so if you don't have a .la or .so ending in the dlopen()
string, but most people specify the .la... I could misremember this,
it's been a while since I looked at it).

The way to go is to teach libtool about the ELF features and to not
produce the .la files on ELF systems. Additionally a warning in
libltdl needs to be printed in case the .la file is used (as a second
step maybe adding a long delay to the loading in case a .la file is
used, to provoke the transition).

While it may be easier to talk about doing something like this in
our ports collection, the way to go is to address this upstream with
the libtool authors, and not with portmgr/ade or on hackers/ports.

Bye,
Alexander.
--
Law of the Yukon:
Only the lead dog gets a change of scenery.

http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137
Mike Meyer
2007-05-11 15:23:00 UTC
Permalink
Perhaps this is a good time to mention that I think sqlite would
also be good for the password and login databases? :)
Someone has already pointed out the horror that is the Windows
registry. IIUC, even MS has figured out this is a bad idea, and gotten
away from it with Vista. But it's been tried on Unix systems before as
well. People who've dealt with AIX tend to feel the same way about
its config system as they do about the Windows Registry.

Having a magic tool to edit these files is a pain - it means you now
have two sets of things to learn: the magic tool, and the specific
details of the file you're dealing with. On a Unix system, knowing how
to tweak text files is pretty much a given. So all you need is the
details for that file.

Using a binary format brings its own problems. How hard is it to fix
a corrupt database? How hard is it to figure out that the database is
so corrupt you aren't going to get anything out of it, so you might as
well give up? How robust is it - can a corrupt block fry the entire
database? How about portability - can I move the file to a completely
different architecture and still get the data from it? If any of these
are noticeably worse than they are for text files, changing is probably
not a good idea.

Someone else mentioned XML. Well, it's easy to parse - assuming you
read that as "someone else wrote the parser for us". That's true for
lots of things. I also question the sanity of using it. But I wind up
doing a lot of it, because my clients like the buzzword compliance. I
regularly beat on them to take advantage of what XML can do that other
formats can't. Meaning I make them install validation software, and
pay me to write schemas for them, and get them to add hooks to the
repository so you can't check in xml files that don't validate. But if
you're not going to do that kind of thing, the major feature XML
brings is the buzzword compliance.

<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Tim Kientzle
2007-05-11 14:58:02 UTC
Permalink
- A quick test confirms that the current bsdtar will happily ignore any
extra data at the end of a tgz/tbz archive, so package metadata can be
embedded there, thus conserving existing infrastructure...
Not a good idea at all.

1) Keeping everything within the archive makes it
possible for people to do surgery on the packages.
Being able to extract/modify/rebuild packages
using just tar makes it much easier to debug and
experiment with new techniques.

2) The existing metadata format is not very pretty,
but it can be reasonably extended. (Personally,
I don't find XML very pretty, either. ;-)

3) As DES pointed out, the package tools must be able
to read the metadata before the files. If you really
need a completely separate metadata file, make it
the second file in the archive.

Tim Kientzle
Joerg Sonnenberger
2007-05-11 15:34:48 UTC
Permalink
Post by Tim Kientzle
3) As DES pointed out, the package tools must be able
to read the metadata before the files. If you really
need a completely separate metadata file, make it
the second file in the archive.
Actually, the argument is pretty weak. Being able to extract them
in a streaming fashion and to access the meta-data easily is fine. The remote access
argument is very weak as it doesn't allow e.g. signature checks.

Joerg
Mike Meyer
2007-05-11 16:44:39 UTC
Permalink
Post by Mike Meyer
Perhaps this is a good time to mention that I think sqlite would
also be good for the password and login databases? :)
Someone has already pointed out the horror that is the Windows
registry. IIUC, even MS has figured out this is a bad idea, and gotten
away from it with Vista. But it's been tried on Unix systems before as
This is the first time I've heard this about Vista - AFAIK it relies even
more on the registry, to the point that the boot process also uses it.
That may be so. I haven't dealt with Vista, but had heard it dropped
the registry dependencies.
Registry was pretty bad in Win9x, but AFAIK most of those issues were
because FAT32 is a bad file system. I never had a registry problem (on
many machines) running Win2k and WinXP.
No, that only deals with it getting corrupted by the file system. I've
had registry problems as late as XP/SP2. And of course, you can't
blame the issues with the AIX object manager on FAT32.
Post by Mike Meyer
Using a binary format brings its own problems. How hard is it to fix
a corrupt database? How hard is it to figure out that the database is
so corrupt you aren't going to get anything out of it, so you might as
well give up? How robust is it - can a corrupt block fry the entire
database? How about portability - can I move the file to a completely
different architecture and still get the data from it? If any of these
are noticeably worse than they are for text files, changing is probably
not a good idea.
Most of those issues are valid, but don't strongly advocate against
databases, because the same issues (corruption, rebuilding, manual
inspection) are present for directories of text files. Endian issues are
solved in sqlite.
Yes, they are present no matter what representation you use. The
question is - how do the answers change if you change the
format. These days, cross-platform means you deal with length as well
as endian issues. Or maybe you don't, depending on the db. I know the
answers for text files (easy, easy, very, yes). Can you propose a db
scheme that has the same answers?
Post by Mike Meyer
Someone else mentioned XML. Well, it's easy to parse - assuming you
read that as "someone else wrote the parser for us". That's true for
lots of things. I also question the sanity of using it. But I wind up
doing a lot of it, because my clients like the buzzword compliance. I
regularly beat on them to take advantage of what XML can do that other
formats can't. Meaning I make them install validation software, and
pay me to write schemas for them, and get them to add hooks to the
repository so you can't check in xml files that don't validate. But if
you're not going to do that kind of thing, the major feature XML
brings is the buzzword compliance.
Before I get misunderstood, I'd like to say that I'm not actually such a
big fan of XML, but I think it's currently the lesser of evils. The
alternative is either to create a one-off file format for each and every
purpose (aka "the unix way"), or use some XML-replacement (JSON &
others) which has most of the evils of XML and introduces a lack of support
from existing tools.
I hate to tell you this, but your XML solution would still consist of
a bunch of one-off file formats for each and every purpose. Using XML
just fixes the syntax for the file, not the semantics. Settling on XML
(or JSON, or INI, or cap files, or ...) is sort of like settling on
UTF, only less obviously a win. Sure, you get to use canned code that
will turn your text file into a structure in memory. But you still have
to figure out what it all means.

As you say, the XML toolset is the real win. Smart editors,
validators, schemas (which make the editors and validators even more
powerful) are all good things. Most people don't really seem
interested in this beyond editors. That's not really much of a win.

<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Garrett Cooper
2007-05-14 05:13:58 UTC
Permalink
Post by Mike Meyer
Yes, they are present no matter what representation you use. The
question is - how do the answers change if you change the
format. These days, cross-platform means you deal with length as well
as endian issues. Or maybe you don't, depending on the db. I know the
answers for text files (easy, easy, very, yes). Can you propose a db
scheme that has the same answers?
I think I don't understand the question. If the database contains number
"42" in a field typed "int32", in a row, and handles endianess well, why
would I get a different number on different platforms?
(A side note about sqlite: it's actually weakly typed - you store and
receive strings).
Post by Mike Meyer
I hate to tell you this, but your XML solution would still consist of
a bunch of one-of file formats for each and every purpose. Using XML
just fixes the syntax for the file, not the semantics. Settling on XML
(or JSON, or INI, or cap files, or ...) is sort of like settling on
UTF, only less obviously a win. Sure, you get to use canned code that
will turn your text file into a structure in memory. But you still have
to figure out what it all means.
As you say, the XML toolset is the real win. Smart editors,
validators, schemas (which make the editors and validators even more
powerful) are all good things. Most people don't really seem
interested in this beyond editors. That's not really much of a win.
I agree that validation in XML is a strong point - but one of the reasons
people like text files is that they DON'T usually have validation
features :)
| pro | contra
----------------------------------------------------------------------
XML | standard tools, validation, | evil manual parsing, bad rep
| can embed multiple data |
| structures in a standard way |
----------------------------------------------------------------------
text | standard tools, sometimes | no validation, manual parsing,
| human readable | usually one data structure per
| | file
I assume that many database formats have functionality to convert
fields to a 'system-independent' endianness when flushing the database to
disk. That's what BDB does at least (I think that the endianness used is
little endian).

-Garrett
Jos Backus
2007-05-11 16:56:12 UTC
Permalink
On Fri, May 11, 2007 at 11:23:00AM -0400, Mike Meyer wrote:
[snip]
How robust is it - can a corrupt block fry the entire database?
Dunno, but "Transactions are atomic, consistent, isolated, and durable (ACID)
even after system crashes and power failures.". So it appears to try hard to
minimize the chance of corruption.
How about portability - can I move the file to a completely
different architecture and still get the data from it?
"Database files can be freely shared between machines with different byte
orders."

(Quotes taken from http://www.sqlite.org/)

Also, the code is in the public domain.
--
Jos Backus
jos at catnook.com
Mike Meyer
2007-05-11 17:27:01 UTC
Permalink
Post by Mike Meyer
Yes, they are present no matter what representation you use. The
question is - how do the answers change if you change the
format. These days, cross-platform means you deal with length as well
as endian issues. Or maybe you don't, depending on the db. I know the
answers for text files (easy, easy, very, yes). Can you propose a db
scheme that has the same answers?
I think I don't understand the question. If the database contains number
"42" in a field typed "int32", in a row, and handles endianess well, why
would I get a different number on different platforms?
The question is "How hard is it to move a db to a radically different
platform and still use it?" Endianness is a biggie; word length (what
does int32 turn into on a platform with 40 bit words and 20 bit
half-words?) is less of a problem than it used to be; pointer sizes
may be an issue, etc. Text files work on every Unix platform that uses
the character set of your system. For the application at hand, working
on any FreeBSD system would be equivalent.
(A side note about sqlite: it's actually weakly typed - you store and
receive strings).
Which is part of why answering the questions needs a detailed
proposal.
Post by Mike Meyer
I hate to tell you this, but your XML solution would still consist of
a bunch of one-of file formats for each and every purpose. Using XML
just fixes the syntax for the file, not the semantics. Settling on XML
(or JSON, or INI, or cap files, or ...) is sort of like settling on
UTF, only less obviously a win. Sure, you get to use canned code that
will turn your text file into a structure in memory. But you still have
to figure out what it all means.
As you say, the XML toolset is the real win. Smart editors,
validators, schemas (which make the editors and validators even more
powerful) are all good things. Most people don't really seem
interested in this beyond editors. That's not really much of a win.
I agree that validation in XML is a strong point - but one of the reasons
people like text files is that they DON'T usually have validation
features :)
No, it's that the validation is usually simple enough that it can be
done manually. Of course, for both XML and other text formats, people
generally don't bother - they feed the data to the program, and let it
blow up if the file is invalid. I suspect it's because the validation
is generally a separate step. If we changed the config files to XML,
and vi were configured to interactively underline in red everything
that wasn't valid according to the schema, admins would almost
certainly *love* it.(*)
| pro | contra
----------------------------------------------------------------------
XML | standard tools, validation, | evil manual parsing, bad rep
| can embed multiple data |
| structures in a standard way |
bloated compared to alternatives,
one document element per file.

Since XML is text, you really want this row to be named something
like "ad-hoc".
----------------------------------------------------------------------
text | standard tools, sometimes | no validation, manual parsing,
| human readable | usually one data structure per
| | file
Choosing a specific format - XML, JSON, S-expressions, cap, INI,
etc. - just wires down the tokenizer (trivial) and structuring (varies
from trivial to complex). If you're not taking advantage of anything
beyond that that XML offers, there's not much point in picking XML
over anything else - including ad-hoc formats.

<mike

*) This requires that 1) someone write the schemas, 2) vi is bright
enough to deal with them, and 3) someone configures the stock vi to
do so.
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Mike Meyer
2007-05-11 17:34:07 UTC
Permalink
Post by Jos Backus
[snip]
How robust is it - can a corrupt block fry the entire database?
Dunno, but "Transactions are atomic, consistent, isolated, and durable (ACID)
even after system crashes and power failures.". So it appears to try hard to
minimize the chance of corruption.
Right. This is a good thing. However, the db *will* become corrupt. A
disk block will fail to read, whatever. The question is asking how
much data will be lost outside the corrupt data block?
Post by Jos Backus
How about portability - can I move the file to a completely
different architecture and still get the data from it?
"Database files can be freely shared between machines with different byte
orders."
That sounds like a "somewhat". The desired answer is "If the version
of sqlite runs on a platform, all database files will work on it."
That they felt the need to point out that they are byte order
independent implies that other architectural issues may be a
problem. Of course, it could be that nobody has asked the right people
that question.
Post by Jos Backus
Also, the code is in the public domain.
Wow. That's everylicensecompliant.


<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Freddie Cash
2007-05-11 17:11:51 UTC
Permalink
Post by Mike Meyer
I still think we ought to quit pretending that ports/packages aren't
part of BSD, and default LOCALBASE to /usr. But if changing it is
being tested, that's a big help.
Personally, this is the one thing I like *most* about BSD. There is a
clear separation between what ships as part of the OS, and what apps I
install on it later. There's a consistency to things, that you just
can't find anywhere else.

/ and /usr are the OS.

/usr/local is what the ports tree installs.

/whatever/i/want/ is where I install things from source to keep them
separate.

One could make the case for /usr to be the OS, /usr/pkg (or whatever) for
port installs, and /usr/local for local source installs. So long as the
OS is separate from the apps.

With the OS and apps separate, you can upgrade them asynchronously.
There's a nice feeling to running the latest version of appX on FreeBSD
5.3. Or an older version of appY on FreeBSD 6-STABLE.

Try getting something similar on a Linux system.
--
Freddie Cash
***@ocis.net
Kris Kennaway
2007-05-11 18:42:59 UTC
Permalink
Post by Mike Meyer
There are a few ways you can go. The simplest is to install a
complete i386 world in e.g. /compat/ia32 and have i386 packages
installed there, and change the kernel to do a "magic directory
lookup" for i386 binaries that does path munging like the linuxulator
does with /compat/linux.
Actually, the simplest is to install a virtual machine and just use
that. But none of these is quite ideal.
If you want the i386 packages to live in /usr/local alongside the
native ones, [one option is to add an architecture] directive to the
+CONTENTS file recorded in /var/db/pkg, and add some
checking to pkg_add to ensure that when you install a package,
everything it depends on was built for the same architecture.
Which is pretty much what I pointed out in my original request. Except
you don't really need everything it depends on to be built for the
same architecture. If your port needs gmake to build, or uses
ghostscript to do ps->jpeg conversion or some such, it won't really
care what architecture the executables were built for - just that they
run. It would help if requirements were flagged with whether or not
they required the same architecture.
You have to be more careful than this, because a lot of ports have
architecture-dependent on-disk formats for their data files (obviously
not the jpeg example, but there are lots of other cases that use a
non-portable storage format). In practise avoiding all these
conflicts will be very restrictive: as soon as you have installed a
moderate number of amd64 packages, you lose the ability to install
most i386 packages because of these header/library/data file conflicts
(e.g. you can't install an i386 gtk app if you are running a gnome
desktop). Which means you're back to installing the i386 packages in
a separate prefix, i.e. option 1.
Post by Mike Meyer
You'd need to also add these checks to bsd.port.mk so it exits
gracefully when someone tries to natively compile a port for which
non-native dependencies are installed (e.g. it's going to be really
unhappy if it finds i386 headers or libraries). This method might be
more trouble than it's worth.
Again, it might or might not be unhappy. But now you're venturing into
ports as opposed to the package system. I have a whole other list of
things to consider there.
The point is that the real problem is: "how do you arrange the bits on
disk", not "how do you wrap that in a package system". Until you
figure out a workable on-disk arrangement for the files, questions
about packaging are not relevant. And it seems that option 1 is the
only workable one in practise (unless you have some other idea), which
can easily be achieved today with a couple of hours of kernel hacking.

Kris
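For concreteness, the "magic directory lookup" Kris describes amounts to
the translation the linuxulator already performs for /compat/linux. A
userland sketch of the rule (the real lookup would live in the kernel's
name resolution path; the prefix and names here are only illustrative):

/*
 * Illustrative only: prefer the /compat/ia32 shadow tree when the file
 * exists there, otherwise fall through to the native path.
 */
#include <stdio.h>
#include <unistd.h>

#define COMPAT_PREFIX   "/compat/ia32"

static const char *
compat_lookup(const char *path, char *buf, size_t buflen)
{
    snprintf(buf, buflen, "%s%s", COMPAT_PREFIX, path);
    return (access(buf, F_OK) == 0 ? buf : path);
}

int
main(void)
{
    char buf[1024];

    printf("%s\n", compat_lookup("/usr/local/bin/realplay", buf, sizeof(buf)));
    return (0);
}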
Mike Meyer
2007-05-11 18:34:33 UTC
Permalink
Post by Freddie Cash
Post by Mike Meyer
I still think we ought to quit pretending that ports/packages aren't
part of BSD, and default LOCALBASE to /usr. But if changing it is
being tested, that's a big help.
Personally, this is the one thing I like *most* about BSD. There is a
clear separation between what ships as part of the OS, and what apps I
install on it later. There's a consistency to things, that you just
can't find anywhere else.
/ and /usr are the OS.
/usr/local is what the ports tree installs.
Moving the OS into the package system has been on the "todo" list for
a long time (assuming it's still there - there are people opposed to
that). What happens to your distinction in that case? This is why I
think the distinction is an illusion.
Post by Freddie Cash
One could make the case for /usr to be the OS, /usr/pkg (or whatever) for
port installs, and /usr/local for local source installs. So long as the
OS is separate from the apps.
I think that would be an improvement. There's a real distinction
between "things installed from ports/packages" and "things I built and
installed myself". The former I expect to reinstall from FreeBSD
media; the latter I can't. Since most packages in the latter install
in /usr/local, using that for the former makes life a bit more painful
if you want to keep them separate. The downside is that making the
default something else makes things a bit harder for people doing
ports, which we promptly throw away by providing a settable LOCALBASE.
Post by Freddie Cash
With the OS and apps separate, you can upgrade them asynchronously.
There's a nice feeling to running the latest version of appX on FreeBSD
5.3. Or an older version of appY on FreeBSD 6-STABLE.
How would setting LOCALBASE=/usr break this? Of course, equally valid
is the question "what will break if I set LOCALBASE=/usr"? Hmm. I
think I may have found out....

Personally, what I like about FreeBSD is that it provides flexibility
for things like this. If I'd rather have ports out of /usr/local, it's
not really hard to do. Not as easy as doing things the default way,
but not any worse than anyone else who wants to build from
source. Other systems seem intent on making me do things their way, no
matter what I may think of it, and trying to change something like
this is a major change.

<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Mike Meyer
2007-05-11 19:07:27 UTC
Permalink
Post by Kris Kennaway
The point is that the real problem is: "how do you arrange the bits on
disk", not "how do you wrap that in a package system". Until you
figure out a workable on-disk arrangement for the files, questions
about packaging are not relevant. And it seems that option 1 is the
only workable one in practise (unless you have some other idea), which
can easily be achieved today with a couple of hours of kernel hacking.
There are clearly other workable ideas - as I said, the linux folks
managed to make it work. But it's not an easy problem. I certainly
wouldn't suggest rebuilding the packaging system to deal with this,
except as part of a larger effort. On the other hand, since people are
working on the ports/package system (I see port/pkg database and some
ports infrastructure work in the current SoC projects list), not
keeping this goal in mind would seem to be a bit short-sighted. I
wouldn't be surprised if your option #1 could benefit from this as
well.


<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Jos Backus
2007-05-11 19:16:12 UTC
Permalink
Post by Mike Meyer
Post by Jos Backus
[snip]
How robust is it - can a corrupt block fry the entire database?
Dunno, but "Transactions are atomic, consistent, isolated, and durable (ACID)
even after system crashes and power failures.". So it appears to try hard to
minimize the chance of corruption.
Right. This is a good thing. However, the db *will* become corrupt. A
disk block will fail to read, whatever. The question is asking how
much data will be lost outside the corrupt data block?
It's a trade-off. Use SMART to monitor your disks, or whatever. Dealing with a
gazillion different file formats is a PITA from an automation perspective.
Post by Mike Meyer
Post by Jos Backus
How about portability - can I move the file to a completely
different architecture and still get the data from it?
"Database files can be freely shared between machines with different byte
orders."
That sounds like a "somewhat". The desired answer is "If the version
of sqlite runs on a platform, all database files will work on it."
That they felt the need to point out that they are byte order
independent implies that other architectural issues may be a
problem. Of course, it could be that nobody has asked the right people
that question.
Probably. Why don't you? :-)
Post by Mike Meyer
Post by Jos Backus
Also, the code is in the public domain.
Wow. That's everylicensecompliant.
I sense some sarcasm. Would the GPL have been an improvement in the context of
this discussion?
--
Jos Backus
jos at catnook.com
Freddie Cash
2007-05-11 19:21:56 UTC
Permalink
Post by Mike Meyer
Post by Freddie Cash
Post by Mike Meyer
I still think we ought to quit pretending that ports/packages
aren't part of BSD, and default LOCALBASE to /usr. But if changing
it is being tested, that's a big help.
Personally, this is the one thing I like *most* about BSD. There is
a clear separation between what ships as part of the OS, and what
apps I install on it later. There's a consistency to things, that
you just can't find anywhere else.
/ and /usr are the OS.
/usr/local is what the ports tree installs.
Moving the OS into the package system has been on the "todo" list for
a long time (assuming it's still there - there are people opposed to
that). What happens to your distinction in that case? This is why I
think the distinction is an illusion.
Hmmm, I might be misremembering things, but I always thought the plan was
to break out some bits of the base OS into ports/packages (like
sendmail/bind/etc) but have them installed-by-default. Either way, you
are correct that the distinction between "base OS" and "installed apps"
would blur or even disappear.
Post by Mike Meyer
Post by Freddie Cash
One could make the case for /usr to be the OS, /usr/pkg (or whatever)
for port installs, and /usr/local for local source installs. So long
as the OS is separate from the apps.
I think that would be an improvement. There's a real distinction
between "things installed from ports/packages" and "things I built and
installed myself". The former I expect to reinstall from FreeBSD
media; the latter I can't. Since most packages in the latter install
in /usr/local, using that for the former makes life a bit more painful
if you want to keep them separate. The downside is that making the
default something else makes things a bit harder for people doing
ports, which we promptly throw away by providing a settable LOCALBASE.
Post by Freddie Cash
With the OS and apps separate, you can upgrade them asynchronously.
There's a nice feeling to running the latest version of appX on
FreeBSD 5.3. Or an older version of appY on FreeBSD 6-STABLE.
How would setting LOCALBASE=/usr break this? Of course, equally valid
is the question "what will break if I set LOCALBASE=/usr"? Hmm. I
think I may have found out....
I was thinking more along the lines of how Linux distros operate, where
all apps are installed under /usr, and where the only way to get the
latest version of an app is to upgrade the entire OS and switch package
repositories to the ones for that new version of the OS. Something that
irks me to no end when using Linux systems. Not quite the same as what
you were getting at, I see, so we can just ignore this entire little bit
of the thread. :)
Post by Mike Meyer
Personally, what I like about FreeBSD is that it provides flexibility
for things like this. If I'd rather have ports out of /usr/local, it's
not really hard to do. Not as easy as doing things the default way,
but not any worse than anyone else who wants to build from
source. Other systems seem intent on making me do things their way, no
matter what I may think of it, and trying to change something like
this is a major change.
Yes, flexibility and customisability is very nice, and definitely one of
FreeBSD's strengths.
--
Freddie Cash
***@ocis.net
Peter Jeremy
2007-05-11 19:43:30 UTC
Permalink
Jona Joachim
2007-05-12 00:48:33 UTC
Permalink
Post by David Naylor
Dear Jordan
Recently I stumbled across a document you wrote in 2001, entitled "FreeBSD
installation and package tools, past, present and future". I find FreeBSD
appealing and I would like to contribute to its success, and as your article
describes, the installation and packaging system is lacking. Since the
installation system is being tackled under a SoC project I am hoping to give
the packaging system a go.
I've just read the document, and since I was also thinking about the
- I think it's time to give up on using BDB+directory tree full of text
files for storing the installed packages database, and I propose all of
this be replaced by a single SQLite database. SQLite is public domain
(can be slurped into base system), embeddable, stores all data in a
single file, lightweight, fast, and can be used to do fancy things such
as reporting. The current pkg_info's behaviour that takes *noticeable*
*time* to generate 1 line of output per package is horrible in this day
and age. And the upcoming X11/X.Org 7.2 with ca. 400 packages (which is
in itself horrible just for a X11 system) will make it unbearable (also
portupgrade which already looks slower by the day).
I don't think it would be a good idea to use SQLite for this purpose.
First of all, using the file system is the Unix way of doing things. It's
efficient and easy to use, transparent and user friendly. You can
simply run vi to inspect a text file, but you can't do this with an
sqlite database. You have to learn sqlite to do it.
Furthermore I don't think the pkg_* tools are slow. They are quite fast
IMO. If you let pkg_info print the entire list of installed ports it's
only slow because of your line-buffered console. Just redirect the
output to a file and you'll see that it's blazing fast. If I compare it
for example to Debian's apt-get/apt-cache commands, it's much faster.
portupgrade is very slow, that's true. First of all it's written in Ruby
which is not one of the fastest languages but there is another thing
that slows it down considerably, which is rebuilding its database.
Furthermore I think it would be a very bad idea to include sqlite in
base. There is already a lot of third party stuff in base. The
philosophy of the BSDs is to provide and maintain an entire OS. This is
quite the opposite of how a GNU/Linux system is designed. Both ways have
their pros and cons. An advantage of the BSD way of doing things is that
the developers know the code very well and have control over the quality
of the code. If you include 3rd party software into the FreeBSD base
system you make the FreeBSD project depend on the people that wrote that
code. Of course you could fork it but the FreeBSD developers are not
necessarily familiar with the code. Security patches would have to be
merged all the time and a lot of communication between the two projects
is needed.
I think the best way to go would be to use only folder hierarchies and
text files and write a library in C that provides portupgrade
functionality. The code under src/usr.sbin/pkg_install/lib/ would be a
good base for this. Then you could use a frontend program that makes use
of this library. This frontend could be a CLI program or a GUI based
program.

Best regards,
Jona
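To make the trade-off in this subthread concrete, here is a toy sketch of
the single-file catalogue idea; the schema, file name and query are
invented for illustration and are not a proposed FreeBSD layout (build
with -lsqlite3):

/*
 * Toy sketch only.  The point is that a pkg_info -a style listing
 * becomes one query instead of a walk over hundreds of directories.
 */
#include <sqlite3.h>
#include <stdio.h>

int
main(void)
{
    sqlite3 *db;
    sqlite3_stmt *st;

    if (sqlite3_open("pkg.db", &db) != SQLITE_OK)
        return (1);
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS packages ("
        "  id INTEGER PRIMARY KEY,"
        "  name TEXT NOT NULL,"
        "  version TEXT NOT NULL,"
        "  comment TEXT);"
        "INSERT INTO packages (name, version, comment) "
        "  VALUES ('png', '1.2.16', 'Library for PNG images');",
        NULL, NULL, NULL);

    sqlite3_prepare_v2(db,
        "SELECT name || '-' || version, comment "
        "FROM packages ORDER BY name;", -1, &st, NULL);
    while (sqlite3_step(st) == SQLITE_ROW)
        printf("%-28s %s\n",
            (const char *)sqlite3_column_text(st, 0),
            (const char *)sqlite3_column_text(st, 1));
    sqlite3_finalize(st);
    sqlite3_close(db);
    return (0);
}

Whether this is in fact faster or more robust than the text tree is the
kind of question that needs measurements, as Tim points out above.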
Jos Backus
2007-05-12 06:32:04 UTC
Permalink
[snip]
Post by Mike Meyer
How about portability - can I move the file to a completely
different architecture and still get the data from it?
[snip]

The answer ("Yes") can be found in this thread:

http://www.mail-archive.com/sqlite-***@sqlite.org/msg00120.html

As another data point, `yum' has been using SQLite for quite some time now.
--
Jos Backus
jos at catnook.com
Stanislav Sedov
2007-05-12 11:50:59 UTC
Permalink
Roman Divacky
2007-05-12 12:06:43 UTC
Permalink
cyclic dependencies in it!), but still, in itself, I think the choice of
Ruby isn't performance-critical.
ruby2.0 will come with a virtual machine which should speed up things. ruby2.0
is expected "soon enough" (2008?)
Michel Talon
2007-05-12 12:37:44 UTC
Permalink
In my (limited) experience, this sort of task should not depend much on
the speed of the language. The most CPU-intensive task portupgrade does
is resolving dependencies, and on a running system this is a DAG forest
of about 500 nodes. I know portupgrade has some highly unoptimal code in
it (if I understand the code correctly, there's a brute force check for
cyclic dependencies in it!), but still, in itself, I think the choice of
Ruby isn't performance-critical.
If I remember correctly, portupgrade uses a reasonable algorithm to sort the
DAG in question. But from time to time, it runs
make depends
or something of the sort in some ports directory, and this is the slow
step which kills any package manager whatsoever, be it written in super
fast C or superslow ruby. This is because as soon as you run "make" in
such a directory it has to read and interpret the 4000-line file
bsd.port.mk. This takes 1/10 s on my old laptop and is perhaps 5 times
faster on a powerful machine.

Add to that the natural slowness of ruby, the fact that portupgrade
constantly does queries in the Berkeley pkgdb.db, portsdb.db etc. while
it could cache the whole content in memory without any problem on modern
machines, and you have something which is slow as molasses.

Then you have more structural problems such as the "guessing facilities"
of portupgrade, which aim at guessing the new origin of a port
whose old origin has disappeared. To do that it counts similarities on
names, adds some snake oil and other satanic ingredients and comes out
with a guess. For some time I was impressed by the exactness of these
guesses, but recently I have seen incredibly hilarious results. As a
consequence I think that portupgrade is a completely inadequate tool
to maintain a FreeBSD machine in an automated way in spite of the
remarkable insight which is coded in it, and that Debian-like systems
run circles around FreeBSD for ease of maintenance. Yes the FreeBSD
system is very good for initial installation of software and the
flexibility it allows to do that. But it sucks completely as soon as one
wants to upgrade stuff. The best way by far is to wipe out every port and
reinstall. This will take 1/3 of the time to run portupgrade -a and
with a much better chance of success and coherency at the end.

Note that these innocent-looking facts are not without consequences:
FreeBSD was in position 11 on Distrowatch in 2005, it is now in position
17 and falling like a brick.
--
Michel TALON
Stanislav Sedov
2007-05-12 12:50:53 UTC
Permalink
Duane Whitty
2007-05-13 04:22:54 UTC
Permalink
Post by Garrett Cooper
Post by David Naylor
Dear Jordan
Recently I stumbled across a document you wrote in 2001, entitled "FreeBSD
installation and package tools, past, present and future". I find FreeBSD
appealing and I would like to contribute to its success, and as your
article describes, the installation and packaging system is lacking.
Since the installation system is being tackled under a SoC project I am
hoping to give the packaging system a go.
I was hoping you could help me with an update about the situation with
pkg. I have searched the FreeBSD mailing lists and have found little
information on the package system. Once I have a (much more) complete
understanding of the packaging system (and providing there is work to be
done) I would like to write up a proposal to solve the problems, and
perhaps provide some innovative new capabilities.
After that I will gladly contribute what I can to this (possible) project
and hopefully further and improve FreeBSD. Any assistance or information
you can give will be greatly appreciated.
I look forward to your reply.
David
Yipes. The name of the game is to get something working in the base
system, instead of dragging in multiple 3rd party packages, with
licensing schemes that may not be aligned with the BSD license.
SQL's great, SQL's wonderful for db use, but the problem is that
supporting it from my POV would cause a lot more grief and waiting than
having me wait a few months to get a BDB compatible scheme out the door.
I'm a little out of practice, however, perhaps the routines that manipulate
the ports meta-data could be sufficiently agnostic about how the data is
being manipulated that it would facilitate experimentation with different
back-ends at a later time. Just a thought and perhaps I'm way off.

Duane
Post by Garrett Cooper
If only Oracle didn't make BDB 3.x non-BSD license friendly though..
that would be nice..
-Garrett
Stanislav Sedov
2007-05-13 09:20:02 UTC
Permalink
Stanislav Sedov
2007-05-13 09:38:00 UTC
Permalink
Stanislav Sedov
2007-05-13 09:45:42 UTC
Permalink
Dag-Erling Smørgrav
2007-05-13 18:32:31 UTC
Permalink
Post by Roman Divacky
ruby2.0 will come with a virtual machine which should speed up things. ruby2.0
is expected "soon enough" (2008?)
Sure, just like Perl 6...

DES
--
Dag-Erling Smørgrav - ***@des.no
Dag-Erling Smørgrav
2007-05-13 18:25:55 UTC
Permalink
Post by Mike Meyer
Moving the OS into the package system has been on the "todo" list for
a long time (assuming it's still there - there are people opposed to
that).
It has *never* been on the todo list.
Post by Mike Meyer
How would setting LOCALBASE=/usr break this? Of course, equally valid
is the question "what will break if I set LOCALBASE=/usr"? Hmm. I
think I may have found out....
For one, man pages for ports will end up in the wrong place (/usr/man
instead of /usr/share/man).

DES
--
Dag-Erling Smørgrav - ***@des.no
Wilko Bulte
2007-05-13 19:08:58 UTC
Permalink
On Sun, May 13, 2007 at 08:25:55PM +0200, Dag-Erling Smørgrav wrote:
Post by Dag-Erling Smørgrav
Post by Mike Meyer
Moving the OS into the package system has been on the "todo" list for
a long time (assuming it's still there - there are people opposed to
that).
It has *never* been on the todo list.
And let's hope that will never change either. If you want to install
a gazillion different components, check out Linux.
--
Wilko Bulte ***@FreeBSD.org
Mike Meyer
2007-05-14 04:55:08 UTC
Permalink
Post by Dag-Erling Smørgrav
Post by Mike Meyer
Moving the OS into the package system has been on the "todo" list for
a long time (assuming it's still there - there are people opposed to
that).
It has *never* been on the todo list.
Might depend on whose list you're thinking about. I'm pretty sure it
was on the list for the sysinstall rewrite, but that project has been
dead long enough that I can't find any of the docs to check if my
memory was playing tricks on me or not.
Post by Dag-Erling Smørgrav
Post by Mike Meyer
How would setting LOCALBASE=/usr break this? Of course, equally valid
is the question "what will break if I set LOCALBASE=/usr"? Hmm. I
think I may have found out....
For one, man pages for ports will end up in the wrong place (/usr/man
instead of /usr/share/man).
Is this really "broken"? If so, are you sure it's not ports installing
in ${LOCALBASE}/man instead of ${LOCALBASE}/share/man that's broken?

A number of ports seem to depend on the directory tree in ${LOCALBASE}
existing - ${LOCALBASE}/man/... and ${LOCALBASE}/etc, in
particular. They use the INSTALL macros to point single files at
directories, and the macro will quite happily create a file with the
target name if the target is not an existing directory. This creates a number of
interesting problems later on.

Trying to use WITH_OPENSSL_BASE on ports that need an SSL library is
interesting. I wouldn't be surprised if there were other, similar
problems elsewhere.

<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Dag-Erling Smørgrav
2007-05-14 06:34:29 UTC
Permalink
Post by Mike Meyer
Post by Dag-Erling Smørgrav
Post by Mike Meyer
How would setting LOCALBASE=/usr break this? Of course, equally valid
is the question "what will break if I set LOCALBASE=/usr"? Hmm. I
think I may found out....
For one, man pages for ports will end up in the wrong place (/usr/man
instead of /usr/share/man).
Is this really "broken"? If so, are you sure it's not ports installing
in ${LOCALBASE}/man instead of ${LOCALBASE}/share/man that's broken?
It doesn't really matter which is right and which is wrong, it's the
inconsistency that is the problem.
Post by Mike Meyer
A number of ports seem to depend on the directory tree in ${LOCALBASE}
existing - ${LOCALBASE}/man/... and ${LOCALBASE}/etc, in
particular. They use the INSTALL macros to point single files at
directories, and the macro will quite happily create a file with the
target name if the target is not a directory. This creates a number of
interesting problems later on.
They can correctly assume that the directories exist because we always
run 'mtree -f /etc/mtree/BSD.local.dist -P ${PREFIX}' before installing
a port.
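
For instance (a rough sketch - flags per mtree(8), the target directory
is hypothetical), seeding an empty prefix from that spec creates the
whole tree, man/ and etc/ included:

mkdir /tmp/newprefix
mtree -deU -f /etc/mtree/BSD.local.dist -p /tmp/newprefix >/dev/null
ls /tmp/newprefix                  # bin etc include lib man share ...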

DES
--
Dag-Erling Smørgrav - ***@des.no
Dag-Erling Smørgrav
2007-05-14 09:31:25 UTC
Permalink
Post by Dag-Erling Smørgrav
The existence of .la files is a bug.
I fully agree [but this needs to be addressed upstream]
Note that we are apparently not the only ones dissatisfied with this
state of affairs. The following code is commonly found in rpm specs for
Fedora (and, I suspect, for RedHat and CentOS as well):

sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' libtool
sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=DIE_RPATH_DIE|g' libtool
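
(For reference, a hedged sketch of where those lines typically sit: the
%build body of a spec file is run as a shell script, after %configure
has generated the libtool script. The package itself is hypothetical.)

%build
%configure
sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' libtool
sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=DIE_RPATH_DIE|g' libtool
make %{?_smp_mflags}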

Perhaps we could make common cause with RH to apply pressure on the
libtool maintainers?

DES
--
Dag-Erling Smørgrav - ***@des.no
Alexander Leidinger
2007-05-14 11:19:54 UTC
Permalink
Post by Dag-Erling Smørgrav
Post by Dag-Erling Smørgrav
The existence of .la files is a bug.
I fully agree [but this needs to be addressed upstream]
Note that we are apparently not the only ones dissatisfied with this
state of affairs. The following code is commonly found in rpm specs for
sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' libtool
sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=DIE_RPATH_DIE|g' libtool
Perhaps we could make common cause with RH to apply pressure on the
libtool maintainers?
Isn't this a property which can be set at build time? I mean: isn't
there a $OSNAME case where this can be set for a specific OS? So it
boils down to just setting those two variables accordingly in the
FreeBSD case and sending a patch to the libtool maintainers. For Linux this
isn't doable, as there are many Linux distros out there, but for
FreeBSD we can do this. But this should be tested on pointyhat first,
I think.

Bye,
Alexander.
--
I'm not sure I've even got the brains to be President.
-- Barry Goldwater, in 1964

http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137
Dag-Erling Smørgrav
2007-05-14 11:44:37 UTC
Permalink
Post by Alexander Leidinger
Isn't this a property which can be set at build time? I mean: isn't
there a $OSNAME case where this can be set for a specific OS? So it
boils down to just setting those two variables accordingly in the
FreeBSD case and sending a patch to the libtool maintainers. For Linux this
isn't doable, as there are many Linux distros out there, but for
FreeBSD we can do this. But this should be tested on pointyhat first,
I think.
I'm not sure I understand your question, but libtool sets the rpath if
1) an .la file is present and 2) hardcode_into_libs is defined. The
latter happens at the configure stage; the shell code that selects the
default value is in libtool.m4:

--- libtool.m4.orig Sun Dec 18 22:53:17 2005
+++ libtool.m4 Mon May 14 13:43:46 2007
@@ -1442,7 +1442,7 @@
;;
freebsd*) # from 4.6 on
shlibpath_overrides_runpath=yes
- hardcode_into_libs=yes
+ hardcode_into_libs=no
;;
esac
;;

I've chosen not to change the default for versions older than 4.6.

DES
--
Dag-Erling Smørgrav - ***@des.no
Dag-Erling Smørgrav
2007-05-14 12:27:33 UTC
Permalink
Post by Alexander Leidinger
If this works well, you just have to submit this patch to the libtool
maintainers and wait until all (relevant) software packages are
updated to use the libtool version which comes with this change.
hahahahahahahaha

*sniff*

excuse me...

No offense, but it will take years until the change trickles down to all
the packages we care about.

While we wait, we might consider sticking this into bsd.port.mk
somewhere, conditional on ${GNU_CONFIGURE}:

${REINPLACE_CMD} -e 's/\(hardcode_into_libs\)=yes/\1=no/' \
${WRKSRC}/configure
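
(A sketch only, not tested: with the stock REINPLACE_ARGS this expands
to sed -i .bak, so the hook would run roughly

sed -i .bak -e 's/\(hardcode_into_libs\)=yes/\1=no/' ${WRKSRC}/configure

leaving a configure.bak behind in ${WRKSRC} for comparison.)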

DES
--
Dag-Erling Smørgrav - ***@des.no
Alexander Leidinger
2007-05-14 12:10:49 UTC
Permalink
Post by Dag-Erling Smørgrav
Post by Alexander Leidinger
Isn't this a property which can be set at build time? I mean: isn't
there a $OSNAME case where this can be set for a specific OS? So it
boils down to just setting those two variables accordingly in the
FreeBSD case and sending a patch to the libtool maintainers. For Linux this
isn't doable, as there are many Linux distros out there, but for
FreeBSD we can do this. But this should be tested on pointyhat first,
I think.
I'm not sure I understand your question, but libtool sets the rpath if
1) an .la file is present and 2) hardcode_into_libs is defined. The
latter happens at the configure stage; the shell code that selects the
--- libtool.m4.orig Sun Dec 18 22:53:17 2005
+++ libtool.m4 Mon May 14 13:43:46 2007
@@ -1442,7 +1442,7 @@
;;
freebsd*) # from 4.6 on
shlibpath_overrides_runpath=yes
- hardcode_into_libs=yes
+ hardcode_into_libs=no
;;
esac
;;
I've chosen not to change the default for versions older than 4.6.
If this works well, you just have to submit this patch to the libtool
maintainers and wait until all (relevant) software packages are
updated to use the libtool version which comes with this change.

Bye,
Alexander.
--
Jesuit priests are DATING CAREER DIPLOMATS!!

http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137
Mike Meyer
2007-05-14 13:06:19 UTC
Permalink
Post by Dag-Erling Smørgrav
Post by Mike Meyer
A number of ports seem to depend on the directory tree in ${LOCALBASE}
existing - ${LOCALBASE}/man/... and ${LOCALBASE}/etc, in
particular. They use the INSTALL macros to point single files at
directories, and the macro will quite happily create a file with the
target name if the target is not a directory. This creates a number of
interesting problems later on.
They can correctly assume that the directories exist because we always
run 'mtree -f /etc/mtree/BSD.local.dist -P ${PREFIX}' before installing
a port.
Except it apparently doesn't work if PREFIX is /usr.

<mike
--
Mike Meyer <***@mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
Alexander Leidinger
2007-05-14 12:55:50 UTC
Permalink
Post by Dag-Erling Smørgrav
If this works well, you just have to submit this patch to the libtool
maintainers and wait until all (relevant) software packages are
updated to use the libtool version which comes with this change.
hahahahahahahaha
*sniff*
excuse me...
No offense, but it will take years until the change trickles down to all
the packages we care about.
I know... ;-)
Post by Dag-Erling Smørgrav
While we wait, we might consider sticking this into bsd.port.mk
${REINPLACE_CMD} -e 's/\(hardcode_into_libs\)=yes/\1=no/' \
${WRKSRC}/configure
Provide a patch, submit it to gnats, ask for an experimental run in
the description, assign it to portmgr and be prepared for requests to
fix bugs which may show up because of this patch before portmgr
commits it.

Bye,
Alexander.
--
The only real advantage to punk music is that nobody can whistle it.

http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137
Giorgos Keramidas
2007-05-15 18:36:45 UTC
Permalink
Post by Alexander Leidinger
Post by Dag-Erling Smørgrav
Note that we are apparently not the only ones dissatisfied with this
state of affairs. The following code is commonly found in rpm specs
sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' libtool
sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=DIE_RPATH_DIE|g' libtool
Perhaps we could make common cause with RH to apply pressure on the
libtool maintainers?
Quite a good idea, if this means the fixes will automagically work for
libtool releases >= 1.5.X. Things may be a little trickier with
older libtool versions, but I'm not sure if I'm qualified to suggest a
*real* fix for those third-party packages which depend on older libtool
versions.
Post by Alexander Leidinger
Isn't this a property which can be set at build time? I mean: isn't
there a $OSNAME case where this can be set for a specific OS?
There is already precedent for OS-specific and compiler-specific
options. The 1.5.23a version already includes special support for
linking with -library=stlport4 and _excluding_ the -lCrun and -lCstd
libraries for Solaris programs compiled with Sun Studio ;-)
