libctf: make it compile for old glibc

libctf: make it compile for old glibc

Hans-Peter Nilsson-2
With a glibc before 2.9 (such as 2.8), there's <endian.h> but no
htole64 or le64toh, so you get, compiling binutils for any target:

libtool: link: gcc -W -Wall -Wstrict-prototypes -Wmissing-prototypes \
-Wshadow -Werror -I/x/binutils/../zlib -g -O2 -o objdump \
objdump.o dwarf.o prdbg.o rddbg.o debug.o stabs.o rdcoff.o \
bucomm.o version.o filemode.o elfcomm.o  ../opcodes/.libs/libopcodes.a \
../libctf/libctf.a ../bfd/.libs/libbfd.a -L/x/obj/b/zlib -lz ../libiberty/libiberty.a -ldl
../libctf/libctf.a(ctf-archive.o): In function `ctf_archive_raw_iter_internal':
/x/src/libctf/ctf-archive.c:543: undefined reference to `le64toh'
/x/src/libctf/ctf-archive.c:550: undefined reference to `le64toh'
/x/src/libctf/ctf-archive.c:551: undefined reference to `le64toh'
/x/src/libctf/ctf-archive.c:551: undefined reference to `le64toh'
/x/src/libctf/ctf-archive.c:554: undefined reference to `le64toh'
../libctf/libctf.a(ctf-archive.o):/x/src/libctf/ctf-archive.c:545: more undefined references to `le64toh' follow
(etc)

Also, I see no bswap_identity_64 *anywhere* except in libctf/swap.h
(including current glibc) and I don't think calling an "identity"-
function is better than just plain "#define foo(x) (x)" anyway.
(Where does the idea of a byteswap.h bswap_identity_64 come from?)

Speaking of that, I should mention that I instrumented the condition
to observe that the WORDS_BIGENDIAN case passes too for a presumed
big-endian target and glibc-2.8: there is a bswap_64 present for that
version.  Curiously, no test-case regressed with that instrumentation.
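
(For concreteness, that instrumentation can be as simple as a throwaway
local change forcing the big-endian branch, e.g.:

  /* Scratch change, not part of the patch: take the bswap_64 branch
     unconditionally, to check that it compiles against glibc 2.8.  */
  #if 1 /* instead of: #ifndef WORDS_BIGENDIAN ... #else */
  # define htole64(x) bswap_64 ((x))
  # define le64toh(x) bswap_64 ((x))
  #endif

though the exact scratch change doesn't matter for the point.)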

For the record, constructing binary blobs from text source to run
tests on can be done by linking with --oformat binary (with most ELF
targets), but I guess that's seen as an unnecessary roundabout;
perhaps checking in binary files in the test-suite would be ok these days.

Anyway, is this ok to commit or do I have to also check for
htole64 being a function?

BTW, what's up with the ((x)) seen in the context?  Worrying about
buggy library implementations not parenthesizing arguments?

2019-07-11  Hans-Peter Nilsson  <[hidden email]>

        * ctf-endian.h: Don't assume htole64 and le64toh are always
        present if HAVE_ENDIAN_H; also check if htole64 is defined.
        [! WORDS_BIGENDIAN] (htole64, le64toh): Define as identity,
        not bswap_identity_64.

diff --git a/libctf/ctf-endian.h b/libctf/ctf-endian.h
index ec177d1..f1cc527 100644
--- a/libctf/ctf-endian.h
+++ b/libctf/ctf-endian.h
@@ -24,10 +24,10 @@
 #include <stdint.h>
 #include "swap.h"

-#ifndef HAVE_ENDIAN_H
+#if !defined (HAVE_ENDIAN_H) || !defined (htole64)
 #ifndef WORDS_BIGENDIAN
-# define htole64(x) bswap_identity_64 ((x))
-# define le64toh(x) bswap_identity_64 ((x))
+# define htole64(x) (x)
+# define le64toh(x) (x)
 #else
 # define htole64(x) bswap_64 ((x))
 # define le64toh(x) bswap_64 ((x))


brgds, H-P

Re: libctf: make it compile for old glibc

Nick Alcock
On 11 Jul 2019, Hans-Peter Nilsson stated:

> With a glibc before 2.9 (such as 2.8), there's <endian.h> but no
> htole64 or le64toh, so you get, compiling binutils for any target:

Wow, such ancient glibcs are still in use? Aren't they a crawling
nightmare of security holes?

> libtool: link: gcc -W -Wall -Wstrict-prototypes -Wmissing-prototypes \
> -Wshadow -Werror -I/x/binutils/../zlib -g -O2 -o objdump \
> objdump.o dwarf.o prdbg.o rddbg.o debug.o stabs.o rdcoff.o \
> bucomm.o version.o filemode.o elfcomm.o  ../opcodes/.libs/libopcodes.a \
> ../libctf/libctf.a ../bfd/.libs/libbfd.a -L/x/obj/b/zlib -lz ../libiberty/libiberty.a -ldl
> ../libctf/libctf.a(ctf-archive.o): In function `ctf_archive_raw_iter_internal':
> /x/src/libctf/ctf-archive.c:543: undefined reference to `le64toh'
> /x/src/libctf/ctf-archive.c:550: undefined reference to `le64toh'
> /x/src/libctf/ctf-archive.c:551: undefined reference to `le64toh'
> /x/src/libctf/ctf-archive.c:551: undefined reference to `le64toh'
> /x/src/libctf/ctf-archive.c:554: undefined reference to `le64toh'
> ../libctf/libctf.a(ctf-archive.o):/x/src/libctf/ctf-archive.c:545: more undefined references to `le64toh' follow
> (etc)
>
> Also, I see no bswap_identity_64 *anywhere* except in libctf/swap.h
> (including current glibc) and I don't think calling an "identity"-
> function is better than just plain "#define foo(x) (x)" anyway.

We clearly need to check for the existence of htole64 etc., either at
configure time or in the preprocessor as you suggest.

I have a big pile of stuff built up to be pushed soon (ctf linking work,
mostly in libctf but a bit in ld): I can add your commit to it if you
like, or you can push yours independently. I'm happy with it. (But you
can probably go further and drop the bswap_identity stuff from
libctf/swap.h, as well.)
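
(Concretely, the end state I'm imagining for the fallback block in
ctf-endian.h -- just a sketch, not necessarily the exact text that gets
committed -- is:

  #if !defined (HAVE_ENDIAN_H) || !defined (htole64)
  #ifndef WORDS_BIGENDIAN
  # define htole64(x) (x)
  # define le64toh(x) (x)
  #else
  # define htole64(x) bswap_64 ((x))
  # define le64toh(x) bswap_64 ((x))
  #endif
  #endif

with bswap_64 still coming from swap.h and no identity wrappers anywhere.)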

> (Where does the idea of a byteswap.h bswap_identity_64 come from?)

The uint*_identity() macros in glibc's <endian.h>.

> Speaking of that, I should mention that I instrumented the condition
> to observe that the WORDS_BIGENDIAN case passes too for a presumed
> big-endian target and glibc-2.8: there is a bswap_64 present for that
> version.  Curiously, no test-case regressed with that instrumentation.

That would be because the testcases can't really be written until the
linker is done :) (and even then, they'll get skipped if you don't have
a CTF-generating compiler).

> For the record, constructing binary blobs from text source to run
> tests on can be done by linking with --oformat binary (with most ELF
> targets), but I guess that's seen as an unnecessary roundabout;
> perhaps checking in binary files in the test-suite would be ok these days.

I was planning to just compile test C programs using a suitable
CTF-generating compiler, do things that need testing (like linking and
deduplication) then objdump/readelf it and diff the results. If the user
doesn't have a suitable compiler, it doesn't matter whether CTF works or
not because the CTF code in binutils won't have anything to work on
anyway: the compiler is the ultimate upstream source of all the CTF
binutils works with, after all.

Testing cross-endianness is the one case where we might need more --
probably, as you suggest, something explicitly checked in.

> Anyway, is this ok to commit or do I have to also check for
> htole64 being a function?

If we define a macro with the same name, it will supersede the function
in any case: so I don't think a functional htole64() will cause us any
problems.
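
(A minimal sketch of why, with made-up names, for the hypothetical case
of a libc that declares htole64 as a real function but provides no macro
for it:

  #include <stdint.h>

  uint64_t htole64 (uint64_t);   /* pretend the libc declares a function */

  #define htole64(x) (x)         /* our little-endian fallback kicks in */

  uint64_t
  get (uint64_t v)
  {
    return htole64 (v);          /* expands to (v); the function is never referenced */
  }

The worst case is a harmless unused declaration.)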

> BTW, what's up with the ((x)) seen in the context?  Worrying about
> buggy library implementations not parenthesizing arguments?

Pure paranoia, yes :) it's easier to add one layer of brackets than
worry about every use of the macros. Always-parenthesised args are just
safer. :)

--
NULL && (void)

Re: libctf: make it compile for old glibc

Carlos O'Donell-6
On 7/11/19 8:01 AM, Nick Alcock wrote:
> On 11 Jul 2019, Hans-Peter Nilsson stated:
>
>> With a glibc before 2.9 (such as 2.8), there's <endian.h> but no
>> htole64 or le64toh, so you get, compiling binutils for any target:
>
> Wow, such ancient glibcs are still in use? Aren't they a crawling
> nightmare of security holes?

I was going to say the same thing :-)

Slightly off topic, but are you able to share what product this is
that has glibc 2.8?

As an upstream glibc maintainer I always like to hear about old
uses of glibc and how they might impact our use cases for testing
and compatibility support.

--
Cheers,
Carlos.

Re: libctf: make it compile for old glibc

Hans-Peter Nilsson-2
In reply to this post by Nick Alcock
On Thu, 11 Jul 2019, Nick Alcock wrote:
> On 11 Jul 2019, Hans-Peter Nilsson stated:
> I have a big pile of stuff built up to be pushed soon (ctf linking work,
> mostly in libctf but a bit in ld): I can add your commit to it if you
> like,

Yes, please.  Thanks.

> or you can push yours independently. I'm happy with it. (But you
> can probably go further and drop the bswap_identity stuff from
> libctf/swap.h, as well.)

Right.  If it's there the next time I step by, I'll do that.

> > (Where does the idea of a byteswap.h bswap_identity_64 come from?)
>
> The uint*_identity() macros in glibc's <endian.h>.

Ah.  (Still, there's no bswap_identity_64 to duplicate.)

> > Speaking of that, I should mention that I instrumented the condition
> > to observe that the WORDS_BIGENDIAN case passes too for a presumed
> > big-endian target and glibc-2.8: there is a bswap_64 present for that
> > version.  Curiously, no test-case regressed with that instrumentation.
>
> That would be because the testcases can't really be written until the
> linker is done :) (and even then, they'll get skipped if you don't have
> a CTF-generating compiler).

I missed that there are CTF pieces missing from binutils, but to
test the CTF-consumer (say, objdump which I assume is complete),
you can generate CTF artificially.  It makes sense to me to do
that in the presence of a CTF producer, to test that the
generated format matches the specs.

> > For the record, constructing binary blobs from text source to run
> > tests on can be done by linking with --oformat binary (with most ELF
> > targets), but I guess that's seen as an unnecessary roundabout;
> > perhaps checking in binary files in the test-suite would be ok these days.
>
> I was planning to just compile test C programs using a suitable
> CTF-generating compiler, do things that need testing (like linking and
> deduplication) then objdump/readelf it and diff the results. If the user
> doesn't have a suitable compiler, it doesn't matter whether CTF works or
> not because the CTF code in binutils won't have anything to work on
> anyway: the compiler is the ultimate upstream source of all the CTF
> binutils works with, after all.

But objdump should be able to decode CTF generated by outside
producers, right?

> > BTW, what's up with the ((x)) seen in the context?  Worrying about
> > buggy library implementations not parenthesizing arguments?
>
> Pure paranoia, yes :) it's easier to add one layer of brackets than
> worry about every use of the macros. Always-parenthesised args are just
> safer. :)

Sounds better than an editing mistake. :)  Otherwise we'd do
that for all function-like system thingies that might be macros.

brgds, H-P

Re: libctf: make it compile for old glibc

Hans-Peter Nilsson-2
In reply to this post by Carlos O'Donell-6
On Thu, 11 Jul 2019, Carlos O'Donell wrote:
> Slightly off topic, but are you able to share what product this is
> that has glibc 2.8?

Thank you for your concerns...
It's just an old installation of Fedora 9.

brgds, H-P

Re: libctf: make it compile for old glibc

Nick Alcock
In reply to this post by Hans-Peter Nilsson-2
On 11 Jul 2019, Hans-Peter Nilsson spake thusly:

> On Thu, 11 Jul 2019, Nick Alcock wrote:
>> On 11 Jul 2019, Hans-Peter Nilsson stated:
>> I have a big pile of stuff built up to be pushed soon (ctf linking work,
>> mostly in libctf but a bit in ld): I can add your commit to it if you
>> like,
>
> Yes, please.  Thanks.

I'll add it into the linker batch's preparatory work. (16 commits
already...)

>> > Speaking of that, I should mention that I instrumented the condition
>> > to observe that the WORDS_BIGENDIAN case passes too for a presumed
>> > big-endian target and glibc-2.8: there is a bswap_64 present for that
>> > version.  Curiously, no test-case regressed with that instrumentation.
>>
>> That would be because the testcases can't really be written until the
>> linker is done :) (and even then, they'll get skipped if you don't have
>> a CTF-generating compiler).
>
> I missed that there are CTF pieces missing from binutils, but to
> test the CTF-consumer (say, objdump which I assume is complete),
> you can generate CTF artificially.  It makes sense to me to do
> that in the presence of a CTF producer, to test that the
> generated format matches the specs.

Yes.. but the spec hasn't been written yet either (though I'll probably
do that before I start work on the big pile of compactness increases
which will be format v4.)

>> > For the record, constructing binary blobs from text source to run
>> > tests on can be done by linking with --oformat binary (with most ELF
>> > targets), but I guess that's seen as an unnecessary roundabout;
>> > checking in binary files in the test-suite would perhaps be ok these days.
>>
>> I was planning to just compile test C programs using a suitable
>> CTF-generating compiler, do things that need testing (like linking and
>> deduplication) then objdump/readelf it and diff the results. If the user
>> doesn't have a suitable compiler, it doesn't matter whether CTF works or
>> not because the CTF code in binutils won't have anything to work on
>> anyway: the compiler is the ultimate upstream source of all the CTF
>> binutils works with, after all.
>
> But objdump should be able to decode CTF generated by outside
> producers, right?

Yes, but right now there are only two: libctf and GCC. So it's hard to
test it against a third producer, when no third exists :) though I'd
agree that when a third *does* exist, we should generate some CTF with
it and make sure that at the very least we can read it in with libctf.

>> > BTW, what's up with the ((x)) seen in the context?  Worrying about
>> > buggy library implementations not parenthesizing arguments?
>>
>> Pure paranoia, yes :) it's easier to add one layer of brackets than
>> worry about every use of the macros. Always-parenthesised args are just
>> safer. :)
>
> Sounds better than an editing mistake. :)  Otherwise we'd do
> that for all function-like system thingies that might be macros.

Doesn't everyone do that sort of thing in macro definitions? I know I
feel guilty whenever I forget.

--
NULL && (void)

Re: libctf: make it compile for old glibc

Hans-Peter Nilsson-2
On Sun, 14 Jul 2019, Nick Alcock wrote:

> On 11 Jul 2019, Hans-Peter Nilsson spake thusly:
> > But objdump should be able to decode CTF generated by outside
> > producers, right?
>
> Yes, but right now there are only two: libctf and GCC. So it's hard to
> test it against a third producer, when no third exists :) though I'd
> agree that when a third *does* exist, we should generate some CTF with
> it and make sure that at the very least we can read it in with libctf.

The posted links made me think CTF has been in heavy use somewhere in
BSD-land and for some Solaris thingy (dtrace) for years already?

Is it, but for an incompatible version?  If so, supporting the
pre-existing versions (at least reading them) will probably be
requested sooner rather than later.

> >> > BTW, what's up with the ((x)) seen in the context?  Worrying about
> >> > buggy library implementations not parenthesizing arguments?
> >>
> >> Pure paranoia, yes :) it's easier to add one layer of brackets than
> >> worry about every use of the macros. Always-parenthesised args are just
> >> safer. :)
> >
> > Sounds better than an editing mistake. :)  Otherwise we'd do
> > that for all function-like system thingies that might be macros.
>
> Doesn't everyone do that sort of thing in macro definitions? I know I
> feel guilty whenever I forget.

Not really; this is sufficiently different from the parentheses in
e.g. "#define SQUARE(x) ((x) * (x))", which should be in the memory of
everyone's fingertips.

Writing it as "#define htole64(x) bswap_64 (x)" should be
sufficient; you should not worry about definitions (system headers,
glibc) failing to parenthesize their macro arguments.
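
(To spell the distinction out, with a throwaway SQUARE_BAD for contrast:

  #define SQUARE_BAD(x) (x * x)
  #define SQUARE(x)     ((x) * (x))
  /* SQUARE_BAD (a + 1) expands to (a + 1 * a + 1) -- wrong;
     SQUARE (a + 1) expands to ((a + 1) * (a + 1)) -- right.  */

Those parentheses guard against precedence surprises in the expansion
you wrote yourself.  The extra parentheses in "bswap_64 ((x))" only
help if bswap_64 is itself a macro that forgets to parenthesize its
parameter -- the hypothetical buggy library case.)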

A "#define htole64(x) bswap_64 ((x))" (without a known buggy
version pointed out in a comment above the definition) just
causes confusion for the reader, QED.  I should add that's IMHO,
but I've never seen that "distrust" in use of macros around
here.  Even if so, it's missing a set of parentheses. ;-)

brgds, H-P

Re: libctf: make it compile for old glibc

Nick Alcock
On 15 Jul 2019, Hans-Peter Nilsson stated:

> On Sun, 14 Jul 2019, Nick Alcock wrote:
>
>> On 11 Jul 2019, Hans-Peter Nilsson spake thusly:
>> > But objdump should be able to decode CTF generated by outside
>> > producers, right?
>>
>> Yes, but right now there are only two: libctf and GCC. So it's hard to
>> test it against a third producer, when no third exists :) though I'd
>> agree that when a third *does* exist, we should generate some CTF with
>> it and make sure that at the very least we can read it in with libctf.
>
> The posted links made me think CTF was in heavy use somewhere in
> BSD-land and for some Solaris thingy since years already (dtrace)?
>
> Is it, but for an incompatible version?  If so, supporting the
> pre-existing versions (at least reading them) will probably be
> requested sooner rather than later.

It is, but for an incompatible version -- and adding support for reading
that version, and indeed writing all versions that it is possible to
write without losing information, is on my todo list and isn't even all
that hard once I've done a few internal refactorings I have to do anyway.

(Their versions have an incompatible versioning scheme, but consistently
store their data in a differently-named section, so should always be
unambiguously distinguishable from ours.)

Re: libctf: make it compile for old glibc

Nick Alcock
In reply to this post by Nick Alcock
On 14 Jul 2019, Nick Alcock verbalised:

> On 11 Jul 2019, Hans-Peter Nilsson spake thusly:
>
>> On Thu, 11 Jul 2019, Nick Alcock wrote:
>>> On 11 Jul 2019, Hans-Peter Nilsson stated:
>>> I have a big pile of stuff built up to be pushed soon (ctf linking work,
>>> mostly in libctf but a bit in ld): I can add your commit to it if you
>>> like,
>>
>> Yes, please.  Thanks.
>
> I'll add it into the linker batch's preparatory work. (16 commits
> already...)

Done, though it's not all really *applicable* yet because I noticed
while prepping it for sending that the linker changes were broken on
non-ELF, so I have to fix that first. (And for all I know, everyone else
will think my approach to the linker changes is utterly horrific and
must be totally rewritten :) I hope not, but it's a definite
possibility.)

However, hopefully the earlier parts of the series, including your
commit, will be pushable soonish.

Thanks for the fix!

--
NULL && (void)