Debugger support for __float128 type?

Debugger support for __float128 type?

Ulrich Weigand
Hello,

I've been looking into supporting __float128 in the debugger, since we're
now introducing this type on PowerPC.  Initially, I simply wanted to do
whatever GDB does on Intel, but it turns out debugging __float128 doesn't
work on Intel either ...

The most obvious question is, how should the type be represented in
DWARF debug info in the first place?  Currently, GCC generates on i386:

        .uleb128 0x3    # (DIE (0x2d) DW_TAG_base_type)
        .byte   0xc     # DW_AT_byte_size
        .byte   0x4     # DW_AT_encoding
        .long   .LASF0  # DW_AT_name: "long double"

and

        .uleb128 0x3    # (DIE (0x4c) DW_TAG_base_type)
        .byte   0x10    # DW_AT_byte_size
        .byte   0x4     # DW_AT_encoding
        .long   .LASF1  # DW_AT_name: "__float128"

On x86_64, __float128 is encoded the same way, but long double is:

        .uleb128 0x3    # (DIE (0x31) DW_TAG_base_type)
        .byte   0x10    # DW_AT_byte_size
        .byte   0x4     # DW_AT_encoding
        .long   .LASF0  # DW_AT_name: "long double"

Now, GDB doesn't recognize __float128 on either platform, but on i386
it could at least in theory distinguish the two via DW_AT_byte_size.

But on x86_64 (and also on powerpc), long double and __float128 have
the identical DWARF encoding, except for the name.
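
For reference, something as small as this -- two file-scope variables
compiled with -g -- should be enough to reproduce the DIEs above:

        /* t.c -- compile with: gcc -g -c t.c  */
        long double ld   = 1.0L;
        __float128  f128 = 1.0;

and dumping the debug info (e.g. readelf --debug-dump=info) then shows
the two base type DIEs differing only in DW_AT_name.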

Looking at the current DWARF standard, it's not really clear how to
make a distinction, either.  The standard has no way to specify any
particular floating-point format; the only attributes for a base type
of DW_ATE_float encoding are related to the size.

(For the Intel case, one option might be to represent the fact that
for long double, there are only 80 data bits and the rest is padding, via
some combination of the DW_AT_bit_size and DW_AT_bit_offset or
DW_AT_data_bit_offset attributes.  But that wouldn't help for PowerPC
since both long double and __float128 really use 128 data bits,
just different encodings.)

Some options might be:

- Extend the official DWARF standard in some way

- Use a private extension (e.g. from the platform-reserved
  DW_AT_encoding value range)

- Have the debugger just hard-code a special case based
  on the __float128 name

Am I missing something here?  Any suggestions welcome ...

B.t.w. is there interest in fixing this problem for Intel?  I notice
there is a GDB bug open on the issue, but nothing seems to have happened
so far: https://sourceware.org/bugzilla/show_bug.cgi?id=14857

Bye,
Ulrich

--
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  [hidden email]


Re: Debugger support for __float128 type?

Joseph Myers
On Wed, 30 Sep 2015, Ulrich Weigand wrote:

> - Extend the official DWARF standard in some way

I think you should do this.

Note that TS 18661-4 will be coming out very soon, and includes (optional)
types

* _FloatN, where N is 16, 32, 64 or >= 128 and a multiple of 32;

* _DecimalN, where N >= 32 and a multiple of 32;

* _Float32x, _Float64x, _Float128x, _Decimal64x, _Decimal128x

so this is not simply a matter of supporting a GNU extension (not that
it's simply a GNU extension on x86_64 anyway - __float128 is explicitly
mentioned in the x86_64 ABI document), but of supporting an ISO C
extension, in any case where one of the above types is the same size and
radix as float / double / long double but has a different representation.
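
To make that concrete: with a compiler that implements the TS, a
translation unit like the following (on x86_64) contains two pairs of
same-size, same-radix types, only one of which also shares a
representation:

        _Float64    a;   /* binary64 - same representation as double */
        double      b;
        _Float128   c;   /* binary128 */
        long double d;   /* 80-bit extended, padded to 16 bytes */

Debug info only needs something beyond name and size to tell c and d
apart; a and b genuinely share a format.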

(All the above are distinct types in C, and distinct from float, double,
long double even if the representations are the same.  But I don't think
DWARF needs to distinguish e.g. float and _Float32 other than by their
name - it's only the case of different representations that needs
distinguishing.  The _Float* and _Float*x types have corresponding complex
types, but nothing further should be needed in DWARF for those once you
can represent _Float*.)

--
Joseph S. Myers
[hidden email]

Re: Debugger support for __float128 type?

Mark Kettenis
In reply to this post by Ulrich Weigand
> Date: Wed, 30 Sep 2015 19:33:44 +0200 (CEST)
> From: "Ulrich Weigand" <[hidden email]>
>
> Hello,
>
> I've been looking into supporting __float128 in the debugger, since we're
> now introducing this type on PowerPC.  Initially, I simply wanted to do
> whatever GDB does on Intel, but it turns out debugging __float128 doesn't
> work on Intel either ...
>
> The most obvious question is, how should the type be represented in
> DWARF debug info in the first place?  Currently, GCC generates on i386:
>
>         .uleb128 0x3    # (DIE (0x2d) DW_TAG_base_type)
>         .byte   0xc     # DW_AT_byte_size
>         .byte   0x4     # DW_AT_encoding
>         .long   .LASF0  # DW_AT_name: "long double"
>
> and
>
>         .uleb128 0x3    # (DIE (0x4c) DW_TAG_base_type)
>         .byte   0x10    # DW_AT_byte_size
>         .byte   0x4     # DW_AT_encoding
>         .long   .LASF1  # DW_AT_name: "__float128"
>
> On x86_64, __float128 is encoded the same way, but long double is:
>
>         .uleb128 0x3    # (DIE (0x31) DW_TAG_base_type)
>         .byte   0x10    # DW_AT_byte_size
>         .byte   0x4     # DW_AT_encoding
>         .long   .LASF0  # DW_AT_name: "long double"
>
> Now, GDB doesn't recognize __float128 on either platform, but on i386
> it could at least in theory distinguish the two via DW_AT_byte_size.
>
> But on x86_64 (and also on powerpc), long double and __float128 have
> the identical DWARF encoding, except for the name.
>
> Looking at the current DWARF standard, it's not really clear how to
> make a distinction, either.  The standard has no way to specify any
> particular floating-point format; the only attributes for a base type
> of DW_ATE_float encoding are related to the size.
>
> (For the Intel case, one option might be to represent the fact that
> for long double, there are only 80 data bits and the rest is padding, via
> some combination of the DW_AT_bit_size and DW_AT_bit_offset or
> DW_AT_data_bit_offset attributes.  But that wouldn't help for PowerPC
> since both long double and __float128 really use 128 data bits,
> just different encodings.)
>
> Some options might be:
>
> - Extend the official DWARF standard in some way
>
> - Use a private extension (e.g. from the platform-reserved
>   DW_AT_encoding value range)
>
> - Have the debugger just hard-code a special case based
>   on the __float128 name
>
> Am I missing something here?  Any suggestions welcome ...
>
> B.t.w. is there interest in fixing this problem for Intel?  I notice
> there is a GDB bug open on the issue, but nothing seems to have happened
> so far: https://sourceware.org/bugzilla/show_bug.cgi?id=14857

Perhaps you should start by explaining what __float128 actually is
on your specific platform?  And what long double actually is.

I'm guessing long double is what we sometimes call an IBM long
double, which is essentially two IEEE double-precision floating point
numbers packed together and that __float128 is an attempt to fix
history and have a proper IEEE quad-precision floating point type ;).
And that __float128 isn't actually implemented in hardware.

I fear that the idea that it is possible to determine the floating
point type purely from the size is fairly deeply engrained into the
GDB code base.  Fixing this won't be easy.  The easiest thing to do
would probably be to define a separate ABI where long double is IEEE
quad-precision.  But the horse is probably already out of the barn on
that one...

Making the decision based on the name is probably the easiest thing to
do.  But keep in mind that other OSes that currently don't support
IBM long doubles, and where long double is the same as double, may want
to define long double to be IEEE quad-precision floating point on
powerpc.

The reason people haven't bothered to fix this is probably that
nobody actually implements quad-precision floating point in hardware.
And software implementations are so slow that people don't really use
them unless they need to.  Like I did to numerically calculate some
asymptotic expansions for my thesis work...

Re: Debugger support for __float128 type?

Gabriel Paubert
On Thu, Oct 01, 2015 at 12:42:05AM +0200, Mark Kettenis wrote:

> > Date: Wed, 30 Sep 2015 19:33:44 +0200 (CEST)
> > From: "Ulrich Weigand" <[hidden email]>
> >
> > Hello,
> >
> > I've been looking into supporting __float128 in the debugger, since we're
> > now introducing this type on PowerPC.  Initially, I simply wanted to do
> > whatever GDB does on Intel, but it turns out debugging __float128 doesn't
> > work on Intel either ...
> >
> > The most obvious question is, how should the type be represented in
> > DWARF debug info in the first place?  Currently, GCC generates on i386:
> >
> >         .uleb128 0x3    # (DIE (0x2d) DW_TAG_base_type)
> >         .byte   0xc     # DW_AT_byte_size
> >         .byte   0x4     # DW_AT_encoding
> >         .long   .LASF0  # DW_AT_name: "long double"
> >
> > and
> >
> >         .uleb128 0x3    # (DIE (0x4c) DW_TAG_base_type)
> >         .byte   0x10    # DW_AT_byte_size
> >         .byte   0x4     # DW_AT_encoding
> >         .long   .LASF1  # DW_AT_name: "__float128"
> >
> > On x86_64, __float128 is encoded the same way, but long double is:
> >
> >         .uleb128 0x3    # (DIE (0x31) DW_TAG_base_type)
> >         .byte   0x10    # DW_AT_byte_size
> >         .byte   0x4     # DW_AT_encoding
> >         .long   .LASF0  # DW_AT_name: "long double"
> >
> > Now, GDB doesn't recognize __float128 on either platform, but on i386
> > it could at least in theory distinguish the two via DW_AT_byte_size.
> >
> > But on x86_64 (and also on powerpc), long double and __float128 have
> > the identical DWARF encoding, except for the name.
> >
> > Looking at the current DWARF standard, it's not really clear how to
> > make a distinction, either.  The standard has no way to specify any
> > particular floating-point format; the only attributes for a base type
> > of DW_ATE_float encoding are related to the size.
> >
> > (For the Intel case, one option might be to represent the fact that
> > for long double, there are only 80 data bits and the rest is padding, via
> > some combination of the DW_AT_bit_size and DW_AT_bit_offset or
> > DW_AT_data_bit_offset attributes.  But that wouldn't help for PowerPC
> > since both long double and __float128 really use 128 data bits,
> > just different encodings.)
> >
> > Some options might be:
> >
> > - Extend the official DWARF standard in some way
> >
> > - Use a private extension (e.g. from the platform-reserved
> >   DW_AT_encoding value range)
> >
> > - Have the debugger just hard-code a special case based
> >   on the __float128 name
> >
> > Am I missing something here?  Any suggestions welcome ...
> >
> > B.t.w. is there interest in fixing this problem for Intel?  I notice
> > there is a GDB bug open on the issue, but nothing seems to have happened
> > so far: https://sourceware.org/bugzilla/show_bug.cgi?id=14857
>
> Perhaps you should start by explaining what __float128 actually is
> on your specific platform?  And what long double actually is.
>
> I'm guessing long double is what we sometimes call an IBM long
> double, which is essentially two IEEE double-precision floating point
> numbers packed together and that __float128 is an attempt to fix
> history and have a proper IEEE quad-precision floating point type ;).
> And that __float128 isn't actually implemented in hardware.

An IBM mainframe might want to discuss this point with you :-).

See pages 24-25 of http://arith22.gforge.inria.fr/slides/s1-schwarz.pdf

Latencies are decent, not extremely low, but we are speaking of a
processor clocked at 5 GHz, so the latencies are 2.2 ns for add/subtract,
4.6 ns for multiplication, and ~10 ns for division.

To put things in perspective, how many cycles is a memory access which
misses in both L1 and L2 caches these days?

> The reason people haven't bothered to fix this is probably that
> nobody actually implements quad-precision floating point in hardware.
> And software implementations are so slow that people don't really use
> them unless they need to.  Like I did to numerically calculate some
> asymptotic expansions for my thesis work...

Which would probably run much faster if ported to a z13.

    Gabriel

Re: Debugger support for __float128 type?

Ulrich Weigand
In reply to this post by Mark Kettenis
Mark Kettenis wrote:

> > B.t.w. is there interest in fixing this problem for Intel?  I notice
> > there is a GDB bug open on the issue, but nothing seems to have happened
> > so far: https://sourceware.org/bugzilla/show_bug.cgi?id=14857
>
> Perhaps you should start by explaining what __float128 actually is
> on your specific platform?  And what long double actually is.
>
> I'm guessing long double is what we sometimes call an IBM long
> double, which is essentially two IEEE double-precision floating point
> numbers packed together and that __float128 is an attempt to fix
> history and have a proper IEEE quad-precision floating point type ;).
> And that __float128 isn't actually implemented in hardware.

Right, that's the current situation on PowerPC.  (On Intel, long double
is the 80-bit IEEE extended type, padded to either 12 bytes (32-bit)
or 16 bytes (64-bit), while __float128 is IEEE quad-precision.)
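
As a small illustration of the double-double format (the exact component
order in memory is ABI-specific, but this is what one would typically see
on powerpc64-linux with the default -mlong-double-128):

        /* double-double.c -- illustration only.  */
        #include <stdio.h>
        #include <string.h>

        int main (void)
        {
          /* Needs more than 53 bits of precision, so both halves are used.  */
          long double ld = 1.0L + 0x1p-100L;
          double parts[2];
          memcpy (parts, &ld, sizeof parts);
          /* Expect "high = 0x1p+0, low = 0x1p-100": literally two IEEE
             doubles whose sum is the represented value.  */
          printf ("high = %a, low = %a\n", parts[0], parts[1]);
          return 0;
        }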
 
> I fear that the idea that it is possible to determine the floating
> point type purely from the size is fairly deeply engrained into the
> GDB code base.  Fixing this won't be easy.  The easiest thing to do
> would probably be to define a separate ABI where long double is IEEE
> quad-precision.  But the horse is probably already out of the barn on
> that one...

Actually, I think the GDB side should be reasonably straightforward
to fix.  We can already decide on the correct floating-point format
right when a type is initially defined, and the length-based detection
of the format is only done for those types initially defined without
a format.  Currently, most of the "GDB-internal" types already provide
the format (or can be easily fixed to do so), but the types defined by
debug info do not.

However, there's no reason why e.g. dwarf2read couldn't be changed to
simply set the floating-point format directly, if there were enough
information in DWARF that could be used by some new architecture-specific
routine to detect the appropriate format.
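
Just to sketch what such a routine might look like for PowerPC (all names
are made up for illustration; this is not actual GDB code):

        #include <string.h>

        /* A real implementation would return one of GDB's floatformat
           descriptions; an enum keeps the sketch self-contained.  */
        enum ppc_float_format
        {
          PPC_FMT_IEEE_SINGLE, PPC_FMT_IEEE_DOUBLE,
          PPC_FMT_IBM_LONG_DOUBLE, PPC_FMT_IEEE_QUAD, PPC_FMT_UNKNOWN
        };

        /* Called by the DWARF reader for each DW_ATE_float base type.  */
        enum ppc_float_format
        ppc_float_format_for_type (const char *name, int byte_size)
        {
          /* __float128 is always IEEE binary128, so the name suffices.  */
          if (name != NULL && strcmp (name, "__float128") == 0)
            return PPC_FMT_IEEE_QUAD;

          switch (byte_size)
            {
            case 4:   return PPC_FMT_IEEE_SINGLE;
            case 8:   return PPC_FMT_IEEE_DOUBLE;
            case 16:  return PPC_FMT_IBM_LONG_DOUBLE;  /* current ABI default */
            default:  return PPC_FMT_UNKNOWN;
            }
        }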

> Making the decision based on the name is probably the easiest thing to
> do.  But keep in mind that other OSes that currently don't support
> IBM long doubles, and where long double is the same as double, may want
> to define long double to be IEEE quad-precision floating point on
> powerpc.

Right.  So there are three somewhat separate issues:

- Code explicitly uses the new __float128 type. Since the __float128
  type can only come from debug info, once we detect the format based
  on debug info, this should be good.  It also should always be safe
  to recognize __float128 by name, since it will always be the 128-bit
  IEEE format.

- We have a "long double" type provided by debug info of the current
  executable.  Again, if we can detect the format from debug info,
  everything should work even if "long double" is defined differently
  on different OSes.  (It could be 64-bit IEEE, 128-bit IBM long double,
  or 128-bit IEEE, I guess.)  As long as we cannot reliably detect the
  format from debug info, we'll have to fall back on the built-in type
  (as below).

- We debug an executable whose debug info does *not* provide "long
  double", but the user uses the "long double" built-in type provided
  by GDB.  In this case, we'd ideally want to detect the OS/ABI and set
  the built-in type accordingly.  When we decide to switch the definition
  of long double on Linux/PowerPC at some point in the future, ideally
  there would be some way to detect this new ABI in the executable
  (some header bit, maybe).  There's still time to define this.

Bye,
Ulrich

--
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  [hidden email]


Re: Debugger support for __float128 type?

Ulrich Weigand
In reply to this post by Joseph Myers
Joseph Myers wrote:

> On Wed, 30 Sep 2015, Ulrich Weigand wrote:
>
> > - Extend the official DWARF standard in some way
>
> I think you should do this.
>
> Note that TS 18661-4 will be coming out very soon, and includes (optional)
> types
>
> * _FloatN, where N is 16, 32, 64 or >= 128 and a multiple of 32;
>
> * _DecimalN, where N >= 32 and a multiple of 32;
>
> * _Float32x, _Float64x, _Float128x, _Decimal64x, _Decimal128x
>
> so this is not simply a matter of supporting a GNU extension (not that
> it's simply a GNU extension on x86_64 anyway - __float128 is explicitly
> mentioned in the x86_64 ABI document), but of supporting an ISO C
> extension, in any case where one of the above types is the same size and
> radix as float / double / long double but has a different representation.

Ah, thanks for pointing these out!

The _DecimalN types are already supported by DWARF using a base type with
encoding DW_ATE_decimal_float and the appropriate DW_AT_byte_size.

For the interchange type, it seems one could define a new encoding,
e.g. DW_ATE_interchange_float, and use this together with the
appropriate DW_AT_byte_size to identify the format.
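
For instance, GCC could then describe __float128 on PowerPC along these
lines (hypothetical; the encoding is shown symbolically since no numeric
value has been assigned, and abbrev/form details are elided):

        .uleb128 0x3    # (DIE (...) DW_TAG_base_type)
        .byte   0x10    # DW_AT_byte_size
        .byte   DW_ATE_interchange_float  # DW_AT_encoding (new)
        .long   .LASF1  # DW_AT_name: "__float128"

while the IBM-format long double would keep plain DW_ATE_float, so a
consumer could tell the two 16-byte types apart without looking at the
name.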

However, for the extended types, the standard does not define a storage
size or even a particular encoding, so it's not quite clear how to
handle those.  In theory, two different extended types could even have
the same size ...

On the other hand, in practice on current systems, it will probably be
true that if you look at the set of all of the basic and extended (binary)
types, there will be at most two different encodings for any given size,
one corresponding to the interchange format of that size, and one other;
so mapping those to DW_ATE_float vs. DW_ATE_interchange_float might
suffice.

I'm not sure how to handle an extended decimal format that does not
match any of the decimal interchange formats.  Does this occur in
practice at all?

> (All the above are distinct types in C, and distinct from float, double,
> long double even if the representations are the same.  But I don't think
> DWARF needs to distinguish e.g. float and _Float32 other than by their
> name - it's only the case of different representations that needs
> distinguishing.  The _Float* and _Float*x types have corresponding complex
> types, but nothing further should be needed in DWARF for those once you
> can represent _Float*.)

Well, complex types have their own encoding (DW_ATE_complex_float), so we'd
have to define the corresponding variants for those as well, e.g.
DW_ATE_complex_interchange_float or the like.

Bye,
Ulrich

--
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  [hidden email]


Re: Debugger support for __float128 type?

Joseph Myers
On Thu, 1 Oct 2015, Ulrich Weigand wrote:

> The _DecimalN types are already supported by DWARF using a base type with
> encoding DW_ATE_decimal_float and the appropriate DW_AT_byte_size.

Which doesn't actually say whether the DPD or BID encoding is used, but as
long as each architecture uses only one that's not a problem in practice.

> For the interchange type, it seems one could define a new encoding,
> e.g. DW_ATE_interchange_float, and use this together with the
> appropriate DW_AT_byte_size to identify the format.

It's not clear to me that (for example) distinguishing float and _Float32
(other than by name) is useful in DWARF (and if you change float from
DW_ATE_float to DW_ATE_interchange_float that would affect old debuggers -
is the idea to use DW_ATE_interchange_float only for the new types, not
for old types with the same encodings, so for _Float32 but not float?).  
But it's true that if you say it's an interchange type then together with
size and endianness that uniquely determines the encoding.

> I'm not sure how to handle an extended decimal format that does not
> match any of the decimal interchange formats.  Does this occur in
> practice at all?

I don't know, but I doubt it.

> Well, complex types have their own encoding (DW_ATE_complex_float), so we'd
> have to define the corresponding variants for those as well, e.g.
> DW_ATE_complex_interchange_float or the like.

And DW_ATE_imaginary_interchange_float, I suppose.

--
Joseph S. Myers
[hidden email]

Re: Debugger support for __float128 type?

Jonas Maebe
In reply to this post by Ulrich Weigand

Ulrich Weigand wrote on Thu, 01 Oct 2015:

> Right, that's the current situation on PowerPC.  (On Intel, long double
> is the 80-bit IEEE extended type, padded to either 12 bytes (32-bit)
> or 16 bytes (64-bit), while __float128 is IEEE quad-precision.)

A side note here: the Free Pascal Compiler supports the 80-bit  
extended type stored in memory as either 10 bytes (for Turbo Pascal  
and Delphi compatibility) or using the ABI-specified size (which is 16  
bytes for Darwin/i386, and otherwise as you mention above).

We always output the byte size used in the DWARF info, and in case
there are padding bytes (i.e., the non-10-byte size cases) we also  
emit the DW_AT_bit_offset.

I don't know how important it is in this context, but it may be useful  
to keep in mind.


Jonas

Re: Debugger support for __float128 type?

Ulrich Weigand
In reply to this post by Joseph Myers
Joseph Myers wrote:
> On Thu, 1 Oct 2015, Ulrich Weigand wrote:
>
> > The _DecimalN types are already supported by DWARF using a base type with
> > encoding DW_ATE_decimal_float and the appropriate DW_AT_byte_size.
>
> Which doesn't actually say whether the DPD or BID encoding is used, but as
> long as each architecture uses only one that's not a problem in practice.

I see.  Well, one could add a DW_ATE_decimal_interchange_float for
completeness, if necessary.
 

> > For the interchange type, it seems one could define a new encoding,
> > e.g. DW_ATE_interchange_float, and use this together with the
> > appropriate DW_AT_byte_size to identify the format.
>
> It's not clear to me that (for example) distinguishing float and _Float32
> (other than by name) is useful in DWARF (and if you change float from
> DW_ATE_float to DW_ATE_interchange_float that would affect old debuggers -
> is the idea to use DW_ATE_interchange_float only for the new types, not
> for old types with the same encodings, so for _Float32 but not float?).  
> But it's true that if you say it's an interchange type then together with
> size and endianness that uniquely determines the encoding.

So my thinking here was: today, DWARF deliberately does not specify the
details of the floating-point encoding format, so that it doesn't have
to get into all the various formats that exist on all the platforms
supported by DWARF.  That is why a DW_ATE_float encoding simply says:
this is a floating-point number of size N encoded as defined by the
platform ABI.

The new DW_ATE_interchange_float encoding would say instead: this is
a floating-point number of size N encoded as defined by the IEEE
interchange format.

On platforms where the ABI-defined format actually *is* the interchange
format, a DWARF producer would be free to use either DW_ATE_float or
DW_ATE_interchange_float.  This decision could of course take into
consideration compatibility requirements with older debuggers etc.

However, having two encoding definitions would allow platforms to use
both the interchange format and one additional platform-defined
non-interchange format of the same size, if needed.

> > Well, complex types have their own encoding (DW_ATE_complex_float), so we'd
> > have to define the corresponding variants for those as well, e.g.
> > DW_ATE_complex_interchange_float or the like.
>
> And DW_ATE_imaginary_interchange_float, I suppose.

Right.


As an alternative to specifying the well-defined interchange format,
another option might be to simply add a second DWARF attribute,
e.g. DW_AT_encoding_variant, to floating-point and related base types.
This would simply be an integer with platform-specific semantics.
So DWARF producers could simply describe a type as:
  this is a floating-point number of size N encoded as defined by
  platform ABI encoding variant #V

(If the attribute isn't present, we'd default to variant 0, which
is just the current encoding.)

This would allow an arbitrary number of platform-specific encodings,
any of which might or might not be IEEE-defined formats ...
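
As a purely hypothetical example (made-up variant number, abbrev/form
details elided), __float128 on PowerPC might then come out as:

        .uleb128 0x3    # (DIE (...) DW_TAG_base_type)
        .byte   0x10    # DW_AT_byte_size
        .byte   0x4     # DW_AT_encoding: DW_ATE_float
        .byte   0x1     # DW_AT_encoding_variant: say, #1 = IEEE binary128
        .long   .LASF1  # DW_AT_name: "__float128"

with long double simply omitting the attribute (variant 0, the current
IBM format).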


Bye,
Ulrich

--
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  [hidden email]


Re: Debugger support for __float128 type?

Joseph Myers
On Fri, 2 Oct 2015, Ulrich Weigand wrote:

> Joseph Myers wrote:
> > On Thu, 1 Oct 2015, Ulrich Weigand wrote:
> >
> > > The _DecimalN types are already supported by DWARF using a base type with
> > > encoding DW_ATE_decimal_float and the appropriate DW_AT_byte_size.
> >
> > Which doesn't actually say whether the DPD or BID encoding is used, but as
> > long as each architecture uses only one that's not a problem in practice.
>
> I see.  Well, one could add a DW_ATE_decimal_interchange_float for
> completeness, if necessary.

Since both DPD and BID are interchange encodings, that doesn't actually
determine things without some way to say which was used (that is, both
DW_ATE_decimal_float and DW_ATE_decimal_interchange_float would rely on
platform-specific information to determine the format).  I don't know if
DW_ATE_decimal_float is being used anywhere for something that's not an
interchange format.

> The new DW_ATE_interchange_float encoding would say instead: this is
> a floating-point number of size N encoded as defined by the IEEE
> interchange format.
>
> On platforms where the ABI-defined format actually *is* the interchange
> format, a DWARF producer would be free to use either DW_ATE_float or
> DW_ATE_interchange_float.  This decision could of course take into
> consideration compatibility requirements with older debuggers etc.
>
> However, having two encoding definitions would allow platforms to use
> both the interchange format and one additional platform-defined
> non-interchange format of the same size, if needed.

That makes sense to me.

> As an alternative to specifying the well-defined interchange format,
> another option might be to simply add a second DWARF attribute,
> e.g. DW_AT_encoding_variant, to floating-point and related base types.
> This would simply be an integer with platform-specific semantics.
> So DWARF producers could simply describe a type as:
>   this is a floating-point number of size N encoded as defined by
>   platform ABI encoding variant #V

Do you want entirely platform-specific semantics?  Or would it be better
to define standard values to mean it's an IEEE interchange format (or, for
decimal floating point, to specify whether it's DPD or BID), plus space
for future standard values and space for platform-specific values?

Would existing consumers safely ignore that attribute (so that producers
could safely specify IEEE interchange encoding for float, double etc. if
applicable, without breaking existing consumers)?

--
Joseph S. Myers
[hidden email]

Re: Debugger support for __float128 type?

Ulrich Weigand
Joseph Myers wrote:

> On Fri, 2 Oct 2015, Ulrich Weigand wrote:
> > I see.  Well, one could add a DW_ATE_decimal_interchange_float for
> > completeness, if necessary.
>
> Since both DPD and BID are interchange encodings, that doesn't actually
> determine things without some way to say which was used (that is, both
> DW_ATE_decimal_float and DW_ATE_decimal_interchange_float would rely on
> platform-specific information to determine the format).  I don't know if
> DW_ATE_decimal_float is being used anywhere for something that's not an
> interchange format.

Ah, yes.  I missed that both DPD and BID are defined as interchange
formats.  This suggestion doesn't make sense then ...

> > As an alternative to specifying the well-defined interchange format,
> > another option might be to simply add a second DWARF attribute,
> > e.g. DW_AT_encoding_variant, to floating-point and related base types.
> > This would simply be an integer with platform-specific semantics.
> > So DWARF producers could simply describe a type as:
> >   this is a floating-point number of size N encoded as defined by
> >   platform ABI encoding variant #V
>
> Do you want entirely platform-specific semantics?  Or would it be better
> to define standard values to mean it's an IEEE interchange format (or, for
> decimal floating point, to specify whether it's DPD or BID), plus space
> for future standard values and space for platform-specific values?

Hmm, I had been thinking of leaving that entirely platform-specific.
I guess one could indeed define some values with well-defined standard
semantics; that would assume DWARF would want to start getting into the
game of defining floating-point formats -- not sure what the position
of the committee would be on this question ...

[ Back when DW_ATE_decimal_float was added, the initial proposal did
indeed specify the encoding should follow IEEE-754R, but that was
removed when the proposal was actually added to the standard.  ]

> Would existing consumers safely ignore that attribute (so that producers
> could safely specify IEEE interchange encoding for float, double etc. if
> applicable, without breaking existing consumers)?

Yes, existing consumers should simply ignore attributes they are not
aware of.

Bye,
Ulrich

--
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  [hidden email]