Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets


Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hello,

I  believe  that  the  "sw_breakpoint_from_kind()"  method  declared  in
gdbserver/target.h should not be a pure virtual method.  Please read  on
for further details.

Recently, I have rebased our code base of ARC GDB [1] onto the master of
binutils-gdb [2]. During this process, I noticed that "linux target ops"
have become methods [3].

I noticed that the prototype of "sw_breakpoint_from_kind()" has become
[4]:

-------------------------[ gdbserver/target.h ]-------------------------
  ...
  class process_stratum_target
  {
  public:
    ...
    virtual const gdb_byte *
      sw_breakpoint_from_kind (int kind, int *size) = 0;
    ...
  };
  ...
------------------------------------------------------------------------

This means "sw_breakpoint_from_kind()" is a pure virtual method now.
Any class inheriting "process_stratum_target" or its derivatives
("linux_process_target" in particular) MUST implement this method.
Otherwise, the compilation will fail.

In ARC's case, the GDBserver code did not have this  method  implemented
[5]:

---------------------[ gdbserver/linux-arc-low.cc ]---------------------
  ...
  struct linux_target_ops the_low_target =
  {
    arc_arch_setup,
    arc_regs_info,
    arc_cannot_fetch_register,
    arc_cannot_store_register,
    NULL,                      /* fetch_register  */
    linux_get_pc_32bit,
    linux_set_pc_32bit,
    NULL,                      /* breakpoint_kind_from_pc  */
    NULL,                      /* sw_breakpoint_from_kind  */
    NULL,                      /* get_next_pcs  */
    0,                         /* decr_pc_after_break  */
    arc_breakpoint_at,
  };
  ...
------------------------------------------------------------------------

Now, it has to provide the implementation for "sw_breakpoint_from_kind".
That is not the whole story.  I also noticed that this  piece  of  newly
implemented code never gets executed.  This makes sense because we  have
a setup  that  looks  like  below  (both  entities  are  running  inside
GNU/Linux):

 ,------------------------------------------------.
 | ARC GDB client on x86 machine (cross debugger) |
 `------------------------------------------------'
                        /\
                        ||
                 remote debugging
                        ||
                        \/
 ,------------------------------------------------.
 |   ARC GDBserver on ARC board (native server)   |
 `------------------------------------------------'

It is the "ARC GDB client" that inserts the breakpoint.  That has always
been the case.  Otherwise, it would be impossible to break while debugging
with GDBserver in the older implementation (before the rebase).  It is worth
mentioning that the "ARC GDB client" does have "sw_breakpoint_from_kind"
implemented [6].

Last but not least, one nitpick: even though I have added the
implementation of "sw_breakpoint_from_kind", I have never done so for
"breakpoint_kind_from_pc" or "breakpoint_kind_from_current_state".
These last two are supposed to provide the "kind" that serves as the
input parameter for "sw_breakpoint_from_kind".  Therefore, even if the
new "sw_breakpoint_from_kind" code were executed, it would be
problematic.  I'm not sure what can be done about this, but I think _if_
"sw_breakpoint_from_kind" should be mandatory, then so should
"breakpoint_kind_from_pc" and "breakpoint_kind_from_current_state".


Cheers,
Shahab


[1] ARC GDB repository
https://github.com/foss-for-synopsys-dwc-arc-processors/binutils-gdb

[2] Head of the master at the time of rebase
commit: cebd6b8ac1c5a2a847a50e3efe932ff2d0867b3e
IFUNC: Update IFUNC resolver check with DT_TEXTREL

[3] Commits for converting "linux target ops" to methods
0dd7b52ede3 gdbserver/linux-low: delete 'linux_target_ops' and 'the_low_target'
fc5ecdb6304 gdbserver/linux-low: turn 'get_ipa_tdesc_idx' into a method
9eedd27d42c gdbserver/linux-low: turn 'get_syscall_trapinfo' into a method
b31cdfa69f4 gdbserver/linux-low: turn 'supports_hardware_single_step' into a method
9cfd8715514 gdbserver/linux-low: turn 'supports_range_stepping' into a method
ab64c99982e gdbserver/linux-low: turn 'emit_ops' into a method
809a0c354b9 gdbserver/linux-low: turn fast tracepoint ops into methods
13e567af27e gdbserver/linux-low: turn 'get_thread_area' into a method
47f70aa7685 gdbserver/linux-low: turn 'supports_tracepoints' into a method
a5b5da9258d gdbserver/linux-low: turn 'process_qsupported' into a method
d7599cc0826 gdbserver/linux-low: turn 'prepare_to_resume' into a method
fd000fb3dfd gdbserver/linux-low: turn process/thread addition/deletion ops into methods
cb63de7ca80 gdbserver/linux-low: turn 'siginfo_fixup' into a method
b35db73327c gdbserver/linux-low: turn '{collect, supply}_ptrace_register' into methods
ac1bbaca106 gdbserver/linux-low: turn watchpoint ops into methods
9db9aa232ac gdbserver/linux-low: turn 'insert_point' and 'remove_point' into methods
007c9b975dc gdbserver/linux-low: turn 'supports_z_point_type' into a method
d7146cda56c gdbserver/linux-low: turn 'breakpoint_at' into a method
d4807ea231e gdbserver/linux-low: turn the 'decr_pc_after_break' field into a method
7582c77c1d2 gdbserver/linux-low: turn 'supports_software_single_step' and 'get_next_pcs' into methods
3ca4edb6617 gdbserver/linux-low: turn 'sw_breakpoint_from_kind' into a method
06250e4e67c gdbserver/linux-low: turn 'breakpoint_kind_from_{pc, current_state}' into methods
bf9ae9d8c37 gdbserver/linux-low: turn 'get_pc' and 'set_pc' into methods
df95181f00d gdbserver/linux-low: turn some more static functions into private methods
bd70b1f240b gdbserver/linux-low: turn 'fetch_register' into a method
daca57a7de5 gdbserver/linux-low: turn 'cannot_{fetch/store}_register' into methods
aa8d21c9bb4 gdbserver/linux-low: turn 'regs_info' into a method
797bcff595c gdbserver/linux-low: turn 'arch_setup' into a method
ef0478f6112 gdbserver/linux-low: start turning linux target ops into methods

[4] gdbserver: turn breakpoint kind-related target ops into methods
https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=d367006fb7cf

[5] ARC's GDB server code before rebase; not providing sw_breakpoint_from_kind()
https://github.com/foss-for-synopsys-dwc-arc-processors/binutils-gdb/blob/arc-2020.03/gdbserver/linux-arc-low.cc#L311

[6] ARC's GDB client having the "sw_breakpoint_from_kind" method
https://github.com/foss-for-synopsys-dwc-arc-processors/binutils-gdb/blob/arc-2020.03/gdb/arc-linux-tdep.c#L319


Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Simon Marchi-4
On 2020-06-10 1:47 p.m., Shahab Vahedi via Gdb wrote:

> Now, it has to provide the implementation for "sw_breakpoint_from_kind".
> That is not the whole story.  I also noticed that this  piece  of  newly
> implemented code never gets executed.  This makes sense because we  have
> a setup  that  looks  like  below  (both  entities  are  running  inside
> GNU/Linux):
>
>  ,------------------------------------------------.
>  | ARC GDB client on x86 machine (cross debugger) |
>  `------------------------------------------------'
>                         /\
>                         ||
>                  remote debugging
>                         ||
>                         \/
>  ,------------------------------------------------.
>  |   ARC GDBserver on ARC board (native server)   |
>  `------------------------------------------------'
>
> It is the "ARC GDB client" that inserts the breakpoint.  That has always
> been the case.  Else, it would be impossible to  break  while  debugging
> with GDBserver in older implementation (before  rebase).   It  is  worth
> mentioning the "ARC GDB client" does have the  "sw_breakpoint_from_kind"
> implemented [].

I don't understand your "This makes sense because we have a setup that looks
like below", because that looks like a standard GDB/GDBserver setup used for
other architectures.

When a breakpoint is inserted, what's the remote protocol packet used? Is it
Z0, or is it a memory write operation that writes the breakpoint's opcode?  Z0
is the "modern" way that provides more features (like target-side condition
evaluation) and a memory write is the legacy fallback.

sw_breakpoint_from_kind would be used if the Z0 packet was used, to translate
the "kind" into an opcode.  Since you claim that sw_breakpoint_from_kind is
not used, I guess that the breakpoint is inserted with a memory write operation.
I'd look into why that is the case.  GDB tries Z0 first and falls back to the
memory write if Z0 is not supported, so your GDBserver must not support it for
some reason.
> Last  but  not  least,  one  nitpick:  Even  though  I  have  added  the
> implementation of "sw_breakpoint_from_kind", I have never  done  so  for
> "breakpoint_kind_from_pc"    or    "breakpoint_kind_from_current_state".
> These last two are supposed to provide  the  "kind"  that  will  be  the
> input parameter for "sw_breakpoint_from_kind".  Therefore, even  if  the
> new piece of "sw_breakpoint_from_kind" would be executed, that would  be
> problematic.  I'm not sure what can be done about this but I think  _if_
> "sw_breakpoint_from_kind"     should     be    mandatory,     so     are
> "breakpoint_kind_from_pc" and "breakpoint_kind_from_current_state".

As mentioned above, the input for sw_breakpoint_from_kind can come from
the Z0 packet, so it may make sense to have sw_breakpoint_from_kind without
the others.  I am not sure off-hand when the others are used.

Simon

Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hi Simon,

The ARC GDB client inserts the breakpoint by writing to memory (the
legacy way). With your explanations, I plan to add the Z0 packet
support to it.  Nevertheless, should it be still necessary to have
"sw_breakpoint_from_kind" in GDBserver as a mandatory method?

The rest follows:

On Wed, Jun 10, 2020 at 11:05:38PM -0400, Simon Marchi wrote:
> I don't understand your "This makes sense because we have a setup that looks
> like below", because that looks like a standard GDB/GDBserver setup used for
> other architectures.

I said it makes sense because the first question that pops into mind is
"If sw_breakpoint_from_kind should have been mandatory all the time,
then how come a debugging session with ARC GDBserver was able to insert
breakpoints?". The answer would be because ARC GDB client takes care of
it. In ARC case, as you concluded as well, the client asks the GDBserver
to write the opcodes to memory.

> When a breakpoint is inserted, what's the remote protocol packet used? Is it
> Z0, or is it a memory write operation that writes the breakpoint's opcode?  Z0
> is the "modern" way that provides more features (like target-side condition
> evaluation) and a memory write is the legacy fallback.
> sw_breakpoint_from_kind would be used if the Z0 packet was used, to translate
> the "kind" into an opcode.  Since you claim that sw_breakpoint_from_kind is
> not used, I guess that the breakpoint is inserted with a memory write operation.
> I'd look into why that is the case.  GDB tries Z0 first and falls back to the
> memory write if Z0 is not supported, so your GDBserver must not support it for
> some reason.

I am not sure why this could be the case. I will investigate that.

> > Last  but  not  least,  one  nitpick:  Even  though  I  have  added  the
> > implementation of "sw_breakpoint_from_kind", I have never  done  so  for
> > "breakpoint_kind_from_pc"    or    "breakpoint_kind_from_current_state".
> > These last two are supposed to provide  the  "kind"  that  will  be  the
> > input parameter for "sw_breakpoint_from_kind".  Therefore, even  if  the
> > new piece of "sw_breakpoint_from_kind" would be executed, that would  be
> > problematic.  I'm not sure what can be done about this but I think  _if_
> > "sw_breakpoint_from_kind"     should     be    mandatory,     so     are
> > "breakpoint_kind_from_pc" and "breakpoint_kind_from_current_state".
>
> As mentioned above, the input for sw_breakpoint_from_kind can come from
> the Z0 packet, so it may make sense to have sw_breakpoint_from_kind without
> the others.  I am not sure off-hand when the others are used.
>
> Simon

Shahab

Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
On 6/11/20 6:40 AM, Shahab Vahedi via Gdb wrote:

> Hi Simon,
>
> The ARC GDB client inserts the breakpoint by writing to memory (the
> legacy way). With your explanations, I plan to add the Z0 packet
> support to it.  Nevertheless, should it be still necessary to have
> "sw_breakpoint_from_kind" in GDBserver as a mandatory method?
>
> The rest follows:
>
> On Wed, Jun 10, 2020 at 11:05:38PM -0400, Simon Marchi wrote:
>> I don't understand your "This makes sense because we have a setup that looks
>> like below", because that looks like a standard GDB/GDBserver setup used for
>> other architectures.
>
> I said it makes sense because the first question that pops into mind is
> "If sw_breakpoint_from_kind should have been mandatory all the time,
> then how come a debugging session with ARC GDBserver was able to insert
> breakpoints?". The answer would be because ARC GDB client takes care of
> it. In ARC case, as you concluded as well, the client asks the GDBserver
> to write the opcodes to memory.
>> When a breakpoint is inserted, what's the remote protocol packet used? Is it
>> Z0, or is it a memory write operation that writes the breakpoint's opcode?  Z0
>> is the "modern" way that provides more features (like target-side condition
>> evaluation) and a memory write is the legacy fallback.
>> sw_breakpoint_from_kind would be used if the Z0 packet was used, to translate
>> the "kind" into an opcode.  Since you claim that sw_breakpoint_from_kind is
>> not used, I guess that the breakpoint is inserted with a memory write operation.
>> I'd look into why that is the case.  GDB tries Z0 first and falls back to the
>> memory write if Z0 is not supported, so your GDBserver must not support it for
>> some reason.
>
> I am not sure why this could be the case. I will investigate that.

Probably because the ARC port doesn't implement low_insert_point and
low_remove_point? There is only a dummy Linux implementation, and Linux
implementations for insert_point/remove_point.

Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hi Luis, Simon,

On Thu, Jun 11, 2020 at 07:35:33AM -0300, Luis Machado wrote:
> On 6/11/20 6:40 AM, Shahab Vahedi via Gdb wrote:
> > Hi Simon,
> >
> > The ARC GDB client inserts the breakpoint by writing to memory (the
> > legacy way). With your explanations, I plan to add the Z0 packet
> > support to it.  Nevertheless, should it be still necessary to have
> > "sw_breakpoint_from_kind" in GDBserver as a mandatory method?

Simon, I thought about this a little. Are we aiming for deprecating
the old way? Then I guess that's the way to go.

> >
> > On Wed, Jun 10, 2020 at 11:05:38PM -0400, Simon Marchi wrote:
> > > I'd look into why that is the case.  GDB tries Z0 first and falls back to the
> > > memory write if Z0 is not supported, so your GDBserver must not support it for
> > > some reason.
> >
> > I am not sure why this could be the case. I will investigate that.
>
> Probably because the ARC port doesn't implement low_insert_point and
> low_remove_point? There is only a dummy Linux implementation, and Linux
> implementations for insert_point/remove_point.

Luis, indeed it does not.

Shahab

Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hi Tankut,

It'd be great if we could have your feedback on this too.

Cheers,
Shahab

Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Simon Marchi-4
In reply to this post by Sourceware - gdb list mailing list
On 2020-06-11 7:00 a.m., Shahab Vahedi wrote:

> Hi Luis, Simon,
>
> On Thu, Jun 11, 2020 at 07:35:33AM -0300, Luis Machado wrote:
>> On 6/11/20 6:40 AM, Shahab Vahedi via Gdb wrote:
>>> Hi Simon,
>>>
>>> The ARC GDB client inserts the breakpoint by writing to memory (the
>>> legacy way). With your explanations, I plan to add the Z0 packet
>>> support to it.  Nevertheless, should it be still necessary to have
>>> "sw_breakpoint_from_kind" in GDBserver as a mandatory method?
>
> Simon, I thought about this a little. Are we aiming for deprecating
> the old way? Then I guess that's the way to go.

If all the gdbserver targets we support do support Z0, then yes I think
we could consider doing that.  How would we do it?  Make insert_point
and remove_point virtual pure to force sub-classes to implement them
with something meaningful?

Note that this would only concern GDBserver, other server implementations
of the remote protocol are free to support Z0 or not.  But we could decide
that all GDBserver ports have to support it.

Simon


Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Martin Simmons-2
In reply to this post by Sourceware - gdb list mailing list
>>>>> On Wed, 10 Jun 2020 19:47:02 +0200, Shahab Vahedi via Gdb said:
>
> Last  but  not  least,  one  nitpick:  Even  though  I  have  added  the
> implementation of "sw_breakpoint_from_kind", I have never  done  so  for
> "breakpoint_kind_from_pc"    or    "breakpoint_kind_from_current_state".
> These last two are supposed to provide  the  "kind"  that  will  be  the
> input parameter for "sw_breakpoint_from_kind".  Therefore, even  if  the
> new piece of "sw_breakpoint_from_kind" would be executed, that would  be
> problematic.  I'm not sure what can be done about this but I think  _if_
> "sw_breakpoint_from_kind"     should     be    mandatory,     so     are
> "breakpoint_kind_from_pc" and "breakpoint_kind_from_current_state".

"breakpoint_kind_from_pc" and "breakpoint_kind_from_current_state" have
default implementations in gdbserver/target.cc so there is no need to
make them mandatory.

__Martin

RE: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
In reply to this post by Sourceware - gdb list mailing list
On Thursday, June 11, 2020 1:44 PM, Shahab Vahedi wrote:
> Hi Tankut,
>
> It'd be great if we could have your feedback on this too.
>
> Cheers,
> Shahab

Hi,

Before the gdbserver target definitions were converted into C++ classes,
the `sw_breakpoint_from_kind` target op in linux-low was defined as follows
(the code snippet is from gdb 9.1):

  /* Implementation of the target_ops method "sw_breakpoint_from_kind".  */

  static const gdb_byte *
  linux_sw_breakpoint_from_kind (int kind, int *size)
  {
    gdb_assert (the_low_target.sw_breakpoint_from_kind != NULL);

    return (*the_low_target.sw_breakpoint_from_kind) (kind, size);
  }

So, the base linux target delegates the op to the low target and enforces
an implementation.

The `sw_breakpoint_from_kind` target op is invoked in linux_wait_1 as follows:

  if (step_over_bkpt != null_ptid
      && event_child->stop_reason == TARGET_STOPPED_BY_SW_BREAKPOINT
      && (event_child->stepping
          || !single_step_breakpoint_inserted_here (event_child->stop_pc)))
    {
      ...
      breakpoint_kind =
        the_target->breakpoint_kind_from_current_state (&stop_pc);
      the_target->sw_breakpoint_from_kind (breakpoint_kind, &increment_pc);
      ...
    }

There are two additional uses in mem-break.cc, in bp_size and bp_opcode, which are
used in several functions in that file.  These uses, as well as the one from
linux_wait_1 above, do not guard against whether the target op is implemented.
Therefore it made sense to enforce that the target op is implemented by the linux
low target by making it pure virtual.  All the linux low targets except ia64
implemented the `sw_breakpoint_from_kind` target op.

Regards
-Baris


Intel Deutschland GmbH
Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany
Tel: +49 89 99 8853-0, www.intel.de
Managing Directors: Christin Eisenschmid, Gary Kershaw
Chairperson of the Supervisory Board: Nicole Lau
Registered Office: Munich
Commercial Register: Amtsgericht Muenchen HRB 186928

RE: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
In reply to this post by Simon Marchi-4
> >>> The ARC GDB client inserts the breakpoint by writing to memory (the
> >>> legacy way). With your explanations, I plan to add the Z0 packet
> >>> support to it.  Nevertheless, should it be still necessary to have
> >>> "sw_breakpoint_from_kind" in GDBserver as a mandatory method?
> >
> > Simon, I thought about this a little. Are we aiming for deprecating
> > the old way? Then I guess that's the way to go.
>
> If all the gdbserver targets we support do support Z0, then yes I think
> we could consider doing that.  How would we do it?  Make insert_point
> and remove_point virtual pure to force sub-classes to implement them
> with something meaningful?
>
> Note that this would only concern GDBserver, other server implementations
> of the remote protocol are free to support Z0 or not.  But we could decide
> that all GDBserver ports have to support it.

The Intel Graphics architecture uses breakpoint bits inside instructions.  There
is no single breakpoint opcode as there is INT3 on IA, for example.

The breakpoint can be ignored one time, which allows stepping over breakpoints
without having to remove them.  This obviously only works if the breakpoint bit
in the original instruction is set and the instruction is not replaced with a fixed
breakpoint pattern.

I've been looking into z packets and insert/remove_point () target methods.
Since struct raw_breakpoint is opaque, it would not allow me to store a shadow
copy - unless I extended mem-break.cc to do that for me.

I ended up using the gdbarch methods.

Regards,
Markus.


Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
In reply to this post by Sourceware - gdb list mailing list
Tankut,

Thank you for your clear response. I have only one remark:

On Fri, Jun 12, 2020 at 11:04:08AM +0000, Aktemur, Tankut Baris wrote:
> So, the base linux target delegates the op to the low target and enforces
> an implementation.

What about scenarios where "sw_breakpoint_from_kind" for linux is
defined by the target itself?  In that case, the base-linux-target
flavour of "sw_breakpoint_from_kind" would never be executed, and hence
there would be no mandatory dependency on the low target's
"sw_breakpoint_from_kind".

This scenario is actually not far-fetched.  That's what's happening in the
ARC port.

Shahab

RE: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hi Shahab,

On Monday, June 15, 2020 12:39 PM, Shahab Vahedi wrote:

> Tankut,
>
> Thank you for your clear response. I have only one remark:
>
> On Fri, Jun 12, 2020 at 11:04:08AM +0000, Aktemur, Tankut Baris wrote:
> > So, the base linux target delegates the op to the low target and enforces
> > an implementation.
>
> What about scenarios that "sw_breakpoint_from_kind" for the linux is
> defined by the target?  There would be no execution of base-linux-target
> flavour of "sw_breakpoint_from_kind". Hence, not a mandatory dependency
> to the low target's "sw_breakpoint_from_kind".

I'm not sure I understand this statement fully.  Let me clarify these cases:

1. In GDB 9 (i.e. before C++'ification of gdbserver's target definition),
the base linux target implemented the "sw_breakpoint_from_kind" target op
to directly delegate to "sw_breakpoint_from_kind" of the linux low target.
It asserted that the low target's "sw_breakpoint_from_kind" function pointer
is non-null.

2. In the current GDB master (i.e. after the C++'ification refactoring), the
"sw_breakpoint_from_kind" is a pure virtual function in target.h.  The base linux
target, linux_process_target, does not implement this method; it leaves it as a
pure virtual method to be implemented by the linux low target.

So, neither in case #1 nor in case #2 does there exist a concrete base linux
target implementation.  If in case #1 the "sw_breakpoint_from_kind" target op
was invoked but the linux low target did not provide an implementation, we
would get a runtime assert.

>
> This scenario is actally not far fetched. That's what's happening in ARC
> port.
>
> Shahab

As far as I understand, there is a certain control flow and a combination
of (un)supported features in the ARC port with which the "sw_breakpoint_from_kind"
target op is not invoked.  I'm not sure what this flow or feature combination is.

There might be two alternative approaches with the current code base:

1. The new linux low target (in this case, ARC) implements the "sw_breakpoint_from_kind"
target op with a "gdb_assert_not_reached".

2. The base linux target, linux_process_target, overrides the "sw_breakpoint_from_kind"
with a "gdb_assert_not_reached" so that new linux low targets can omit overriding
the method if they choose to do so.

IMHO #1 is the better approach, but I'm not a maintainer.  The ia64 target is the
only linux low target that did not implement the "sw_breakpoint_from_kind" linux low
target op before C++'ification.  It currently contains a gdb_assert_not_reached
to mimic this behavior.

Just out of curiosity, is there a specific reason why you want to avoid implementing
the "sw_breakpoint_from_kind" target op?  If there is a breakpoint instruction defined
for ARC, it would be as simple as filling in the 'size' parameter with the length
of the instruction and returning a pointer to the instruction contents.  For many
targets, there is only one breakpoint kind and those targets simply ignore the 'kind'
argument.

-Baris



RE: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
In reply to this post by Sourceware - gdb list mailing list
On Mon, 15 Jun 2020, Metzger, Markus T via Gdb wrote:

> > Note that this would only concern GDBserver, other server implementations
> > of the remote protocol are free to support Z0 or not.  But we could decide
> > that all GDBserver ports have to support it.
>
> The Intel Graphics architecture uses breakpoint bits inside instructions.  There
> is no single breakpoint opcode as there is INT3 on IA, for example.
>
> The breakpoint can be ignored one time, which allows stepping over breakpoints
> without having to  remove them.  This obviously only works if the breakpoint bit
> in the original instruction is set and the instruction is not replaced with a fixed
> breakpoint pattern.

 Hmm, it seems to me like a use case for the breakpoint kind.

  Maciej

Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
In reply to this post by Sourceware - gdb list mailing list
Hi Baris,

Sorry for my late reply and thank you again for your thorough response.

On Tue, Jun 16, 2020 at 01:15:29PM +0000, Aktemur, Tankut Baris wrote:
> 1. In GDB 9 (i.e. before C++'ification of gdbserver's target definition),
> the base linux target implemented the "sw_breakpoint_from_kind" target op
> to directly delegate to "sw_breakpoint_from_kind" of the linux low target.
> It asserted that the low target's "sw_breakpoint_from_kind" function pointer
> is non-null.

And this is what the ARC port looked like at the time:
gdb/gdbserver/linux-arc-low.c [1]:
...
struct linux_target_ops the_low_target =
{
  arc_arch_setup,
  arc_regs_info,
  arc_cannot_fetch_register,
  arc_cannot_store_register,
  NULL,                      /* fetch_register  */
  linux_get_pc_32bit,
  linux_set_pc_32bit,
  NULL,                      /* breakpoint_kind_from_pc  */
  NULL,                      /* sw_breakpoint_from_kind  */
  NULL,                      /* get_next_pcs  */
  0,                         /* decr_pc_after_break  */
  arc_breakpoint_at,
};
...
This did not hit the assert in the base linux implementation, since
"linux_sw_breakpoint_from_kind" was never executed.  That is because
the GDB client was writing the breakpoint opcodes to the address
through gdbserver, bypassing sw_breakpoint_from_kind().

> As far as I understand, there is a certain control flow and a combination
> of (un)supported features in the ARC port with which the "sw_breakpoint_from_kind"
> target op is not invoked.  I'm not sure what this flow or feature combination is.

I guess what I explained above should be the case. I do not know
the exact control flow that led/leads to this though.
> Just out of curiosity, is there a specific reason why you want to avoid implementing
> the "sw_breakpoint_from_kind" target op?

No, there is no reason I would be against it. I brought this whole thing
up because I noticed ARC did not have sw_breakpoint_from_kind implemented and
it was working fine. Even after I implemented the pure virtual method [2], it
does not get executed (I put an assert in a temporary implementation as an
experiment). So it seemed like an unnecessary constraint for compilation.
If we want to deprecate how ARC inserts breakpoints (GDB client writing
the opcode through GDBserver) then that is fine [3]. If not, then implementing
sw_breakpoint_from_kind() shouldn't be mandatory.

I just wanted to make sure that this message is conveyed. That's all :)

For the record, I plan to change ARC's behavior to really use
sw_breakpoint_from_kind(), but that is not the purpose of this email.


Shahab

[1]
https://github.com/foss-for-synopsys-dwc-arc-processors/binutils-gdb/blob/arc-2019.09-gdb/gdb/gdbserver/linux-arc-low.c#L300

[2]
https://github.com/foss-for-synopsys-dwc-arc-processors/binutils-gdb/blob/arc-2020.09/gdbserver/linux-arc-low.cc#L350

[3]
Please keep in mind that even if deprecation is wanted, nothing is
stopping ARC (and the like) from bypassing sw_breakpoint_from_kind().

Re: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
On 6/17/20 6:31 PM, Shahab Vahedi wrote:

> Hi Baris,
>
> Sorry for my late reply and thank you again for your thorough response.
>
> On Tue, Jun 16, 2020 at 01:15:29PM +0000, Aktemur, Tankut Baris wrote:
>> 1. In GDB 9 (i.e. before C++'ification of gdbserver's target definition),
>> the base linux target implemented the "sw_breakpoint_from_kind" target op
>> to directly delegate to "sw_breakpoint_from_kind" of the linux low target.
>> It asserted that the low target's "sw_breakpoint_from_kind" function pointer
>> is non-null.
>
> And this is what the ARC port looked like at the time:
> gdb/gdbserver/linux-arc-low.c [1]:
> ...
> struct linux_target_ops the_low_target =
> {
>    arc_arch_setup,
>    arc_regs_info,
>    arc_cannot_fetch_register,
>    arc_cannot_store_register,
>    NULL,                      /* fetch_register  */
>    linux_get_pc_32bit,
>    linux_set_pc_32bit,
>    NULL,                      /* breakpoint_kind_from_pc  */
>    NULL,                      /* sw_breakpoint_from_kind  */
>    NULL,                      /* get_next_pcs  */
>    0,                         /* decr_pc_after_break  */
>    arc_breakpoint_at,
> };
> ...
> This did not hit the assert in base linux implementation, since
> "linux_sw_breakpoint_from_kind" was never executed. It's because
> the GDB client was writing the breakpoint opcodes to the address
> through gdbserver while bypassing sw_breakpoint_from_kind().
>
>> As far as I understand, there is a certain control flow and a combination
>> of (un)supported features in the ARC port with which the "sw_breakpoint_from_kind"
>> target op is not invoked.  I'm not sure what this flow or feature combination is.
>
> I suspect what I explained above is the cause, though I do not know
> the exact control flow that leads to this.
>> Just out of curiosity, is there a specific reason why you want to avoid implementing
>> the "sw_breakpoint_from_kind" target op?
>
> No, there is no particular reason to be against it. I brought this whole thing
> up because I noticed that ARC did not have sw_breakpoint_from_kind() implemented
> and it was working fine. Even after I implemented the pure virtual method [2], it
> does not get executed (I put an assert in a temporary implementation as an
> experiment). So it seemed like an unnecessary constraint for compilation.
> If we want to deprecate how ARC inserts breakpoints (the GDB client writing
> the opcode through GDBserver), then that is fine [3]. If not, then implementing
> sw_breakpoint_from_kind() should not be mandatory.
>
> I just wanted to make sure that this message is conveyed. That's all :)
>
> For the record, I plan to change ARC's behavior to really use
> sw_breakpoint_from_kind(), but that is not the purpose of this email.
>
>
> Shahab
>
> [1]
> https://github.com/foss-for-synopsys-dwc-arc-processors/binutils-gdb/blob/arc-2019.09-gdb/gdb/gdbserver/linux-arc-low.c#L300
>
> [2]
> https://github.com/foss-for-synopsys-dwc-arc-processors/binutils-gdb/blob/arc-2020.09/gdbserver/linux-arc-low.cc#L350
>
> [3]
> Please keep in mind that even if deprecation is wanted, nothing is
> stopping ARC (and the like) from bypassing sw_breakpoint_from_kind().
>

I think commit dd373349578df87396bc43e7ab00a1a5ceb16c8b made
sw_breakpoint_from_kind required, but under the old C struct, so there
were no compilation errors for ports that didn't implement it.  But it
seems this op doesn't always get used.  In fact, it may only be useful
to ARM/AArch32 and RISC-V.

The fact that multiple backends implement it the same way suggests it
is a good candidate for a common linux-low implementation that simply
returns the necessary default bits.

Most of the implementations completely ignore the KIND parameter anyway.
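
As a rough illustration of that idea (the low_breakpoint_opcode() hook and the
class names are invented for this sketch, not actual gdbserver API), the common
code could ignore KIND and return a fixed per-port opcode:

```cpp
#include <cstdint>

using gdb_byte = std::uint8_t;

// Hypothetical shared default along the lines suggested above: the
// base class owns the "ignore KIND, return a fixed opcode" logic and
// each port supplies only its opcode bytes.
struct linux_low_target
{
  virtual ~linux_low_target () = default;

  // Each port supplies its fixed software-breakpoint opcode.
  virtual const gdb_byte *low_breakpoint_opcode (int *size) = 0;

  // Shared implementation: KIND is deliberately ignored, as most
  // existing backends already do.
  const gdb_byte *sw_breakpoint_from_kind (int /* kind */, int *size)
  {
    return low_breakpoint_opcode (size);
  }
};

// Example port: a one-byte INT3-style opcode, as on x86.
struct x86_like_target : public linux_low_target
{
  const gdb_byte *low_breakpoint_opcode (int *size) override
  {
    static const gdb_byte int3 = 0xcc;  /* x86 INT3 */
    *size = 1;
    return &int3;
  }
};
```

Ports that genuinely need per-kind opcodes (ARM/AArch32, RISC-V) would still
override the kind-aware method themselves.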

RE: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hello Maciej,

> > > Note that this would only concern GDBserver, other server implementations
> > > of the remote protocol are free to support Z0 or not.  But we could decide
> > > that all GDBserver ports have to support it.
> >
> > The Intel Graphics architecture uses breakpoint bits inside instructions.  There
> > is no single breakpoint opcode as there is INT3 on IA, for example.
> >
> > The breakpoint can be ignored one time, which allows stepping over
> > breakpoints without having to remove them.  This obviously only works if
> > the breakpoint bit in the original instruction is set and the instruction
> > is not replaced with a fixed breakpoint pattern.
>
>  Hmm, it seems to me like a use case for the breakpoint kind.

That's the only kind of breakpoint this architecture supports.  As far as I understand,
the kind is per-target and used to distinguish different kinds of breakpoints supported
by this target.

The point I was trying to make is that we're also using gdbarch breakpoints and
that, in order to use z packets, we'd need insert_point() to be able to store
shadow copies.  I have not looked into that, since gdbarch breakpoints worked
for me and would also allow sharing the code between a native target and
gdbserver.
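
A toy model of such a "breakpoint bit" encoding may make this clearer (the bit
position is invented purely for illustration; the real Intel Graphics
instruction format is not shown here):

```cpp
#include <cstdint>

// Invented encoding: bit 30 of a 32-bit instruction word marks it as
// a breakpoint.  Setting the bit arms the breakpoint; clearing it
// restores the original instruction, so no dedicated breakpoint
// opcode ever replaces the instruction.
constexpr std::uint32_t BP_BIT = std::uint32_t (1) << 30;

constexpr std::uint32_t set_bp_bit (std::uint32_t insn)
{ return insn | BP_BIT; }

constexpr std::uint32_t clear_bp_bit (std::uint32_t insn)
{ return insn & ~BP_BIT; }

constexpr bool has_bp_bit (std::uint32_t insn)
{ return (insn & BP_BIT) != 0; }
```

The key property is that clearing the bit recovers the original instruction
exactly, which is what makes single-stepping over an armed breakpoint possible
without removing it.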

Markus.
Intel Deutschland GmbH
Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany
Tel: +49 89 99 8853-0, www.intel.de
Managing Directors: Christin Eisenschmid, Gary Kershaw
Chairperson of the Supervisory Board: Nicole Lau
Registered Office: Munich
Commercial Register: Amtsgericht Muenchen HRB 186928

RE: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hi Markus,

> > > > Note that this would only concern GDBserver, other server implementations
> > > > of the remote protocol are free to support Z0 or not.  But we could decide
> > > > that all GDBserver ports have to support it.
> > >
> > > The Intel Graphics architecture uses breakpoint bits inside instructions.  There
> > > is no single breakpoint opcode as there is INT3 on IA, for example.
> > >
> > > The breakpoint can be ignored one time, which allows stepping over
> > > breakpoints without having to remove them.  This obviously only works
> > > if the breakpoint bit in the original instruction is set and the
> > > instruction is not replaced with a fixed breakpoint pattern.
> >
> >  Hmm, it seems to me like a use case for the breakpoint kind.
>
> That's the only kind of breakpoint this architecture supports.  As far as I understand,
> the kind is per-target and used to distinguish different kinds of breakpoints supported
> by this target.

 From your description I infer you do have different kinds of breakpoints,
one (or more?) for each instruction.

> The point I was trying to make is that we're also using gdbarch breakpoints and
> that, in order to use z packets, we'd need insert_point() to be able to store
> shadow copies.  I have not looked into that, since gdbarch breakpoints worked
> for me and would also allow sharing the code between a native target and
> gdbserver.

 Why do you need to store any copy in `gdbserver'?  GDB keeps a record of
replaced instructions, so you can use it instead.  Just send the original
instruction as the breakpoint kind.  For consistency you can do it with
both `z0' and `Z0' packets.  If you need to encode further choices (you
write plural "breakpoint bits"), then include them with the rest of the
data in the breakpoint kind field (i.e. either set the breakpoint bits
within the encoding sent or concatenate the required information with the
instruction data).
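
The suggestion above could be sketched as follows (the `Z0,addr,kind` layout
follows the remote protocol, but using the original instruction word as the
KIND value is the non-standard, target-defined idea under discussion, and
make_z0_packet() is a made-up helper, not GDB code):

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Build a Z0 packet whose KIND field carries the original instruction
// word, so the stub could restore it later without keeping its own
// shadow copy.  KIND is an opaque, target-defined number in the RSP,
// so this interpretation is legal but entirely target-specific.
std::string make_z0_packet (std::uint64_t addr, std::uint32_t original_insn)
{
  char buf[64];
  std::snprintf (buf, sizeof buf, "Z0,%llx,%lx",
                 static_cast<unsigned long long> (addr),
                 static_cast<unsigned long> (original_insn));
  return std::string (buf);
}
```

For example, a breakpoint at 0x1000 replacing the word 0xdeadbeef would be sent
as `Z0,1000,deadbeef`.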

  Maciej

RE: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hello Maciej,

> > That's the only kind of breakpoint this architecture supports.  As far
> > as I understand, the kind is per-target and used to distinguish
> > different kinds of breakpoints supported by this target.
>
>  From your description I infer you do have different kinds of breakpoints,
> one (or more?) for each instruction.

OK, one could view it that way: we have a separate breakpoint instruction
for each instruction/operand combination.

I rather view it as a bit inside every instruction.


> > The point I was trying to make is that we're also using gdbarch
> > breakpoints and that, in order to use z packets, we'd need insert_point()
> > to be able to store shadow copies.  I have not looked into that, since
> > gdbarch breakpoints worked for me and would also allow sharing the code
> > between a native target and gdbserver.
>
>  Why do you need to store any copy in `gdbserver'?  GDB keeps a record of
> replaced instructions, so you can use it instead.  Just send the original
> instruction as the breakpoint kind.

The breakpoint kind would depend on the ISA.  And we'd have a real lot of them.

I don't think that we'd want to encode the original instruction in the breakpoint
kind.  That's what the shadow copy inside the breakpoint object is for.

Markus.


RE: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hi Markus,

> > > That's the only kind of breakpoint this architecture supports.  As far
> > > as I understand, the kind is per-target and used to distinguish
> > > different kinds of breakpoints supported by this target.
> >
> >  From your description I infer you do have different kinds of breakpoints,
> > one (or more?) for each instruction.
>
> OK, one could view it that way: we have a separate breakpoint instruction
> for each instruction/operand combination.
>
> I rather view it as a bit inside every instruction.

 Sure, that's fine, especially from the hardware's point of view.  However,
for the purpose of the RSP abstraction, I think my alternative point of
view does make sense.

> > > The point I was trying to make is that we're also using gdbarch
> > > breakpoints and that, in order to use z packets, we'd need
> > > insert_point() to be able to store shadow copies.  I have not looked
> > > into that, since gdbarch breakpoints worked for me and would also
> > > allow sharing the code between a native target and gdbserver.
> >
> >  Why do you need to store any copy in `gdbserver'?  GDB keeps a record of
> > replaced instructions, so you can use it instead.  Just send the original
> > instruction as the breakpoint kind.
>
> The breakpoint kind would depend on the ISA.  And we'd have a real lot of them.

 Well, that's just one interpretation.  The breakpoint kind is really a
generic cookie each target is free to give any interpretation to.  There
is no processing of this data I know of in common GDB or `gdbserver' code.

> I don't think that we'd want to encode the original instruction in the breakpoint
> kind.  That's what the shadow copy inside the breakpoint object is for.

 I thought you wrote you had had to choose not to use `z0'/`Z0' packets
due to the inability to utilise the breakpoint bits this way.  Maybe I got
this wrong.

  Maciej

RE: Why enforcing sw_breakpoint_from_kind() implementation in GDBserver targets

Sourceware - gdb list mailing list
Hello Maciej,

> > The breakpoint kind would depend on the ISA.  And we'd have a real lot
> > of them.
>
>  Well, that's just one interpretation.  The breakpoint kind is really a
> generic cookie each target is free to give any interpretation to.  There
> is no processing of this data I know of in common GDB or `gdbserver' code.

That's how I had interpreted it.  I thought you suggested defining one kind per
instruction/operand so the kind would encode the original instruction to restore.


> > I don't think that we'd want to encode the original instruction in the breakpoint
> > kind.  That's what the shadow copy inside the breakpoint object is for.
>
>  I thought you wrote you had had to choose not to use `z0'/`Z0' packets
> due to the inability to utilise the breakpoint bits this way.  Maybe I got
> this wrong.

The issue is that the breakpoint struct is opaque and there is no API to access
the shadow copy storage from a target's insert_point() function.

This can be changed, of course.  I did not look into it because gdbarch breakpoint
methods worked for me.
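
For illustration, such storage could look like the following standalone sketch
(entirely hypothetical; gdbserver's real breakpoint struct keeps its shadow
copies private and exposes no such API to a target's insert_point()):

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical shadow-copy store a target's insert_point() could use
// if one were exposed: it maps a breakpoint address to the original
// bytes so remove_point() can restore them later.
class shadow_store
{
public:
  // Record the original bytes that the breakpoint replaces.
  void save (std::uint64_t addr, const std::uint8_t *bytes, std::size_t len)
  {
    m_saved[addr].assign (bytes, bytes + len);
  }

  // Returns nullptr if no shadow copy is recorded for ADDR.
  const std::vector<std::uint8_t> *lookup (std::uint64_t addr) const
  {
    auto it = m_saved.find (addr);
    return it == m_saved.end () ? nullptr : &it->second;
  }

  // Forget the copy once the breakpoint is removed.
  void drop (std::uint64_t addr)
  {
    m_saved.erase (addr);
  }

private:
  std::map<std::uint64_t, std::vector<std::uint8_t>> m_saved;
};
```

Whether exposing something like this is preferable to the gdbarch breakpoint
methods is exactly the open question in this subthread.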

Markus.
