[PATCH] Check for overflow in __alloc_dir

12 messages

[PATCH] Check for overflow in __alloc_dir

Florian Weimer
In __alloc_dir in sysdeps/posix/opendir.c, the st_blksize member can
contain a large value from a source that is not necessarily trusted.
Therefore, we should check that the addition does not overflow and fall
back to default_allocation in that case.

Built and regression-tested on x86_64-redhat-linux-gnu.  This is
difficult to test because it requires file system support.
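A minimal sketch of the proposed check, with illustrative constants (the
slack and default_allocation values here are placeholders, not glibc's
actual ones; the real change is in the attached patch):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative fallback size; glibc computes its own default.  */
enum { default_allocation = 4 * 4096 };

/* Compute the directory buffer size from st_blksize plus a small
   amount of slack, falling back to the default when the addition
   would wrap around.  (Placeholder slack value.)  */
size_t
dir_buffer_size (size_t block_size)
{
  const size_t slack = sizeof (void *) - 1;
  if (block_size > (size_t) -1 - slack)   /* addition would overflow */
    return default_allocation;
  return block_size + slack;
}
```

Comparing against (size_t) -1 - slack detects the wraparound without
needing any wider arithmetic.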

--
Florian Weimer / Red Hat Product Security Team

Attachment: 0001-__aloc_dir-check-for-integer-overflow-in-malloc-argu.patch (1K)

Re: [PATCH] Check for overflow in __alloc_dir

Roland McGrath
If st_blksize might ever be any kind of bogus, then I think it's better
just to cap it to some reasonable maximum (maybe a megabyte or two?).


Thanks,
Roland

Re: [PATCH] Check for overflow in __alloc_dir

Rich Felker
On Thu, Oct 11, 2012 at 01:17:41PM -0700, Roland McGrath wrote:
> If st_blksize might ever be any kind of bogus, then I think it's better
> just to cap it to some reasonable maximum (maybe a megabyte or two?).

The highest reasonable value is about 64k, and that's being generous;
I'd cap it at 4k or so, or better yet completely ignore the blksize.
That's roughly the point at which the cost of actually processing
entries drowns out the syscall cost anyway.

Rich

Re: [PATCH] Check for overflow in __alloc_dir

Roland McGrath
> The highest reasonable value is about 64k, and that's being generous;
> I'd cap it at 4k or so, or better yet completely ignore the blksize.
> That's roughly the point at which the cost of actually processing
> entries drowns out the syscall cost anyway.

For now, don't cap it at anything smaller than what actual filesystems
report today for st_blksize.  Being appropriately conservative means
setting it high enough that it doesn't change the behavior under any
circumstance other than a thoroughly misbehaving filesystem--in other
words, only cap amounts that are quite likely to actually lead to an
allocation failure or excessive memory starvation, so 2M is pretty
reasonable.

Of course, use a usefully-named internal macro for the particular value of
the cap.  We can consider a change for optimization later, and you can put
a comment at the macro definition site saying we should consider tuning
this later.  The change you're making right now is for sanity-checking, and
the criteria for that are different from those for a tuning change.
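Such a macro-based cap might look like this (MAX_DIR_BUFFER_SIZE is a
hypothetical name for the sketch; the comment marks the value as a sanity
bound rather than a tuned one):

```c
#include <assert.h>
#include <stddef.h>

/* Sanity cap on the buffer size derived from st_blksize.  2M is high
   enough that only a thoroughly misbehaving filesystem is affected;
   we should consider tuning this later.  */
#define MAX_DIR_BUFFER_SIZE (2 * 1024 * 1024)

/* Clamp a reported block size to the sanity cap.  */
size_t
sanitize_block_size (size_t block_size)
{
  return block_size <= MAX_DIR_BUFFER_SIZE ? block_size : MAX_DIR_BUFFER_SIZE;
}
```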


Thanks,
Roland

Re: [PATCH] Check for overflow in __alloc_dir

Rich Felker
On Thu, Oct 11, 2012 at 03:38:59PM -0700, Roland McGrath wrote:

> > The highest reasonable value is about 64k, and that's being generous;
> > I'd cap it at 4k or so, or better yet completely ignore the blksize.
> > That's roughly the point at which the cost of actually processing
> > entries drowns out the syscall cost anyway.
>
> For now, don't cap it at anything smaller than what actual filesystems
> report today for st_blksize.  Being appropriately conservative means
> setting high enough that it doesn't change the behavior for any
> circumstance other than a thoroughly misbehaving filesystem--in other
> words, only cap amounts that are quite likely to actually lead to an
> allocation failure or excessive memory starvation, so 2M is pretty
> reasonable.

I think it's pretty unreasonable for a small program (e.g. an ftpd)
that might have many instances and otherwise only be using <100k of
memory per instance to suddenly consume ~2M just because it's reading
a directory on an odd filesystem with large blocksize.

The kernel is perfectly capable of handling caching efficiently by
itself. There is no reason userspace should care about block sizes.

Rich

Re: [PATCH] Check for overflow in __alloc_dir

Roland McGrath
> I think it's pretty unreasonable for a small program (e.g. an ftpd)
> that might have many instances and otherwise only be using <100k of
> memory per instance to suddenly consume ~2M just because it's reading
> a directory on an odd filesystem with large blocksize.
>
> The kernel is perfectly capable of handling caching efficiently by
> itself. There is no reason userspace should care about block sizes.

This is an entirely appropriate argument to be making in favor of a
later tuning change.  The change you proposed to start with was not
for a performance issue, but for sanity-checking.

No such argument will be considered without actual data, just like all
other performance changes.  As I said, we'll be glad to have that
discussion separately after a sanity-checking change that doesn't
change current behavior on non-buggy/malicious filesystems is in.


Thanks,
Roland

Re: [PATCH] Check for overflow in __alloc_dir

Florian Weimer
In reply to this post by Roland McGrath
On 10/11/2012 10:17 PM, Roland McGrath wrote:

> If st_blksize might ever be any kind of bogus, then I think it's better
> just to cap it to some reasonable maximum (maybe a megabyte or two?).

Good idea.  What about this?  Manual testing shows that the large malloc
attempt is gone, and there are no regressions.

--
Florian Weimer / Red Hat Product Security Team

Attachment: opendir-size.patch (1K)

Re: [PATCH] Check for overflow in __alloc_dir

Roland McGrath
Use a macro for the constant and give it a comment about the choice of value.
Don't ignore a large st_blksize, just use MAX (st_blksize, MAX_ALLOCATION).


Thanks,
Roland

Re: [PATCH] Check for overflow in __alloc_dir

Florian Weimer
On 10/12/2012 04:54 PM, Roland McGrath wrote:
> Use a macro for the constant and give it a comment about the choice of value.
> Don't ignore a large st_blksize, just use MAX (st_blksize, MAX_ALLOCATION).

Sorry, I don't understand.  Do you mean MIN (st_blksize, MAX_ALLOCATION)?

--
Florian Weimer / Red Hat Product Security Team

Re: [PATCH] Check for overflow in __alloc_dir

Roland McGrath
> On 10/12/2012 04:54 PM, Roland McGrath wrote:
> > Use a macro for the constant and give it a comment about the choice of value.
> > Don't ignore a large st_blksize, just use MAX (st_blksize, MAX_ALLOCATION).
>
> Sorry, I don't understand.  Do you mean MIN (st_blksize, MAX_ALLOCATION)?

Yes, sorry.

Re: [PATCH] Check for overflow in __alloc_dir

Florian Weimer
On 10/12/2012 07:57 PM, Roland McGrath wrote:
>> On 10/12/2012 04:54 PM, Roland McGrath wrote:
>>> Use a macro for the constant and give it a comment about the choice of value.
>>> Don't ignore a large st_blksize, just use MAX (st_blksize, MAX_ALLOCATION).
>>
>> Sorry, I don't understand.  Do you mean MIN (st_blksize, MAX_ALLOCATION)?
>
> Yes, sorry.

Is this what you had in mind?

I no longer use MIN/MAX for clamping because it's so counter-intuitive.

As far as I can tell, this doesn't cause any regressions on
x86_64-redhat-linux-gnu.
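Spelled out as explicit comparisons rather than MIN, the clamp might read
as follows (a sketch with illustrative names and cap value; the actual
change is in the attached patch):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_DIR_BUFFER_SIZE (2 * 1024 * 1024)   /* illustrative cap */

/* Start from the default and take st_blksize only when it is both
   larger than the default and within the sanity cap; the explicit
   comparisons avoid the counter-intuitive MIN/MAX clamping idiom.  */
size_t
choose_allocation (size_t st_blksize, size_t default_allocation)
{
  size_t allocation = default_allocation;
  if (st_blksize > allocation && st_blksize <= MAX_DIR_BUFFER_SIZE)
    allocation = st_blksize;
  return allocation;
}
```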

--
Florian Weimer / Red Hat Product Security Team

Attachment: opendir-overflow.patch (1K)

Re: [PATCH] Check for overflow in __alloc_dir

Roland McGrath
That looks fine.