inet: frags: get rid of ipfrag_skb_cb/FRAG_CB
commit bf66337140c64c27fa37222b7abca7e49d63fb57 upstream.
ip_defrag uses skb->cb[] to store the fragment offset, and unfortunately
this integer is currently in a different cache line than skb->next,
meaning that we use two cache lines per skb when finding the insertion point.
By aliasing skb->ip_defrag_offset and skb->dev, we pack all the fields
in a single cache line and save precious memory bandwidth.
Note that after the fast path added by Changli Gao in commit d6bebca92c
("fragment: add fast path for in-order fragments"), this change won't help
the fast path, since we still need to access prev->len (2nd cache line),
but it will show great benefits when the slow path is entered, since we
perform a linear scan of a potentially long list.
Also note that this potentially long list is an attack vector; we might
consider also using an rb-tree there eventually.
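
For a concrete before/after picture, here is a minimal sketch of the two ways of reaching the offset. It is illustrative only: the struct and helper names are simplified stand-ins rather than the exact ip_fragment.c definitions, and skb->ip_defrag_offset only exists once this patch is applied.

#include <linux/skbuff.h>

/* Before: the offset is carved out of skb->cb[], which lives in a
 * different cache line than skb->next, so every fragment visited while
 * searching for the insertion point costs two cache lines.
 */
struct frag_cb_sketch {
	int	offset;
};
#define FRAG_CB_SKETCH(skb) ((struct frag_cb_sketch *)((skb)->cb))

static inline int frag_offset_via_cb(struct sk_buff *skb)
{
	return FRAG_CB_SKETCH(skb)->offset;
}

/* After: the offset is a first-class sk_buff field (added through an
 * anonymous union by this patch), packed next to the list pointers.
 */
static inline int frag_offset_via_field(struct sk_buff *skb)
{
	return skb->ip_defrag_offset;
}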
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 29ff723c54
commit 826ff79914
1 changed file with 5 additions and 0 deletions
include/linux/skbuff.h
@@ -558,6 +558,11 @@ struct sk_buff {
 		};
 		struct rb_node		rbnode; /* used in netem & tcp stack */
 	};
+
+	union {
+		int			ip_defrag_offset;
+	};
+
 	struct sock		*sk;
 	struct net_device	*dev;
 