
Commit 68600f6

rgushchin authored and torvalds committed
mm: don't miss the last page because of round-off error
I've noticed that dying memory cgroups are often pinned in memory by a
single pagecache page. Even under moderate memory pressure they
sometimes stayed in such state for a long time. That looked strange.

My investigation showed that the problem is caused by applying the LRU
pressure balancing math:

  scan = div64_u64(scan * fraction[lru], denominator),

where

  denominator = fraction[anon] + fraction[file] + 1.

Because fraction[lru] is always less than denominator, if the initial
scan size is 1, the result is always 0.

This means the last page is not scanned and has no chance to be
reclaimed.

Fix this by rounding up the result of the division.

In practice this change significantly improves the speed of dying
cgroups reclaim.

[[email protected]: prevent double calculation of DIV64_U64_ROUND_UP() arguments]
Link: http://lkml.kernel.org/r/20180829213311.GA13501@castle
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
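To make the arithmetic in the changelog concrete, here is a minimal user-space sketch (not part of the commit): plain 64-bit division stands in for div64_u64(), and the fraction values are made up. With a single page left (scan == 1), truncating division always yields 0, while the round-up variant yields 1 and keeps the page reclaimable. Build with gcc or clang, since the macro uses a GNU statement expression, as the kernel macro does.

#include <stdio.h>
#include <stdint.h>

/* Same shape as the kernel macro, rounding the quotient up. */
#define DIV64_U64_ROUND_UP(ll, d) \
	({ uint64_t _tmp = (d); ((ll) + _tmp - 1) / _tmp; })

int main(void)
{
	uint64_t scan = 1;                  /* one pagecache page left      */
	uint64_t fraction[2] = { 3, 7 };    /* anon, file (made-up values)  */
	uint64_t denominator = fraction[0] + fraction[1] + 1;

	/* Old math: 1 * 7 / 11 truncates to 0, so the page is never scanned. */
	printf("truncated:  %llu\n",
	       (unsigned long long)(scan * fraction[1] / denominator));

	/* New math: (7 + 11 - 1) / 11 == 1, so the page can be reclaimed. */
	printf("rounded up: %llu\n",
	       (unsigned long long)DIV64_U64_ROUND_UP(scan * fraction[1],
						      denominator));
	return 0;
}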
1 parent 591edfb

2 files changed, 7 insertions(+), 2 deletions(-)

include/linux/math64.h  (+3)

@@ -281,4 +281,7 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
 }
 #endif /* mul_u64_u32_div */
 
+#define DIV64_U64_ROUND_UP(ll, d)	\
+	({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })
+
 #endif	/* _LINUX_MATH64_H */
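A side note on the macro's shape (my reading, not stated in the diff): capturing the divisor in _tmp inside a statement expression ensures the d argument is evaluated only once, which appears to be what the "prevent double calculation of DIV64_U64_ROUND_UP() arguments" note in the changelog refers to. A small user-space sketch of the difference, using hypothetical macro names:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical naive variant: the divisor expression is expanded twice. */
#define ROUND_UP_NAIVE(ll, d)  (((ll) + (d) - 1) / (d))

/* Shape of the kernel macro: the divisor is captured once in _tmp. */
#define ROUND_UP_ONCE(ll, d) \
	({ uint64_t _tmp = (d); ((ll) + _tmp - 1) / _tmp; })

static unsigned int calls;
static uint64_t divisor(void) { calls++; return 11; }

int main(void)
{
	calls = 0;
	(void)ROUND_UP_NAIVE(7, divisor());
	printf("naive: divisor() evaluated %u times\n", calls);  /* prints 2 */

	calls = 0;
	(void)ROUND_UP_ONCE(7, divisor());
	printf("once:  divisor() evaluated %u times\n", calls);  /* prints 1 */
	return 0;
}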

mm/vmscan.c  (+4 −2)

@@ -2456,9 +2456,11 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 			/*
 			 * Scan types proportional to swappiness and
 			 * their relative recent reclaim efficiency.
+			 * Make sure we don't miss the last page
+			 * because of a round-off error.
 			 */
-			scan = div64_u64(scan * fraction[file],
-					 denominator);
+			scan = DIV64_U64_ROUND_UP(scan * fraction[file],
+						  denominator);
 			break;
 		case SCAN_FILE:
 		case SCAN_ANON:
