From: Willy Tarreau
Date: Wed, 9 Feb 2022 15:19:24 +0000 (+0100)
Subject: BUG/MINOR: pools: always flush pools about to be destroyed
X-Git-Tag: v2.5.2~22
X-Git-Url: http://git.haproxy.org/?a=commitdiff_plain;h=cc5504f86409b754317bdc2d4dc8e2c554a25549;p=haproxy-2.5.git

BUG/MINOR: pools: always flush pools about to be destroyed

When destroying a pool (e.g. at exit or when resizing buffers), it's
important to try to free all their local objects otherwise we can leave
some in the cache. This is particularly visible when changing "bufsize",
because "show pools" will then show two "trash" pools, one of which
contains a single object in cache (which is fortunately not reachable).
In all cases this happens while single-threaded so that's easy to do,
we just have to do it on the current thread.

The easiest way to do this is to pass an extra argument to function
pool_evict_from_local_cache() to force a full flush instead of a
partial one.

This can probably be backported to about all branches where this
applies, but at least 2.4 needs it.

(cherry picked from commit c895c441c7579db652e4ed976c14c2f5b2de0c0e)
[wt: context adjustments as 2.5 doesn't use pool_evict_last_items()]
Signed-off-by: Willy Tarreau
---

diff --git a/include/haproxy/pool.h b/include/haproxy/pool.h
index 245f2ff..f329ef1 100644
--- a/include/haproxy/pool.h
+++ b/include/haproxy/pool.h
@@ -73,7 +73,7 @@ int mem_should_fail(const struct pool_head *pool);
 extern THREAD_LOCAL size_t pool_cache_bytes; /* total cache size */
 extern THREAD_LOCAL size_t pool_cache_count; /* #cache objects */
 
-void pool_evict_from_local_cache(struct pool_head *pool);
+void pool_evict_from_local_cache(struct pool_head *pool, int full);
 void pool_evict_from_local_caches(void);
 void pool_put_to_cache(struct pool_head *pool, void *ptr);
 
diff --git a/src/pool.c b/src/pool.c
index 15e58d3..37c6ed0 100644
--- a/src/pool.c
+++ b/src/pool.c
@@ -275,13 +275,14 @@ void pool_free_nocache(struct pool_head *pool, void *ptr)
  * we don't want a single cache to use all the cache for itself). For this, the
  * list is scanned in reverse.
  */
-void pool_evict_from_local_cache(struct pool_head *pool)
+void pool_evict_from_local_cache(struct pool_head *pool, int full)
 {
 	struct pool_cache_head *ph = &pool->cache[tid];
 	struct pool_cache_item *item;
 
-	while (ph->count >= 16 + pool_cache_count / 8 &&
-	       pool_cache_bytes > CONFIG_HAP_POOL_CACHE_SIZE * 3 / 4) {
+	while ((ph->count && full) ||
+	       (ph->count >= 16 + pool_cache_count / 8 &&
+	        pool_cache_bytes > CONFIG_HAP_POOL_CACHE_SIZE * 3 / 4)) {
 		item = LIST_NEXT(&ph->list, typeof(item), by_pool);
 		ph->count--;
 		pool_cache_bytes -= pool->size;
@@ -338,7 +339,7 @@ void pool_put_to_cache(struct pool_head *pool, void *ptr)
 
 	if (unlikely(pool_cache_bytes > CONFIG_HAP_POOL_CACHE_SIZE * 3 / 4)) {
 		if (ph->count >= 16 + pool_cache_count / 8)
-			pool_evict_from_local_cache(pool);
+			pool_evict_from_local_cache(pool, 0);
 		if (pool_cache_bytes > CONFIG_HAP_POOL_CACHE_SIZE)
 			pool_evict_from_local_caches();
 	}
@@ -499,6 +500,9 @@ void pool_free_area_uaf(void *area, size_t size)
 void *pool_destroy(struct pool_head *pool)
 {
 	if (pool) {
+#ifdef CONFIG_HAP_POOLS
+		pool_evict_from_local_cache(pool, 1);
+#endif
 		pool_flush(pool);
 		if (pool->used)
 			return pool;
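
Below is a minimal, self-contained C sketch (not HAProxy code) of the pattern the patch introduces: an eviction routine taking a "full" flag that either drains a per-pool cache completely or only trims it while it sits above its soft thresholds. All names here (toy_cache, toy_evict, TOY_CACHE_LIMIT) and the limits are invented for illustration; the real pool_evict_from_local_cache() additionally unlinks items from per-thread lists and maintains global byte counters.

	#include <stdio.h>
	#include <stdlib.h>

	#define TOY_CACHE_LIMIT 64  /* stand-in for CONFIG_HAP_POOL_CACHE_SIZE (bytes) */

	struct toy_cache {
		void *items[128];
		int   count;        /* objects currently held in the cache */
		int   total_bytes;  /* rough byte accounting for the threshold test */
		int   obj_size;
	};

	/* Evict cached objects: everything when full != 0, otherwise only while the
	 * cache exceeds its soft thresholds (same shape as the patched while()). */
	static void toy_evict(struct toy_cache *c, int full)
	{
		while ((c->count && full) ||
		       (c->count >= 16 && c->total_bytes > TOY_CACHE_LIMIT * 3 / 4)) {
			c->count--;
			c->total_bytes -= c->obj_size;
			free(c->items[c->count]);
		}
	}

	int main(void)
	{
		struct toy_cache c = { .count = 0, .total_bytes = 0, .obj_size = 8 };

		for (int i = 0; i < 20; i++) {
			c.items[c.count++] = malloc(c.obj_size);
			c.total_bytes += c.obj_size;
		}

		toy_evict(&c, 0);   /* partial trim: stops once below the thresholds */
		printf("after partial eviction: %d objects left\n", c.count);

		toy_evict(&c, 1);   /* full flush, as done before destroying a pool */
		printf("after full eviction:    %d objects left\n", c.count);
		return 0;
	}

Running the sketch shows the partial pass stopping as soon as the thresholds are satisfied while the full pass empties the cache entirely, which is why pool_destroy() needs the full variant: a threshold-based trim may legitimately leave objects behind.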