William Lallemand [Fri, 9 May 2025 17:01:28 +0000 (19:01 +0200)]
BUG/MINOR: ssl/ckch: always free() the previous entry during parsing
The ckch_conf_parse() function is the generic function which parses
crt-store keywords from the crt-store section, and also from a crt-list.
When the same keyword appears multiple times, the previous value is
leaked. This patch ensures that the previous value is always freed
before overwriting it.
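As a minimal standalone sketch of the pattern (the real ckch_conf_parse()
code differs, names here are illustrative): any previously stored value is
released before the new one is installed, so a repeated keyword cannot leak
the old string.

#include <stdlib.h>
#include <string.h>

/* hypothetical helper: store a keyword value, releasing any previous one */
static int set_kw_value(char **slot, const char *new_val)
{
    free(*slot);               /* free(NULL) is a no-op, so this is safe */
    *slot = strdup(new_val);   /* keep our own copy of the new value */
    return *slot ? 0 : -1;     /* report allocation failure */
}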
This patch should be backported as far as 3.0.
(cherry picked from commit
311e0aa5c712f38700b7b185c0d5f1aa33c48613)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
adebe6246558ece09c82fbbf3e549cf5e4cdd2ec)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Aurelien DARRAGON [Wed, 7 May 2025 23:01:28 +0000 (01:01 +0200)]
BUG/MINOR: cli: fix too many args detection for commands
d3f928944 ("BUG/MINOR: cli: Issue an error when too many args are passed
for a command") added a new check to prevent the command to run when
too many arguments are provided. In this case an error is reported.
However it turns out this check (despite marked for backports) was
ineffective prior to
20ec1de21 ("MAJOR: cli: Refacor parsing and
execution of pipelined commands") as 'p' pointer was reset to the end of
the buffer before the check was executed.
Now since 20ec1de21, the check works, but we have another issue: we may
read past initialized bytes in the buffer because the 'p' pointer is always
incremented in a while loop without checking whether it goes past 'end'
(this was detected using valgrind).
To fix the issue introduced by 20ec1de21, let's only increment the 'p'
pointer if p < end.
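As a standalone sketch of the bounded walk (illustrative names, not the
actual CLI parser): 'p' is only advanced while it stays strictly below
'end', so no read past the initialized bytes can happen even when the last
word touches the end of the buffer.

/* split the buffer [p, end) into at most max_args words (illustration only) */
static int split_args(char *p, char *end, char **args, int max_args)
{
    int argl = 0;

    while (argl < max_args && p < end && *p) {
        while (p < end && *p == ' ')
            *p++ = 0;               /* cut and skip separators */
        if (p >= end || !*p)
            break;
        args[argl++] = p;           /* start of the next word */
        while (p < end && *p && *p != ' ')
            p++;                    /* consume the word */
    }
    return argl;                    /* number of arguments found */
}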
For 3.2 this is it; for older versions, since d3f928944 was marked for
backport, a slightly different approach is needed:
- the conditional p increment must be done in the loop (as in this patch)
- the max arg check must be moved above the "fill unused slots" comment
where p is assigned to the end of the buffer
This patch should be backported with d3f928944.
(cherry picked from commit
976e0bd32f6d87fccf351edf1b5a18f3e0c5c6ab)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
71f72809886476e5cf3bf77631324e2d6b89071f)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Willy Tarreau [Tue, 6 May 2025 09:32:34 +0000 (11:32 +0200)]
BUG/MEDIUM: peers: hold the refcnt until updating ts->seen
In peer_treat_updatemsg(), we call stktable_touch_remote() after
releasing the write lock on the TS, asking it to decrement the
refcnt, then we update ts->seen. Unfortunately this is racy and
causes the issue that Christian reported in issue #2959.
The sequence of events is very hard to trigger manually, but what happens
is the following:
T1. stktable_touch_remote(table, ts, 1);
-> at this point the entry is in the mt_list, and the refcnt is zero.
T2. stktable_trash_oldest() or process_table_expire()
-> these can run, because the refcnt is now zero.
The entry is cleanly deleted and freed.
T1. HA_ATOMIC_STORE(&ts->seen, 1)
-> we dereference freed memory.
A first attempt at a fix was made by keeping the refcnt held during
all the time the entry is in the mt_list, but this is expensive as
such entries cannot be purged, causing lots of skips during
trash_oldest_data(). This managed to trigger watchdogs, and was only
hiding the real cause of the problem.
The correct approach clearly is to maintain the ref_cnt until we
touch ->seen. That's what this patch does. It no longer decrements the
refcnt when calling stktable_touch_remote(), and instead does it
manually after touching ->seen. With this the problem is gone.
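A rough sketch of the new ordering, reusing the names quoted above (the
exact code in peer_treat_updatemsg() may differ slightly):

stktable_touch_remote(table, ts, 0);  /* 0: do not drop the refcnt here */
HA_ATOMIC_STORE(&ts->seen, 1);        /* safe: we still hold a reference */
HA_ATOMIC_DEC(&ts->ref_cnt);          /* only now may expiry/trash free it */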
Note that a reproducer involves the following:
- a config with 10 stick-ctr tracking the same table with a
random key between 10M and 100M depending on the machine.
- the expiration should be between 10 and 20s. http_req_cnt
is stored and shared with the peers.
- 4 total processes with such a config on the local machine,
each corresponding to a different peer. 3 of the peers are
bound to half of the cores (all threads) and share the same
threads; the last process is bound to the other half with
its own threads.
- injecting at full load, ~256 conn, on the shared listening
port. After about twice the expiration time (up to 1 minute), the lone
process should segfault in pools code due to a corrupted by_lru list.
This problem already exists in earlier versions but the race looks
narrower. Given how difficult it is to trigger on a given machine
in its current form, it's likely that it only happens once in a
while on stable branches. The fix must be backported wherever the
code is similar, and there's no hope to reproduce it to validate
the backport.
Thanks again to Christian for his amazing help!
(cherry picked from commit
006a3acbde309f11190a003c4ac1c026480444e4)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
a3343abd2cce9e72f2db1cb1e8a80b277a769fd7)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Amaury Denoyelle [Tue, 6 May 2025 16:01:32 +0000 (18:01 +0200)]
BUG/MINOR: quic: reject invalid max_udp_payload size
Add a check on the received max_udp_payload transport parameter. As
defined per RFC 9000, values below 1200 are invalid, and thus the
connection must be closed with the TRANSPORT_PARAMETER_ERROR code.
Prior to this patch, an invalid value was silently ignored.
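The shape of the added check, as a hedged sketch (field and enum names
follow this series, the exact code may differ):

/* RFC 9000: a max_udp_payload_size below 1200 is a protocol violation */
if (tp->max_udp_payload_size < 1200)
    return QUIC_TP_DEC_ERR_INVAL;  /* mapped to TRANSPORT_PARAMETER_ERROR */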
This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
(cherry picked from commit
4bc7aa548adcd9ee424c65cd346e94f8749dce64)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
c806ee0300fd82f6414a172a3c5829afaeb16eac)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Amaury Denoyelle [Tue, 6 May 2025 16:01:09 +0000 (18:01 +0200)]
BUG/MINOR: quic: fix TP reject on invalid max-ack-delay
Checks are implemented on some received transport parameter values,
to reject invalid ones defined per RFC 9000. This is the case for
max_ack_delay parameter.
The check was not properly implemented as it only rejected values strictly
greater than the limit set to 2^14. Fix this by rejecting values of 2^14
and above. Also, the proper TRANSPORT_PARAMETER_ERROR error code is now
set.
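A similar hedged sketch for this check (illustrative only):

/* RFC 9000: max_ack_delay values of 2^14 or more are invalid */
if (tp->max_ack_delay >= (1 << 14))
    return QUIC_TP_DEC_ERR_INVAL;  /* TRANSPORT_PARAMETER_ERROR */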
This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
(cherry picked from commit
ffabfb0fc3ad8774024d152fc31a7711a8a9c382)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
732bba41245e2365eb24bdbb856f5ed44f06d262)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Amaury Denoyelle [Tue, 6 May 2025 16:00:43 +0000 (18:00 +0200)]
BUG/MINOR: quic: use proper error code on invalid received TP value
As per RFC 9000, checks must be implemented to reject invalid values for
received transport parameters. Such values are dependent on the
parameter type.
Checks were already implemented for ack_delay_exponent and
active_connection_id_limit, in accordance with the QUIC specification.
However, the connection was closed with an incorrect error code. Fix
this to ensure that the TRANSPORT_PARAMETER_ERROR code is used as expected.
This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
(cherry picked from commit
b60a17aad768369ab7e328949112b50cd78bc987)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
af4048e4e36503557d90e37514a8a3e8e7210c03)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Amaury Denoyelle [Tue, 6 May 2025 15:59:37 +0000 (17:59 +0200)]
BUG/MINOR: quic: reject retry_source_cid TP on server side
Close the connection on error if retry_source_connection_id transport
parameter is received. This is specified by RFC 9000 as this parameter
must not be emitted by a client. Previously, it was silently ignored.
This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
(cherry picked from commit
10f1f1adce032742d60fe14ee780871c4e6a1db1)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
f66a92724a8f8469e98abcaeccf46cc58ea7fb5c)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Amaury Denoyelle [Tue, 6 May 2025 15:59:21 +0000 (17:59 +0200)]
BUG/MINOR: quic: use proper error code on invalid server TP
This commit is similar to the previous one. It fixes the error code
reported when dealing with invalid received transport parameters. This
time, it handles reception of original_destination_connection_id,
preferred_address and stateless_reset_token which must not be emitted by
the client.
This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
(cherry picked from commit
a54fdd3d926fabfc438dbaedbd3d08814fb99862)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
b70be5c97c713c12e9e2a2483b7c52a5a849fcd4)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Amaury Denoyelle [Tue, 6 May 2025 14:45:23 +0000 (16:45 +0200)]
BUG/MINOR: quic: use proper error code on missing CID in TPs
Handle missing received transport parameter value
initial_source_connection_id / original_destination_connection_id.
Previously, such a case would result in an error reported via
quic_transport_params_store(), which triggers a TLS alert converted as
expected as a CONNECTION_CLOSE. The issue is that the error code
reported in the frame was incorrect.
Fix this by returning QUIC_TP_DEC_ERR_INVAL for such conditions. This is
directly handled via quic_transport_params_store() which sets the proper
TRANSPORT_PARAMETER_ERROR code for the CONNECTION_CLOSE. However, no
error is reported so the SSL handshake is properly terminated without a
TLS alert. This is enough to ensure that the CONNECTION_CLOSE frame will
be emitted as expected.
This should be backported up to 2.6. Note that it relies on the previous
patch "MINOR: quic: extend return value on TP parsing".
(cherry picked from commit
df6bd4909e8dfa64337662a78adc39ca702c1dc7)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
5eb284aa55fa3a7c12060b3834000dd66f88493e)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Amaury Denoyelle [Tue, 6 May 2025 16:10:27 +0000 (18:10 +0200)]
MINOR: quic: extend return value during TP parsing
Extend the API used for QUIC transport parameter decoding. This is done via
the introduction of a dedicated enum to report the various error
conditions detected. No functional change should occur with this patch,
as the only returned code is QUIC_TP_DEC_ERR_TRUNC, which results in the
connection closure via a TLS alert.
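A plausible shape for the new return type (the NONE value and exact naming
are assumptions; only the TRUNC and INVAL codes are quoted in this series):

enum quic_tp_dec_err {
    QUIC_TP_DEC_ERR_NONE = 0, /* parsing succeeded */
    QUIC_TP_DEC_ERR_TRUNC,    /* truncated/malformed encoding: TLS alert */
    QUIC_TP_DEC_ERR_INVAL,    /* invalid value: TRANSPORT_PARAMETER_ERROR */
};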
This patch will be necessary to properly reject transport parameters
with the proper CONNECTION_CLOSE error code. As such, it should be
backported up to 2.6 with the following series.
(cherry picked from commit
294bf26c06404e35edd4ad3381ccb26e835bd7a1)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
f5a5564f65da6190e1cf4a7c92a1610c2fe03caf)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Aurelien DARRAGON [Fri, 2 May 2025 17:06:44 +0000 (19:06 +0200)]
BUG/MINOR: proxy: only use proxy_inc_fe_cum_sess_ver_ctr() with frontends
proxy_inc_fe_cum_sess_ver_ctr() was implemented in 9969adbc ("MINOR:
stats: add by HTTP version cumulated number of sessions and requests").
As its name suggests, it is meant to be called for frontends, not backends.
Also, in 9969adbc, when used under h1_init(), a precaution is taken to
ensure that the function is only called with frontends.
However, this precaution was not applied in h2_init() and qc_init().
Due to this, it remains possible to have proxy_inc_fe_cum_sess_ver_ctr()
being called with a backend proxy as parameter. While it did not cause
known issues so far, it is not expected and could result in bugs in the
future. Better fix this by ensuring the function is only called with
frontends.
It may be backported up to 2.8.
(cherry picked from commit
b39825ee45150415d7ed64b7ce785bb946f727bd)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
736bb497acb656e736291f8113650a62bcf96f5f)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Christopher Faulet [Wed, 30 Apr 2025 12:32:16 +0000 (14:32 +0200)]
BUG/MINOR: mux-h1: Fix trace message in h1_destroy() to not rely on connection
h1_destroy() may be called to release a H1C after a multiplexer upgrade. In
that case, the connection is no longer attached to the H1C. It must not be
used in the h1 trace message because the connection context is no longer a H1C.
Because of this bug, when a H1>H2 upgrade is performed, a crash may be
experienced if the H1 traces are enabled.
This patch must be backported to all stable versions.
(cherry picked from commit
7dc4e94830ef8fe9b0ffc2901d63b9f3183ed12c)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
cad223e19946a6d02261a259c2be5c20096ef2cd)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Christopher Faulet [Wed, 30 Apr 2025 12:16:42 +0000 (14:16 +0200)]
BUG/MINOR: mux-h1: Don't pretend connection was released for TCP>H1>H2 upgrade
When an applicative upgrade of the H1 multiplexer is performed, we must not
pretend the connection was released. Indeed, in that case, a H1 stream is
still there with a stream connector attached to it. It must be detached
first before releasing the H1 connection and the underlying connection. So
it is important to not pretend the connection was already released.
Concretely, in that case h1_process() must return 0 instead of -1. It is a
minor error because, AFAIK, it is harmless. But it is not correct. So let's
fix it to avoid future bugs.
To be clear, this happens when a TCP connection is upgraded to H1 connection
and a H2 preface is detected, leading to a second upgrade from H1 to H2.
This patch may be backport to all stable versions.
(cherry picked from commit
2dc334be61a0a9feaa7b844e122c2c4ce37e1b1a)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
3b774ac747d54f107b25ed1c6e9e16720e78d421)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Aurelien DARRAGON [Tue, 29 Apr 2025 08:22:38 +0000 (10:22 +0200)]
BUG/MINOR: dns: prevent ds accumulation within dss
When the dns session callback (dns_session_release()) is called upon error
(ie: when some pending queries were not sent), we try our best to
re-create the applet in order to preserve the pending queries and give
them a chance to be retried. This is done at the end of
dns_session_release().
However, doing so exposes us to an issue: if the error preventing queries
from being sent is still encountered over and over, the dns session could
stay there indefinitely. Meanwhile, other dns sessions may be created on
the same dns_stream_server periodically. If previous failing dns sessions
don't terminate but we also keep creating new ones, we end up accumulating
failing sessions on a given dns_stream_server, which can eventually cause
a resource shortage.
This issue was found when trying to address ("BUG/MINOR: dns: add tempo
between 2 connection attempts for dns servers")
To fix it, we track the number of consecutive failed sessions for a given
dns server. When we reach the threshold (set to 100), we consider that the
link to the dns server is broken (at least temporarily) and we force
dns_session_new() to fail, so that we stop creating new sessions until one
of the existing ones eventually succeeds.
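A sketch of the idea, with illustrative names (the actual field and
threshold constant may be named differently):

#define DNS_MAX_DSS_CONSECUTIVE_ERRORS 100  /* threshold mentioned above */

/* in dns_session_new(): refuse to create yet another session while the
 * link to the server looks broken; a session that eventually succeeds
 * resets the counter and lifts the limit */
if (dss->consecutive_errors >= DNS_MAX_DSS_CONSECUTIVE_ERRORS)
    return NULL;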
A workaround for this issue consists in setting the "maxconn" parameter on
the nameserver directive (under the resolvers section) to a reasonable
value so that no more than "maxconn" sessions may co-exist on the same
server at a given time.
This may be backported to all stable versions.
("CLEANUP: dns: remove unused dns_stream_server struct member") may be
backported to ease the backport.
(cherry picked from commit
5288b39011b2449bfa896f7932c7702b5a85ee77)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
020348974b09cb011159260c08a8b821456029a6)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Aurelien DARRAGON [Tue, 29 Apr 2025 14:48:28 +0000 (16:48 +0200)]
CLEANUP: dns: remove unused dns_stream_server struct member
dns_stream_server "max_slots" is unused, let's get rid of it
(cherry picked from commit
14ebe95a10e7fdc003e369b58463a4744e88fa8e)
[wt: just to help with next backport]
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
73f11d9caafeacbca0b68e660c119cb60fa976fe)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Aurelien DARRAGON [Tue, 29 Apr 2025 18:13:00 +0000 (20:13 +0200)]
BUG/MINOR: dns: add tempo between 2 connection attempts for dns servers
As reported by Lukas Tribus on the mailing list [1], trying to connect to
a nameserver with invalid network settings causes haproxy to retry a new
connection attempt immediately which eventually causes unexpected CPU usage
on the thread responsible for the applet (namely 100% on one CPU will be
observed).
This can be reproduced with the test config below:
resolvers default
nameserver ns1 tcp4@8.8.8.8:53 source 192.168.99.99
listen listen
mode http
bind :8080
server s1 www.google.com resolvers default init-addr none
To fix this issue, we add a one-second delay before a new connection
attempt is retried. We do this in dns_session_create() when we know that
the applet was created in the release callback (when the previous query
attempt was unsuccessful), which means the initial connection is not
affected.
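A hedged sketch of the delay, assuming the appctx_schedule() macro added in
the next patch and an illustrative condition (the real code differs):

/* in dns_session_create(): when the session replaces one that just failed,
 * wait one second before the applet runs instead of retrying immediately */
if (created_from_release_callback)
    appctx_schedule(appctx, tick_add(now_ms, 1000));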
[1]: https://www.mail-archive.com/haproxy@formilux.org/msg45665.html
This should fix GH #2909 and may be backported to all stable versions.
This patch depends on ("MINOR: applet: add appctx_schedule() macro")
(cherry picked from commit
27236f2218e57d839197e5b6479dc8e67e484e32)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
70653e0831b54a1b2062d377352830cca3f56ea1)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Aurelien DARRAGON [Mon, 28 Apr 2025 16:03:36 +0000 (18:03 +0200)]
MINOR: applet: add appctx_schedule() macro
Just like task_schedule(), but for applets: it wakes up an applet at a
specific time and leverages _task_schedule() internally.
(cherry picked from commit
1ced5ef2fdf9a4b4ab0941849be0f4627066331f)
[wt: needed for next dns backport]
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
3fdb657f6b2b02ac6b1b95be0217b06769fc499e)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Willy Tarreau [Fri, 25 Apr 2025 16:32:02 +0000 (18:32 +0200)]
BUG/MAJOR: listeners: transfer connection accounting when switching listeners
Since we made it possible for a bind_conf to listen to multiple thread
groups with shards in 2.8 with commit 9d360604bd ("MEDIUM: listener:
rework thread assignment to consider all groups"), the per-listener
connection count was not properly transferred to the target listener
with the connection when switching to another thread group. This results
in one listener possibly reaching high values and another one possibly
reaching negative values. Usually it's not visible, unless a maxconn is
set on the bind_conf, in which case comparisons will quickly put an end
to the willingness to accept new connections.
This problem only happens when thread groups are enabled, and it seems
very hard to trigger normally. It only impacts sockets having a single
shard, hence currently the CLI (or any conf with "bind ... shards 1"),
where it can be reproduced with a config having a very low "maxconn" on
the stats socket directive (here, 4), and issuing a few tens of
socat <<< "show activity" in parallel, or sending HTTP connections to a
single-shard listener. Very quickly, haproxy stops accepting connections
and eats CPU in the poller which tries to get its connections accepted.
A BUG_ON(l->nbconn<0) after HA_ATOMIC_DEC() in listener_release() also
helps spotting them better.
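The principle of the fix, as a hedged sketch (not the actual code): when a
connection is handed over to the listener of another thread group, its
accounting must follow it.

/* move the per-listener connection count along with the connection */
HA_ATOMIC_DEC(&old_li->nbconn);
HA_ATOMIC_INC(&new_li->nbconn);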
Many thanks to Christian Ruppert who once again provided a very accurate
report in GH #2951 with the required data permitting this analysis.
This fix must be backported to 2.8.
(cherry picked from commit
f1064c7382e170cd59dd7984808bc436e2b3de7f)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
33ec1e2fb357e46a83be886624cf695259fd1bf6)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Christopher Faulet [Wed, 23 Apr 2025 13:29:00 +0000 (15:29 +0200)]
BUG/MINOR: cli: Issue an error when too many args are passed for a command
When a command is parsed to split it into an array of arguments, by default,
at most 64 arguments are supported. But no warning was emitted when there
were too many arguments. Instead, the arguments above the limit were
silently ignored. It could be an issue for some commands, like "add server",
because there was no way to know some arguments were ignored.
Now an error is issued when too many arguments are passed and the command is
not executed.
This patch should be backported to all stable versions.
(cherry picked from commit
d3f92894471dd7306f857156a5820c53c10f8d88)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
cb0fbd5e1a7cdde5210fea8ef87ede21c431807d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Christopher Faulet [Fri, 28 Feb 2025 15:07:00 +0000 (16:07 +0100)]
BUG/MEDIUM: mux-fcgi: Try to fully fill demux buffer on receive if not empty
Don't reserve space for the HTX overhead on receive if the demux buffer is
not empty. Otherwise, the demux buffer may be erroneously reported as full
and this may block record processing. Because of this bug, a ping-pong loop
between data reception and the demux process, lasting until the timeout
expires, can be observed.
This bug was introduced by commit 5f927f603 ("BUG/MEDIUM: mux-fcgi:
Properly handle read0 on partial records"). To fix the issue, if the demux
buffer is not empty when we try to receive more data, all free space in the
buffer can now be used. However, if the demux buffer is empty, we still try
to keep it aligned with the HTX.
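A sketch of the receive sizing described above, under assumed names (the
real mux-fcgi code differs):

/* only reserve room for the HTX overhead when starting from an empty demux
 * buffer; otherwise any remaining free space may be used */
if (b_data(&fconn->dbuf))
    max = b_room(&fconn->dbuf);
else
    max = buf_room_for_htx_data(&fconn->dbuf);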
This patch must be backported to 3.1.
(cherry picked from commit
0e08252294d5a7389ad42b51b4b931fab2e66f31)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit
cc4981548b12e3162fc020be02917f8462f00866)
[cf: This commit must be backported with the commit above]
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Willy Tarreau [Tue, 22 Apr 2025 11:53:02 +0000 (13:53 +0200)]
[RELEASE] Released version 3.0.10
Released version 3.0.10 with the following main changes :
- MINOR: log: support "raw" logformat node typecast
- BUG/MINOR: peers: fix expire learned from a peer not converted from ms to ticks
- BUG/MEDIUM: peers: prevent learning expiration too far in futur from unsync node
- BUG/MEDIUM: mux-quic: fix crash on RS/SS emission if already close local
- BUG/MINOR: mux-quic: remove extra BUG_ON() in _qcc_send_stream()
- BUG/MINOR: log: fix gcc warn about truncating NUL terminator while init char arrays
- DOC: config: fix two missing "content" in "tcp-request" examples
- BUILD: compiler: undefine the CONCAT() macro if already defined
- BUG/MINOR: rhttp: fix incorrect dst/dst_port values
- BUG/MINOR: backend: do not overwrite srv dst address on reuse
- BUG/MEDIUM: backend: fix reuse with set-dst/set-dst-port
- BUG/MEDIUM: stream: Fix a possible freeze during a forced shut on a stream
- TESTS: Fix build for filltab25.c
- MINOR: task: add thread safe notification_new and notification_wake variants
- BUG/MINOR: hlua_fcn: fix potential UAF with Queue:pop_wait()
- CLEANUP: log: adjust _lf_cbor_encode_byte() comment
- BUG/MINOR: log: fix CBOR encoding with LOG_VARTEXT_START() + lf_encode_chunk()
- BUG/MEDIUM: sample: fix risk of overflow when replacing multiple regex back-refs
- BUG/MINOR: backend: do not use the source port when hashing clientip
- BUG/MINOR: hlua: fix invalid errmsg use in hlua_init()
- DOC: config: add the missing "profiling.memory" to the global kw index
- BUG/MINOR: http-ana: Properly detect client abort when forwarding the response
- BUG/MEDIUM: http-ana: Report 502 from req analyzer only during rsp forwarding
- BUG/MINOR: sink: add tempo between 2 connection attempts for sft servers (2)
- BUG/MEDIUM: h3: trim whitespaces when parsing headers value
- BUG/MEDIUM: h3: trim whitespaces in header value prior to QPACK encoding
- BUG/MINOR: h3: filter upgrade connection header
- BUG/MINOR: h3: reject invalid :path in request
- BUG/MINOR: h3: reject request URI with invalid characters
- BUG/MEDIUM: hlua: fix hlua_applet_{http,tcp}_fct() yield regression (lost data)
- BUG/MINOR: quic: do not crash on CRYPTO ncbuf alloc failure
- DEBUG: stream: Add debug counters to track some client/server aborts
- BUG/MINOR: stktable: invalid use of stkctr_set_entry() with mixed table types
- BUG/MEDIUM: mux-fcgi: Properly handle read0 on partial records
- DEBUG: fd: add a counter of takeovers of an FD since it was last opened
- MINOR: fd: add a generation number to file descriptors
- MINOR: epoll: permit to mask certain specific events
- DEBUG: epoll: store and compare the FD's generation count with reported event
- MEDIUM: epoll: skip reports of stale file descriptors
- IMPORT: plock: give higher precedence to W than S
- IMPORT: plock: lower the slope of the exponential back-off
- IMPORT: plock: use cpu_relax() for a shorter time in EBO
- BUG/MINOR debug: fix !USE_THREAD_DUMP in ha_thread_dump_fill()
- BUG/MINOR: mux-h2: prevent past scheduling with idle connections
- BUG/MINOR: rhttp: fix reconnect if timeout connect unset
- BUG/MINOR: rhttp: ensure GOAWAY can be emitted after reversal
- MINOR: tools: also protect the library name resolution against concurrent accesses
Willy Tarreau [Fri, 4 Apr 2025 14:56:44 +0000 (16:56 +0200)]
MINOR: tools: also protect the library name resolution against concurrent accesses
This is an extension of eb41d768f ("MINOR: tools: use only opportunistic
symbols resolution"). It also makes sure we're not calling dladdr() in
parallel to dladdr_and_size(), as a preventive measure against some
potential deadlocks in the inner layers of the libc.
(cherry picked from commit
8d0c6336778fed316a174fd278c78a88f7dd6975)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
86d6b398d481d0d60bed9e509659e6722ce66b15)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Amaury Denoyelle [Thu, 10 Apr 2025 15:41:39 +0000 (17:41 +0200)]
BUG/MINOR: rhttp: ensure GOAWAY can be emitted after reversal
A GOAWAY frame should not be emitted before the preface. Thus, the max_id
field of an h2c acting as a server is initialized to -1, which prevents its
emission until the preface is received from the peer. If acting as a client,
max_id is initialized to a valid value on the first h2s emission.
This causes an issue with reverse HTTP on the active side. First, it
starts as a client, so the peer does not emit a preface but instead a
simple SETTINGS frame. As roles are switched, max_id is initialized much
later, when the first h2s response is emitted. Thus, if the connection
must be terminated before any stream transfer, GOAWAY cannot be emitted.
To fix this, ensure max_id is initialized to 0 on h2_conn_reverse() for
active connect side. Thus, a GOAWAY indicating that no stream has been
handled can be generated.
Note that the passive connect side is not impacted, as its max_id is
initialized thanks to preface reception.
This should be backported up to 2.9.
(cherry picked from commit
9e6f8ce3282fc5461e30adae3c76cbceb2bc327e)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
05509361e55eda929ff11cf245812c8154728c36)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Amaury Denoyelle [Thu, 10 Apr 2025 16:05:55 +0000 (18:05 +0200)]
BUG/MINOR: rhttp: fix reconnect if timeout connect unset
Active connect on reverse http relies on connect timeout to detect
connection failure. Thus, if this timeout was unset, connection failure
may not be properly detected.
Fix this by falling back on a hardcoded value of 1s for connect if the
timeout is unset in the configuration. This is considered a minor bug, as
haproxy advises against running with timeouts unset.
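A sketch of the fallback under assumed names (the actual rhttp code may
differ): when "timeout connect" is not set, a hardcoded 1s is used to arm
the task.

int connect_timeout = tick_isset(px->timeout.connect) ?
                      px->timeout.connect : MS_TO_TICKS(1000);

task->expire = tick_add(now_ms, connect_timeout);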
This must be backported up to 2.9.
(cherry picked from commit
2b8da5f9ab8639f4213ee21fc5fd5ca12c53cddb)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
cd6e4517fdec392cef375c9ddc75da38a1f07b44)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Amaury Denoyelle [Wed, 9 Apr 2025 12:26:54 +0000 (14:26 +0200)]
BUG/MINOR: mux-h2: prevent past scheduling with idle connections
While reviewing HTTP/2 MUX timeouts, it appeared that there is a possibility
that the MUX task is requeued via h2c_update_timeout() with an already
expired date. This can happen with idle connections in two cases:
* first with shut timeout, as timer is not refreshed if already set
* second with http-request and keep-alive timers, which are based on
idle_start
Queuing an already expired task is undefined behavior. Fix this by
using task_wakeup() instead of task_queue() at the end of
h2c_update_timeout() if such a case occurs.
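A sketch of the change at the end of h2c_update_timeout(), with the
surrounding timer computation omitted (illustrative, not the exact code):

/* never queue a task with a date already in the past: wake it up instead */
if (tick_isset(h2c->task->expire) &&
    tick_is_expired(h2c->task->expire, now_ms))
    task_wakeup(h2c->task, TASK_WOKEN_TIMER);
else
    task_queue(h2c->task);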
This should be backported up to 2.6.
(cherry picked from commit
3ebdd3ae509236af1f19d80cab98b385a751426b)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
516121499c8edafb67a861bcd1bf41b3cc6f12d2)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Thu, 17 Apr 2025 08:28:37 +0000 (10:28 +0200)]
BUG/MINOR debug: fix !USE_THREAD_DUMP in ha_thread_dump_fill()
The function must make sure to return NULL for foreign threads and
the local buffer for the current thread in this case, otherwise panics
(and sometimes even warnings) will segfault when USE_THREAD_DUMP is
disabled. Let's slightly re-arrange the function to reduce the #if/else
since we have to specifically handle the case of !USE_THREAD_DUMP anyway.
This needs to be backported wherever b8adef065d ("MEDIUM: debug: on
panic, make the target thread automatically allocate its buf") was
backported (at least 2.8).
(cherry picked from commit
0b56839455c7f45ae58954e1eb3873a3725899dc)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
f668b85ecea96218c24524d91b7d4f1e065fcd74)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Fri, 7 Feb 2025 16:33:49 +0000 (17:33 +0100)]
IMPORT: plock: use cpu_relax() for a shorter time in EBO
Tests have shown that on modern CPUs it's interesting to wait a bit less
in cpu_relax(). Till now we were looping down to 60 iterations and then
switching to just barriers. Increasing the threshold to 90 iterations
left before getting out of the loop improved the average and max time
to grab a write lock by a few percent (e.g. 10% at 1us, 20% at 256ns
or lower). Higher values tend to progressively lose that gain so let's
stick to this one. This was measured on an EPYC 74F3 like previous
measurements that initially led to this value, and the value might
possibly depend on the mask applied to the loop counter.
This is plock commit 74ca0a7307fa6aec3139f27d3b7e534e1bdb748e.
(cherry picked from commit
b957e2f3ef63e1846988bed9d2bbb42213b29fbf)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
71f66d03bd199b326b7231cbe6fe40f60251bb1c)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Fri, 7 Feb 2025 16:20:48 +0000 (17:20 +0100)]
IMPORT: plock: lower the slope of the exponential back-off
Along many tests involving both haproxy's scheduler and forwarded
traffic, various exponents and algorithms were attempted for the EBO
and their effects were measured. It was found that a growth in 1.25^N
limited to 128k cycles consistently gives a better latency than 1.5^N
limited to 256k cycles, without degrading general performance. The
measures of the time to grab a write lock on a 48-thread EPYC show
that the number of occurrences of low times was roughly multiplied by
2-3 while the number of occurrences of times above 64us was reduced
by similar factors, to even reach 300 at 64us and limiting the maximum
time by a factor of 4.
The other variants that were experimented with are:
m = ((m + (m >> 1)) + 2) & 0x3ffff; // original
m = ((m + (m >> 1) + (m >> 3)) + 2) & 0x3ffff;
m = ((m + (m >> 1) + (m >> 4)) + 2) & 0x3ffff;
m = ((m + (m >> 1) + (m >> 4)) + 2) & 0x1ffff;
m = ((m + (m >> 1) + (m >> 4)) + 1) & 0x1ffff;
m = ((m + (m >> 2) + (m >> 4)) + 1) & 0x1ffff; // lowest CPU on pl_wr test + good perf
m = ((m + (m >> 2)) + 1) & 0x1ffff; // even lower cpu usage, lowest max
m = ((m + (m >> 1) + (m >> 2)) + 1) & 0x1ffff; // correct but slightly higher maxes
m = ((m + (m >> 1) + (m >> 3)) + 1) & 0x1ffff; // less good than m+m>>2
m = ((m + (m >> 2) + (m >> 3)) + 1) & 0x1ffff; // better but not as good as m+m>>2
m = ((m + (m >> 3) + (m >> 4)) + 1) & 0x1ffff; // less good, lower rates on small counts.
m = ((m + (m >> 2) + (m >> 3) + (m >> 4)) + 1) & 0x1ffff; // less good as well
m = ((m & 0x7fff) + (m >> 1) + (m >> 4)) + 2;
m = ((m & 0xffff) + (m >> 1) + (m >> 4)) + 2;
This is plock commit dddd9ee01c522da33c353e2e4d4fd743d8336ec3.
(cherry picked from commit
253fba01a7adf20303ec65cbe256681df588f065)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
890ba0c5b490b9272b9dcd58db876d4f3dcb0aea)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Fri, 7 Feb 2025 15:57:28 +0000 (16:57 +0100)]
IMPORT: plock: give higher precedence to W than S
It was noticed in haproxy that in certain extreme cases, a write lock
subject to EBO may fail for a very long time in front of a large set
of readers constantly trying to upgrade to the S state. The reason is
that among many readers, one will succeed in its upgrade, and this
situation can last for a very long time with many readers upgrading
in turn, while the writer waits longer and longer before trying again.
Here we're taking a reasonable approach which is that the write lock
should have a higher precedence in its attempt to grab the lock. What
is done is that instead of fully rolling back in case of conflict with
a pure S lock, the writer will only release its read part in order to
let the S upgrade to W if needed, and finish its operations. This
guarantees no other seek/read/write can enter. Once the conflict is
resolved, the writer grabs the read part again and waits for readers
to be gone (in practice it could even return without waiting since we
know that any possible wanderers would leave or even not be there at
all, but it avoids a complicated loop code that wouldn't improve the
practical situation but inflate the code).
Thanks to this change, the maximum write lock latency on a 48-thread
AMD with a heavily loaded scheduler went down from 256 to 64 ms, and the
number of occurrences of 32ms or more was divided by 300, while all
occurrences of 1ms or less were multiplied by up to 3 (3 for the 4-16ns
cases).
This is plock commit b6a28366d156812f59c91346edc2eab6374a5ebd.
(cherry picked from commit
9dd56da73072f88e1b4a81923616b5276013b186)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
101c047e04521318791c7ffb5634a984727bc8ae)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Thu, 30 Jan 2025 15:32:35 +0000 (16:32 +0100)]
MEDIUM: epoll: skip reports of stale file descriptors
Now that we can see that some events are reported for older instances
of a file descriptor, let's skip these ones instead of reporting
dangerous events on them. It might possibly qualify as a bug if it
helps fixing strange issues in certain environments, in which case it
can make sense to backport it along with the following recent patches:
DEBUG: fd: add a counter of takeovers of an FD since it was last opened
MINOR: fd: add a generation number to file descriptors
DEBUG: epoll: store and compare the FD's generation count with reported event
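As a hedged sketch of the mechanism these patches build on (field and
variable names are assumptions): the FD's generation counter is packed into
the upper bits of the 64-bit epoll data at registration time, and any report
carrying a stale generation is simply skipped.

/* registration: remember which instance of the fd the event will refer to */
ev.data.u64 = ((uint64_t)fdtab[fd].generation << 32) | (uint32_t)fd;

/* report loop: drop events that belong to a previous instance of the fd */
fd = (uint32_t)epoll_events[i].data.u64;
if ((epoll_events[i].data.u64 >> 32) != fdtab[fd].generation)
    continue;   /* stale report: the fd was closed and possibly reused */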
(cherry picked from commit
8235a24782e528b9bf8ca9dd69c0a147556dfcb5)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
cdb4027d54baab39347bf2e1da48aff7d1af5429)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Thu, 30 Jan 2025 15:28:33 +0000 (16:28 +0100)]
DEBUG: epoll: store and compare the FD's generation count with reported event
There have been some reported cases where races between threads in epoll
were causing wrong reports of close or error events. Since the epoll_event
data is 64 bits, we can store the FD's generation counter in the upper
bits to verify if we're speaking about the same instance of the FD as the
current one or a stale one. If the generation number does not match, then
we classify these into 3 conditions and increment the relevant COUNT_IF()
counters (stale report for closed FD, stale report of harmless event on
reopened FD, stale report of HUP/ERR on reopened FD). Tests have shown that
with heavy concurrency, a very small maxconn (typically 1 per thread),
http-reuse always and a server closing connections first but randomly
(httpterm with /C=2r), such events can happen at a pace of a few per second
for the closed FDs, and a few per minute for the other ones, so there's value
in leaving this accessible for troubleshooting. E.g. after a few minutes:
Count Type Location function(): "condition" [comment]
5541 CNT ev_epoll.c:296 _do_poll(): "1" [epoll report of event on a just closed fd (harmless)]
10 CNT ev_epoll.c:294 _do_poll(): "1" [epoll report of event on a closed recycled fd (rare)]
42 CNT ev_epoll.c:289 _do_poll(): "1" [epoll report of HUP on a stale fd reopened on the same thread (suspicious)]
212 CNT ev_epoll.c:279 _do_poll(): "1" [epoll report of HUP/ERR on a stale fd reopened on another thread (harmless)]
1 CNT mux_h1.c:3911 h1_send(): "b_data(&h1c->obuf)" [connection error (send) with pending output data]
This one was obtained with the following setup, which abuses thread
contention by starting 64 threads on two cores:
- config:
global
nbthread 64
stats socket /tmp/sock1 level admin
stats timeout 1h
defaults
timeout client 5s
timeout server 5s
timeout connect 5s
mode http
listen p2
bind :8002
http-reuse always
server s1 127.0.0.1:8000 maxconn 4
- haproxy forcefully started on 2C4T:
$ taskset -c 0,1,4,5 ./haproxy -db -f epoll-dbg.cfg
- httpterm on port 8000, cpus 2,3,6,7 (2C4T)
- h1load with responses larger than a single buffer, and randomly
closing/keeping alive:
$ taskset -c 2,3,6,7 h1load -e -t 4 -c 256 -r 1 0:8002/?s=19k/C=2r
(cherry picked from commit
5012b6c6d9e2278ae247e3e372765be3c11af86c)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
c25bd8a78d82a8bd6014b718a84ca69b73fc0e81)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Mon, 27 Jan 2025 14:41:26 +0000 (15:41 +0100)]
MINOR: epoll: permit to mask certain specific events
A few times in the past we've seen cases where epoll was caught reporting
a wrong event that caused trouble (e.g. spuriously reporting HUP or RDHUP
after a successful connect()). The new tune.epoll.mask-events directive
makes it possible to mask events such as ERR, HUP and RDHUP and convert
them to IN events that are processed by the regular receive path. This
should help
better diagnose and troubleshoot issues such as this one, as well as rule
out such a cause when similar issues are reported:
https://github.com/haproxy/haproxy/issues/2368
https://www.spinics.net/lists/netdev/msg876470.html
It should be harmless to backport this if necessary.
(cherry picked from commit
7fa70da06db3e69a61e4451a82a4f980b40b4d0c)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
cf4d3fc9aec7bb16191cad2678554b631553a99d)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Thu, 30 Jan 2025 15:25:40 +0000 (16:25 +0100)]
MINOR: fd: add a generation number to file descriptors
This patch adds a counter of close() on file descriptors in the fdtab.
The goal is to better detect if reported events concern the current or
a previous file descriptor. For now the counter is only added, and is
shown in "show fd" as "gen". We're reusing unused space at the end of
the struct. If it's needed for something more important later, this
patch can be reverted.
(cherry picked from commit
d155924efe453faa0ea5b5e62e6a3e2bf7ee7358)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
3f646022e0d7421a7f105e9aab1546fd7263ce7d)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Thu, 30 Jan 2025 14:59:11 +0000 (15:59 +0100)]
DEBUG: fd: add a counter of takeovers of an FD since it was last opened
That's essentially in order to help with debugging strange cases like
the occasional epoll issues/races, by keeping a counter of how many
times an FD was taken over since last inserted. The room is available
so let's use it. If it's needed later, this patch can easily be reverted.
The counter is also reported in "show fd" as "tkov".
(cherry picked from commit
44ac7a7e731537c9388a141b69cff7074afe2376)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
e8c657a034d5ff4ad9648370faea5b9f1039d6fb)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Christopher Faulet [Mon, 27 Jan 2025 14:18:14 +0000 (15:18 +0100)]
BUG/MEDIUM: mux-fcgi: Properly handle read0 on partial records
A Read0 event could be ignored by the FCGI multiplexer if it is blocked on a
partial record. Instead of handling the event, it remained blocked, waiting
for the end of the record.
To fix the issue, the same solution as in the H2 multiplexer is used. Two
flags are introduced. The first one, FCGI_CF_END_REACHED, is used to
acknowledge a read0. This flag is set when a read0 was received AND the FCGI
multiplexer must handle it. The second one, FCGI_CF_DEM_SHORT_READ, is set
when the demux is interrupted on a partial record. A short read combined
with a read0 leads to setting the FCGI_CF_END_REACHED flag.
With these changes, the FCGI mux should be able to properly handle read0 on
partial records.
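A simplified sketch of how the two flags combine (flag names as quoted
above, the read0 condition is an assumption):

/* a short read on a partial record plus a pending read0 means nothing will
 * ever complete the record: acknowledge the end of input */
if ((fconn->flags & FCGI_CF_DEM_SHORT_READ) && read0_pending)
    fconn->flags |= FCGI_CF_END_REACHED;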
This patch should be backported to all stable versions after a period of
observation.
(cherry picked from commit
5f927f603aae4614f82c73ff6c2e3324ecfdf506)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
e20714440249497e99ef7b45c47ac4439d5af803)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Aurelien DARRAGON [Tue, 24 Dec 2024 15:28:35 +0000 (16:28 +0100)]
BUG/MINOR: stktable: invalid use of stkctr_set_entry() with mixed table types
Some actions such as "sc0_get_gpc0" (using smp_fetch_sc_stkctr()
internally) can take an optional table name as parameter to perform the
lookup on a different table from the tracked one but using the key from
the tracked entry. It is done by leveraging the stktable_lookup() function
which was originally meant to perform intra-table lookups.
Calling sc0_get_gpc0() with a different table name will result in
stktable_lookup() being called to perform a lookup using a stksess from
a different table. While it is theoretically fine, it comes with a pitfall:
both tables (the one from where the stktsess originates and the actual
target table) should rely on the exact same key type and length.
Failure to do so actually results in undefined behavior, because the key
type and/or length from one table is used to perform the lookup in
another table, while the underlying lookup API expects explicit type and
key length.
For instance, consider the below example:
peers testpeers
bind 127.0.0.1:10001
server localhost
table test type binary len 1 size 100k expire 1h store gpc0
table test2 type string size 100k expire 1h store gpc0
listen test_px
mode http
bind 0.0.0.0:8080
http-request track-sc0 bin(AA) table testpeers/test
http-request track-sc1 str(ok) table testpeers/test2
log-format "%[sc0_get_gpc0(testpeers/test2)]"
log stdout format raw local0
server s1 git.haproxy.org:80
Performing a curl request to localhost:8080 will cause uninitialized reads
because string "ok" from test2 table will be compared as a string against
"AA" binary sample which is not NULL terminated:
==2450742== Conditional jump or move depends on uninitialised value(s)
==2450742== at 0x484F238: strlen (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2450742== by 0x27BCE6: stktable_lookup (stick_table.c:539)
==2450742== by 0x281470: smp_fetch_sc_stkctr (stick_table.c:3580)
==2450742== by 0x283083: smp_fetch_sc_get_gpc0 (stick_table.c:3788)
==2450742== by 0x2A805C: sample_process (sample.c:1376)
So let's prevent that by adding some comments in stktable_set_entry()
func description, and by adding a check in smp_fetch_sc_stkctr() to ensure
both source stksess and target table share the same key properties.
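A hedged sketch of the added sanity check (field names are assumptions
about struct stktable):

/* refuse a cross-table lookup if the tracked entry's table and the target
 * table do not share the same key type and length */
if (stkctr->table->type != t->type ||
    stkctr->table->key_size != t->key_size)
    return NULL;   /* illustrative: skip the lookup instead of misusing keys */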
While it could be relevant to backport this in all stable versions, it is
probably safer to wait for some time before doing so, to ensure that no
existing configs rely on this ambiguity because the fact that the target
table and source stksess entry need to share the same key type and length
is not explicitly documented.
(cherry picked from commit
5bbdd14f56a2a672e8fef4449e4143b0f0642812)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit
f9023cbd1791e5a9828a85c23d829e6732ea2ca8)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Christopher Faulet [Tue, 22 Oct 2024 14:46:34 +0000 (16:46 +0200)]
DEBUG: stream: Add debug counters to track some client/server aborts
Not all aborts are tracked for now but only those a bit ambiguous. Mainly,
aborts during the data forwarding are concerned. Those triggered during the
request or the response analysis are easier to analyze with the stream
termination state.
(cherry picked from commit
c8aecc393bd125e476469d2b6159a1af542c78ae)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Amaury Denoyelle [Fri, 18 Apr 2025 16:02:48 +0000 (18:02 +0200)]
BUG/MINOR: quic: do not crash on CRYPTO ncbuf alloc failure
To handle out-of-order received CRYPTO frames, a ncbuf instance is
allocated. This is done via the helper quic_get_ncbuf().
Buffer allocation was improperly checked. In case b_alloc() fails, it
crashes due to a BUG_ON(). Fix this by removing it. The function now
returns NULL on allocation failure, which is already properly handled in
its caller qc_handle_crypto_frm().
This should fix the last reported crash from github issue #2935.
This must be backported up to 2.6.
(cherry picked from commit
4309a6fbf80240b0880c5adf091f0075c3bcd53f)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
(cherry picked from commit
1c913c47cad4b00c9fef726b32b1a4240161a89e)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Aurelien DARRAGON [Thu, 17 Apr 2025 12:21:20 +0000 (14:21 +0200)]
BUG/MEDIUM: hlua: fix hlua_applet_{http,tcp}_fct() yield regression (lost data)
Jacques Heunis from Bloomberg reported on the mailing list [1] that
with haproxy 2.8 up to master, yielding from a Lua tcp service while
data was still buffered inside haproxy would eat some data, which was
definitely lost.
He provided the reproducer below, which turned out to be really helpful:
global
log stdout format raw local0 info
lua-load haproxy_yieldtest.lua
defaults
log global
timeout connect 10s
timeout client 1m
timeout server 1m
listen echo
bind *:9090
mode tcp
tcp-request content use-service lua.print_input
haproxy_yieldtest.lua:
core.register_service("print_input", "tcp", function(applet)
core.Info("Start printing input...")
while true do
local inputs = applet:getline()
if inputs == nil or string.len(inputs) == 0 then
core.Info("closing input connection")
return
end
core.Info("Received line: "..inputs)
core.yield()
end
end)
And the script below:
#!/usr/bin/bash
for i in $(seq 1 9999); do
for j in $(seq 1 50); do
echo "${i}_foo_${j}"
done
sleep 2
done
Using it like this:
./test_seq.sh | netcat localhost 9090
We can clearly see the missing data for every "foo" burst (every 2
seconds), as there are holes in the numbering.
Thanks to the reproducer, it was quickly found that only versions
>= 2.8 were affected, and that this regression was introduced
by commit 31572229e ("MEDIUM: hlua/applet: Use the sedesc to report and
detect end of processing").
In fact, two mistakes were made in 31572229e during the refactoring.
Indeed, both in hlua_applet_tcp_fct() (which is involved in the reproducer
above) and hlua_applet_http_fct(), the request (buffer) is now
systematically consumed when returning from the function, which wasn't the
case prior to this commit: when HLUA_E_AGAIN is returned, it means a
yield was requested and that the processing is not done yet, thus we
should not consume any data, as was the case prior to the refactoring.
Big thanks to Jacques who did a great job reproducing and reporting this
issue on the mailing list.
[1]: https://www.mail-archive.com/haproxy@formilux.org/msg45778.html
It should be backported up to 2.8 with commit 31572229e.
(cherry picked from commit
b81ab159a613323a0e8cd625d138955acfcbb666)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
ea97524977950be1162b022058f1dbcaa16167d3)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Amaury Denoyelle [Wed, 16 Apr 2025 13:27:03 +0000 (15:27 +0200)]
BUG/MINOR: h3: reject request URI with invalid characters
Ensure that the HTX start-line generated after parsing an HTTP/3 request
does not contain any invalid character, i.e. control or whitespace
characters.
Note that for now path is used directly as URI. Thus, the check is
performed directly over it. A patch will change this to generate an
absolute-form URI in most cases, but it won't be backported to avoid
configuration breaking in stable versions.
This must be backported up to 2.6.
(cherry picked from commit
1faa1285aacaaacd2c2e062d87f04aa481843862)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
3c1914822d9b98f244c3e603c050a19cc29eed36)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Amaury Denoyelle [Wed, 16 Apr 2025 09:17:20 +0000 (11:17 +0200)]
BUG/MINOR: h3: reject invalid :path in request
RFC 9114 specifies some requirements for the :path pseudo-header when using
the http or https scheme. This commit enforces them by rejecting a request
if needed. Thus, the path cannot be empty, and it must either start with a
'/' character or contain only '*'.
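The rule translates to a check of this shape (hedged sketch using the ist
API, not the exact code):

/* :path for http/https: non-empty, and either origin-form ("/...") or the
 * single '*' of asterisk-form */
if (!istlen(path) ||
    (istptr(path)[0] != '/' && !isteq(path, ist("*"))))
    goto reject;   /* illustrative label: the request is rejected */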
This must be backported up to 2.6.
(cherry picked from commit
fc28fe7191701251115e817a7c4a673b88e49f65)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
cb5ff8cf01212b321b15f24a3f28b68154774955)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Amaury Denoyelle [Wed, 16 Apr 2025 09:20:42 +0000 (11:20 +0200)]
BUG/MINOR: h3: filter upgrade connection header
As specified in RFC 9114, connection headers require special care in
HTTP/3. When a request is received with connection headers, the stream
is immediately closed. Conversely, when translating the response from
HTX, such headers are not encoded but silently ignored.
However, "upgrade" was not listed among the connection headers. This commit
fixes this by adding a check on it, both on request parsing and response
encoding.
This must be backported up to 2.6.
(cherry picked from commit
6403bfbce8ea54ba83e23d34c5d52ff10fa7fe22)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
95749cdc2b7a90ba956e546b7c314e3a37dd971e)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Amaury Denoyelle [Wed, 16 Apr 2025 15:29:41 +0000 (17:29 +0200)]
BUG/MEDIUM: h3: trim whitespaces in header value prior to QPACK encoding
This commit does a similar job to the previous one, but it now acts on
the response path. Any leading or trailing whitespace characters from an
HTX block header value are removed, prior to the header encoding via
QPACK.
This must be backported up to 2.6.
(cherry picked from commit
bd3587574dae43cc019d8e9998e0996c75550de4)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
a4f7bfa029ada954c8fae75c1426555a3a4012d4)
[ada: ctx adjt]
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Amaury Denoyelle [Wed, 16 Apr 2025 13:47:42 +0000 (15:47 +0200)]
BUG/MEDIUM: h3: trim whitespaces when parsing headers value
Remove any leading and trailing whitespace from header field values
prior to inserting a new HTX header block. This is done when parsing a
HEADERS frame, both as headers and trailers.
This must be backported up to 2.6.
(cherry picked from commit
a17e5b27c03230cc65ebedc6dc28dbfce5181c79)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
c76eff8ecd4853960e57f081fcfddbd87b8d666d)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Aurelien DARRAGON [Wed, 16 Apr 2025 19:59:04 +0000 (21:59 +0200)]
BUG/MINOR: sink: add tempo between 2 connection attempts for sft servers (2)
This is a follow-up patch for 9e02e4d5 ("BUG/MINOR: sink: add tempo
between 2 connection attempts for sft servers").
When the patch was backported to 3.0, due to a context mismatch it had to
be manually adjusted, but a mistake was made. Indeed, the fix is a no-op
given that last_conn is always set to TICK_ETERNITY (which translates
to 0), which corresponds to the starting value (sft is calloc'ed) for which
the applet is allowed to be created instantly. Instead, we must set
last_conn to now_ms as was done in the upstream patch.
It should be backported everywhere 9e02e4d5 was.
This commit has no upstream ID because it is specific to the 3.0 branch.
Christopher Faulet [Tue, 15 Apr 2025 06:18:48 +0000 (08:18 +0200)]
BUG/MEDIUM: http-ana: Report 502 from req analyzer only during rsp forwarding
A server abort must be handled by the request analyzers only when the
response forwarding was already started. Otherwise, it is the responsibility
of the response analyzer to detect this event. L7-retries and the conditions
to decide to silently close a client connection are handled by this
analyzer.
Because a reused server connection closed too early could be detected at
the wrong place, it was possible to get a 502/SH instead of a silent close,
preventing the client from safely retrying its request.
Thanks to this patch, we are able to silently close the client connection in
this case and possibly to perform an L7 retry.
This patch must be backported as far as 2.8.
(cherry picked from commit
d160046e2c9caae7deff5d59abc9694b6e664446)
[ada: ctx adjt since COUNT_IF counters are still there in 3.1]
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
c0d92e80821a4af654c6a4230fa9ab24b35f9ee4)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Christopher Faulet [Tue, 15 Apr 2025 05:54:19 +0000 (07:54 +0200)]
BUG/MINOR: http-ana: Properly detect client abort when forwarding the response
During the response payload forwarding, if the back SC is closed, we try to
figure out if it is because of a client abort or a server abort. However,
the condition was not accurate, especially when the abortonclose option is
set. Because of this issue, a server abort may be reported (SD-- in logs)
instead of a client abort (CD-- in logs).
The right way to detect a client abort when we try to forward the response
is to test if the back SC was shut down (SC_FL_SHUT_DOWN flag set) AND
aborted (SC_FL_ABRT_DONE flag set). When both flags are set, it means
the back connection underwent the shutdown, which should be converted to a
client abort at this stage.
This patch should be backported as far as 2.8. It should fix last strange SD
report in the issue #2749.
(cherry picked from commit
c672b2a297158bcd673feab2fd366709f9fc3d4f)
[ada: ctx adjt because COUNT_IF counters are still there in 3.1]
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
d99407770db42ac27f488093e9b419cd0839e48f)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Willy Tarreau [Mon, 14 Apr 2025 16:28:09 +0000 (18:28 +0200)]
DOC: config: add the missing "profiling.memory" to the global kw index
It was in the description but not in the index. This can be backported to
all versions where it applies.
(cherry picked from commit
640a6998046c4a94d3fb59469b1cdcccedbeb01a)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
ab7fb86e3cf7cde3f1a381c1324a75a7f2ce75c9)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Aurelien DARRAGON [Thu, 10 Apr 2025 15:35:53 +0000 (17:35 +0200)]
BUG/MINOR: hlua: fix invalid errmsg use in hlua_init()
errmsg is used with memprintf and friends, thus it must be NULL-initialized
before being passed to memprintf, else an invalid read will occur.
However, in hlua_init() the errmsg value isn't initialized, so let's fix
that.
This is really minor because it would only cause issues on error paths,
yet it may be backported to all stable versions, just in case.
(cherry picked from commit
ea3c96369f4a5def90888c9207cd88010d473eb4)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
50838c51a97113d0c9b95d22b4312c9a80a0409d)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Willy Tarreau [Wed, 9 Apr 2025 08:57:54 +0000 (10:57 +0200)]
BUG/MINOR: backend: do not use the source port when hashing clientip
The server's "usesrc" keyword supports among other options "client"
and "clientip". The former means we bind to the client's IP and port
to connect to the server, while the latter means we bind to its IP
only. It's done in two steps, first alloc_bind_address() retrieves
the IP address and port, and second, tcp_connect_server() decides
to either bind to the IP only or IP+port.
The problem comes with idle connection pools, which hash all the
parameters: the hash is calculated before (and ideally without) calling
tcp_connect_server(), and it considers the whole struct sockaddr_storage
for the hash, except that both client and clientip entirely fill it with
the client's address. This means that both client and clientip make use
of the source port in the hash calculation, making idle connections
almost never reusable when using "usesrc clientip" while they should be for
clients coming from the same source. A work-around is to force the
source port to zero using "tcp-request session set-src-port int(0)" but
it's ugly.
Let's fix this by properly zeroing the port for AF_INET/AF_INET6 addresses.
This can be backported to 2.4. Thanks to Sebastien Gross for providing a
reproducer for this problem.
(cherry picked from commit
7b6df86a834883f27f6f7d18d0caa8a6ea128c14)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
277e88cb78bb371fb8dbb1fff24a06e8a6de85d1)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
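A sketch of the port-zeroing step on a plain struct sockaddr_storage holding
the client address (the actual change applies this to the address used for the
idle-connection hash; clear_source_port is an illustrative name):
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* zero the source port so that connections made with "usesrc clientip"
 * from the same client IP hash identically and can be reused */
static void clear_source_port(struct sockaddr_storage *ss)
{
        if (ss->ss_family == AF_INET)
                ((struct sockaddr_in *)ss)->sin_port = 0;
        else if (ss->ss_family == AF_INET6)
                ((struct sockaddr_in6 *)ss)->sin6_port = 0;
}

int main(void)
{
        struct sockaddr_storage ss;
        struct sockaddr_in *sin = (struct sockaddr_in *)&ss;

        memset(&ss, 0, sizeof(ss));
        sin->sin_family = AF_INET;
        sin->sin_port = htons(34567);
        clear_source_port(&ss);
        printf("port after clearing: %u\n", ntohs(sin->sin_port));
        return 0;
}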
Willy Tarreau [Mon, 7 Apr 2025 13:30:43 +0000 (15:30 +0200)]
BUG/MEDIUM: sample: fix risk of overflow when replacing multiple regex back-refs
Aleandro Prudenzano of Doyensec and Edoardo Geraci of Codean Labs
reported a bug in sample_conv_regsub(), which can cause replacements
of multiple back-references to overflow the temporary trash buffer.
The problem happens when doing "regsub(match,replacement,g)": we're
replacing every occurrence of "match" with "replacement" in the input
sample, which requires a length check. For this, a max is applied, so
that a replacement may not use more than the remaining length in the
buffer. However, the length check is made on the replaced pattern and
not on the temporary buffer used to carry the new string. This results
in the remaining size being usable for each input match, which can go
beyond the temporary buffer size if more than one occurrence has to be
replaced with something that's larger than the remaining room.
The fix proposed by Aleandro and Edoardo is the correct one (check on
"trash" not "output"), and is the one implemented in this patch.
While it is very unlikely that a config will replace multiple short
patterns each with a larger one in a request, this possibility cannot
be entirely ruled out (e.g. mask a known, short IP address using
"XXX.XXX.XXX.XXX"). However when this happens, the replacement pattern
will be static, and not be user-controlled, which is why this patch is
marked as medium.
The bug was introduced in 2.2 with commit
07e1e3c93e ("MINOR: sample:
regsub now supports backreferences"), so it must be backported to all
versions.
Special thanks go to Aleandro and Edoardo for reporting this bug with
a simple reproducer and a fix.
(cherry picked from commit
3e3b9eebf871510aee36c3a3336faac2f38c9559)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
db87c8d9fe621539531f6f915ba9e1755a2a26cb)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
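An illustrative, self-contained version of the corrected length check: the
remaining room is always measured against the output buffer (the equivalent of
the trash chunk), never against the matched input, so repeated replacements
cannot overflow. This is a simplified fixed-string variant, not the actual
regex-based sample_conv_regsub():
#include <stdio.h>
#include <string.h>

/* replace every occurrence of <pat> with <rep>, never writing more than
 * <dst_size> bytes: the room check is made on the destination buffer */
static size_t replace_all(const char *in, const char *pat, const char *rep,
                          char *dst, size_t dst_size)
{
        size_t out = 0, patlen = strlen(pat), replen = strlen(rep);

        while (*in) {
                if (patlen && strncmp(in, pat, patlen) == 0) {
                        if (replen > dst_size - out)  /* check dst, not the match */
                                break;
                        memcpy(dst + out, rep, replen);
                        out += replen;
                        in  += patlen;
                } else {
                        if (out >= dst_size)
                                break;
                        dst[out++] = *in++;
                }
        }
        return out;
}

int main(void)
{
        char buf[32];
        size_t n = replace_all("10.0.0.1 10.0.0.2", "10.0.0.1",
                               "XXX.XXX.XXX.XXX", buf, sizeof(buf));

        printf("%.*s\n", (int)n, buf);
        return 0;
}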
Aurelien DARRAGON [Mon, 7 Apr 2025 10:00:03 +0000 (12:00 +0200)]
BUG/MINOR: log: fix CBOR encoding with LOG_VARTEXT_START() + lf_encode_chunk()
There have been some reports that using %HV logformat alias with CBOR
encoder would produce invalid CBOR payload according to some CBOR
implementations such as "cbor.me". Indeed, with the below log-format:
log-format "%{+cbor}o %(protocol)HV"
And the resulting CBOR payload:
BF6870726F746F636F6C7F48485454502F312E31FFFF
cbor.me would complain with a "bytes/text mismatch (ASCII-8BIT != UTF-8) in
streaming string" error message.
It is due to the version string being first announced as text, while CBOR
encoder actually encodes it as byte string later when lf_encode_chunk()
is used.
In fact it affects all patterns combining LOG_VARTEXT_START() with
lf_encode_chunk() which means %HM, %HU, %HQ, %HPO and %HP are also
affected. To fix the issue, in _lf_encode_bytes() (which is
lf_encode_chunk() helper), we now check if we are inside a VARTEXT (we
can tell it if ctx->in_text is true), in which case we consider that we
already announced the current data as regular text so we keep the same
type to encode the bytes from the chunk to prevent inconsistencies.
It should be backported to 3.0.
(cherry picked from commit
9e8444b730e9b24e73f1f78d269f5001a34bd98d)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
6c042450a4465e0dea7dafbe43157f2346444a70)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Aurelien DARRAGON [Tue, 1 Apr 2025 17:32:58 +0000 (19:32 +0200)]
CLEANUP: log: adjust _lf_cbor_encode_byte() comment
_lf_cbor_encode_byte() comment was not updated in
c33b857df ("MINOR: log:
support true cbor binary encoding") to reflect the new behavior.
Indeed, binary form is now supported. Updating the comment that says
otherwise.
(cherry picked from commit
ce6951d6f9259f778c0271da54a615c892184691)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
3a715a4d6d2e8554015eac5d59c059275bc4c6e6)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Aurelien DARRAGON [Tue, 1 Apr 2025 09:01:45 +0000 (11:01 +0200)]
BUG/MINOR: hlua_fcn: fix potential UAF with Queue:pop_wait()
If Queue:pop_wait() is executed from a stream context and pop_wait() is
aborted due to a Lua or resource error, then the waiting object pointing
to the task will still be registered, so if the task eventually disappears,
Queue:push() may try to wake an invalid task pointer.
To prevent this bug from happening, we now rely on notification_* API to
deliver waiting signals. This way signals are properly garbage collected
when a lua context is destroyed.
It should be backported in 2.8 with
86fb22c55 ("MINOR: hlua_fcn: add Queue
class").
This patch depends on ("MINOR: task: add thread safe notification_new and
notification_wake variants")
(cherry picked from commit
c6fa061f22e0409a9c1e0dbe9d4bd9a30eff6ba1)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
51de928f9eda86631ef627d2a750a02857ccc38b)
[ada: ctx adjt]
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Aurelien DARRAGON [Tue, 1 Apr 2025 08:07:50 +0000 (10:07 +0200)]
MINOR: task: add thread safe notification_new and notification_wake variants
notification_new and notification_wake were historically meant to be
called by a single thread doing both the init and the wakeup for other
tasks waiting on the signals.
In this patch, we extend the API so that notification_new and
notification_wake have thread-safe variants that can safely be used with
multiple threads registering on the same list of events and multiple
threads pushing updates on the list.
(cherry picked from commit
b77b1a2c3ac70a719a2a06964e56a206ab9cc6ec)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
ad6e04a1b575a731b01913ff092352d958481775)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Olivier Houchard [Thu, 3 Apr 2025 14:20:11 +0000 (16:20 +0200)]
TESTS: Fix build for filltab25.c
Give a return type to main(), so that filltab25.c compiles with
modern compilers.
(cherry picked from commit
4715c557e98725e28faac7fce449f090e136ac0b)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
fb1086c101bddb9ab49bb09d9077802da37e80d8)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
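For reference, the kind of change involved: implicit int was removed from the
language, so modern compilers reject the old "main() { ... }" form.
int main(void)
{
        /* ... table generation code ... */
        return 0;
}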
Christopher Faulet [Mon, 17 Mar 2025 13:49:45 +0000 (14:49 +0100)]
BUG/MEDIUM: stream: Fix a possible freeze during a forced shut on a stream
When a forced shutdown is performed on a stream, it is possible to freeze it
indefinitely because it is performed in an unexpected way from process_stream()'s
point of view, especially when the stream is waiting for a server
connection. The sequence of events is a bit complex but in the end the stream
remains blocked in turn-around state and no event is triggered to unblock
it.
By trying to fix the issue, we considered it was safer to rethink the
feature. The idea is to quickly shutdown a stream to release resources. For
instance to be able to delete a server. So, instead of scheduling a
shutdown, it is more efficient to trigger an error and detach the stream
from the server, if necessary. The same code as the one used to deal with
connection errors in back_handle_st_cer() is used.
This patch must be slowly backported as far as 2.6.
(cherry picked from commit
51611a5b702e6dbf2e5ac56cbcef211326414282)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
bf7dfa488dc3bf830179f13e1262386735a4be91)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Amaury Denoyelle [Mon, 31 Mar 2025 15:57:56 +0000 (17:57 +0200)]
BUG/MEDIUM: backend: fix reuse with set-dst/set-dst-port
On backend connection reuse, a hash is calculated from various
parameters, to ensure the selected connection matches the requested
parameters. Notably, destination address is one of these parameters.
However, it is only taken into account if using a transparent server
(server address 0.0.0.0).
This may cause an issue where an incorrect connection is reused, one which is
not targeted to the correct destination address. This may be the case
if a set-dst/set-dst-port is used with a transparent proxy (proxy option
transparent).
The fix is simple enough. Destination address is now always used as
input to the connection reuse hash.
This must be backported up to 2.6. Note that for reverse HTTP to work,
it relies on the following patch, which ensures destination address
remains NULL in this case.
commit
e94baf6ca71cb2319610baa74dbf17b9bc602b18
BUG/MINOR: rhttp: fix incorrect dst/dst_port values
(cherry picked from commit
5fda64e87e7963fa65812e0583338191e4cc7c8b)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
b2d0abb7f22debf025e313b1943cda97294b3a16)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Amaury Denoyelle [Thu, 27 Mar 2025 16:06:06 +0000 (17:06 +0100)]
BUG/MINOR: backend: do not overwrite srv dst address on reuse
Previously, the destination address of a backend connection was
systematically reassigned. However, this step is unnecessary on connection
reuse. Indeed, reuse should only be conducted with a connection using the
same destination address, matching the stream requirements.
This patch removes this unnecessary assignment. It is now only performed
when reuse cannot be conducted and a new connection is instantiated.
Functionally speaking, this patch should not change anything in theory,
as reuse is performed in conformance with the destination address.
However, it appears that this was not always properly enforced. The
systematic assignment of the destination address hid these issues, so
it is now removed. The identified bogus cases will then be fixed in the
following patches.
This should be backported to all stable versions.
(cherry picked from commit
d7fa8e88c4cda6d115b05c65cded5acb4069ee97)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
00ada2c2806a007df77f0f4fa2aa85938ea0b19a)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Amaury Denoyelle [Mon, 31 Mar 2025 15:57:35 +0000 (17:57 +0200)]
BUG/MINOR: rhttp: fix incorrect dst/dst_port values
With a @rhttp server, connect is not possible; transfer is only possible
via idle connection reuse. The server does not have any network address.
Thus, it is unnecessary to allocate the stream destination address prior
to connection reuse. This patch adjusts this by fixing
alloc_dst_address() to take this into account.
Prior to this patch, alloc_dst_address() would incorrectly assimilate a
@rhttp server to a transparent proxy mode. Thus the stream destination
address would be copied from the destination address. The connection address
would then be rewritten with this incorrect value. This did not impact
connect or reuse as destination addr is only used in idle conn hash
calculation for transparent servers. However, it causes incorrect values
for dst/dst_port samples.
This should be backported up to 2.9.
(cherry picked from commit
c05bb8c967c395f88666ccc93c00726f59a690b6)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
711185ca396122e4bd19725ab6d13c3574ff2209)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Willy Tarreau [Wed, 2 Apr 2025 09:36:43 +0000 (11:36 +0200)]
BUILD: compiler: undefine the CONCAT() macro if already defined
As Ilya reported in issue #2911, the CONCAT() macro breaks on NetBSD
which defines its own as __CONCAT() (which is exactly the same). Let's
just undefine it before ours to fix the issue instead of renaming, but
keep ours so that we don't have doubts about what we're running with.
Note that the patch introducing this breaking change was backported
to 3.0.
(cherry picked from commit
4ec5509541e29956107011d45c971a9fe3a430f1)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
bf3252c1d48a63cccee17793b1f24e01409eaf01)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
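The shape of the fix, sketched (the exact definitions live in HAProxy's
compiler header; this is only an approximation):
/* the platform (e.g. NetBSD) may ship its own CONCAT(); drop it first and
 * keep HAProxy's definition so there is no doubt about which one is used */
#ifdef CONCAT
#undef CONCAT
#endif
#define _CONCAT(a, b) a ## b
#define CONCAT(a, b)  _CONCAT(a, b)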
Willy Tarreau [Wed, 2 Apr 2025 09:17:05 +0000 (11:17 +0200)]
DOC: config: fix two missing "content" in "tcp-request" examples
As reported by Uku Sõrmus in GitHub issue #2917, two "tcp-request" rules
in an example were mistakenly missing the "content" hook, rendering them
invalid.
This can be backported.
(cherry picked from commit
3de99a09192e07a511c4956692a9cff0b252a77f)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
63384d0eb05fe8543277f8bca2d874311e867211)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Valentine Krasnobaeva [Thu, 27 Mar 2025 09:16:03 +0000 (10:16 +0100)]
BUG/MINOR: log: fix gcc warn about truncating NUL terminator while init char arrays
gcc 15 throws this kind of warning about the initialization of some char arrays:
src/log.c:181:33: error: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (17 chars into 16 available) [-Werror=unterminated-string-initialization]
181 | const char sess_term_cond[16] = "-LcCsSPRIDKUIIII"; /* normal, Local, CliTo, CliErr, SrvTo, SrvErr, PxErr, Resource, Internal, Down, Killed, Up, -- */
| ^~~~~~~~~~~~~~~~~~
src/log.c:182:33: error: initializer-string for array of 'char' truncates NUL terminator but destination lacks 'nonstring' attribute (9 chars into 8 available) [-Werror=unterminated-string-initialization]
182 | const char sess_fin_state[8] = "-RCHDLQT"; /* cliRequest, srvConnect, srvHeader, Data, Last, Queue, Tarpit */
So, let's make it happy by not giving the sizes of these char arrays
explicitly, so that it can accommodate their NUL terminators.
Reported in GitHub issue #2910.
This should be backported up to 2.6.
(cherry picked from commit
44f98f1747e8b2ef400dafa249b3f70a2844e8fe)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
d77be2405b6cd5d0cc6c29953cf43315967f02d2)
[ada: patch was applied manually because of multiple context conflicts]
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
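The change boils down to letting the compiler size the arrays so the implicit
NUL terminator fits, e.g.:
/* before: const char sess_term_cond[16] = "-LcCsSPRIDKUIIII";  (17 bytes into 16) */
const char sess_term_cond[] = "-LcCsSPRIDKUIIII"; /* 17 bytes, NUL included */
const char sess_fin_state[] = "-RCHDLQT";         /* 9 bytes, NUL included */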
Amaury Denoyelle [Thu, 20 Mar 2025 17:10:56 +0000 (18:10 +0100)]
BUG/MINOR: mux-quic: remove extra BUG_ON() in _qcc_send_stream()
The following patch fixed a BUG_ON() which could be triggered if RS/SS
emission was scheduled after stream local closure.
7ee1279f4b8416435faba5cb93a9be713f52e4df
BUG/MEDIUM: mux-quic: fix crash on RS/SS emission if already close local
qcc_send_stream() was rewritten as a wrapper around an internal
_qcc_send_stream() used to bypass the faulty BUG_ON(). However, an extra
unnecessary BUG_ON() was added by mistake in _qcc_send_stream().
This should not cause any issue, as the BUG_ON() is only active if <urg>
argument is false, which is not the case for RS/SS emission. However,
this patch is labelled as a bug as this BUG_ON() is unnecessary and may
cause issues in the future.
This should be backported up to 2.8, after the above mentioned patch.
(cherry picked from commit
c5f8df8d55e6a85f72e415c63188345dc670e53b)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
ebeece75c2e138e7c7c111ee7f66a65e79d32ef3)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Amaury Denoyelle [Thu, 20 Mar 2025 15:01:16 +0000 (16:01 +0100)]
BUG/MEDIUM: mux-quic: fix crash on RS/SS emission if already close local
A BUG_ON() is present in qcc_send_stream() to ensure that emission is
never performed with a stream already closed locally. However, this
function is also used for RESET_STREAM/STOP_SENDING emission. No
protection exists to ensure that RS/SS is not scheduled after stream
local closure, which would result in this BUG_ON() crash.
This crash can be triggered with the following QUIC client sequence :
1. SS is emitted to open a new stream. QUIC-MUX schedules a RS emission
and the stream is locally closed.
2. An invalid HTTP/3 request is sent on the same stream, for example
with duplicated pseudo-headers. The objective is to ensure
qcc_abort_stream_read() is called after stream closure, which results
in the following backtrace.
0x000055555566a620 in qcc_send_stream (qcs=0x7ffff0061420, urg=1, count=0) at src/mux_quic.c:1633
1633 BUG_ON(qcs_is_close_local(qcs));
[ ## gdb ## ] bt
#0 0x000055555566a620 in qcc_send_stream (qcs=0x7ffff0061420, urg=1, count=0) at src/mux_quic.c:1633
#1 0x000055555566a921 in qcc_abort_stream_read (qcs=0x7ffff0061420) at src/mux_quic.c:1658
#2 0x0000555555685426 in h3_rcv_buf (qcs=0x7ffff0061420, b=0x7ffff748d3f0, fin=0) at src/h3.c:1454
#3 0x0000555555668a67 in qcc_decode_qcs (qcc=0x7ffff0049eb0, qcs=0x7ffff0061420) at src/mux_quic.c:1315
#4 0x000055555566c76e in qcc_recv (qcc=0x7ffff0049eb0, id=12, len=0, offset=23, fin=0 '\000',
data=0x7fffe0049c1c "\366\r,\230\205\354\234\301;\2563\335\037k\306\334\037\260", <incomplete sequence \323>) at src/mux_quic.c:1901
#5 0x0000555555692551 in qc_handle_strm_frm (pkt=0x7fffe00484b0, strm_frm=0x7ffff00539e0, qc=0x7fffe0049220, fin=0 '\000') at src/quic_rx.c:635
#6 0x0000555555694530 in qc_parse_pkt_frms (qc=0x7fffe0049220, pkt=0x7fffe00484b0, qel=0x7fffe0075fc0) at src/quic_rx.c:980
#7 0x0000555555696c7a in qc_treat_rx_pkts (qc=0x7fffe0049220) at src/quic_rx.c:1324
#8 0x00005555556b781b in quic_conn_app_io_cb (t=0x7fffe0037f20, context=0x7fffe0049220, state=49232) at src/quic_conn.c:601
#9 0x0000555555d53788 in run_tasks_from_lists (budgets=0x7ffff748e2b0) at src/task.c:603
#10 0x0000555555d541ae in process_runnable_tasks () at src/task.c:886
#11 0x00005555559c39e9 in run_poll_loop () at src/haproxy.c:2858
#12 0x00005555559c41ea in run_thread_poll_loop (data=0x55555629fb40 <ha_thread_info+64>) at src/haproxy.c:3075
The proper solution is to not execute this BUG_ON() for RS/SS emission.
Indeed, it is valid and can be useful to emit these frames, even after
stream local closure.
To implement this, qcc_send_stream() has been rewritten as a mere
wrapper function around the new internal _qcc_send_stream(). The latter
is used only by QMUX for STREAM, RS and SS emission. The application layer
continues to use the original function for STREAM emission, with the
BUG_ON() still in place there.
This must be backported up to 2.8.
(cherry picked from commit
7ee1279f4b8416435faba5cb93a9be713f52e4df)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
c44b72d88a48f81a3b2fd1433f372226131a2e5a)
[ada: ctx adjt: since
3a00193b2 ("MINOR: mux-quic: split STREAM and RS/SS
emission") was not backported past 3.1, _qcc_send_stream() was based
on prior qcc_send_stream() implementation]
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
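A generic sketch of the wrapper pattern described above, using assert() in
place of BUG_ON() and toy types (the real qcc_send_stream()/_qcc_send_stream()
operate on a qcs and the QUIC MUX internals):
#include <assert.h>
#include <stdio.h>

struct toy_stream { int close_local; };

/* internal variant: no closure check, usable for RESET_STREAM/STOP_SENDING
 * which may legitimately be emitted after local closure */
static void _send_stream(struct toy_stream *s, int urg)
{
        printf("scheduling emission on %p (urg=%d)\n", (void *)s, urg);
}

/* public entry point for STREAM data: the safety check stays here only */
static void send_stream(struct toy_stream *s, int urg)
{
        assert(!s->close_local);
        _send_stream(s, urg);
}

int main(void)
{
        struct toy_stream open_s   = { .close_local = 0 };
        struct toy_stream closed_s = { .close_local = 1 };

        send_stream(&open_s, 0);      /* STREAM path: check passes on an open stream */
        _send_stream(&closed_s, 1);   /* RS/SS path: allowed even when closed locally */
        /* send_stream(&closed_s, 1);    would abort: the stream is closed locally */
        return 0;
}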
Emeric Brun [Thu, 3 Apr 2025 08:32:30 +0000 (10:32 +0200)]
BUG/MEDIUM: peers: prevent learning expiration too far in the future from unsync node
This patch sets the expiration of the entry to the max value from the
configuration if the value shown in the peer update message
is too far in the future.
This should be backported on all supported branches.
(cherry picked from commit
b02b8453d15dfe2c45d132484e381c27f63d2fb1)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
7dfb6efbaf8c17a82be38208dc1e8ce693d21b74)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Emeric Brun [Thu, 3 Apr 2025 08:29:16 +0000 (10:29 +0200)]
BUG/MINOR: peers: fix expire learned from a peer not converted from ms to ticks
This has no impact currently since the MS_TO_TICKS macro does nothing,
but it will prevent further bugs.
(cherry picked from commit
00461fbfbfd19e93af621658abcf7d1205f44182)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
d128753b6160f15f3b361e68fcbd3072c8ea681a)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Aurelien DARRAGON [Tue, 1 Apr 2025 18:25:08 +0000 (20:25 +0200)]
MINOR: log: support "raw" logformat node typecast
"raw" logformat node typecast is a special value (unlike str,bool,int..)
which tells haproxy to completely ignore logformat options (including
encoding ones) and force binary output for the current node only. It is
mainly intended for use with JSON or CBOR encoders in order to generate
nested CBOR or nested JSON by storing intermediate log-formats within
variables and assembling the final object in the parent log-format.
Example:
http-request set-var-fmt(txn.intermediate) "%{+json}o %(lower)[str(value)]"
log-format "%{+json}o %(upper)[str(value)] %(intermediate:raw)[var(txn.intermediate)]"
Would produce:
{"upper": "value", "intermediate": {"lower": "value"}}
(cherry picked from commit
423cca64b632771235f2633f5c2a8b9855317cf4)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
(cherry picked from commit
cf4488acff80814fa26e40fab24c06162e54bef4)
Signed-off-by: Aurelien DARRAGON <adarragon@haproxy.com>
Willy Tarreau [Thu, 20 Mar 2025 13:27:37 +0000 (14:27 +0100)]
[RELEASE] Released version 3.0.9
Released version 3.0.9 with the following main changes :
- BUG/MEDIUM: ssl: choosing correct certificate using RSA-PSS with TLSv1.3
- BUG/MEDIUM: mux-quic: do not attach on already closed stream
- MINOR: mux-quic: change return value of qcs_attach_sc()
- BUG/MINOR: mux-quic: handle closure of uni-stream
- BUG/MINOR: spoe: Check the shared waiting queue to shut applets during stopping
- BUG/MINOR: spoe: Allow applet creation when closing the last one during stopping
- BUG/MEDIUM: spoe: Don't wakeup idle applets in loop during stopping
- BUG/MEDIUM: fd: mark FD transferred to another process as FD_CLONED
- REGTESTS: Fix truncated.vtc to send 0-CRLF
- BUG/MEDIUM: htx: wrong count computation in htx_xfer_blks()
- DOC: htx: clarify <mark> parameter for htx_xfer_blks()
- DOC: option redispatch should mention persist options
- BUG/MEDIUM: server: properly initialize PROXY v2 TLVs
- TESTS: ist: fix wrong array size
- CI: github: fix h2spec.config proxy names
- BUG/MINOR: stream: fix age calculation in "show sess" output
- BUG/MINOR: cfgparse-tcp: relax namespace bind check
- MINOR: startup: adjust alert messages, when capabilities are missed
- BUG/MEDIUM: thread: use pthread_self() not ha_pthread[tid] in set_affinity
- DOC: management: rename some last occurrences from domain "dns" to "resolvers"
- BUG/MINOR: server: fix the "server-template" prefix memory leak
- BUILD: ssl: allow to build without the renegotiation API of WolfSSL
- BUILD: ssl: cleaner approach to WolfSSL without renegotiation
- BUG/MEDIUM: debug: close a possible race between thread dump and panic()
- BUG/MINOR: quic: reserve length field for long header encoding
- BUG/MINOR: quic: fix CRYPTO payload size calculation for encoding
- BUG/MINOR: ssl/cli: "show ssl crt-list" lacks client-sigalgs
- BUG/MINOR: ssl/cli: "show ssl crt-list" lacks sigalgs
- BUG/MINOR: cli: Wait for the last ACK when FDs are xferred from the old worker
- BUG/MEDIUM: filters: Handle filters registered on data with no payload callback
- BUG/MINOR: fcgi: Don't set the status to 302 if it is already set
- BUG/MINOR: quic: prevent crash on conn access after MUX init failure
- BUG/MINOR: mux-quic: prevent crash after MUX init failure
- BUG/MINOR: mux-h2: Properly handle full or truncated HTX messages on shut
- BUG/MINOR: tcp-rules: Don't forward close during tcp-response content rules eval
- BUG/MINOR: cli: Don't set SE flags from the cli applet
- BUG/MINOR: cli: Fix memory leak on error for _getsocks command
- BUG/MINOR: cli: Fix a possible infinite loop in _getsocks()
- BUG/MINOR: config/userlist: Support one 'users' option for 'group' directive
- BUG/MINOR: auth: Fix a leak on error path when parsing user's groups
- BUG/MINOR: flt-trace: Support only one name option
- BUG/MINOR: stats-json: Define JSON_INT_MAX as a signed integer
- BUG/MINOR: cfgparse: fix NULL ptr dereference in cfg_parse_peers
- BUG/MINOR: sink: add tempo between 2 connection attempts for sft servers
- MINOR: clock: always use atomic ops for global_now_ms
- BUG/MINOR: mux-h1: always make sure h1s->sd exists in h1_dump_h1s_info()
- MINOR: tinfo: add a new thread flag to indicate a call from a sig handler
- BUG/MEDIUM: stream: never allocate connection addresses from signal handler
- MINOR: freq_ctr: provide non-blocking read functions
- BUG/MEDIUM: stream: use non-blocking freq_ctr calls from the stream dumper
- BUG/MINOR: h2: always trim leading and trailing LWS in header values
- BUG/MINOR: h3: do not report transfer as aborted on preemptive response
- CLEANUP: h3: fix documentation of h3_rcv_buf()
- BUG/MINOR: server: check for either proxy-protocol v1 or v2 to send header
- CLEANUP: log: removing "log-balance" references
- BUG/MINOR: log: set proper smp size for balance log-hash
- BUG/MEDIUM: startup: return to initial cwd only after check_config_validity()
- BUG/MINOR: cfgparse/peers: fix inconsistent check for missing peer server
- BUG/MINOR: cfgparse/peers: properly handle ignored local peer case
- BUG/MINOR: server: dont return immediately from parse_server() when skipping checks
- MINOR: cfgparse/peers: provide more info when ignoring invalid "peer" or "server" lines
- BUG/MINOR: stats: fix capabilities and hide settings for some generic metrics
- BUG/MINOR: namespace: handle a possible strdup() failure
- BUG/MINOR: ssl_crtlist: handle a possible strdup() failure
- BUG/MINOR: http-check: Don't pretend a C-L header is set before adding it
- BUG/MEDIUM: hlua/cli: fix cli applet UAF in hlua_applet_wakeup()
- BUG/MEDIUM: stream: don't use localtime in dumps from a signal handler
- MINOR: compiler: add a simple macro to concatenate resolved strings
- MINOR: compiler: add a new __decl_thread_var() macro to declare local variables
- MINOR: tools: resolve main() only once in resolve_sym_name()
- MINOR: tools: use only opportunistic symbols resolution
- BUILD: tools: silence a build warning when USE_THREAD=0
- MINOR: tinfo: split the signal handler report flags into 3
- MINOR: cli: export cli_io_handler() to ease symbol resolution
- MINOR: tools: improve symbol resolution without dl_addr
- MINOR: tools: ease the declaration of known symbols in resolve_sym_name()
- MINOR: tools: teach resolve_sym_name() a few more common symbols
- BUILD: tools: avoid a build warning on gcc-4.8 in resolve_sym_name()
Willy Tarreau [Fri, 14 Mar 2025 17:28:32 +0000 (18:28 +0100)]
BUILD: tools: avoid a build warning on gcc-4.8 in resolve_sym_name()
A build warning is emitted with gcc-4.8 in tools.c since commit
e920d73f59 ("MINOR: tools: improve symbol resolution without dl_addr")
because the compiler doesn't see that <size> is necessarily initialized.
Let's just preset it.
(cherry picked from commit
ed75148ca0479c7f87e1c149f0b5f0c8dd50eb3c)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
19b041e30ad9be9f00588d83c7d36a405f1e9cca)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Thu, 13 Mar 2025 16:29:16 +0000 (17:29 +0100)]
MINOR: tools: teach resolve_sym_name() a few more common symbols
This adds run_poll_loop, run_tasks_from_lists, process_runnable_tasks,
ha_dump_backtrace and cli_io_handler which are fairly common in
backtraces. This will mean fewer relative symbols when dladdr is not
usable.
(cherry picked from commit
4e09789644efa6c0dacfbd278618594590019a6e)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
965df55f98ed70ec111fa5878eadc41938576c35)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Thu, 13 Mar 2025 15:46:10 +0000 (16:46 +0100)]
MINOR: tools: ease the declaration of known symbols in resolve_sym_name()
Let's have a macro that declares both the symbol and its name, it will
avoid the risk of introducing typos, and encourages adding more when
needed. The macro also takes an optional second argument to permit an
inline declaration of an extern symbol.
(cherry picked from commit
a3582a77f73a1c2ba84018ac361b4a763582ac44)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
2a9991d23f5ba6d60c725fed3f0705c9b18bdd15)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Thu, 13 Mar 2025 16:21:24 +0000 (17:21 +0100)]
MINOR: tools: improve symbol resolution without dl_addr
When dl_addr is not usable or fails, better fall back to the closest
symbol among the known ones instead of providing everything relative
to main. Most often, the location of the function will give some hints
about what it can be. Thus now we can emit fct+0xXXX in addition to
main+0xXXX or main-0xXXX. We keep a margin of +256kB maximum after a
function for a match, which is around the maximum size met in an object
file, otherwise it becomes pointless again.
(cherry picked from commit
e920d73f598bd770a5dc861074268d1b2db4de59)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
9df54d8c1b4de89df9b933d9f58c0c388fed4a5b)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Thu, 13 Mar 2025 16:28:12 +0000 (17:28 +0100)]
MINOR: cli: export cli_io_handler() to ease symbol resolution
It's common to meet this function in backtraces, it's a bit annoying
that it's not resolved, so let's export it so that it becomes resolvable.
(cherry picked from commit
1e99efccef21394f0b5e5940001aaee8b3cd6b4b)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
a827c7ad60ae65cf4867cecace9bbdc1952dc4ba)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Mon, 24 Feb 2025 12:37:52 +0000 (13:37 +0100)]
MINOR: tinfo: split the signal handler report flags into 3
While signals are not recursive, one signal (e.g. wdt) may interrupt
another one (e.g. debug). The problem this causes is that when leaving
the inner handler, it removes the outer's flag, hence the protection
that comes with it. Let's just have 3 distinct flags for regular signals,
debug signal and watchdog signal. We add a 4th definition which is an
aggregate of the 3 to ease testing.
(cherry picked from commit
fb7874c286dbe1594837fc75b9e96783dfe160f5)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
88b97519ab35e851888761960ac64dd10614ae86)
Signed-off-by: Willy Tarreau <w@1wt.eu>
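An illustration of the split, with made-up flag names and values (the real
flags are HAProxy thread-info flags; only the idea of one bit per handler type
plus an aggregate is taken from the commit):
/* one bit per handler type, so an inner handler no longer clears the
 * outer handler's protection when it returns */
#define FL_IN_SIG_HANDLER  0x0001   /* regular signals */
#define FL_IN_DBG_HANDLER  0x0002   /* debug signal */
#define FL_IN_WDT_HANDLER  0x0004   /* watchdog signal */
/* aggregate for code that only asks "am I in any signal handler?" */
#define FL_IN_ANY_HANDLER  (FL_IN_SIG_HANDLER | FL_IN_DBG_HANDLER | FL_IN_WDT_HANDLER)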
Willy Tarreau [Wed, 12 Mar 2025 17:11:14 +0000 (18:11 +0100)]
BUILD: tools: silence a build warning when USE_THREAD=0
The dladdr_lock that was added to avoid re-entering dladdr is
conditioned by threads, but the way it's declared causes a build
warning if threads are disabled, due to the insertion of a lone
semi-colon in the variables block. Let's switch to __decl_thread_var()
for this.
This can be backported wherever commit
eb41d768f9 ("MINOR: tools:
use only opportunistic symbols resolution") is backported. It relies
on these previous two commits:
bb4addabb7 ("MINOR: compiler: add a simple macro to concatenate resolved strings")
69ac4cd315 ("MINOR: compiler: add a new __decl_thread_var() macro to declare local variables")
(cherry picked from commit
b61ed9babe879fd5603af4643c1d60c8ab48d096)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
55ee696ca143e4900249525c5a45141dc8994e1d)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Fri, 21 Feb 2025 14:01:13 +0000 (15:01 +0100)]
MINOR: tools: use only opportunistic symbols resolution
As seen in issue #2861, dladdr_and_size() can be quite expensive and
will often hold a mutex in the underlying library. It becomes a real
problem when issuing lots of "show threads" or wdt warnings in parallel
because threads will queue up waiting for each other to finish, adding
to their existing latency that possibly caused the warning in the first
place.
Here we're taking a different approach. If the thread is not isolated
and not panicking, it's doing unimportant stuff like showing threads
or warnings. In this case we try to grab a lock, and if we fail because
another thread is already there, we just pretend we cannot resolve the
symbol. This is not critical because then we fall back to the already
used case which consists in writing "main+<offset>". In practice this
will almost never happen except in bad situations which could have
otherwise degenerated.
(cherry picked from commit
eb41d768f954d5c7360fd19ec69f9d707b900532)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
9882abeada0eef928ebdb9f7377e13bafaceb564)
Signed-off-by: Willy Tarreau <w@1wt.eu>
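A sketch of the opportunistic approach, assuming a plain pthread mutex around
dladdr() (HAProxy uses its own locking primitives; the function below only
shows the trylock-and-give-up idea; build with -lpthread and, on older glibc,
-ldl):
#define _GNU_SOURCE
#include <dlfcn.h>
#include <pthread.h>

static pthread_mutex_t dladdr_lock = PTHREAD_MUTEX_INITIALIZER;

/* best-effort resolution: when the caller is not isolated or panicking,
 * give up immediately if another thread already holds the lock instead
 * of queuing behind it (the caller then falls back to "main+<offset>") */
static int resolve_addr(const void *addr, Dl_info *dli, int critical)
{
        int ret;

        if (critical)
                pthread_mutex_lock(&dladdr_lock);
        else if (pthread_mutex_trylock(&dladdr_lock) != 0)
                return 0;    /* pretend the symbol could not be resolved */

        ret = dladdr(addr, dli);
        pthread_mutex_unlock(&dladdr_lock);
        return ret;
}

int main(void)
{
        Dl_info dli;

        if (resolve_addr((const void *)pthread_self, &dli, 0) && dli.dli_sname)
                return 0;    /* resolved, e.g. to "pthread_self" */
        return 1;
}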
Willy Tarreau [Thu, 21 Nov 2024 13:14:49 +0000 (14:14 +0100)]
MINOR: tools: resolve main() only once in resolve_sym_name()
resolve_sym_name() calls dladdr(main) for each symbol in order to compare
the first address with other symbols. But this is pointless and quite
expensive in outputs to "show profiling" for example. Let's just keep a
local copy and have a variable indicating if the resolution is needed/
in progress/done to save the value for subsequent calls.
(cherry picked from commit
a205a91bb3daa8a84fa6b429c13d39563add950f)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Wed, 12 Mar 2025 17:08:12 +0000 (18:08 +0100)]
MINOR: compiler: add a new __decl_thread_var() macro to declare local variables
__decl_thread() already exists but is more suited for struct members.
When using it in a variables block, it appends the final trailing
semi-colon which is a statement that ends the variable block. Better
clean this up and have one precisely for variable blocks. In this
case we can simply define an unused enum value that will consume the
semi-colon. That's what the new macro __decl_thread_var() does.
(cherry picked from commit
69ac4cd315f05c69f3c34f824ef4e7f3221966cf)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
f7f2c888b113a3789a4c794f3380c8696e3bbf31)
Signed-off-by: Willy Tarreau <w@1wt.eu>
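One possible shape of the trick (not the actual HAProxy definition, which lives
in the compiler header): when threads are disabled, the macro expands to a
harmless enum declaration whose name is made unique with CONCAT(), and the
caller's trailing semi-colon simply terminates that enum instead of being left
dangling.
#ifdef USE_THREAD
#define __decl_thread_var(decl)  decl
#else
#define __decl_thread_var(decl) \
        enum { CONCAT(_thread_var_dummy_, __LINE__) }
#endif

/* usage in a variables block (threads or not, the ';' is always consumed): */
/* __decl_thread_var(static pthread_mutex_t dladdr_lock = PTHREAD_MUTEX_INITIALIZER); */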
Willy Tarreau [Wed, 12 Mar 2025 17:06:55 +0000 (18:06 +0100)]
MINOR: compiler: add a simple macro to concatenate resolved strings
It's often useful to be able to concatenate strings after resolving
them (e.g. __FILE__, __LINE__ etc). Let's just have a CONCAT() macro
to do that, which calls _CONCAT() with the same arguments to make
sure the contents are resolved before being concatenated.
(cherry picked from commit
bb4addabb742f2305ad6667ed42ebb12f5df2af3)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
af1e0b247e64678927793e9cd6e10907b0eebb27)
Signed-off-by: Willy Tarreau <w@1wt.eu>
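The standard two-level idiom this describes, sketched with a usage example
(the authoritative definition is in HAProxy's compiler header; the dummy_
identifier below is only for illustration):
#define _CONCAT(a, b) a ## b
#define CONCAT(a, b)  _CONCAT(a, b)   /* the indirection lets a and b expand first */

/* example: build an identifier that is unique per source line */
int CONCAT(dummy_, __LINE__);         /* expands to e.g. "int dummy_42;" */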
Willy Tarreau [Mon, 24 Feb 2025 10:43:15 +0000 (11:43 +0100)]
BUG/MEDIUM: stream: don't use localtime in dumps from a signal handler
In issue #2861, Jarosław Rzeszótko reported another issue with
"show threads", this time in relation with the conversion of a stream's
accept date to local time. Indeed, if the libc was interrupted in this
same function, it could have been interrupted with a lock held, then
it's no longer possible to dump the date, and we face a deadlock.
This is easy to reproduce with logging enabled.
Let's detect we come from a signal handler and do not try to resolve
the time to localtime in this case.
(cherry picked from commit
2e0bac90da0013513969224db94661dda88d7b98)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
6cdac8a3512961fe6a62af27ac1a71d834daabe6)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Aurelien DARRAGON [Wed, 19 Mar 2025 15:41:08 +0000 (16:41 +0100)]
BUG/MEDIUM: hlua/cli: fix cli applet UAF in hlua_applet_wakeup()
Recent commit
e5e36ce09 ("BUG/MEDIUM: hlua/cli: Fix lua CLI commands
to work with applet's buffers") revealed a bug in hlua cli applet handling
Indeed, playing with Willy's lua tetris script on the cli, a segfault
would be encountered when forcefully closing the session by sending a
CTRL+C on the terminal.
In fact the crash was caused by a UAF: while the cli applet was already
freed, the lua task responsible for waking it up would still point to it.
Thus hlua_applet_wakeup() could be called even if the applet didn't exist
anymore.
To fix the issue, in hlua_cli_io_release_fct() we must also free the hlua
task linked to the applet, like we already do for
hlua_applet_tcp_release() and hlua_applet_http_release().
While this bug exists on stable versions (where it should be backported
too for precaution), it only seems to be triggered starting with 3.0.
(cherry picked from commit
21601f4a27c4a1c8da0dbbfa22329ec1f927670e)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
411e02e84eb36ca15292ad2337680dc986b57f34)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Christopher Faulet [Mon, 3 Feb 2025 17:36:17 +0000 (18:36 +0100)]
BUG/MINOR: http-check: Don't pretend a C-L header is set before adding it
When a GET/HEAD/OPTIONS/DELETE healthcheck request was formatted, we claimed
there was a "content-length" header set even when there was no payload,
leading us to actually send a "content-length: 0" header to the server. It was
unexpected and could be rejected by servers.
When a healthcheck request is sent we must take care to state there is a
"content-length" header when it is explicitly added.
This patch should fix the issue #2851. It must be backported as far as 2.9.
(cherry picked from commit
fad68cb16d7f99acd2b327ff2f8a4d9ab88a68d8)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
d5ca78b83d7bfddd107e3243a6ff785dc92b9157)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Ilia Shipitsin [Tue, 3 Dec 2024 16:13:05 +0000 (17:13 +0100)]
BUG/MINOR: ssl_crtlist: handle a possible strdup() failure
This defect was found by the coccinelle script "unchecked-strdup.cocci".
It can be backported to all supported branches.
(cherry picked from commit
ce30bc17305e4ac2dec7d641eedcb301a237d863)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
f13318e2d28bcbd0929ab9e494cc52803c5db7d1)
Signed-off-by: Willy Tarreau <w@1wt.eu>
Ilia Shipitsin [Tue, 3 Dec 2024 16:10:21 +0000 (17:10 +0100)]
BUG/MINOR: namespace: handle a possible strdup() failure
This defect was found by the coccinelle script "unchecked-strdup.cocci".
It can be backported to all supported branches.
(cherry picked from commit
abee5468509a77537015a7e5dc37268d4d564e03)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
1a61a186ea032c676934dbe846655b78bd3be9d2)
Signed-off-by: Willy Tarreau <w@1wt.eu>
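The pattern added by this fix (and the sibling ssl_crtlist one above), shown in
a generic, self-contained form; the exact error handling at the real call sites
differs:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        const char *arg = "example";
        char *copy = strdup(arg);

        if (!copy) {            /* strdup() may fail under memory pressure */
                fprintf(stderr, "out of memory\n");
                return 1;
        }
        printf("%s\n", copy);
        free(copy);
        return 0;
}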
Aurelien DARRAGON [Thu, 13 Mar 2025 10:09:00 +0000 (11:09 +0100)]
BUG/MINOR: stats: fix capabilities and hide settings for some generic metrics
Performing a diff on stats output before vs after commit
66152526
("MEDIUM: stats: convert counters to new column definition") revealed
that some metrics were not properly ported to the new API. Namely,
"lbtot", "cli_abrt" and "srv_abrt" are now exposed on frontend and
listeners while it was not the case before.
Also, "hrsp_other" is exposed even when "mode http" wasn't set on the
proxy.
In this patch we restore original behavior by fixing the capabilities
and hide settings.
As this could be considered as a minor regression (looking at the commit
message it doesn't seem intended), better tag this as a bug. It should be
backported in 3.0 with
66152526.
(cherry picked from commit
8311be5ac60c10fc4af56e3df79031358236bc14)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
5f4fe64ea7d250f0927e3f250515319b3588ae10)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Aurelien DARRAGON [Fri, 7 Mar 2025 08:30:47 +0000 (09:30 +0100)]
MINOR: cfgparse/peers: provide more info when ignoring invalid "peer" or "server" lines
Invalid (incomplete) "server" or "peer" lines under peers section are now
properly ignored. For completeness, in this patch we add some reports so
that the user knows that incomplete lines were ignored.
For an incomplete server line, since it is tolerated (see GH #565), we
only emit a diag warning.
For an incomplete peer line, we report a real warning, as it is not
expected to have a peer line without an address:port specified.
Also, 'newpeer == curpeers->local' check could be simplified since
we already have the 'local_peer' variable which tells us that the
parsed line refers to a local peer.
(cherry picked from commit
dbb25720dd7157e0f180d17486f10340f80a9fda)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
27a92cffb34d534c91e6b9deee886102f4b57b6c)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Aurelien DARRAGON [Fri, 7 Mar 2025 08:11:21 +0000 (09:11 +0100)]
BUG/MINOR: server: dont return immediately from parse_server() when skipping checks
If parse_server() is called under peers section parser, and the address
needs to be parsed but it is missing, we directly return from the function.
However, since
0fc136ce5b ("REORG: server: use parsing ctx for server
parsing"), parse_server() uses parsing ctx to emit warning/errors, and
the ctx must be reset before returning from the function, yet this early
return was overlooked. Because of that, any ha_{warning,alert..} message
reported after the early return from parse_server() could end up with an
extra "parsing [file:line]" info.
We fix that by ensuring parse_server() doesn't return without resetting
the parsing context.
It should be backported up to 2.6.
(cherry picked from commit
a76b5358f0400568b641dc373cbd582875cd6aa6)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
d7e564cb81be444025f4fe72202a842718059da5)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Aurelien DARRAGON [Thu, 6 Mar 2025 08:29:05 +0000 (09:29 +0100)]
BUG/MINOR: cfgparse/peers: properly handle ignored local peer case
In
8ba10fea6 ("BUG/MINOR: peers: Incomplete peers sections should be
validated."), some checks were relaxed in parse_server(), and extra logic
was added in the peers section parser in an attempt to properly ignore
incomplete "server" or "peer" statement under peers section.
This was done in response to GH #565, the main intent was that haproxy
should already complain about incomplete peers section (ie: missing
localpeer).
However,
8ba10fea69 explicitly skipped the peer cleanup upon missing
srv association for local peers. This is wrong because later haproxy
code always assumes that peer->srv is valid. Indeed, we got reports
that the (invalid) config below would cause segmentation fault on
all stable versions:
global
localpeer 01JM0TEPAREK01FQQ439DDZXD8
peers my-table
peer 01JM0TEPAREK01FQQ439DDZXD8
listen dummy
bind localhost:8080
To fix the issue, instead of by-passing some cleanup for the local
peer, handle this case specifically by doing the regular peer cleanup
and resetting some fields set on the curpeers and curpeers proxy because
of the invalid local peer (i.e. behave as if the peer was not declared).
It should still comply with requirements from #565.
This patch should be backported to all stable versions.
(cherry picked from commit
054443dfb908521e30aa57335dbcb5f9cd6f7218)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
6bd743efe66a2ce28b7bf125b7c145c30cb305f7)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Aurelien DARRAGON [Thu, 6 Mar 2025 08:05:23 +0000 (09:05 +0100)]
BUG/MINOR: cfgparse/peers: fix inconsistent check for missing peer server
In the "peers" section parser, right after parse_server() is called, we
used to check whether the curpeers->peers_fe->srv pointer was set or not
to know if parse_server() successfully added a server to the peers proxy,
server that we can then associate to the new peer.
However the check is wrong, as curpeers->peers_fe->srv points to the
last added server, if a server was successfully added before the
failing one, we cannot detect that the last parse_server() didn't
add a server. This is known to cause bug with bad "peer"/"server"
statements.
To fix the issue, we save a pointer on the last known
curpeers->peers_fe->srv before parse_server() is called, and we then
compare the save with the pointer after parse_server(), if the value
didn't change, then parse_server() didn't add a server. This makes
the check consistent in all situations.
It should be backported to all stable versions.
(cherry picked from commit
2560ab892f355e958007b287946f787b578d3131)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
fb5130fc8f0d942acf2afd20a02f6d7b35eb5a35)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
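A toy illustration of the before/after pointer comparison (the real code
snapshots curpeers->peers_fe->srv around the parse_server() call; the
structures and toy_parse_server() below are simplified stand-ins):
#include <stdio.h>
#include <stdlib.h>

struct server { struct server *next; };
struct proxy  { struct server *srv; }; /* srv points to the last added server */

/* toy stand-in for parse_server(): may or may not add a server */
static void toy_parse_server(struct proxy *px, int ok)
{
        if (ok) {
                struct server *s = calloc(1, sizeof(*s));

                if (!s)
                        return;
                s->next = px->srv;
                px->srv = s;
        }
}

int main(void)
{
        struct proxy px = { 0 };
        struct server *prev;

        toy_parse_server(&px, 1);   /* a first, valid "server" line */

        prev = px.srv;              /* snapshot before parsing the next line */
        toy_parse_server(&px, 0);   /* an incomplete "peer" line: nothing added */

        if (px.srv == prev)
                puts("the last parse_server() call did not add a server");
        return 0;
}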
Valentine Krasnobaeva [Tue, 4 Mar 2025 10:04:01 +0000 (11:04 +0100)]
BUG/MEDIUM: startup: return to initial cwd only after check_config_validity()
In check_config_validity() we evaluate some sample fetch expressions
(log-format, server rules, etc). These expressions may use external files like
maps.
If some particular 'default-path' was set in the global section before, it's no
longer applied to resolve file paths in check_config_validity(). parse_cfg()
at the end of config parsing switches back to the initial cwd.
This fixes the issue #2886.
This patch should be backported in all stable versions since 2.4.0, including
2.4.0.
(cherry picked from commit
e900ef987e4167cf0921e97b09059d757f2c0ea5)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
4d7fe11b70df072007eaa1633061e3581b9618fc)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Aurelien DARRAGON [Wed, 5 Mar 2025 11:01:34 +0000 (12:01 +0100)]
BUG/MINOR: log: set proper smp size for balance log-hash
result.data.u.str.size was set to size+1 to take into account the terminating
NULL byte as per the comment. But this is wrong because the caller is free
to set size to just the right amount of bytes (without the terminating NULL
byte). In fact all smp API functions will not read past str.data so there
is no risk of uninitialized reads, but this leaves an ambiguity for
converters that may use all the smp size to perform transformations, and
since we don't know about the "message" memory origin, we cannot assume
that its size is greater than size. So we max it out to size just to
be safe.
This bug was not known to cause any issue, it was spotted during code
review. It should be backported in 2.9 with b30bd7a ("MEDIUM: log/balance:
support for the "hash" lb algorithm")
(cherry picked from commit
94a9b0f5deabd49020c8ff535a3404494345b399)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
7e5d59e8bf5f44c57e2cd13427aef5288777fbd2)
[ad: ctx adjustment]
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Aurelien DARRAGON [Wed, 5 Mar 2025 10:33:06 +0000 (11:33 +0100)]
CLEANUP: log: removing "log-balance" references
This is a complementary patch to
0e1f389fe9 ("DOC: config: removing
"log-balance" references"): we properly removed all log-balance
references in the doc but there remained some in the code, let's fix
that.
It could be backported in 2.9 with
0e1f389fe9
(cherry picked from commit
ddf66132f4d77e00c52c977d8a9e5c829965e7c7)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
24160712d38f54fe4114379ecbe034bcb25ee976)
[ad: ctx adjustment]
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Willy Tarreau [Mon, 3 Mar 2025 02:58:46 +0000 (03:58 +0100)]
BUG/MINOR: server: check for either proxy-protocol v1 or v2 to send header
As reported in issue #2882, using "no-send-proxy-v2" on a server line does
not properly disable the use of proxy-protocol if it was enabled in a
default-server directive in combination with other PP options. The reason
for this is that the sending of a proxy header is determined by a test on
srv->pp_opts without any distinction, so disabling PPv2 while leaving other
options results in a PPv1 header being sent.
Let's fix this by explicitly testing for the presence of either send-proxy
or send-proxy-v2 when deciding to send a proxy header.
This can be backported to all versions. Thanks to Andre Sencioles (@asenci)
for reporting the issue and testing the fix.
(cherry picked from commit
730641f7cad32bfff97875716efe4bd784bb006b)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit
7b2212f5547ba4268a0de7657ec0b9ca18d40445)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Amaury Denoyelle [Thu, 27 Feb 2025 10:28:07 +0000 (11:28 +0100)]
CLEANUP: h3: fix documentation of h3_rcv_buf()
Return value of h3_rcv_buf() is incorrectly documented. Indeed, it may
return a positive value to indicate that input bytes were converted into
HTX. This is especially important, as caller uses this value to consume
the reported data amount in QCS Rx buffer.
This should be backported up to 2.6. Note that on 2.8, h3_rcv_buf() was
named h3_decode_qcs().
(cherry picked from commit
0aa35289b3b51e09a5757c9991212ec416d281f2)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit
da6832fd2fb065cb32f030609b53fee87488aab7)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Amaury Denoyelle [Thu, 27 Feb 2025 15:38:39 +0000 (16:38 +0100)]
BUG/MINOR: h3: do not report transfer as aborted on preemptive response
HTTP/3 specification allows a server to emit the entire response even if
only a partial request was received. In particular, this happens when
request STREAM FIN is delayed and transmitted in an empty payload frame.
In this case, qcc_abort_stream_read() was used by HTTP/3 layer to emit a
STOP_SENDING. Remaining received data were not transmitted to the stream
layer as they were simply discarded. However, this prevents FIN
transmission to the stream layer. This causes the transfer to be
considered as prematurely closed, resulting in a cL-- log line status.
This is misleading to users which could interpret it as if the response
was not sent.
To fix this, disable STOP_SENDING emission on full preemptive response
emission. Rx channel is kept opened until the client closes it with
either a FIN or a RESET_STREAM. This ensures that the FIN signal can be
relayed to the stream layer, which allows the transfer to be reported as
completed.
This should be backported up to 2.9.
(cherry picked from commit
f6648d478b632bbd243ab374e24c02d566a4112b)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit
00177c6f78d37a7fdbf22d86f1eb7c7403035fbb)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Willy Tarreau [Mon, 24 Feb 2025 08:17:22 +0000 (09:17 +0100)]
BUG/MINOR: h2: always trim leading and trailing LWS in header values
Annika Wickert reported some occasional disconnections between haproxy
and varnish when communicating over HTTP/2, with varnish complaining
about protocol errors while captures looked apparently normal. Nils
Goroll managed to reproduce this on varnish by injecting the capture of
the outgoing haproxy traffic and noticed that haproxy was forwarding a
header value containing a trailing space, which is now explicitly
forbidden since RFC9113.
It turns out that the only way for such a header to pass through haproxy
is to arrive in h2 and not be edited, in which case it will arrive in
HTX with its undesired spaces. Since the code dealing with HTX headers
always trims spaces around them, these are not observable in dumps, but
only when started in debug mode (-d). Conversions to/from h1 also drop
the spaces.
With this patch we trim LWS both on input and on output. This way we
always present clean headers in the whole stack, and even if some are
manually crafted by the configuration or Lua, they will be trimmed on
the output.
This must be backported to all stable versions.
Thanks to Annika for the helpful capture and Nils for the help with
the analysis on the varnish side!
(cherry picked from commit
bbf824933f71eca90b5f07a51fa93f4fa7ac2256)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit
2c9bb6fe295f2ce34718b566593cdc67d5880a9f)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
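A self-contained sketch of the trimming step applied to a header value seen as
a [start, len) view (the real code operates on the H2/HTX internal
representations):
#include <stdio.h>
#include <string.h>

/* trim leading and trailing LWS (spaces and tabs) from a value view */
static void trim_lws(const char **start, size_t *len)
{
        const char *s = *start;
        size_t l = *len;

        while (l && (*s == ' ' || *s == '\t')) { s++; l--; }
        while (l && (s[l - 1] == ' ' || s[l - 1] == '\t')) l--;
        *start = s;
        *len = l;
}

int main(void)
{
        const char *v = "  some-value \t";
        size_t len = strlen(v);

        trim_lws(&v, &len);
        printf("[%.*s]\n", (int)len, v);   /* prints [some-value] */
        return 0;
}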
Willy Tarreau [Fri, 21 Feb 2025 17:23:23 +0000 (18:23 +0100)]
BUG/MEDIUM: stream: use non-blocking freq_ctr calls from the stream dumper
The stream dump function is called from signal handlers (warning, show
threads, panic). It makes use of read_freq_ctr() which might possibly
block if it tries to access a locked freq_ctr in the process of being
updated, e.g. by the current thread.
Here we're relying on the non-blocking API instead. It may return incorrect
values (typically smaller ones after resetting the curr counter) but at
least it will not block.
This needs to be backported to stable versions along with the previous
commit below:
MINOR: freq_ctr: provide non-blocking read functions
At least 3.1 is concerned as the warnings tend to increase the risk of
this situation appearing.
(cherry picked from commit
3c22fa315bcb2945d588cb64302f6ba5c89b382e)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit
124c6cdb13e0f7c03fb9232f429eb58f7d3c4742)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Willy Tarreau [Fri, 21 Feb 2025 17:21:56 +0000 (18:21 +0100)]
MINOR: freq_ctr: provide non-blocking read functions
Some code called by the debug handlers in the context of a signal handler
accesses to some freq_ctr and occasionally ends up on a locked one from
the same thread that is dumping it. Let's introduce a non-blocking version
that at least allows to return even if the value is in the process of being
updated, it's less problematic than hanging.
(cherry picked from commit
29e246a84ce27af63779d98b305ad53877ae9acc)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit
dce43434eaf7a402148209c2f0d4371ae063c1fa)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Willy Tarreau [Fri, 21 Feb 2025 15:45:18 +0000 (16:45 +0100)]
BUG/MEDIUM: stream: never allocate connection addresses from signal handler
In __strm_dump_to_buffer(), we call conn_get_src()/conn_get_dst() to try
to retrieve the connection's IP addresses. But this function may be called
from a signal handler to dump a currently running stream, and if the
addresses were not allocated yet, a pool_alloc() will be performed while
we might possibly already be running pools code, resulting in pool list
corruption.
Let's just make sure we don't call these sensitive functions there when
called from a signal handler.
This must be backported at least to 3.1 and ideally all other versions,
along with this previous commit:
MINOR: tinfo: add a new thread flag to indicate a call from a sig handler
(cherry picked from commit
84d4c948fce4b31220bb9a30cbf676613bbbf4f2)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit
5f22fca932a1c3d5c92f30add31ffce81abdf758)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>