haproxy (2.8.15-0ubuntu0.24.04.1) noble; urgency=medium

  A new global setting was added to the H2 multiplexer to limit the total
  number of H2 streams per client connection. To do so,
  'tune.h2.fe.max-total-streams' can now be used. It sets the HTTP/2 maximum
  number of total streams processed per incoming connection. Once this limit
  is reached, HAProxy will send a graceful GOAWAY frame informing the client
  that it will close the connection after all pending streams have been
  closed. In practice, clients tend to close as fast as possible when
  receiving this, and to establish a new connection for subsequent requests.
  Doing
  this is sometimes useful and desired in situations where clients stay
  connected for a very long time and cause some imbalance inside a farm. For
  example, in some highly dynamic environments, it is possible that new load
  balancers are instantiated on the fly to adapt to a load increase, and that
  once the load goes down they should be stopped without breaking established
  connections. By setting a limit here, the connections will have a limited
  lifetime and will be frequently renewed, with some possibly being
  established to other nodes, so that existing resources are quickly released.
  Of course, nothing changes by default when this setting is left untouched.
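  As a minimal sketch, the setting goes in the global section; the value below
  is only illustrative, not a recommended default:

  ```
  global
      # Send a graceful GOAWAY once a client connection has carried this
      # many streams; the client then reconnects, possibly to another node.
      # 1000 is an example value for illustration only.
      tune.h2.fe.max-total-streams 1000
  ```
  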

  A new global setting "tune.quic.reorder-ratio" allows one to indicate what
  portion of the in-flight window may appear to be reordered before a loss is
  declared. There is now a separate per-connection counter for reordered
  packets that helps figure out what is happening. Tests on degraded networks
  showed a 10x performance increase with the new default setting.
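  A sketch of the setting in the global section; the value is a percentage of
  the in-flight window, and 50 is used here purely for illustration (check
  your version's documentation for the actual default):

  ```
  global
      # Percentage of the in-flight window that may look reordered before
      # a loss is declared (illustrative value).
      tune.quic.reorder-ratio 50
  ```
  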

  An interesting security feature was backported to block traffic from clients
  that use a privileged port as their source port. Such behavior is highly
  suspicious as it is often the sign of an amplification attack. This can be
  activated using the harden.reject-privileged-ports.{tcp|quic} keywords. Note
  that on 3.0, we chose to enable it by default for QUIC. However, it remains
  disabled on 2.9 and earlier versions to keep the current behavior on stable
  haproxy branches, but users are free to activate it if needed. It is
  particularly useful when QUIC listeners are active, to prevent DNS/NTP
  amplification attacks. However, on TCP this protection may break some
  protocols such as FTP.
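  A hedged sketch of how these keywords could be set in the global section
  (both accept "on" or "off"):

  ```
  global
      # Reject QUIC traffic coming from privileged source ports (below
      # 1024), a common sign of DNS/NTP amplification attempts.
      harden.reject-privileged-ports.quic on
      # Left off for TCP here, since it may break protocols such as FTP.
      harden.reject-privileged-ports.tcp  off
  ```
  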

  The H2 glitches counter, which allows one to detect protocol abuses and to
  automatically kill a connection, was backported after users reported it as
  effective in the field. This should even help get rid faster of annoying
  script kiddies playing with CONTINUATION frames to see whether we do funny
  things with them (response: no, we don't). This also adds the new sample
  fetch functions "fc_glitches" and "bc_glitches". Note, however, that the
  dynamic tracking with stick-tables was not backported; 3.0 will be
  preferred for this.
  This introduces two new configuration settings:
    - tune.h2.be.glitches-threshold
    - tune.h2.fe.glitches-threshold
  to set a glitch threshold above which badly behaving clients are
  eliminated. The default value is zero, meaning no threshold is set, i.e.,
  there is no change of behavior by default.
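  A sketch combining the frontend threshold with the new sample fetches; the
  threshold value, frontend name, and log format are all illustrative:

  ```
  global
      # Kill a front connection once it accumulates 1000 glitches
      # (example value, not a recommendation).
      tune.h2.fe.glitches-threshold 1000

  frontend web
      bind :8080
      # Log the glitch counters of the front and back connections.
      log-format "glitches fc=%[fc_glitches] bc=%[bc_glitches]"
  ```
  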

  The hard limit on the number of file descriptors now defaults to about 1
  million, in order to match what has been done for a very long time on many
  distros, and which recently changed to 1 billion on some of them, causing
  huge startup times (or even a watchdog trigger at boot) and massive memory
  usage. You can override this default by configuring fd_hard_limit.
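  For deployments that genuinely need more descriptors than the new ~1 million
  default, the limit can be raised in the global section (the value below is
  illustrative):

  ```
  global
      # Raise the hard FD limit above the ~1M default; size this to your
      # expected connection count (example value).
      fd_hard_limit 2097152
  ```
  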

  The SSL stack is now always completely initialized. Due to an API change in
  3.x, the old call was deprecated and used to emit a warning, but it was
  later found to still be required in some cases.

  Some invalid Transfer-Encoding values are now accepted during H1 response
  parsing when the accept-invalid-http-response option is enabled, even
  though this is forbidden by RFC 9112. So, with this option, multiple
  "chunked" values are accepted, as well as empty values. When several
  "chunked" values are found, the payload will still be considered as encoded
  only once, and the header will be sanitized when sent to the client. The
  request parsing was not changed: this remains forbidden because it is
  highly suspicious for a client to send an invalid T-E header, whereas on
  the server side the server can be considered trusted. But you must still
  remain careful with such behavior, and of course the best option is to fix
  the application.
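  A sketch of enabling the option for a backend hosting a trusted but broken
  application; the backend name and address are illustrative:

  ```
  backend legacy_app
      # Tolerate invalid Transfer-Encoding values in responses from this
      # trusted application; the header is sanitized before being
      # forwarded to the client.
      option accept-invalid-http-response
      server app1 192.0.2.10:8080
  ```
  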

  Several fixes are applied for better HTTP conformance. In some cases,
  initial 502 server errors were incorrectly hidden and are now properly
  logged. CONNECT requests carrying a scheme are now rejected, as they are
  invalid according to RFC 7230. Empty paths are normalized to "/" for
  absolute-form URIs.

  It is now possible to disable seamless reload in master-worker mode by
  using the argument '-x /dev/null'. This may be necessary for some use cases
  since the introduction of automatic seamless reload in master-worker mode.
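  For example, assuming a typical master-worker invocation (the configuration
  path is illustrative):

  ```
  # Start in master-worker mode with seamless reload explicitly disabled:
  # -x /dev/null overrides the socket normally used to transfer listening
  # file descriptors across reloads.
  haproxy -W -f /etc/haproxy/haproxy.cfg -x /dev/null
  ```
  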

 -- Athos Ribeiro <athos.ribeiro@canonical.com>  Fri, 10 Oct 2025 10:20:19 -0300
