
Conversation

@NelsonDivyamLobo

Lineaje has automatically created this pull request to resolve the following CVEs:

| Component | CVE ID | Severity | Description |
| --- | --- | --- | --- |
| pyyaml:5.3.1:5.3.1 | CVE-2020-14343 | Critical | A vulnerability was discovered in the PyYAML library in versions before 5.4, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the `full_load` method or with the `FullLoader` loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. This flaw allows an attacker to execute arbitrary code on the system by abusing the `python/object/new` constructor. This flaw is due to an incomplete fix for CVE-2020-1747. (An illustrative `safe_load` sketch follows the table.) |
| werkzeug:2.1.2:2.1.2 | CVE-2024-49767 | Medium | Applications using Werkzeug to parse multipart/form-data requests are vulnerable to resource exhaustion. A specially crafted form body can bypass the `Request.max_form_memory_size` setting. The `Request.max_content_length` setting, as well as resource limits provided by deployment software and platforms, are also available to limit the resources used during a request; this vulnerability does not affect those settings. All three types of limits should be considered and set appropriately when deploying an application. (An illustrative configuration sketch follows the table.) |
| werkzeug:2.1.2:2.1.2 | CVE-2024-49766 | Medium | On Python < 3.11 on Windows, `os.path.isabs()` does not catch UNC paths like `//server/share`. Werkzeug's `safe_join()` relies on this check, and so can produce a path that is not safe, potentially allowing unintended access to data. Applications using Python >= 3.11, or not using Windows, are not vulnerable. |
| werkzeug:2.1.2:2.1.2 | CVE-2023-25577 | High | Werkzeug's multipart form data parser will parse an unlimited number of parts, including file parts. Parts can be a small amount of bytes, but each requires CPU time to parse and may use more memory as Python data. If a request can be made to an endpoint that accesses `request.data`, `request.form`, `request.files`, or `request.get_data(parse_form_data=False)`, it can cause unexpectedly high resource usage. This allows an attacker to cause a denial of service by sending crafted multipart data to an endpoint that will parse it. The amount of CPU time required can block worker processes from handling legitimate requests. The amount of RAM required can trigger an out-of-memory kill of the process. Unlimited file parts can use up memory and file handles. If many concurrent requests are sent continuously, this can exhaust or kill all available workers. |
| werkzeug:2.1.2:2.1.2 | CVE-2023-23934 | Low | Browsers may allow "nameless" cookies that look like `=value` instead of `key=value`. A vulnerable browser may allow a compromised application on an adjacent subdomain to exploit this to set a cookie like `=__Host-test=bad` for another subdomain. Werkzeug <= 2.2.2 will parse the cookie `=__Host-test=bad` as `__Host-test=bad`. If a Werkzeug application is running next to a vulnerable or malicious subdomain which sets such a cookie using a vulnerable browser, the Werkzeug application will see the bad cookie value but the valid cookie key. |
| werkzeug:2.1.2:2.1.2 | CVE-2023-46136 | Medium | Werkzeug's multipart data parser needs to find a boundary that may be split between consecutive chunks, so parsing is based on looking for newline characters. Unfortunately, the code looking for a partial boundary in the buffer is written inefficiently: if an uploaded file starts with CR or LF and is then followed by megabytes of data without those characters, all of those bytes are appended chunk by chunk to an internal bytearray and the boundary lookup is performed on the growing buffer. This allows an attacker to cause a denial of service by sending crafted multipart data to an endpoint that will parse it. The amount of CPU time required can block worker processes from handling legitimate requests. The amount of RAM required can trigger an out-of-memory kill of the process. If many concurrent requests are sent continuously, this can exhaust or kill all available workers. |
| werkzeug:2.1.2:2.1.2 | CVE-2024-34069 | High | The debugger in affected versions of Werkzeug can allow an attacker to execute code on a developer's machine under some circumstances. This requires the attacker to get the developer to interact with a domain and subdomain they control and enter the debugger PIN, but if they are successful it allows access to the debugger even if it is only running on localhost. This also requires the attacker to guess a URL in the developer's application that will trigger the debugger. |
| cryptography:2.3:2.3 | CVE-2024-0727 | Medium | **Issue summary:** Processing a maliciously formatted PKCS12 file may lead OpenSSL to crash, leading to a potential denial-of-service attack. **Impact summary:** Applications loading files in the PKCS12 format from untrusted sources might terminate abruptly. A file in PKCS12 format can contain certificates and keys and may come from an untrusted source. The PKCS12 specification allows certain fields to be NULL, but OpenSSL does not correctly check for this case. This can lead to a NULL pointer dereference that results in OpenSSL crashing. If an application processes PKCS12 files from an untrusted source using the OpenSSL APIs then that application will be vulnerable to this issue. The OpenSSL APIs vulnerable to this are `PKCS12_parse()`, `PKCS12_unpack_p7data()`, `PKCS12_unpack_p7encdata()`, `PKCS12_unpack_authsafes()` and `PKCS12_newpass()`. A similar issue was also fixed in `SMIME_write_PKCS7()`; however, since that function is related to writing data it is not considered security significant. The FIPS modules in 3.2, 3.1 and 3.0 are not affected by this issue. |
| cryptography:2.3:2.3 | CVE-2020-25659 | High | RSA decryption was vulnerable to Bleichenbacher timing vulnerabilities, which would impact people using RSA decryption in online scenarios. This is fixed in cryptography 3.2. |
| cryptography:2.3:2.3 | CVE-2023-0286 | High | pyca/cryptography's wheels include a statically linked copy of OpenSSL. The versions of OpenSSL included in cryptography 0.8.1-39.0.0 are vulnerable to a security issue. More details about the vulnerabilities themselves can be found in https://www.openssl.org/news/secadv/20221213.txt and https://www.openssl.org/news/secadv/20230207.txt. If you are building cryptography from source ("sdist") then you are responsible for upgrading your copy of OpenSSL. Only users installing from wheels built by the cryptography project (i.e., those distributed on PyPI) need to update their cryptography versions. |
| cryptography:2.3:2.3 | CVE-2023-23931 | Medium | Previously, `Cipher.update_into` would accept Python objects which implement the buffer protocol but provide only immutable buffers: `>>> outbuf = b"\x00" * 32`<br>`>>> c = ciphers.Cipher(AES(b"\x00" * 32), modes.ECB()).encryptor()`<br>`>>> c.update_into(b"\x00" * 16, outbuf)`<br>`16`<br>`>>> outbuf`<br>`b'\xdc\x95\xc0x\xa2@\x89\x89\xadH\xa2\x14\x92\x84 \x87\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'`<br>This would allow immutable objects (such as `bytes`) to be mutated, thus violating fundamental rules of Python. This is a soundness bug -- it allows programmers to misuse an API; it cannot be exploited by attacker-controlled data alone. This now correctly raises an exception. This issue has been present since `update_into` was originally introduced in cryptography 1.8. (An illustrative sketch of correct usage follows the table.) |
| cryptography:2.3:2.3 | GHSA-5cpq-8wj7-hf2v | Low | pyca/cryptography's wheels include a statically linked copy of OpenSSL. The versions of OpenSSL included in cryptography 0.5-40.0.2 are vulnerable to a security issue. More details about the vulnerability itself can be found in https://www.openssl.org/news/secadv/20230530.txt. If you are building cryptography from source ("sdist") then you are responsible for upgrading your copy of OpenSSL. Only users installing from wheels built by the cryptography project (i.e., those distributed on PyPI) need to update their cryptography versions. |
| cryptography:2.3:2.3 | GHSA-jm77-qphf-c4w8 | Low | pyca/cryptography's wheels include a statically linked copy of OpenSSL. The versions of OpenSSL included in cryptography 0.8-41.0.2 are vulnerable to several security issues. More details about the vulnerabilities themselves can be found in https://www.openssl.org/news/secadv/20230731.txt, https://www.openssl.org/news/secadv/20230719.txt, and https://www.openssl.org/news/secadv/20230714.txt. If you are building cryptography from source ("sdist") then you are responsible for upgrading your copy of OpenSSL. Only users installing from wheels built by the cryptography project (i.e., those distributed on PyPI) need to update their cryptography versions. |
| cryptography:2.3:2.3 | CVE-2023-50782 | High | A flaw was found in the python-cryptography package. This issue may allow a remote attacker to decrypt captured messages in TLS servers that use RSA key exchanges, which may lead to exposure of confidential or sensitive data. |
| flask:0.12:0.12 | CVE-2023-30861 | High | When all of the following conditions are met, a response containing data intended for one client may be cached and subsequently sent by a proxy to other clients. If the proxy also caches `Set-Cookie` headers, it may send one client's session cookie to other clients. The severity depends on the application's use of the session and the proxy's behavior regarding cookies; the risk depends on all these conditions being met.<br>1. The application must be hosted behind a caching proxy that does not strip cookies or ignore responses with cookies.<br>2. The application sets `session.permanent = True`.<br>3. The application does not access or modify the session at any point during a request.<br>4. `SESSION_REFRESH_EACH_REQUEST` is enabled (the default).<br>5. The application does not set a `Cache-Control` header to indicate that a page is private or should not be cached.<br>This happens because vulnerable versions of Flask only set the `Vary: Cookie` header when the session is accessed or modified, not when it is refreshed (re-sent to update the expiration) without being accessed or modified. (An illustrative `Cache-Control` sketch follows the table.) |
| flask:0.12:0.12 | CVE-2018-1000656 | High | The Pallets Project Flask before 0.12.3 contains a CWE-20 (Improper Input Validation) vulnerability that can result in a large amount of memory usage, possibly leading to denial of service. The attack appears to be exploitable when an attacker provides JSON data in an incorrect encoding. This vulnerability appears to have been fixed in 0.12.3. |
| flask:0.12:0.12 | CVE-2019-1010083 | High | The Pallets Project Flask before 1.0 is affected by unexpected memory usage. The impact is denial of service. The attack vector is crafted, encoded JSON data. The fixed version is 1.0. NOTE: this may overlap CVE-2018-1000656. |
| requests:2.20.0:2.20.0 | CVE-2024-47081 | Medium | **Impact:** Due to a URL parsing issue, Requests releases prior to 2.32.4 may leak `.netrc` credentials to third parties for specific maliciously crafted URLs.<br>**Workarounds:** For older versions of Requests, use of the `.netrc` file can be disabled with `trust_env=False` on your Requests `Session` (see the docs; an illustrative sketch also follows the table).<br>**References:** psf/requests#6965, https://seclists.org/fulldisclosure/2025/Jun/2 |
| requests:2.20.0:2.20.0 | CVE-2023-32681 | Medium | **Impact:** Since Requests v2.3.0, Requests has been vulnerable to potentially leaking `Proxy-Authorization` headers to destination servers, specifically during redirects to an HTTPS origin. This is a product of how `rebuild_proxies` is used to recompute and reattach the `Proxy-Authorization` header to requests when redirected. Note this behavior has only been observed to affect proxied requests when credentials are supplied in the URL user information component (e.g. `https://username:password@proxy:8080`). Current vulnerable behavior(s):<br>1. HTTP → HTTPS: leak<br>2. HTTPS → HTTP: no leak<br>3. HTTPS → HTTPS: leak<br>4. HTTP → HTTP: no leak<br>For HTTP connections sent through the proxy, the proxy will identify the header in the request itself and remove it prior to forwarding to the destination server. However, when sent over HTTPS, the `Proxy-Authorization` header must be sent in the CONNECT request, as the proxy has no visibility into further tunneled requests. This results in Requests forwarding the header to the destination server unintentionally, allowing a malicious actor to potentially exfiltrate those credentials. The reason this currently works for HTTPS connections in Requests is that the `Proxy-Authorization` header is also handled by urllib3 with our usage of the `ProxyManager` in `adapters.py` with `proxy_manager_for`. This computes the required proxy headers in `proxy_headers` and passes them to the `ProxyManager`, avoiding attaching them directly to the `Request` object. This will be our preferred option going forward for default usage.<br>**Patches:** Starting in Requests v2.31.0, Requests will no longer attach this header to redirects with an HTTPS destination. This should have no negative impact on the default behavior of the library, as the proxy credentials are already properly handled by urllib3's `ProxyManager`. For users with custom adapters, this may be potentially breaking if you were already working around this behavior. The previous functionality of `rebuild_proxies` doesn't make sense in any case, so we would encourage any users impacted to migrate any handling of `Proxy-Authorization` directly into their custom adapter.<br>**Workarounds:** For users who are not able to update Requests immediately, there is one potential workaround. You may disable redirects by setting `allow_redirects` to `False` on all calls through Requests' top-level APIs. Note that if you're currently relying on redirect behaviors, you will need to capture the 3xx response codes and ensure a new request is made to the redirect destination (an illustrative sketch follows the table). `import requests`<br>`r = requests.get('http://github.com/', allow_redirects=False)`<br>**Credits:** This vulnerability was discovered and disclosed by the following individuals: Dennis Brinkrolf, Haxolot (https://haxolot.com/) and Tobias Funke ([email protected]). |
| requests:2.20.0:2.20.0 | CVE-2024-35195 | Medium | When making requests through a Requests `Session`, if the first request is made with `verify=False` to disable cert verification, all subsequent requests to the same origin will continue to ignore cert verification regardless of changes to the value of `verify`. This behavior will continue for the lifecycle of the connection in the connection pool.<br>**Remediation:** Any of these options can be used to remediate the current issue; we highly recommend upgrading as the preferred mitigation.<br>- Upgrade to `requests>=2.32.0`.<br>- For `requests<2.32.0`, avoid setting `verify=False` for the first request to a host while using a Requests `Session`.<br>- For `requests<2.32.0`, call `close()` on `Session` objects to clear existing connections if `verify=False` is used (an illustrative sketch follows the table).<br>**Related links:** psf/requests#6655 |
| urllib3:1.24.1:1.24.1 | CVE-2025-50181 | Medium | urllib3 handles redirects and retries using the same mechanism, which is controlled by the `Retry` object. The most common way to disable redirects is at the request level, as follows:<br>`resp = urllib3.request("GET", "https://httpbin.org/redirect/1", redirect=False)`<br>`print(resp.status)  # 302`<br>However, it is also possible to disable redirects, for all requests, by instantiating a `PoolManager` and specifying `retries` in a way that disables redirects:<br>`import urllib3`<br>`http = urllib3.PoolManager(retries=0)  # should raise MaxRetryError on redirect`<br>`http = urllib3.PoolManager(retries=urllib3.Retry(redirect=0))  # equivalent to the above`<br>`http = urllib3.PoolManager(retries=False)  # should return the first response`<br>`resp = http.request("GET", "https://httpbin.org/redirect/1")`<br>However, the `retries` parameter is currently ignored, which means all the above examples don't disable redirects.<br>**Affected usages:** Passing `retries` on `PoolManager` instantiation to disable redirects or restrict their number. By default, requests and botocore users are not affected.<br>**Impact:** Redirects are often used to exploit SSRF vulnerabilities. An application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the `PoolManager` level will remain vulnerable.<br>**Remediation:** You can remediate this vulnerability with the following steps:<br>- Upgrade to a patched version of urllib3. If your organization would benefit from the continued support of urllib3 1.x, please contact [email protected] to discuss sponsorship or contribution opportunities.<br>- Disable redirects at the `request()` level instead of the `PoolManager()` level (an illustrative sketch follows the table). |
| urllib3:1.24.1:1.24.1 | CVE-2018-25091 | Medium | urllib3 before 1.24.2 does not remove the `Authorization` HTTP header when following a cross-origin redirect (i.e., a redirect that differs in host, port, or scheme). This can allow for credentials in the `Authorization` header to be exposed to unintended hosts or transmitted in cleartext. NOTE: this issue exists because of an incomplete fix for CVE-2018-20060 (which was case-sensitive). |
| urllib3:1.24.1:1.24.1 | CVE-2019-11324 | High | The urllib3 library before 1.24.2 for Python mishandles certain cases where the desired set of CA certificates is different from the OS store of CA certificates, which results in SSL connections succeeding in situations where a verification failure is the correct outcome. This is related to use of the `ssl_context`, `ca_certs`, or `ca_certs_dir` argument. |
| urllib3:1.24.1:1.24.1 | CVE-2019-11236 | Medium | In the urllib3 library through 1.24.2 for Python, CRLF injection is possible if the attacker controls the request parameter. |
| urllib3:1.24.1:1.24.1 | CVE-2020-26137 | Medium | urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of `putrequest()`. NOTE: this is similar to CVE-2020-26116. |
| urllib3:1.24.1:1.24.1 | CVE-2024-37891 | Medium | When using urllib3's proxy support with `ProxyManager`, the `Proxy-Authorization` header is only sent to the configured proxy, as expected. However, when sending HTTP requests without using urllib3's proxy support, it's possible to accidentally configure the `Proxy-Authorization` header even though it won't have any effect, as the request is not using a forwarding proxy or a tunneling proxy. In those cases, urllib3 doesn't treat the `Proxy-Authorization` HTTP header as one carrying authentication material and thus doesn't strip the header on cross-origin redirects. Because this is a highly unlikely scenario, we believe the severity of this vulnerability is low for almost all users. Out of an abundance of caution, urllib3 will automatically strip the `Proxy-Authorization` header during cross-origin redirects to avoid the small chance that users are doing this by accident. Users should use urllib3's proxy support or disable automatic redirects to achieve safe processing of the `Proxy-Authorization` header, but we still decided to strip the header by default in order to further protect users who aren't using the correct approach.<br>**Affected usages:** We believe the number of usages affected by this advisory is low. It requires all of the following to be true to be exploited:<br>- Setting the `Proxy-Authorization` header without using urllib3's built-in proxy support.<br>- Not disabling HTTP redirects.<br>- Either not using an HTTPS origin server, or the proxy or target origin redirecting to a malicious origin.<br>**Remediation:**<br>- Use the `Proxy-Authorization` header with urllib3's `ProxyManager`.<br>- Disable HTTP redirects using `redirect=False` when sending requests.<br>- Do not use the `Proxy-Authorization` header. |
| urllib3:1.24.1:1.24.1 | CVE-2023-43804 | High | urllib3 doesn't treat the `Cookie` HTTP header specially or provide any helpers for managing cookies over HTTP; that is the responsibility of the user. However, it is possible for a user to specify a `Cookie` header and unknowingly leak information via HTTP redirects to a different origin if that user doesn't disable redirects explicitly. Users must handle redirects themselves instead of relying on urllib3's automatic redirects to achieve safe processing of the `Cookie` header, thus we decided to strip the header by default in order to further protect users who aren't using the correct approach.<br>**Affected usages:** We believe the number of usages affected by this advisory is low. It requires all of the following to be true to be exploited:<br>- Using an affected version of urllib3 (patched in v1.26.17 and v2.0.6).<br>- Using the `Cookie` header on requests, which is mostly typical for impersonating a browser.<br>- Not disabling HTTP redirects.<br>- Either not using HTTPS, or the origin server redirecting to a malicious origin.<br>**Remediation:**<br>- Upgrade to at least urllib3 v1.26.17 or v2.0.6.<br>- Disable HTTP redirects using `redirect=False` when sending requests.<br>- Do not use the `Cookie` header. |
| urllib3:1.24.1:1.24.1 | CVE-2023-45803 | Medium | urllib3 previously wouldn't remove the HTTP request body when an HTTP redirect response using status 303 "See Other" changed the request's method from one that could accept a request body (like POST) to GET, as is required by HTTP RFCs. Although the behavior of removing the request body is not specified in the section on redirects, it can be inferred by piecing together information from different sections, and the behavior has been observed in other major HTTP client implementations like curl and web browsers. From RFC 9110 Section 9.3.1: "A client SHOULD NOT generate content in a GET request unless it is made directly to an origin server that has previously indicated, in or out of band, that such a request has a purpose and will be adequately supported."<br>**Affected usages:** Because the vulnerability requires a previously trusted service to become compromised in order to have an impact on confidentiality, we believe the exploitability of this vulnerability is low. Additionally, many users aren't putting sensitive data in HTTP request bodies; if this is the case, then this vulnerability isn't exploitable. Both of the following conditions must be true to be affected by this vulnerability:<br>- You're using urllib3 and submitting sensitive information in the HTTP request body (such as form data or JSON).<br>- The origin service is compromised and starts redirecting using 303 to a malicious peer, or the redirected-to service becomes compromised.<br>**Remediation:** You can remediate this vulnerability with any of the following steps (an illustrative sketch follows the table):<br>- Upgrade to a patched version of urllib3 (v1.26.18 or v2.0.7).<br>- Disable redirects for services that you aren't expecting to respond with redirects, using `redirect=False`.<br>- Disable automatic redirects with `redirect=False` and handle 303 redirects manually by stripping the HTTP request body. |
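
For the PyYAML issue (CVE-2020-14343), the usual guidance when handling untrusted input is to prefer `yaml.safe_load` over `full_load`/`FullLoader`. A minimal sketch, not part of this PR's changes; the inline YAML document is a placeholder:

```python
import yaml

# safe_load only builds plain Python types (dicts, lists, strings, numbers),
# so tags such as !!python/object/new are never resolved into object
# construction, regardless of the PyYAML version in use.
data = yaml.safe_load("a: 1\nb: [2, 3]")
print(data)  # {'a': 1, 'b': [2, 3]}
```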
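
For the Werkzeug multipart issues (CVE-2024-49767 and CVE-2023-25577), the advisories point at `Request.max_content_length`, `Request.max_form_memory_size`, and deployment-level limits. A minimal sketch of wiring these into a Flask app through a custom request class; the `LimitedRequest` name and the limit values are illustrative, and `max_form_parts` assumes Werkzeug >= 2.2.3:

```python
from flask import Flask, Request


class LimitedRequest(Request):
    # Illustrative limits; tune them for your deployment.
    max_content_length = 16 * 1024 * 1024   # cap the total request body (bytes)
    max_form_memory_size = 500 * 1024       # cap memory for non-file form fields
    max_form_parts = 1000                   # cap multipart part count (Werkzeug >= 2.2.3)


app = Flask(__name__)
app.request_class = LimitedRequest
```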
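
For cryptography CVE-2023-23931, patched versions raise an exception when `update_into` is given an immutable buffer. A minimal sketch of the intended usage with a mutable `bytearray`; the zero key and plaintext mirror the advisory's example and are placeholders, and the call assumes a cryptography version recent enough that the backend argument can be omitted:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = b"\x00" * 32   # placeholder key, as in the advisory's example
data = b"\x00" * 16  # one AES block of placeholder plaintext

# update_into needs a *mutable* buffer of at least len(data) + 15 bytes for
# AES; 32 bytes is used here to mirror the advisory.
buf = bytearray(32)

# ECB is used only to mirror the advisory; it is not a recommended mode.
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
written = encryptor.update_into(data, buf)
ciphertext = bytes(buf[:written])
```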
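
For Flask CVE-2023-30861, condition 5 in the table notes that an explicit `Cache-Control` header breaks the chain of conditions. A defense-in-depth sketch for applications still behind a caching proxy on an unpatched Flask; the hook and header value are illustrative, not the upstream fix (which adds `Vary: Cookie` whenever the session is refreshed):

```python
from flask import Flask

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder


@app.after_request
def mark_responses_private(response):
    # Tell caching proxies not to store responses that may carry per-user
    # session data. Patched Flask versions also emit "Vary: Cookie" whenever
    # the session cookie is refreshed; this is only a belt-and-braces measure.
    response.headers.setdefault("Cache-Control", "private, no-store")
    return response
```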
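
For requests CVE-2024-47081, the table's workaround is `trust_env=False` on a `Session`. A minimal sketch; the URL is a placeholder, and note that `trust_env=False` also disables proxy configuration from environment variables:

```python
import requests

session = requests.Session()
# Stops Requests from consulting ~/.netrc (and proxy environment variables),
# so stored credentials cannot be attached to a maliciously crafted URL on
# versions older than 2.32.4.
session.trust_env = False

resp = session.get("https://example.com/")  # placeholder URL
```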
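
For requests CVE-2023-32681, the workaround disables redirects and follows 3xx responses by hand. A minimal sketch built on the advisory's `allow_redirects=False` example; the single-hop follow-up is illustrative (a real client would cap the number of hops and resolve relative `Location` values):

```python
import requests

# With redirects disabled, Proxy-Authorization is never re-attached to a
# request aimed at the redirect target.
resp = requests.get("http://github.com/", allow_redirects=False)

if resp.is_redirect or resp.is_permanent_redirect:
    # Follow a single hop manually; Location may be relative in general,
    # so join it against the original URL if needed.
    resp = requests.get(resp.headers["Location"], allow_redirects=False)
```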
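
For requests CVE-2024-35195, the table's third remediation bullet suggests calling `close()` on the `Session` after any `verify=False` call on older versions. A minimal sketch; the URLs are placeholders and upgrading to requests >= 2.32.0 remains the preferred fix:

```python
import requests

session = requests.Session()

# On requests < 2.32.0 this first call poisons the pooled connection:
# later requests to the same origin would silently skip verification too.
session.get("https://example.com/health", verify=False)  # placeholder URL

# Dropping the pooled connections makes the next request verify again.
session.close()
resp = session.get("https://example.com/data", verify=True)  # placeholder URL
```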
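
For urllib3 CVE-2025-50181, the remediation is to disable redirects per request rather than on the `PoolManager`, since affected 1.x versions ignore `retries` passed at construction. A minimal sketch reusing the advisory's httpbin.org URL:

```python
import urllib3

http = urllib3.PoolManager()

# redirect=False is honored at the request level even on affected versions,
# so the 302 is returned to the caller instead of being followed.
resp = http.request("GET", "https://httpbin.org/redirect/1", redirect=False)
print(resp.status)  # 302
```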
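
For urllib3 CVE-2023-45803 (and, by the same mechanism, CVE-2023-43804 and CVE-2024-37891), one remediation is to disable automatic redirects and handle 303 responses manually, dropping the request body and any sensitive headers. A minimal sketch; the URL and body are placeholders:

```python
import urllib3

http = urllib3.PoolManager()

resp = http.request(
    "POST",
    "https://example.com/submit",   # placeholder URL
    body=b"sensitive-form-data",     # placeholder body
    redirect=False,                  # never follow redirects automatically
)

if resp.status == 303:
    # Re-issue as GET with no body (and without sensitive headers),
    # in line with RFC 9110 section 9.3.1.
    location = resp.headers["Location"]
    resp = http.request("GET", location, redirect=False)
```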

You can merge this PR once the tests pass and the changes are reviewed.

Thank you for reviewing the update! 🚀
