HTTP attack on Azure

We are going to break the web server, filling it with bunches of HTTP requests, slowly drowning everything around it in an HTTP flood, and watching the complete degradation. Get ready, Azure, this is no laughing matter!





To be a little more serious: while working through the standard hands-on labs of AZ-900, Microsoft Azure Fundamentals, I decided to see what one of the smallest virtual machine sizes, Standard B1s (1 GiB RAM, 1 vCPU), is capable of.





In the standard labs, a web server such as Apache or IIS is installed on the virtual machine, a simple site is launched, and that is where it all ends. That did not feel like enough of an acquaintance, so I became curious how the server would respond to a large number of requests, what would happen to the response time and, most importantly, whether resizing the virtual machine would improve things. In addition, to give the server something to worry about, WordPress (Apache, MySQL, PHP) was set up on a virtual machine running Ubuntu. For the test, a Python script was used that continuously generated GET requests to the same address.
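The single-request part of such a script can be sketched roughly like this (a minimal standard-library version; the URL and the `probe` helper are placeholders for illustration, not the original code):

```python
import time
import urllib.request

# Placeholder address; the article's actual WordPress URL is not given.
URL = "http://example.com/"

def probe(url, count=10, timeout=10.0):
    """Send `count` sequential GETs and return each response time in seconds."""
    timings = []
    for _ in range(count):
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()  # drain the body so the timing covers the full transfer
        timings.append(time.monotonic() - start)
    return timings
```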





In the case of single requests, the server response time did not exceed 300-400 ms, which looked quite acceptable for such a configuration.





Another matter is how the server reacts to bulk traffic, when GETs arrive in batches. In Python, such parallel requests can be implemented with the concurrent.futures module, which provides a high-level interface for asynchronous calls. The idea of the implementation was inspired by the creativedata.stream resource. As a result, the server was attacked by waves of GET requests with a linearly increasing number of concurrent requests. For clarity, the response time of each request was capped at 10 seconds. Each attempt sent 5000 requests, with a 3-minute pause between attempts.
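A hedged sketch of such a flood loop, assuming the 10-second cap and 5000 requests per attempt described above; `timed_get` and `flood` are illustrative names, not taken from the original script:

```python
import concurrent.futures
import time
import urllib.request

def timed_get(url, timeout=10.0):
    """Return one GET's response time in seconds, or -1.0 on timeout/error."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
        return time.monotonic() - start
    except Exception:
        return -1.0

def flood(url, workers, total=5000):
    """Send `total` GETs with `workers` parallel threads; summarize the attempt."""
    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(timed_get, [url] * total))
    ok = [t for t in results if t >= 0]
    return {
        "test_time": time.monotonic() - start,
        "avg_response": sum(ok) / len(ok) if ok else None,
        "max_response": max(ok) if ok else None,
        "refusals": len(results) - len(ok),  # requests that failed or hit the cap
    }
```

A call like `flood(url, workers=40)` then yields the test time, average and maximum response times, and refusal count of one attempt.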





The test results for the Standard B1s VM are shown in the table below.

| Number of parallel GET requests | Test time (s) | Average response time (s) | Maximum response time (s) | Number of refusals |
|---|---|---|---|---|
| 10 | 246 | 0.482504 | 1.393406 | 0 |
| 20 | 183 | 0.716227 | 1.775027 | 0 |
| 30 | 158 | 0.925803 | 2.239563 | 0 |
| 40 | 133 | 1.028995 | 10.389413 | 4773 |

At 40 parallel requests the server could no longer answer "200" 100% of the time: 4773 of the 5000 requests failed to complete within the 10-second limit.





Standard B1s Performance

So the limit has been found. The next step is to resize the virtual machine from Standard B1s to Standard B2s (4 GiB RAM, 2 vCPU). Will that help?
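Resizing can be done from the Azure portal; for reference, the same operation via the Azure CLI looks roughly like this (the resource group and VM names are placeholders, not from the article):

```shell
# Assumed names; substitute your own resource group and VM.
az vm resize \
  --resource-group myResourceGroup \
  --name myVM \
  --size Standard_B2s
```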





The testing methodology remained the same, except that the number of requests per attempt was increased to 10000.





The test results for the Standard B2s VM:

| Number of parallel GET requests | Test time (s) | Average response time (s) | Maximum response time (s) | Number of refusals |
|---|---|---|---|---|
| 20 | 198 | 0.387310 | 1.377070 | 0 |
| 40 | 171 | 0.660414 | 1.481950 | 0 |
| 60 | 140 | 0.808657 | 1.674038 | 0 |
| 80 | 130 | 1.001915 | 2.142157 | 0 |
| 100 | 119 | 1.163476 | 2.252231 | 0 |
| 120 | 119 | 1.417223 | 2.703418 | 0 |
| 140 | 119 | 1.654639 | 2.98774 | 0 |
| 160 | 119 | 1.901040 | 5.622294 | 0 |





This time the server held up: even at 160 parallel requests there was not a single refusal, although the average and maximum response times kept growing.





Standard B2s Performance
Standard B2s Monitoring

At 160 parallel requests the network traffic reached about 5 Mb/s.





Room for conclusions

This HTTP-flood test and its current implementation make no claim to conquering space or following gold standards. Still, the tests showed the expected direct relationship between the number of concurrent requests and the load on CPU and memory, as well as the average and maximum response times.





Apparently, a server with a larger amount of RAM (4 GiB versus 1 GiB) copes with this kind of load better: even with a 5-fold increase in the number of parallel requests (160 versus 30), it reliably responds with 200 OK!





PS





An example of the test utility is available in my repository on GitHub.







