Users were dissatisfied with the speed of the service.
No surprise. They usually are.
The application delivered static resources to the client and JSON-encoded data via a REST interface. The underlying data was stored in a relational database, managed from Java using jOOQ. All the right technologies were applied to make the service as fast as possible, yet the performance was still not accepted by the users. Users claimed that the system was slow, unusable, annoying, dead as a fish frozen in a lake (yes, that was actually one of the expressions we got in the ticketing system). We were aware that “unusable” was something of an exaggeration: after all, there were thousands of queries running through the system daily. But “slow” and “annoying” are not measurable terms, not to mention “dead fish”. First things first: we had to measure!
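(For the record, the stack itself was nothing exotic. The data layer looked roughly like the sketch below; this is only an illustration of querying a relational database from Java through jOOQ. The in-memory H2 database and the orders table are made up for the example; the real application had its own schema and used jOOQ's generated classes.)

```java
import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

import java.sql.Connection;
import java.sql.DriverManager;

public class OrderQuery {
    public static void main(String[] args) throws Exception {
        // An in-memory H2 database stands in for the real relational store.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            DSLContext ctx = DSL.using(conn, SQLDialect.H2);
            ctx.execute("create table orders (id int primary key, status varchar(20))");
            ctx.execute("insert into orders values (1, 'OPEN'), (2, 'CLOSED')");

            // Query built through the jOOQ DSL; a real project would use the
            // generated table and field classes instead of plain SQL names.
            Result<Record> open = ctx.select()
                    .from(DSL.table("orders"))
                    .where(DSL.field("status", String.class).eq("OPEN"))
                    .fetch();
            open.forEach(r -> System.out.println(r.get("ID") + " is open"));
        }
    }
}
```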
Meeting the requirements may not be enough.
The actual measurements showed that the response times were in line with the requirements, so we created a report showing everything good and shiny, hoping that this would settle the story. We presented the results to the management and we almost got fired. They were not interested in measurements and response-time milliseconds. All they cared about was user satisfaction. (By the way: at this point I understood why the name is “user acceptance test” and not “customer acceptance test”.) We were bluntly directed not to mess with useless measurements but to go and stand by some of the users and experience with our own eyes how slow the system was. It was a kind of shock. Standing by a user and “feeling” the system speed was not what we considered an engineering approach. But having nothing else in hand, we did. And it worked!
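Measuring on the server side was not rocket science. Something like the following servlet filter around the REST endpoints is enough to record how long each request takes; the class and the logging here are only a sketch of the idea (assuming the Servlet 4.0 API, where init() and destroy() have default implementations), not the project's actual code.

```java
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;

// Records the elapsed time of every request passing through it.
public class ResponseTimeFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.nanoTime();
        try {
            chain.doFilter(req, res);
        } finally {
            long millis = (System.nanoTime() - start) / 1_000_000;
            String uri = req instanceof HttpServletRequest
                    ? ((HttpServletRequest) req).getRequestURI()
                    : "non-http";
            // The real system aggregated these numbers into a report;
            // here they are simply printed.
            System.out.println(uri + " served in " + millis + " ms");
        }
    }
}
```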
We could see that some of the users were impatient. They clicked on a button, and when nothing happened after a second, they clicked on it again. It meant that the browser was sending a request to the server, but before the response arrived the communication was cancelled on the client side and the request was sent again. Processing started from zero with the second button press, but the wait time for the user accumulated. The remedy was on the client side: improve the user interface so that an impatient second click could not fire off a duplicate request while the first one was still being processed.
After a week or so we executed the measurement again. It was not a big effort, since all the tooling was already there. What we saw was that the 10% difference between the number of transactions measured on the client and on the server had practically vanished. These were probably the transactions where the user pressed the button a second time: each one was a full processing run on the server side, but it was not reported by the client, since the transaction, as well as the measurement on the client side, was cancelled. These got eliminated with the improved user interface, which also decreased the load on the server by 10% and finally resulted in slightly faster response times.
Usual disclaimers apply.
Reference: Optimize the client for the server’s sake from our JCG partner Peter Verhas at the Java Deep blog.