Simon Fell > Its just code > Internet web services - compression matters!
We've long been saying that one of the most important things you can do to improve the performance of your Sforce integration is to make sure you're using gzip compression on the HTTP request/response. There's been a lot of talk both internally and externally this week about performance, so I wanted to see exactly how much difference the various recommendations make. I put together a fairly simple Sforce client application in .NET that exports all the data for a single sObject to a local CSV file. It has options to control gzip, batch sizes, and HTTP keep-alives. In addition, it implements a "read-ahead" for query/queryMore calls, where you make the next queryMore call in parallel with processing the last set of results. (The Sforce Office Toolkit also implements this.) You can grab just the binary and/or source and check it out yourself. I was fairly shocked by some of the numbers I saw.
|Options|Run times (seconds)|
|---|---|
|(defaults)|16, 16, 15|
|-nokeepalive|16, 19, 16|
|-nokeepalive -batchsize 100|22, 26, 25|
|-batchsize 100|19, 18, 20|
|-noreadahead|35, 16, 85, 15|
|-nogzip|106, 105, 102|
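The read-ahead trick is easy to sketch: while you're writing out the current batch of records, a background worker is already fetching the next one. Here's a minimal illustration in Python rather than the .NET original; the `client`, `query`/`query_more`, and `writer` objects are stand-ins for the real Sforce API, not its actual signatures:

```python
from concurrent.futures import ThreadPoolExecutor

def export_all(client, writer):
    """Fetch batches with query/queryMore, requesting the next batch
    in parallel with processing the current one (the "read-ahead")."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        result = client.query()  # first batch
        while True:
            # Kick off the next queryMore before we start processing,
            # so the network round trip overlaps the CSV writing.
            future = None
            if not result.done:
                future = pool.submit(client.query_more, result.locator)
            writer.write_rows(result.records)  # process current batch
            if future is None:
                break
            result = future.result()  # next batch, fetched in parallel
```

With a decent batch size this hides most of the queryMore latency, which is why the -noreadahead runs are so much more erratic.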
Things were pretty much as I expected: not using HTTP keep-alives becomes more expensive as you make more round trips, and bigger batch sizes mean fewer round trips and therefore better throughput. The shocker, though, is how much difference turning off gzip makes; the gzip case is a whopping 6x faster overall. If you do nothing else, make sure you're using gzip compression, and I think it's safe to say that anyone offering web services over the internet should make sure they're supporting HTTP compression as well. Now if only the tools would support it better.
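If your toolkit doesn't do it for you, opting in is just a matter of advertising gzip support and inflating the reply when the server actually compressed it. A hedged sketch in Python (the .NET exporter does the equivalent; header names here are the standard HTTP ones, everything else is illustrative):

```python
import gzip
import urllib.request

def maybe_decompress(headers, body):
    """Inflate the body only if the server said it gzipped it."""
    if headers.get("Content-Encoding") == "gzip":
        return gzip.decompress(body)
    return body

def fetch(url):
    """GET a URL, telling the server we can accept a gzipped response."""
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        return maybe_decompress(resp.headers, resp.read())
```

SOAP payloads are wordy XML, which compresses extremely well, so the bytes on the wire shrink dramatically; that's where the 6x comes from.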
SforceExporter username password sObjectType [-nogzip] [-nokeepalive] [-noreadahead] [-batchsize XXXX] [-url http://some.url/]