Your team may be investing a lot of time in designing the most idiomatic API experience out there for developers. That's not enough these days: you also need to provide an API client. Here are 6 reasons why…
1. Batch requests
If your customers are ingesting or uploading data through your API, it's far more efficient to batch requests rather than make many individual ones. The benefits for the API provider are twofold:
- you can architect your API to be more pull-based rather than push-based: stash the request away and process the payload at your convenience.
- by reducing the HTTP overhead, you're able to do more work per request.
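To illustrate, here is a minimal Python sketch of client-side batching. The endpoint URL, the `events` payload shape and the default batch size are all hypothetical:

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/events"  # hypothetical batch endpoint

def chunk(records, size):
    # Split records into batches of at most `size` items each.
    return [records[i:i + size] for i in range(0, len(records), size)]

def upload(records, batch_size=100):
    # One HTTP round trip per batch instead of one per record.
    for batch in chunk(records, batch_size):
        req = urllib.request.Request(
            API_URL,
            data=json.dumps({"events": batch}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)  # response handling elided for brevity
```

Callers hand over individual records and the client amortizes the HTTP overhead across each batch.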
2. Caching
To keep your API speedy and performant, use caching whenever possible. The fastest request is no request. The second fastest simply checks whether the resource on the server has changed. For this you can make use of HTTP caching headers like Date, ETag and If-Modified-Since.
A proper caching implementation is a little more involved because you need to handle the entire request lifecycle: issue conditional requests (or a preliminary HEAD request) and correctly handle HTTP status code 304 (Not Modified).
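A sketch of how a client might wrap conditional requests, assuming the server returns an ETag header. The `fetch` callable is injected so the HTTP layer can be swapped out; its `(status, headers, body)` signature is illustrative:

```python
class ETagCache:
    def __init__(self, fetch):
        # `fetch(url, headers)` returns (status, resp_headers, body).
        self.fetch = fetch
        self.store = {}  # url -> (etag, body)

    def get(self, url):
        headers = {}
        cached = self.store.get(url)
        if cached:
            # Ask the server to skip the body if our copy is still fresh.
            headers["If-None-Match"] = cached[0]
        status, resp_headers, body = self.fetch(url, headers)
        if status == 304:
            return cached[1]  # Not Modified: serve the cached body
        etag = resp_headers.get("ETag")
        if etag:
            self.store[url] = (etag, body)
        return body
```

On a cache hit validated by a 304, the response carries no body at all, which is the next best thing to making no request.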
3. Throttling
Most APIs out there throttle access at the client or organization level. This prevents DDoS attacks, maintains quality of service and stops buggy scripts from unnecessarily taxing the system.
When one of your customers faces an HTTP status code 429 (Too Many Requests) or the like, there are no accepted practices on how to proceed. Your client can become the reference implementation for handling server-side throttling, or better still, actually throttle client-side.
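One way to throttle client-side is a token bucket; honoring Retry-After after a 429 is the complementary server-driven approach. The rates and defaults below are illustrative:

```python
import time

class TokenBucket:
    # Client-side throttle: allow at most `rate` requests per second,
    # with bursts up to `capacity`. The clock is injectable for testing.
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def retry_after_seconds(headers, default=1.0):
    # On a 429, respect the server's Retry-After header (seconds form).
    try:
        return float(headers.get("Retry-After"))
    except (TypeError, ValueError):
        return default
```

Callers check `allow()` before each request and sleep for `retry_after_seconds(...)` when the server pushes back anyway.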
4. Timeouts
At some point or another, your API is going to fail; such is the nature of networked applications. Sometimes the failure is due to factors outside your control, for example a dropped internet connection or an overloaded intermediate gateway.
This problem is especially exacerbated on mobile networks, where latencies are unusually high.
Setting appropriate connection and read timeouts is an often-overlooked detail, and getting it wrong leads to a poor user experience.
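A sketch of a GET helper with an explicit timeout, using Python's standard library; the 5-second default is arbitrary:

```python
import socket
import urllib.request
import urllib.error

def fetch(url, timeout=5.0):
    # The timeout bounds the connection attempt and each socket read;
    # without one, a stalled server or gateway can hang the client forever.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (urllib.error.URLError, socket.timeout) as exc:
        # Fail fast with a clear error instead of hanging; callers may retry.
        raise RuntimeError(f"request to {url} failed: {exc}") from exc
```

The right value depends on your latency profile; mobile clients typically warrant more generous read timeouts than server-to-server integrations.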
5. Gzip Compression
All modern browsers request gzipped content by default via the Accept-Encoding HTTP header. A new API client under development, however, typically uses a less mature HTTP stack, and gzip support is often an afterthought.
Serving uncompressed content is a waste for all parties, and there’s no valid reason to do so.
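A minimal sketch of requesting and decoding gzipped responses with the standard library:

```python
import gzip
import urllib.request

def decode_body(content_encoding, body):
    # Transparently inflate gzip-encoded responses; pass others through.
    if content_encoding == "gzip":
        return gzip.decompress(body)
    return body

def fetch_compressed(url):
    # Advertise gzip support explicitly; hand-rolled clients often forget to.
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        return decode_body(resp.headers.get("Content-Encoding"), resp.read())
```

Baking this into the official client means every integration gets compressed transfers for free.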
6. Retry on Error
By design, your API should be tolerant of errors. There are two classes of errors: 4xx for client-side and 5xx for server-side.
In exceptional cases, it makes sense to retry the call upon error. If your API has a mechanism to de-duplicate calls, your client can be the reference implementation of this feature.
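A sketch of retrying only server-side (5xx) failures with exponential backoff, reusing one idempotency key across attempts. The `Idempotency-Key` convention is an assumption about the API's de-duplication mechanism, and the `call` signature is illustrative:

```python
import random
import time
import uuid

def call_with_retry(call, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    # Retry only 5xx responses; a 4xx signals a client-side bug, so
    # retrying it would just repeat the same mistake.
    # The same key is sent on every attempt so the server can de-duplicate
    # (assuming it honors an Idempotency-Key or similar header).
    key = str(uuid.uuid4())
    for attempt in range(max_attempts):
        status, body = call(idempotency_key=key)
        if status < 500:
            return status, body
        if attempt < max_attempts - 1:
            # Exponential backoff with jitter to avoid thundering herds.
            sleep(base_delay * (2 ** attempt) * (1 + random.random()))
    return status, body
```

With this in the official client, transient outages become invisible to most integrations instead of surfacing as support tickets.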
In my experience, it's a lot of work to write a well-behaved API client. It's in your best interest to provide a vendor-approved client, both to protect your API and to save your customers a lot of integration busy-work.
Photo Credit: comedy_nose.