Continuing the conversation on Software Application Speed, I look at one of the means of improvement - Caching.
In this episode I introduce Caching - how prevalent it is within modern computing, why we use it in software development, the pros and cons, the dangers of staleness, and why it's an important business decision.
Or listen at:
Published: Wed, 21 Jul 2021 16:04:32 GMT
Hello and welcome back to the Better ROI from Software Development podcast.
In this episode, I want to continue talking about speed and I want to introduce the concept of Caching.
So what is Caching?
Let's start with the description from Wikipedia:
"a cache is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.
To be cost-effective and to enable efficient use of data, caches must be relatively small. Nevertheless, caches have proven themselves in many areas of computing, because typical computer applications access data with a high degree of locality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested already, and spatial locality, where data is requested that is stored physically close to data that has already been requested."
There's a lot there to cover, but the important thing we need to know is caching is everywhere. Your laptop will cache data in memory to avoid waiting on the disk because it's faster. Your laptop will download website images and cache the contents to disk to avoid reading it again from network - because it's faster.
Caching is an incredibly heavily used approach within computing.
So how does it work?
Let's talk about website images. When you load a page in your Web browser and there's a big image in the middle of it, the browser will check its cache to see if it already has that image.
And because, let's say it's the first time you've been to this website, it doesn't have it in its cache - thus, it's what we call a miss.
As such, the browser will then go and ask the web server for that image. It will then wait to download it. It will then save it into its cache. And it will show it to you, the user, via the browser.
If you subsequently want to view that page again, the browser will repeat those actions.
It will check its cache to see whether it has that image - this time it does - it gets a hit. As such, it then serves that image from its cache to you, the user, rather than going all the way out to the original web server.
Because the browser can retrieve that image from the cache on the second and subsequent attempts, it saves the time it would otherwise take to download it - serving from the cache is considerably faster than waiting for it to come back from the web server.
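For the technically curious, the hit/miss flow described here can be sketched in a few lines of Python. The `fake_download` helper and the URL are illustrative stand-ins, not real browser internals:

```python
# A minimal sketch of the cache-check logic described above. The cache is
# modelled as a plain dictionary; a real browser cache is far more
# sophisticated (disk storage, eviction policies, HTTP header rules).

cache = {}

def fetch_image(url, download):
    """Return the image for `url`, using the cache when possible."""
    if url in cache:          # cache hit: serve the stored copy
        return cache[url]
    data = download(url)      # cache miss: go to the web server
    cache[url] = data         # store it for subsequent requests
    return data

# Count how often the "web server" is actually contacted.
downloads = []

def fake_download(url):
    downloads.append(url)
    return b"image-bytes"

fetch_image("https://example.com/hero.png", fake_download)  # miss: downloads
fetch_image("https://example.com/hero.png", fake_download)  # hit: served locally
print(len(downloads))  # -> 1: the server was only contacted once
```

The first request is a miss and triggers a download; the second is a hit and never leaves the cache.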
As I mentioned in the introduction, caching can be used in many places, and even for a simple website, there are multiple places that caching can occur.
It can happen on the web server itself - it can cache the results of reading from the database or of an expensive computation, something it knows will take time and resources to recreate. So why repeat that work? Once you've got that result, store it in your cache so you can serve it again to subsequent requests.
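As a hedged sketch of that server-side idea, Python's built-in `functools.lru_cache` can memoise an expensive computation. The "expensive" work is simulated here; in practice it might be a database read or a report build:

```python
from functools import lru_cache

calls = 0  # count how often the real, expensive work is actually done

@lru_cache(maxsize=128)
def expensive_report(customer_id):
    """Pretend this hits a database or crunches numbers for several seconds."""
    global calls
    calls += 1
    return f"report-for-{customer_id}"

expensive_report(42)   # miss: the report is actually computed
expensive_report(42)   # hit: served from the cache, no recomputation
print(calls)  # -> 1
```

The second call returns the same result without redoing the work, which is exactly the saving described above.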
There can be caching on the browser, as I've described by the example of the image caching.
And it can be cached at various points in the network between the browser and the web server. In next week's episode, I'll talk about something called a Content Delivery Network, or CDN, which helps us to cache data closer to users. But more on that next episode.
So let's talk about some of the outcomes of using caching.
Let's start with the pros, why we want to use caching. On a hit, there's a speed improvement. There is also less data being transferred across the network. This makes it cheaper. The consumer benefits, in this case, the user of the Web browser. The supplier benefits, in this case, the owner of the website. Everybody benefits from being able to use that caching, whether it's speed or indeed cost of data being transferred across networks.
A miss will be slower - if we use caching, as opposed to just getting the image from the website every time, there's extra work to check whether that image is in the cache. This is largely negligible, so you can probably ignore it as a con, but it does exist.
Another potential downside is that it consumes space. In the case of your web browser, you'll be consuming space on your local disk for that website cache as you're loading images, pages and content from websites. And the same is true if there's caching involved on the web server - it too will be taking up space. Again, I would stress this is largely negligible as an impact - not something I would consider a reason not to use caching - but it does exist as a negative.
Bad caching - now, this is a real problem. This is where we're using the wrong key to specify our caching. The benefits of caching come in because the same piece of data is requested repeatedly, as such, it's much more efficient to serve it from that cache rather than compute or retrieve it from original storage over and over again.
So you need something that is going to be used and shown repeatedly. Take, for example, caching the logo on your website. Your logo gets shown on every single page and is seen by every single user - it will have a very high ratio of reads from the cache versus writes into the cache, because it's used lots of times. You're getting multiple hits for that same image.
If, however, your website has some very specific functionality, unique to an individual user - where it creates an image for that user at that specific time - it's such a unique image that it will never be seen again. That is a poor candidate for caching. Because it will only ever be seen once, you wouldn't want to cache it - you're not actually getting the benefit.
It comes back to that overhead of putting it into the cache, storing it in the cache, and ultimately removing it from the cache, if it's something that is only ever used once. Again, you want to think about what's known as a cache hit ratio - how often items will be read from the cache compared to how often they're stored.
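That hit ratio is simple arithmetic. A quick sketch, with the hit and miss counts invented purely for illustration:

```python
def hit_ratio(hits, misses):
    """Fraction of requests that were served from the cache."""
    total = hits + misses
    return hits / total if total else 0.0

# A logo shown on every page: stored once, read many times.
print(hit_ratio(hits=999, misses=1))   # -> 0.999 (excellent candidate)

# A one-off, user-specific image: stored once, never read again.
print(hit_ratio(hits=0, misses=1))     # -> 0.0 (caching it is pure overhead)
```

The closer the ratio is to 1, the more the cache is paying for itself.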
But the main thing I want to cover in this episode in terms of the cons, the downsides, is staleness.
Staleness is how long that item, say a website image, will stay in the cache. That can really matter depending on what you're caching.
Now, if it's your website logo, it's probably not that important. You can cache it so that it's there and retrievable time and time again. It's unlikely your logo will change.
But let's say, for example, you have a big hero banner at the top of your website promoting your latest deal - and your deal changes hourly. Now, that's great if you're caching for only a short period of time. But imagine if you're caching that image for, say, two hours. You've got customers that will not see this new deal because of the caching problem. They are not getting the most relevant image because they're taking from the cache - thus it's stale.
Now, there are ways of controlling how long something can stay in a cache. You may be able to set it to seconds, weeks, months, even years - but for any item that is being cached, whether at the website, on the server, in the customer's web browser, or anywhere in between, you have to think about how long you want to cache that piece of information for.
It may be that it changes rapidly and you may only want to cache it for maybe five seconds. It may be something that hardly ever changes or is very unlikely to change, in which case maybe you want to talk about months or possibly even years.
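Those expiry times are commonly implemented as a "time to live", or TTL. A minimal sketch of the idea in Python - the `TTLCache` class and the banner example are hypothetical illustrations, not a real library:

```python
import time

class TTLCache:
    """Toy time-to-live cache: entries go stale after `ttl_seconds`."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}

    def set(self, key, value):
        # Record the value together with the moment it expires.
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None                   # miss: never cached
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self.store[key]
            return None                   # miss: the entry has gone stale
        return value                      # hit: still fresh

# A fast-changing hero banner deserves a short TTL (0.1s here for demo).
banner_cache = TTLCache(ttl_seconds=0.1)
banner_cache.set("hero-banner", "Deal of the hour!")
print(banner_cache.get("hero-banner"))    # fresh: served from the cache
time.sleep(0.2)
print(banner_cache.get("hero-banner"))    # -> None: stale, must be refetched
```

Choosing that TTL number - five seconds versus two hours - is precisely the speed-versus-correctness decision discussed here.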
But that's the importance of why I want to raise this to you.
That decision over the pros of using cache and that potential staleness being a problem is a business decision - or at least it should be.
This is a similar conversation to the one in Episode 88, when I talked about Eventual Consistency and the CAP Theorem - there's a trade-off between speed and correctness. How quickly you can make the content of your website available, by taking the benefit of the cache, versus its correctness - because potentially what is in the cache is stale and no longer valid.
It's that trade-off over what is important - do you want the speed or the correctness? In the same way, when we talked about the CAP theorem, we were weighing availability against consistency.
These are often decisions left to the technical teams to make and implement on behalf of the business.
This one, in a similar way to the availability versus consistency trade-off within the CAP theorem, this speed versus correctness, I believe has a material business outcome. As such, I actually believe the business should be involved in this conversation. And you, as a business owner and a business leader, need to be aware of the trade-offs that you're making - or that your technical team are making on your behalf.
In this episode, I've given an introduction into caching. Caching is around us everywhere. It's used in almost every single piece of software and computer hardware we have. It's used everywhere as a means of performance, as a means of speed, as a means of avoiding repeating expensive operations - whether it be expensive computation or expensive activities to retrieve content.
And while it is a very, very valuable technique and is important in modern software applications, there is that danger of staleness. This is why, for me, it has to be something that is discussed with you as a business leader, as a business owner, so you can have that conversation about speed versus correctness and get the appropriate trade-offs.
In the next episode, I'm going to talk about the Content Delivery Network, the CDN, as I introduced earlier on in this episode.
Thank you for taking the time to listen to this episode and I look forward to speaking to you again next week.