19:53 Obviously we leverage a lot of auto-scaling to account for that burstiness, but not everything auto-scales. The biggest issues we run into in dealing with that burstiness come partly down to how fast we can get new EC2 nodes online, but mostly down to the few non-auto-scaling components within the pipeline. Take GCP as an example: we're using Pub/Sub there, and Pub/Sub is this beautifully elastic, auto-scaling system where you can throw as much as you want at it and it will scale to meet demand without any issues. Where we run into problems is on the flip side, in how we run on AWS, which is using Kinesis. Kinesis has a fairly fixed size, and Kafka or Azure Event Hubs have the same sorts of issues, where you've got much more fixed, shard-based ingress limitations.

There are two ways we tend to tackle that. One is that we've built our own proprietary auto-scaling tech that goes and reshards Kinesis as and when needed. But that tends to fall over quite quickly at higher shard counts: once you're getting into 100 or 200-plus shards, doing a resize can take anywhere up to 30-35 minutes, which is often far too slow for a very big burst in traffic. So in those cases we work with clients, look at their traffic patterns and where they're going to be evolving up to (we do quite a bit of trend analysis there), and we can say: well, if you want to keep the pipeline healthy, we're going to have to put this much of a buffer in place for this non-auto-scaling component, otherwise we're going to run into issues that are not quickly recoverable. This is obviously not the best strategy. Instead of a nice, elastic, auto-scaling architecture, you've suddenly got hard-coded capacity, which means we have to have around-the-clock ops availability to check for alerts, see when you're starting to reach those thresholds, and go and scale that up.

We're actively looking at alternatives at the moment for how we can swap out those systems for something a bit more Pub/Sub-esque, especially on Amazon. For example, could we swap out Kinesis for SQS and SNS to get that similar sort of elastic, auto-scaling queueing with fan-out, rather than leveraging something like Kinesis? On the streaming side, that's probably the biggest bottleneck; the rest of it is quite easy, and auto-scaling is generally quite fast.

The other area where we have issues is with downstream data stores that are by nature a lot more static in size. Snowflake has in a lot of ways solved that, and BigQuery obviously has solved it as well: you effectively get unlimited storage capacity, you can throw whatever you want in there, and it's backed by blob storage, so you have your data lake in that sense. Where we do run into issues, which Redshift is starting to address with the new instance types they've released, is that Redshift and Elasticsearch still serve as weak points in the architecture, because there is capped capacity.
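To go back to the streaming side for a moment: the resharding approach described above can be sketched in a few lines. This is a minimal illustration with boto3, not the proprietary auto-scaling tech mentioned in the interview; the stream name and the doubling policy are assumptions for the example.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")

def current_open_shards(stream_name: str) -> int:
    """Count open shards (closed parent shards linger after a reshard)."""
    summary = kinesis.describe_stream_summary(StreamName=stream_name)
    return summary["StreamDescriptionSummary"]["OpenShardCount"]

def scale_stream(stream_name: str, target_shards: int) -> None:
    """Request a uniform reshard to `target_shards` shards.

    UpdateShardCount is asynchronous and caps a single call at doubling
    the current shard count. At high shard counts (100-200+) the reshard
    can take tens of minutes to complete, which is why pre-provisioned
    headroom is still needed for sudden bursts.
    """
    kinesis.update_shard_count(
        StreamName=stream_name,
        TargetShardCount=target_shards,
        ScalingType="UNIFORM_SCALING",
    )

# Example: double capacity when utilisation crosses an alert threshold.
stream = "enriched-good"  # hypothetical stream name
scale_stream(stream, current_open_shards(stream) * 2)
```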
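And for the SQS/SNS alternative being evaluated, here is a minimal sketch of elastic fan-out with boto3: one SNS topic delivering into an SQS queue, with no shards to provision and nothing to resize under load. Topic and queue names are hypothetical.

```python
import json
import boto3

sns = boto3.client("sns", region_name="eu-west-1")
sqs = boto3.client("sqs", region_name="eu-west-1")

# Create the topic and one subscriber queue (names are illustrative).
topic_arn = sns.create_topic(Name="collected-events")["TopicArn"]
queue_url = sqs.create_queue(QueueName="enrich-input")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Allow the topic to deliver into the queue, then subscribe it.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        "Policy": json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "sns.amazonaws.com"},
                "Action": "sqs:SendMessage",
                "Resource": queue_arn,
                "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
            }],
        })
    },
)
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=queue_arn,
    Attributes={"RawMessageDelivery": "true"},
)

# Publishing to the topic now fans out to every subscribed queue.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"event": "page_view"}))
```

Additional queues (for example, a second consumer for archiving) can be subscribed to the same topic, which is the fan-out pattern the interview contrasts with Kinesis.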
And especially when you're looking at a streaming pipeline, where you want to stream data in as quickly as it arrives, big spikes in traffic can overwhelm CPU resources and can suddenly overwhelm the amount of provisioned capacity you have for these systems, which can cause service interruption and downtime. So again, we have around-the-clock teams waiting for those alerts to come in and upscale systems as and when we breach thresholds. The strategies there are to look at patterns: how has my tracking volume been evolving over the last months, how has the pipeline handled spikes in the past, and then to size things up with a healthy buffer to make sure that when these things happen again, you're covered. But there's a limit to what we can do, especially running so many of these systems, to try to catch every edge case, which is why we still need that around-the-clock ops team to deal with it.
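A minimal sketch of the sizing heuristic described here, assuming the only inputs are recent observed peaks and a buffer factor; the 1.5x factor and the sample numbers are illustrative assumptions, not recommended values.

```python
def recommended_capacity(observed_peaks: list[float], buffer_factor: float = 1.5) -> float:
    """Return the capacity to provision: worst recent peak plus headroom."""
    return max(observed_peaks) * buffer_factor

# e.g. peak events/second seen on each of the last seven days
daily_peaks = [1200, 950, 4100, 3800, 2200, 5000, 4700]
print(recommended_capacity(daily_peaks))  # -> 7500.0 events/second to provision
```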