12:41 So if a service you're relying on has historically been up the vast majority of the time, then maybe it's okay to assume that and swallow a 500 error in the rare case it does go down, depending on how much effort it would take to actually test that failure path. I think that's a case-by-case call.
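A minimal sketch of what that graceful degradation might look like, assuming a hypothetical non-critical enrichment service; the URL, function name, timeout, and fallback value are all illustrative, not from the conversation:

```python
import logging

import requests

log = logging.getLogger(__name__)

# Hypothetical dependency that has historically been up the vast majority
# of the time; the URL is illustrative.
ENRICHMENT_URL = "https://enrichment.internal/api/v1/metadata"


def fetch_metadata(item_id: str) -> dict:
    """Return enrichment metadata, or an empty dict if the service is down.

    Because the dependency is almost always up and is not on the critical
    path, we log the rare failure and degrade gracefully instead of
    failing the whole request.
    """
    try:
        resp = requests.get(ENRICHMENT_URL, params={"id": item_id}, timeout=2)
        resp.raise_for_status()  # raises on 4xx/5xx, including the rare 500
        return resp.json()
    except requests.RequestException as exc:
        # Swallow the rare 500 or timeout: callers can render without metadata.
        log.warning("enrichment service unavailable, continuing without it: %s", exc)
        return {}
```

Whether the empty-dict fallback is acceptable is exactly the case-by-case judgment being described: it trades a rare, silent loss of enrichment data for not having to build and test the failure path end to end.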
17:40 As far as I know, it has not been put into practice outside telecoms. It's also really expensive. So, circling back to the risk discussion: unless you have a whole lot of money riding on your system staying up, it's just not worth it. Whereas if you're a telecom, you're effectively an emergency service for the whole country, so you'd better stay up, and you throw money at making sure you do.
14:21 Yeah, a very clear example of this from my professional experience is our platform at work. If a particular portion that people are interacting with live goes down and we don't recover in under a minute, that's potentially a lot of money lost for both us and our customers. Whereas the analysis features, it's really nice if they're up, but it doesn't matter quite as much if they go down, because people can wait; it's not a timely thing. So we made a deliberate trade-off at one point: this slice of features is the slice where we really, really need to know if anything is going to break it, and that's where we devoted most of our effort, performance testing and robustness testing for every change that touches it. And then more small bugs slip through in the other part, but they aren't as impactful.
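One lightweight way to encode that kind of deliberate split is to tier tests by business risk, so the critical slice runs on every change while the rest runs on a slower cadence. A minimal sketch using pytest markers; the marker names and placeholder tests are hypothetical, assuming a Python codebase:

```python
# pytest.ini -- register the tiers so pytest doesn't warn about unknown markers:
# [pytest]
# markers =
#     critical: live, revenue-impacting slice; runs on every change
#     analysis: nice-to-have features; runs nightly, failures are tolerable

import pytest


@pytest.mark.critical
def test_live_slice_stays_responsive():
    # Placeholder: real tests here would exercise the live, customer-facing
    # path and assert it recovers well inside the one-minute budget.
    assert True


@pytest.mark.analysis
def test_analysis_export():
    # Placeholder: analysis features get coverage too, just not on every change.
    assert True
```

CI would then run `pytest -m critical` on every commit and the full suite nightly, which concentrates the expensive performance and robustness checks on the slice where an outage actually costs money.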