The ship has very much sailed now with ballot SC63, and this is the result, but I still don't think CRLs are remotely a perfect solution (nor do I think OCSP was unfixable). You run into so many problems with them: their size, updates not propagating immediately, and so on. It's just an ugly solution to the problem, one that you then have to layer further hacks (Bloom filters) on top of to make the whole mess work. I'm glad that Mozilla have done lots of work in this area with CRLite, but it does all feel like a bodge.
The advantages of OCSP were that you got a real-time understanding of the status of a certificate and you had no need to download large CRLs which become stale very quickly. If you set security.ocsp.require in the browser appropriately then you didn't have any risk of the browser failing open, either. I did that in the browser I was daily-driving for years and can count on one hand the number of times I ran into OCSP responder outages.
The privacy concerns could have been solved through adoption of Must-Staple, and you could then operate the OCSP responders purely for web servers and folks doing research.
And let's not pretend users aren't already sending all the hostnames they are visiting to their selected DNS server. Why is that somehow okay, but OCSP not?
The problem with requiring OCSP stapling is that it's not practically enforceable without breakage.
The underlying dynamic of any change to the Web ecosystem is that it has to be incrementally deployable, in the sense that when element A changes it doesn't experience breakage with the existing ecosystem. At present, approximately no Web servers do OCSP stapling, so any browser which requires it will just not work. In the past, when browsers wanted to make changes like this, they have had to give years of warning, and then they can only actually make the change once nearly the entire ecosystem has switched, so that breakage is minimal. This is a huge effort and only worth doing when you have a real problem.
As a reference point, it took something like 7 years to disable SHA-1 in browsers [0], and that was an easier problem because (1) CAs were already transitioning, (2) it didn't require any change to the servers, unlike OCSP stapling, which requires them to regularly fetch OCSP responses [1], and (3) there was a clear security reason to make the change. By contrast, with Firefox's introduction of CRLite, all the major browsers now have some central revocation system, which works today as opposed to years from now and doesn't require any change to the servers.
[0] https://security.googleblog.com/2014/09/gradually-sunsetting...
[1] As an aside it's not clear that OCSP stapling is better than short-lived certs.
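To make the deployment gap concrete, here's a minimal Go sketch (assuming the golang.org/x/crypto/ocsp package; the hostname is illustrative) of the client-side check for a stapled OCSP response. Against most servers today the staple simply isn't there, which is the breakage problem described above.

```go
// Sketch: connect over TLS and see whether the server stapled an OCSP
// response. Assumes golang.org/x/crypto/ocsp; the hostname is illustrative.
package main

import (
	"crypto/tls"
	"fmt"
	"log"

	"golang.org/x/crypto/ocsp"
)

func main() {
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.OCSPResponse) == 0 {
		fmt.Println("no stapled OCSP response (the common case today)")
		return
	}

	// The staple is signed by (or on behalf of) the issuing CA, so it is
	// verified against the issuer certificate, not the leaf.
	certs := state.PeerCertificates
	if len(certs) < 2 {
		log.Fatal("server did not send an issuer certificate")
	}
	resp, err := ocsp.ParseResponseForCert(state.OCSPResponse, certs[0], certs[1])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("stapled OCSP status=%d, thisUpdate=%s, nextUpdate=%s\n",
		resp.Status, resp.ThisUpdate, resp.NextUpdate)
}
```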
I think you are correct. There were similar issues with Firefox rolling out SameSite=Lax by default, and I think those plans are now indefinitely on hold as a result of the breakage it caused. It's a hard problem to solve.
> As an aside it's not clear that OCSP stapling is better than short-lived certs.
I agree this should be the end goal, really.
OCSP stapling, when done correctly with fallback issuance, is just a worse solution than short-lived certificates. OCSP response lifetimes are 10 days. I wrote about this a bit here [1].
[1]: https://dadrian.io/blog/posts/revocation-aint-no-thang/
> Why is that somehow okay, but OCSP not?
I think the argument isn’t that it’s okay, but that one bad thing doesn’t mean we should do two bad things. Just because my DNS provider can see my domain requests doesn’t mean I also want arbitrary CAs on the Internet to also see them.
I never understood why they didn't try to push OCSP into DNS.
You have to trust the DNS server more than the server you are reaching out to, as the DNS server can direct you anywhere and already sees everything you are trying to access anyway.
TLS is to protect you from malicious actors somewhere along your connection path. DNS can't help you.
Just imagine you succeeded in inventing a perfectly secure DNS server. Great, we know this IP address we just got back is the correct one for the server.
Ok, then I go to make a connection to that IP address, but someone on hop 3 of my connection is malicious and, instead of connecting me to that IP, just sends back a response pretending to be from it. How would I discover this? TLS would protect me from this; perfectly secure DNS wouldn't.
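A minimal Go sketch of that point (the name and placeholder address are illustrative): even if DNS handed back the right IP, the TLS client only accepts a peer that can present a certificate chain valid for the expected name, which the impersonator at hop 3 can't do without the site's private key.

```go
// Sketch: dial a specific IP but verify the peer against the expected
// hostname. The address below is a documentation placeholder standing in
// for "whatever IP DNS returned".
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conf := &tls.Config{ServerName: "example.com"} // name we expect, regardless of IP
	conn, err := tls.Dial("tcp", "198.51.100.10:443", conf)
	if err != nil {
		// An on-path impersonator ends up here: it cannot complete a
		// handshake with a chain that validates for example.com.
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()
	fmt.Println("verified chain for", conn.ConnectionState().PeerCertificates[0].Subject.CommonName)
}
```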
Because one of the main things TLS is intended to defend against is malicious / MITM'd DNS servers? If DNS were trustworthy then the entire TLS PKI would be redundant...
How would that work in the current reality of the DNS? The current reality is that it’s unauthenticated and indeterminately forwarded/cached, neither of which screams success for timely, authentic OCSP responses.
Similarly to how OCSP stapling was supposed to work.
“Supposed to” being operative, I think!
It's funny that putting some random records in DNS is enough "ownership" to get a cert issued, but we can't use the same method for publishing revocations.
The entire existence of CAs is a pointless and mystical venture to ensure centralized control of the Internet that, now that it is entirely domain-validated, provides absolutely no security benefit over DNS. If your domain registrar/name server provider is compromised, CAs are already a lost cause.
This isn't correct, because your domain name server may be insecure even while the one used by the CA is secure. Moreover, CT helps detect misissuance but does not detect incorrect responses by your resolver.
The DNS is more centralized than the WebPKI.
Three browser companies on the west coast of the US effectively control all decision-making for the WebPKI. The entire membership of the CA/B Forum is what, a few dozen? Mostly companies which have no reason to exist except serving math equations for rent.
How many companies now run TLDs? Yeah, .com is centralized, but between ccTLDs, new TLDs, etc., tons. And domain registrars and web hosts which provide DNS services? Thousands. And importantly, hosting companies and DNS providers are trivially easy to change between.
The idea that Apple or Google can unilaterally decide what the baseline requirements should be needs to be understood as an existential threat to the Internet.
And again, every single requirement CAs implement is irrelevant if someone can log into your web host. The whole thing is an emperor-has-no-clothes situation.
Incoherent. Browser vendors exert control by dint of controlling the browsers themselves, and are in the picture regardless of the trust system used for TLS. The question is, which is more centralized: the current WebPKI, which you say is also completely dependent on the DNS but involves more companies, or the DNS itself, which is axiomatically fewer companies?
I always love when people bring the ccTLDs into these discussions, as if Google could leave .COM when .COM's utterly unaccountable ownership manipulates the DNS to intercept Google Mail.
"And let's not pretend users aren't already sending all the hostnames they are visiting to their selected DNS server. Why is that somehow okay, but OCSP not?"
Running your own DNS server is rather easier than messing with OCSP. You do at least have a choice, even if it is bloody complicated.
SSL certs (and I refuse to call them TLS) will soon have a required lifetime of forty-something days. OCSP and the rest become moot.
You are still reaching out to the authoritative servers for that domain, so someone other than the destination knows what you are looking for.
The 47-day lifetime isn't going to come until 2029, and it might get pushed.
Also 47 days is still too long if certificates are compromised.
You can request 6-day certificates from Let's Encrypt. There's a clear path towards 24-hour certificates. This will be pretty much equivalent to the current status quo with OCSP stapling.
This will not impact Chrome in any meaningful way because - in typical Google fashion - they invented their own bullshit called CRLSets, which does not perform OCSP or CRL checks in any way, but rather periodically downloads a pruned blacklist from Google, which it then uses to screen certificates.
Most people don't realize this.
It's quite insane given that Chrome will by default not check CRLs *at all* for internal, enterprise CAs.
What in the Sam Hill? This is a new one on me. Does anyone have any reading on their reasoning for why?
They can be enabled by policy. An enterprise will usually have a policy installing its custom CA.
https://www.chromium.org/Home/chromium-security/crlsets/
See here for some of the why: https://www.imperialviolet.org/2012/02/05/crlsets.html
I vaguely remember there was similar reasoning for why golang doesn't support them: they're "antiquated and useless".
I hit that roadblock a lot when trying to do mTLS in the browser - that, and dropping support for the [KeyGen](https://www.w3docs.com/learn-html/html-keygen-tag.html) tag.
What are you on about? Literally all browsers have adopted the same strategy. It's not some secret they're trying to hide from you. It's a good thing.
Sort of. Firefox doesn't filter the list of revoked certificates the way Chrome does.
Good. OCSP sucks. It's a fail-open design, and the fact that it exists means that a lot of security people have developed an auto-response for certificate lifetime problems, even in domains where OCSP is totally infeasible, like secure boot.
I can patiently explain why a ROM cannot query a fucking remote service for a certificate's validity, but it's a lot easier to just say "Look OCSP sucks, and Let's Encrypt stopped supporting it", especially to the types of people I argue with about these things.
Good riddance. That was always a power grab if I ever saw it. They should take it out of browsers too.
Does this mean I should turn "security.OCSP.require" back off in Firefox?
For Let’s Encrypt certificates which no longer contain an OCSP URL, I don’t think it’ll have any effect.
Firefox also already has more effective revocation checking in CRLite: https://blog.mozilla.org/en/firefox/crlite/
OCSP has always represented a terrible design. If clients require it, then it becomes just an override on the notAfter date included in the certificate, one that requires online access to the CA's responder. If it is not required, then it is useless, because blocking the OCSP responses is well within the capabilities of any man-in-the-middle attacker, and it makes the responders themselves DDoS targets.
The alternative to the privacy nightmare is OCSP stapling, which has the first problem once again - it adds complexity to the protocol just to add an override of the notAfter attribute, when the notAfter attribute could be updated just as easily with the original protocol, by reissuing the certificate. It was a Band-Aid on the highly manual process of certificate issuance that once dominated the space.
Good riddance to OCSP; I for one will not miss it.
Shortening the certificate lifespan to e.g. 24h would have a number of downsides:
- Certificate volume in Certificate Transparency would increase a lot, adding load to the logs and making it even harder to follow CT.
- Issues with domain validation would turn into an outage after 24h rather than when the cert expires, which could be a benefit in some cases (invalidating old certs quickly if a domain changes owner or is recovered after a compromise/hijack).
- OCSP is simpler and has fewer dependencies than issuance (no need to do multi-perspective domain validation and the interaction with CT), so keeping it highly available should be easier than keeping issuance highly available.
With stapling (which would have been required for privacy) often poorly implemented and rarely deployed, and with browsers not requiring OCSP, this was a sensible decision.
Well, OCSP is dead, so the real argument is over how low certificate lifetimes will be, not whether or not we might make a go of OCSP.
> would increase a lot
You can delete old logs or come up with a way to download the same thing with less disk space. Even if the current architecture does not scale, we can always change it.
> even harder to follow CT.
It should be no harder to follow than before.
Following CT (without relying on a third party service) right now is a scale problem, and increasing scale by at least another order of magnitude will make it worse.
I was trying to process CT logs locally. I gave up when I realized that I'd be looking at over a week even if I optimized my software to the point that it could process the data at 1 Gbps (and the logs were providing the data at that rate), and that was a while ago.
With the current issuance rate, it's barely feasible to locally scan the CT logs with a lot of patience if you have a 1 Gbps line.
https://letsencrypt.org/2025/08/14/rfc-6962-logs-eol states "The current storage size of a CT log shard is between 7 and 10 terabytes". So that's a day at 1 Gbps for one single log shard of one operator, ignoring overhead.
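The back-of-the-envelope arithmetic behind that "a day" figure, as a sketch using the 10 TB upper bound quoted above:

```go
// Rough transfer-time estimate for one 10 TB CT log shard over a 1 Gbps
// line, ignoring protocol overhead. Numbers come from the post cited above.
package main

import "fmt"

func main() {
	const shardBytes = 10e12   // 10 TB
	const linkBitsPerSec = 1e9 // 1 Gbps
	seconds := shardBytes * 8 / linkBitsPerSec
	fmt.Printf("%.1f hours\n", seconds/3600) // prints about 22.2 hours
}
```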
You could extend the format to account for the repetition of otherwise identical short-TTL certs.
OCSP stapling was a good solution in the age of certificates that were valid for 10 years (which was the case for basic HTTPS certificates back in 2011 when OCSP stapling was introduced). In the age of 90 day certificates (to be reduced to a maximum of 47 days in a few years), it's not quite as necessary any more, but I don't think OCSP stapling is that problematic a solution.
Certificates in air-gapped networks are problematic, but that problem can be solved with dedicated CRL-only certificate roots that suffer all of the downsides of CRLs for cases where OCSP stapling isn't available.
Nobody will miss OCSP now that it's dead, but assuming you used stapling I think it was a decent solution to a difficult problem that plagued the web for more than a decade and a half.
But that 47-day lifetime is enforced by the certificate authority, not by the browser, right? So a bad actor can still issue a multi-year certificate for itself, and in the absence of side-channel verification the browser is none the wiser. Or will browsers be instructed to reject long-lived certificates under specific conditions?
Wrong. Enforcement is done by the browser. Yes, a CA's certificate policy may govern how long a certificate they will issue. But should an error occur and a long-lived cert be issued (even maliciously), the browser will reject it.
The browser-CA cartels stay relatively in sync.
You can verify this for yourself by creating and trusting a local CA and trying to issue a 5-year certificate. It won't work. You'll have a valid cert, but it won't be trusted by the browser unless the lifetime is below their arbitrary limit. Yet that certificate would continue to be valid for non-browser purposes.
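If you want to run that experiment yourself, here's a minimal Go sketch (standard library only; subjects and filenames are illustrative, and exporting the private keys needed to actually serve the leaf is left out for brevity). Trust ca.pem in the browser, serve the leaf, and see how the lifetime is treated.

```go
// Sketch: create a local root CA and a leaf certificate with a 5-year
// lifetime, writing both out as PEM for a browser trust-store experiment.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Self-signed root CA.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "Local Test Root"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Leaf certificate with a 5-year validity, signed by the root above.
	leafKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "test.localhost"},
		DNSNames:     []string{"test.localhost"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(5, 0, 0), // well past public-CA limits
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	check(err)

	check(os.WriteFile("ca.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: caDER}), 0o644))
	check(os.WriteFile("leaf.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER}), 0o644))
}
```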
I just did this with a 20-year certificate and it worked fine in Chrome and Firefox. That said, my understanding is that the browsers exempt custom roots from these kinds of policies, which are only meant to constrain the behavior of publicly trusted CAs.
Safari enforces a hard limit of just over two years.
> So a bad actor can still issue a multi-year certificate for itself, and in the absence of side-channel verification the browser is none the wiser.
How would a bad actor do that without a certificate authority being involved?
The browsers will verify, and every cert will be checked against transparency logs. You won't be able to hide a long-lived cert for very long.
> the not after attribute could be updated just as easily with the original protocol, reissuing the certificate.
That's not a viable solution if the server you want to verify is compromised. The point of CRLs and OCSP is exactly to ask the authority one level up, without the entity you want to verify being able to interfere.
In non-TLS uses of X.509 certificates, OCSP is still very much a thing, by the way, as there is no real alternative for longer-lived certificates.
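For those non-TLS uses, the mechanism is the classic one: ask the issuing CA's responder directly rather than the entity being verified. A minimal Go sketch (assuming golang.org/x/crypto/ocsp; loading the leaf and issuer certificates is elided):

```go
// Sketch of a direct OCSP query: the revocation question goes to the
// issuing CA's responder (the authority one level up), not to the server
// being verified. Assumes golang.org/x/crypto/ocsp.
package ocspcheck

import (
	"bytes"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"

	"golang.org/x/crypto/ocsp"
)

// checkOCSP asks the certificate's OCSP responder whether it has been revoked.
func checkOCSP(leaf, issuer *x509.Certificate) error {
	if len(leaf.OCSPServer) == 0 {
		return fmt.Errorf("certificate carries no OCSP responder URL")
	}
	req, err := ocsp.CreateRequest(leaf, issuer, nil)
	if err != nil {
		return err
	}
	httpResp, err := http.Post(leaf.OCSPServer[0], "application/ocsp-request", bytes.NewReader(req))
	if err != nil {
		return err
	}
	defer httpResp.Body.Close()
	body, err := io.ReadAll(httpResp.Body)
	if err != nil {
		return err
	}
	resp, err := ocsp.ParseResponseForCert(body, leaf, issuer)
	if err != nil {
		return err
	}
	if resp.Status == ocsp.Revoked {
		return fmt.Errorf("certificate revoked at %s", resp.RevokedAt)
	}
	log.Printf("OCSP status: %d (good=%d)", resp.Status, ocsp.Good)
	return nil
}
```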
Actually, that's pretty close to where we're going with ever-shorter certificate lifetimes...
In this scenario, where OCSP is required and stapled: the CA can simply refuse to reissue the certificate if the host is compromised. It does not matter whether it is refusing to issue an OCSP response or a new short-lived cert.