As technology scales, the nominal value of a device parameter shrinks while its percentage variation grows. This leads to diminished performance scaling under the traditional synchronous IC design methodology, in which the critical path delay must not exceed the clock cycle time. Better-than-worst-case VLSI design, by contrast, allows the worst-case signal propagation delay to exceed the clock cycle time; logic correctness is preserved as long as every timing error is predicted or detected and a timing error avoidance or recovery scheme is in place. Moreover, because most timing-critical paths have only a tiny probability that a signal actually propagates through them, a better-than-worst-case design with such an avoidance/recovery scheme can achieve a net performance improvement. We achieve minimum-cost better-than-worst-case VLSI design by predicting a timing error occurrence from the side-inputs of a timing-critical path, each of which must take its non-controlling logic value for a signal to propagate through that path. Applied to the integer unit of a SPARC V8 architecture LEON2 processor at 45 nm, this technique yields roughly 40% performance improvement at virtually no cost in energy consumption or silicon area.
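The prediction principle can be illustrated with a small sketch. This is a hypothetical software model, not the published circuit: the gate types, the `NON_CONTROLLING` table, and the function names are illustrative assumptions. The idea it demonstrates is the one stated above: a transition can propagate through a timing-critical path only if every side-input along the path holds its non-controlling value, so observing those side-inputs is enough to predict whether a timing error may occur this cycle.

```python
# Non-controlling input values for common gates: a side-input at this value
# lets a transition on the other input pass through the gate.
NON_CONTROLLING = {"AND": 1, "NAND": 1, "OR": 0, "NOR": 0}

def path_sensitized(side_inputs):
    """side_inputs: list of (gate_type, value) pairs, one per gate on the
    timing-critical path. The path is sensitized (a signal can propagate
    end to end) only if every side-input is at its non-controlling value."""
    return all(value == NON_CONTROLLING[gate] for gate, value in side_inputs)

def predict_timing_error(side_inputs):
    # If the path is sensitized, the long path may be exercised this cycle,
    # so flag a predicted timing error; the avoidance scheme (e.g. stretching
    # the clock or stalling for a cycle) can then act before a wrong value
    # is latched. If any side-input is controlling, the path is blocked and
    # no timing error from this path is possible.
    return path_sensitized(side_inputs)

# Example: a critical path passing through an AND, an OR, and another AND gate.
print(predict_timing_error([("AND", 1), ("OR", 0), ("AND", 1)]))  # True: path sensitized
print(predict_timing_error([("AND", 0), ("OR", 0), ("AND", 1)]))  # False: blocked at first AND
```

In hardware, this check amounts to a small amount of monitoring logic on a handful of side-input nets per critical path, which is why the prediction comes at virtually no area or energy cost.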
We published this technique in a journal article with Intel Labs.