Prior art for "Style and layout caching of web content"



Microsoft's application aims to patent:

Methods and systems for style and/or layout caching of Web content are usable to build reusable style caching trees and cacheable layout calculations. Such style caching trees may be used to avoid recalculating style content of Web pages for document object model (DOM) elements that have not changed. Additionally, the cacheable layout calculations may be used to avoid recalculating the layout content of Web pages that are subsequently accessed.

Claim 1 is:

A computer-implemented method comprising: performed by one or more processors executing computer-readable instructions:

  • receiving a Web page file;
  • parsing the Web page file to create a document object model (DOM) tree comprising DOM tree nodes;
  • constructing a style caching tree comprising structure information of the DOM tree;
  • storing the style caching tree in a memory;
  • constructing a render tree comprising render objects based at least in part on the structure information of the DOM tree;
  • performing a layout calculation for render objects;
  • and storing the layout calculation results in the memory.
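Read literally, the steps of claim 1 describe the standard parse → style → render → layout pipeline that every browser engine implements. A minimal toy sketch of those steps (all class and function names are hypothetical illustrations, not taken from the patent or from any engine's actual code):

```python
# Toy sketch of the steps in claim 1. All names are hypothetical
# illustrations, not the patent's or any browser engine's actual API.

class DOMNode:
    def __init__(self, tag, children=()):
        self.tag = tag
        self.children = list(children)

def parse(web_page_file):
    # Stand-in for a real HTML parser: builds a fixed DOM tree.
    return DOMNode("html", [DOMNode("body", [DOMNode("p"), DOMNode("p")])])

def build_style_caching_tree(node):
    # "Structure information of the DOM tree": here, just tag names and shape.
    return (node.tag, tuple(build_style_caching_tree(c) for c in node.children))

def build_render_tree(node):
    # One render object per DOM node, based on the structure information.
    return {"tag": node.tag, "children": [build_render_tree(c) for c in node.children]}

def layout(render_obj, y=0):
    # Trivial block layout: each object is 20px tall, children stacked below.
    render_obj["y"] = y
    height = 20
    for child in render_obj["children"]:
        height += layout(child, y + height)
    render_obj["height"] = height
    return height

dom = parse("index.html")
memory = {}                                        # "storing ... in a memory"
memory["style_caching_tree"] = build_style_caching_tree(dom)
render_tree = build_render_tree(dom)
memory["layout_results"] = layout(render_tree)     # stored layout calculation
```

Note that nothing here involves reuse across page loads; that is exactly why, as the comments below discuss, these steps alone look like what any engine already does.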

Can prior art be demonstrated from the WebKit and Gecko source trees from several years ago?


Posted 2012-09-20T16:05:17.783

Reputation: 261



Claim 1 doesn't seem novel; everything in it seems to be described in an article from October 2009 (before the October 2010 filing date of this patent):



Reputation: 231

I think the theme of this patent, which you didn't address, is the reuse of calculations across different pages or page changes. The answer to Where else you gonna put it?!? is: You don't put it anywhere. You discard it and rebuild it for every page and for every page change that occurs. The patent is about caching, not simply storing data in memory. – Allon Guralnek – 2012-09-20T22:02:53.963
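The distinction drawn here, caching (reusing results across page loads) versus merely storing (rebuilding on every load), can be sketched as a layout function memoized on page content. A toy model with hypothetical names, not any engine's behavior:

```python
import functools

# Toy model of reuse across page loads: layout results keyed by page
# content, so an unchanged page skips recalculation entirely.
# Hypothetical illustration only, not any browser's actual mechanism.

layout_calls = 0

@functools.lru_cache(maxsize=None)
def cached_layout(page_content):
    global layout_calls
    layout_calls += 1                  # count actual recalculations
    return len(page_content) * 20      # stand-in for a real layout pass

cached_layout("<p>hello</p>")          # first load: computed
cached_layout("<p>hello</p>")          # reload, unchanged: served from cache
cached_layout("<p>changed</p>")        # changed page: recomputed
```

Without the `lru_cache` decorator, the same three calls would run the layout pass three times: that is the discard-and-rebuild behavior described above.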

Caching layout calculation between pages is an obvious idea, but as far as I know, none of the major browsers do it because it's a bad idea. Which means lack of prior art can't establish non-obviousness. Layout calculation code is already very fast, but it's plagued with bugs, which caching would make much worse. It also opens a timing attack side-channel, which would probably end up breaking some poor website's security. – jimrandomh – 2012-09-21T04:51:05.450

@AllonGuralnek That's the idea of the patent, but it starts at claim 8. Claims 1–7 do not contain the idea of reuse. My impression is that d2vid's prior art shows that claims 1–7 are not novel, but do not suffice to show that claims 8ff. are not novel. – Gilles 'SO- stop being evil' – 2012-09-21T21:35:19.297

@jimrandomh: It may indeed be a bad idea, but what's wrong with that? If I want to patent a square wheel, there would be no harm in having it granted to me. No one would shape their wheels square, so I would have no one to patent-troll. Everyone is happy. – Allon Guralnek – 2012-09-21T22:09:53.160

@AllonGuralnek The entire point of looking for prior art is that it's a proxy for obviousness: if something's obvious there's probably prior art, if there's no prior art it's probably not obvious. But when you start talking about square wheels, or perpetual motion machines, that no longer holds; it can be obvious, but have no prior art. Where there's a problem is if, some time in the future, circumstances change such that it becomes a good idea, or becomes possible. Then someone's left with a patent on something obvious, because the wrong test was applied. – jimrandomh – 2012-09-21T22:31:47.447

@AllonGuralnek Usefulness is a criterion for granting a patent, so if caching is a bad idea in this instance then that could invalidate the application:

– d2vid – 2012-09-28T03:42:13.460


One could argue there is prior art in the server-side "widget" cache of phpNuke.

The term "cache" can be ambiguous. The browser has stored the page in memory since Netscape 1.0 or earlier, which is a form of cache.

Additionally, modifying DOM nodes without re-rendering the whole page has been done for a long time, certainly since before the 2010 date Microsoft cites in the patent.
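The partial re-rendering referred to here can be modeled with a dirty flag: when one node changes, only its subtree is laid out again, not the whole page. A toy sketch with hypothetical names, not how any particular engine tracks invalidation:

```python
# Toy dirty-flag model: changing one node relays out only that subtree,
# not the whole page. Hypothetical sketch, not any engine's actual code.

class Box:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.dirty = True              # everything needs layout initially
        self.layouts = 0               # how many times this box was laid out

    def mark_dirty(self):
        self.dirty = True

    def layout(self):
        if self.dirty:
            self.layouts += 1          # "recalculate" this box
            for child in self.children:
                child.mark_dirty()
                child.layout()
            self.dirty = False

sidebar = Box("sidebar")
article = Box("article")
body = Box("body", [sidebar, article])

body.layout()                          # initial full layout of the page
article.mark_dirty()
article.layout()                       # DOM change: only article recomputed
```

After the second pass, the sidebar and body have been laid out once and the article twice, which is the "don't re-render the whole page" behavior the answer describes.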



Reputation: 21

"modifying DOM nodes without re-rendering the whole page has been done for a long time" - source for this claim? – Joshua Drake – 2012-09-20T18:59:35.970

This has been around since the Netscape days.

– John – 2012-09-20T19:03:10.803

Aside from the comment about "an implementation must at least buffer the document that has been read so far (or some parsed form of it)." which is followed by a citation needed tag, I do not see authoritative support for your claim. I don't disagree that the patent may be deeply flawed, I just don't see the kind of sourcing I'm used to on SE. – Joshua Drake – 2012-09-20T19:13:55.287


There may be prior art inherent in the product of the late, lamented FineGround.

The product was a reverse proxy you could put in front of your web site to improve performance for end users. In addition to stuff like caching gzipped copies of images, it could analyze HTML pages, determine which parts changed on subsequent loads and which didn't, and inject JavaScript into the page so that end browsers would cache most of the page and only reload the changes from the server.
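The technique described, comparing successive versions of a page and shipping only the changed parts, can be approximated with a plain text diff. A rough sketch using Python's `difflib`; FineGround's actual product was of course far more sophisticated than this:

```python
import difflib

# Rough sketch of delta encoding between two versions of a page: the
# proxy keeps the old version, diffs it against the new one, and sends
# only the changed lines. Illustrative only; not FineGround's algorithm.

old_page = ["<html>", "<h1>News</h1>", "<p>Story A</p>", "</html>"]
new_page = ["<html>", "<h1>News</h1>", "<p>Story B</p>", "</html>"]

# Keep only insertions ("+ ") and deletions ("- "); unchanged lines and
# ndiff's "? " hint lines are dropped.
delta = [d for d in difflib.ndiff(old_page, new_page)
         if d.startswith(("+ ", "- "))]
```

Here only one of the four lines changed, so the delta the end browser would need to fetch is just the removed and inserted line; the rest stays cached client-side, as the answer describes.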



Reputation: 121