Caching in blocks
=================

$arla: caching-in-blocks,v 1.3 2000/12/10 23:08:29 lha Exp $

Why blockcache

- There is really just one reason to have a blockcache: you want to
  edit files larger than your blockcache.  The example below is with
  a cache size of 100M.

  : lha@nutcracker ; less kdc.3log
  kdc.3log: No space left on device
  : lha@nutcracker ; ls -l 'kdc.3log'
  -rw-r--r--  1 314  daemon  179922925 Sep 18 00:05 kdc.3log

- Speed is not really an issue, since most files are accessed from
  the head of the file to the tail.  For that access pattern you can
  use incremental opening of the file instead, which would be a less
  invasive change.

  Prior work: adaptive buffer cache, USENIX '99.

- This applies to both reading and writing.

Double buffering problem
========================

One scheme that might work: files that are accessed read-only are
served from the underlying file (and from that vnode's page cache),
while files that are written to are double buffered.  If the file has
been accessed before, all pages of both the node and the underlying
node are flushed first, so that we are double buffering from a
consistent state.  (A C sketch of this transition is given under
"Double buffering sketch" at the end of this note.)

Incremental open
================

This still doesn't solve the problem with large files; it only solves
the problem that it takes a long time to read the first few bytes of
a large file.

* Opening a file

  Wait until there is a token that you want, or until the wakeup
  returns an error.  (A C sketch of this sequence is given under
  "Incremental open sketch" below.)

  >open
               open
               getdata(wanted-offset)
               getdata(beginning-offset)
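
Double buffering sketch
=======================

A minimal sketch of the mode transition described under "Double
buffering problem": reads are served straight out of the backing
vnode's page cache, and the first write flushes both page caches
before double buffering begins.  All names here (struct xa_node,
flush_node_pages, the vnode type) are hypothetical and not taken
from the arla source.

    struct vnode;                       /* opaque kernel vnode */

    /* hypothetical: throw away every cached page of a vnode */
    extern void flush_node_pages(struct vnode *vn);

    struct xa_node {
        struct vnode *vn;               /* the node the user sees */
        struct vnode *backing_vn;       /* underlying cache-file vnode */
        int           write_mode;       /* nonzero once double buffered */
    };

    /*
     * Called the first time a node that has so far only been read
     * is opened for writing.  Both page caches are flushed so the
     * two copies start out consistent.
     */
    static void
    enter_write_mode(struct xa_node *node)
    {
        if (node->write_mode)
            return;                         /* already double buffered */

        flush_node_pages(node->vn);         /* drop pages the user saw */
        flush_node_pages(node->backing_vn); /* drop the backing pages */

        node->write_mode = 1;
    }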
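
Incremental open sketch
=======================

A minimal sketch of the open sequence in the diagram above: block
until the wanted token is granted (or the wakeup reports an error),
then fetch only the block at the offset the caller asked for plus
the beginning of the file, rather than the whole file.  Every name
(fcache_entry, wait_for_token, fetch_data, BLOCK_SIZE) is made up
for illustration; the real arla interfaces differ.

    #include <stddef.h>
    #include <stdint.h>

    struct fcache_entry;            /* per-file cache state */

    /* hypothetical: sleep until the wanted token is granted;
     * returns 0, or an error if the wakeup reports one */
    extern int wait_for_token(struct fcache_entry *e, int wanted_token);

    /* hypothetical: fetch len bytes at offset from the fileserver */
    extern int fetch_data(struct fcache_entry *e, uint64_t offset,
                          size_t len);

    #define BLOCK_SIZE (64 * 1024)  /* assumed cache block size */

    int
    incremental_open(struct fcache_entry *e, int wanted_token,
                     uint64_t wanted_offset)
    {
        int ret;

        /* wait until there is a token that we want, or the
         * wakeup returns an error */
        ret = wait_for_token(e, wanted_token);
        if (ret)
            return ret;

        /* getdata(wanted-offset): the block the caller is after */
        ret = fetch_data(e, wanted_offset, BLOCK_SIZE);
        if (ret)
            return ret;

        /* getdata(beginning-offset): the head of the file */
        return fetch_data(e, 0, BLOCK_SIZE);
    }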