Virtual memory question

Some people say to set it to the same value for both, but first you need to see how much of it you actually use. Some people say 1.5x your RAM amount.

Check your PF usage while using your computer a lot and judge based on that.
 
I set it at 1536. I had it at 2560 because someone told me it's 2.5x the amount of memory. The reason I was wondering is that I upgraded from a Barton 2500 @ 2.4 GHz and the boot time is about 20 seconds longer than on the Barton rig, and I don't know why. I read somewhere that a lot of virtual memory causes longer boot times. Setting it to 1536 didn't help, and I think it made boot a bit longer.
 
You are referring to the pagefile, not virtual memory. MS uses the term incorrectly in their UI; the only place they seem to use it correctly is in MSDN articles.

VM, like the name suggests, is the virtualization of memory addresses. Each process sees its own set of memory addresses; on a 32-bit system there are 4GB worth. 2GB is reserved by the NT kernel and the other 2GB is available for the process to use.
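
If you want to see that split for yourself, here is a minimal C sketch (my addition, not anything from the MSDN articles) using the standard Win32 GetSystemInfo call. On a stock 32-bit system the range it prints spans roughly the lower 2GB:

/* Minimal sketch: print the user-mode address range the kernel gives
 * this process. On a stock 32-bit system it runs from about 64KB up
 * to just under 2GB -- the user half of the 4GB virtual space. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    printf("Lowest  usable address: %p\n", si.lpMinimumApplicationAddress);
    printf("Highest usable address: %p\n", si.lpMaximumApplicationAddress);
    return 0;
}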

The pagefile is just a backing store for some data. NT requires everything in memory to have a backing store on disk so that it can free up that memory if it ever needs to. Most things can be paged back in from their original files (i.e. executables, shared libraries, unchanged files, etc.), but any data in memory that has been altered needs a place to go on disk, and that place is the pagefile.
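
You can watch that backing store at work with another short C sketch. Take it as an illustration: GlobalMemoryStatusEx reports the commit limit and available commit (physical RAM and pagefile combined), not the literal size of pagefile.sys.

/* Minimal sketch: query the system's commit figures. The "PageFile"
 * fields are really total/available commit (RAM + pagefile together),
 * which is exactly the pool the backing-store rule above is about. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };  /* dwLength must be set */
    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    printf("Physical RAM    : %llu MB\n",
           (unsigned long long)(ms.ullTotalPhys / (1024 * 1024)));
    printf("Commit limit    : %llu MB\n",
           (unsigned long long)(ms.ullTotalPageFile / (1024 * 1024)));
    printf("Commit available: %llu MB\n",
           (unsigned long long)(ms.ullAvailPageFile / (1024 * 1024)));
    return 0;
}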

As for the optimal pagefile size, it all depends on how you use your computer. To find the best size, first run your most intensive applications: use your computer for a while with every program open that you might have open at one time, and maybe play a few games. After this, either use perfmon to measure PF usage, or use this. The latter is easier and means I don't have to explain as much.

Now that you know the PF usage, it is time to set the pagefile size. The initial size should be set to 4x the observed PF usage, and the max should be 2x that number.
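
To make that rule concrete, here is a rough C sketch (mine, assuming the 4x/2x figures above) that reads the peak commit charge since boot via GetPerformanceInfo, which is the figure Task Manager's "PF Usage" graph tracks, and prints suggested sizes:

/* Minimal sketch: apply "initial = 4x observed PF usage, max = 2x
 * the initial". CommitPeak is the highest commit charge since boot,
 * in pages. Link against psapi.lib (gcc: -lpsapi). */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    if (!GetPerformanceInfo(&pi, sizeof(pi)))
        return 1;

    SIZE_T peakMB = pi.CommitPeak * pi.PageSize / (1024 * 1024);
    printf("Peak PF usage since boot: %lu MB\n", (unsigned long)peakMB);
    printf("Suggested initial size  : %lu MB\n", (unsigned long)(peakMB * 4));
    printf("Suggested maximum size  : %lu MB\n", (unsigned long)(peakMB * 8));
    return 0;
}

An observed peak of about 375MB, for instance, gives the 1500/3000 figures in the next reply.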
 
Alright, according to you I should set the initial size to 1500 and the max to 3000. Couldn't I just set both to 3000? Would that be better or worse? Also, I still cannot fix my insane boot time. I did my usual routine to get the fastest boot times (run spyware/anti-virus scans, disk cleanup, delete prefetch, defrag, reboot), still to no avail.
 
It would be worse, since a file that large could increase seek times to other files on that partition and it is unnecessary to have it that large.

Also, why in the world do you delete the Prefetch files? This does nothing to improve boot times. It actually would do the exact opposite.
 
Deleting the prefetch files actually shaved 3 seconds off my boot time. It was recommended to me by a good buddy of mine and an experienced computer guy. He's doing a Mormon mission or something on the East Coast, so I can't ask for his advice anymore :/
 
Meathead said:
Deleting the prefetch files actually shaved 3 seconds off my boot time. It was recommended to me by a good buddy of mine and an experienced computer guy. He's doing a Mormon mission or something on the East Coast, so I can't ask for his advice anymore :/

The Prefetch directory holds the results of Windows analyzing the files you use at startup and when you run programs, which it uses to speed up those processes. Inside the Prefetch directory are indexes to file locations on disk and the order in which they were loaded. So, does it make any sense to clean it periodically? No, it doesn't. Just ignore your "good buddy" and "experienced computer guy," and completely ignore anybody who tells you to do some bogus "tweak" to improve performance.

Here, you can do even more reading:

Windows XP: Kernel Improvements Create a More Robust, Powerful, and Scalable OS - Read under "Prefetch"
http://msdn.microsoft.com/msdnmag/issues/01/12/XPKernel/default.aspx

Misinformation and the Prefetch Flag
http://blogs.msdn.com/ryanmy/archive/2005/05/25/421882.aspx

Here is what the second link says:

It is a bad idea to periodically clean out that folder as some tech sites suggest. For one thing, XP will just re-create that data anyways; secondly, it trims the files anyways if there’s ever more than 128 of them so that it doesn’t needlessly consume space. So not only is deleting the directory totally unnecessary, but you’re also putting a temporary dent in your PC’s performance.

Just totally forget bogus "tweaks" people tell you to do. Even if it comes from a "good buddy" or "experienced computer guy," it is most likely bad advice.
 
BrutalDrew said:
Just totally forget bogus "tweaks" people tell you to do. Even if it comes from a "good buddy" or "experienced computer guy," it is most likely bad advice.


Uh-huh. Likely some info they got off some bogus tweak site anyway; 99% of "tweak" sites are there for click-throughs and offer no real-world performance gains.
 
BrutalDrew, you have some good input, and I especially like the links you provided. Some of your input is also slightly off base, though.

It's a generally held belief by those in the know that you should set the pagefile to equal min and max values. Doing so forces the pagefile to reside in a static area of the disk, rather than fragmenting all over the drive and increasing seek times. This fragmentation occurs when Windows automatically adjusts the pagefile size. If there is enough free space on the disk and it is defragmented, setting equal min and max can also ensure that the pagefile resides in a contiguous section of the drive. I always make sure that my pagefile occupies a single contiguous section, rather than being laid out pagefile-data-pagefile.
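
For reference, the min/max pair that dialog sets ends up in the registry as a REG_MULTI_SZ value. Here is a minimal read-only C sketch (standard XP location, so it's safe to try) that prints it:

/* Minimal sketch: read the configured pagefile line(s). Each string
 * looks like "C:\pagefile.sys 1536 1536" (path, initial MB, max MB);
 * equal numbers are what a static pagefile looks like here. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    HKEY key;
    char buf[1024];
    DWORD size = sizeof(buf), type = 0;
    char *p;

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
            0, KEY_READ, &key) != ERROR_SUCCESS)
        return 1;

    if (RegQueryValueExA(key, "PagingFiles", NULL, &type,
            (LPBYTE)buf, &size) == ERROR_SUCCESS && type == REG_MULTI_SZ) {
        /* REG_MULTI_SZ: NUL-separated strings, double-NUL terminated. */
        for (p = buf; *p; p += strlen(p) + 1)
            printf("%s\n", p);
    }

    RegCloseKey(key);
    return 0;
}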

If you are using applications which saturate your RAM and make extensive use of the pagefile, it would also do you favors to move the pagefile onto a non-system disk on a separate channel. Some of the relevant factors are detailed here:

http://faq.storagereview.com/tiki-index.php?page=MultiplePagefiles
 
Windows only resizes the pagefile when absolutely necessary. If the initial size is set large enough (and it is, if you use my recommendation), resizing will never be necessary, so the pagefile will never grow and thus will not become fragmented.

Also, it would take a very extreme degree of pagefile fragmentation to make any difference in performance. This is because Windows NEVER reads or writes more than 64KB per buffer to the pagefile, and it almost never reads or writes the pagefile in sequential 64KB chunks. So after reading or writing one such buffer it WILL have to move the heads, regardless of whether the pagefile is fragmented. These pagefile I/Os will also be interspersed with I/Os to many other files, and in between ALL such I/Os the heads have to move anyway. This is why fragmentation of individual files matters far less than many people want you to believe.

If you are using applications which saturate your RAM and make extensive use of the pagefile, it would also do you favors to move the pagefile onto a non-system disk on a separate channel.

I agree with this. For best performance the pagefile should be on the least-used drive and the most-used partition. If you only have one drive, the pagefile should be on the partition with your OS; splitting them apart would only increase your average seek distance, thus decreasing performance. Your applications should also be on this partition for the same reason.
 
The way I look at it (and I have done this for years with no problems): I start the pagefile at the same size as my RAM for the minimum, and if I have under 1GB of RAM, I set the max pagefile to 1GB. If I have 1GB of RAM or more, I set the pagefile to 1GB min/max.
 
Windows only resizes the pagefile when absolutely necessary? An interesting thought in theory, but in practical application it just doesn't hold true. Windows pops all sorts of things into the pagefile as it sees fit; WINDOWS determines what is absolutely necessary. We all know Windows isn't infallible, and judging by my usage patterns, it will resize the pagefile in ways I've found to be unnecessary.

Regardless of that, my point is that setting min/max equal ensures you know what is going on with your system and eliminates any guesswork about what the pagefile is up to and where it resides; a static pagefile is just better than a dynamic one. The only reason to have a dynamic pagefile is to accommodate users who have no idea what their pagefile needs are, i.e. the typical Windows user. Your statement that the pagefile won't resize anyway if your advice is followed complements the fact that a static pagefile size is best for avoiding fragmentation, and it shows that setting the max equal to the min is the preferred way to guarantee it.

I have 1.25GB of RAM in my system and a static pagefile. My pagefile usage is far heavier than most people's, as few people run news readers capable of handling header information the way Newsbin Pro does. When loading a large newsgroup, all of my RAM is saturated and paging can get heavy for a while. I've still never had a problem where I've run out of pagefile space.

You also stated that head travel increases seek times, in reference to the non-sequential writes Windows does to the pagefile. You then stated that pagefile fragmentation would have to be considerable to affect performance noticeably, and I would agree, though that is slightly contradictory to the statement I just highlighted. However, for most users here pagefile management is all academic if there is enough RAM in the system, so we're really talking about establishing best practices; noticeable system performance isn't that relevant. My point is that if all your pagefile seeks are within a contiguous 1GB area of the disk (for example) rather than different areas spanning several GB, your head travel is going to be minimized on average.

Having a static, contiguous pagefile is a best practice, and ensuring this requires the min and max to be set equal. If you find you made a mistake when setting this value, it can always be adjusted. The bottom line in application is that if your pagefile utilization is noticeably affecting system performance, you should upgrade to a larger amount of RAM, because tweaking the pagefile just won't help you much. To me, this is an issue of taking control from an OS that assumes the operator doesn't really know what he's doing. It's a basic difference of approach, one that can be contrasted with that of most Linux distros.

BTW, if anyone thinks that pagefile tweaking does noticeably affect performance, refer to Part III in the original post of this thread:

http://forums.anandtech.com/messageview.aspx?catid=34&threadid=1678445&enterthread=y
 
I've always put the pagefile in its own partition at the front of my disk, with the next partition for the OS. That way the first 10GB or so of the disk hold the pagefile partition and the OS (the most-used data). It keeps seeks short and also uses the fastest part of the disk (the outer edge).

On general systems with 512MB of RAM I usually set a 400MB pagefile. If the intended usage of the system is going to require a lot of memory and paging space, then I increase the size accordingly. For example, the Linux server I use for remastering Overclockix ISOs has 1GB of RAM and a 1GB swap partition. In Windows I always use a fixed-size pagefile that fills the whole partition I've created.
 