Sunday, February 19, 2012

How to use 3.2 GB of RAM when 4 GB is available

What switch should I be setting on Windows Server 2003 when 4 GB of physical memory is available and I want to allocate 3.2 GB to SQL Server? It would be very helpful if someone could post the relevant line from boot.ini. Thanks

"Use fixed memory size" in Server Properties/Memory. BOOT.INI settings do not affect memory allocation for SQL Server (unless I am WAY OFF, in which case someone will set me straight ;))

To be able to make use of more than 2 GB of memory, you need to turn AWE on. You also need to enable PAE in Windows. Here is a step-by-step guide:

http://www.sql-server-performance.com/awe_memory.asp
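
Since the original poster asked for the actual line from boot.ini, here is a minimal sketch of what the entry tends to look like once /PAE is added. This is an illustration, not a copy of anyone's real file: the ARC path (the multi/disk/rdisk/partition part) depends entirely on your own disk layout.

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /PAE

If you instead take the /3GB route suggested in the next reply, the same line carries /3GB rather than /PAE; with only 4 GB of RAM, /3GB alone gives user-mode processes such as SQL Server up to 3 GB of address space without AWE.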

I'd check KB article #555068 (http://support.microsoft.com/default.aspx?scid=kb;en-us;555068) in this case. I think that, performance-wise, you'd be much better off using /3GB in this particular case.

-PatP

All these answers would be correct if a different question had been asked, something like "How to use more than 2 GB of memory".

The question was "How to allocate 3.2GB for SQL Server using BOOT.INI". So /PAE is the switch that is needed to ensure that over 3 GB can be allocated, and then in the server properties (see my original post) you either use "Dynamic..." and set the max, or use a fixed memory allocation.
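
In T-SQL terms, the server-side half of that advice looks roughly like the sketch below, assuming SQL Server 2000 on this 4 GB box. The figure 3276 is simply the 3.2 GB from the question expressed in megabytes; adjust to taste.

    -- Allow advanced options to be changed
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- Enable AWE so the engine can use physical memory beyond the normal
    -- user-mode limit (takes effect only after the service is restarted)
    EXEC sp_configure 'awe enabled', 1
    RECONFIGURE
    GO
    -- Cap SQL Server at roughly 3.2 GB; the value is in MB
    EXEC sp_configure 'max server memory', 3276
    RECONFIGURE
    GO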

Here is the problem, which I don't understand.

I set up the /3GB switch (just to make sure I was on the safe side before using /PAE), set AWE Enabled to 1, set the fixed memory size to 2995 MB, ran RECONFIGURE, and restarted SQL Server.

But I still see sqlservr.exe using only 65 MB. What is going on?

Thanks
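
One thing worth knowing here: Task Manager's working-set figure for sqlservr.exe does not include pages mapped through AWE, so it can read absurdly small even when the engine holds far more memory. A more direct check, sketched for SQL Server 2000 (the LIKE patterns are used because the exact counter names vary slightly between versions; 'show advanced options' is assumed to be enabled, as above):

    -- What does SQL Server think its settings are?
    EXEC sp_configure 'awe enabled'
    EXEC sp_configure 'min server memory'
    EXEC sp_configure 'max server memory'

    -- How much memory has the engine actually committed?
    SELECT counter_name, cntr_value AS value_kb
    FROM master.dbo.sysperfinfo
    WHERE counter_name LIKE 'Total Server Memory%'
       OR counter_name LIKE 'Target Server Memory%'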

The reason you see it that way is that, by default, SQL Server takes only the amount of memory it currently needs, not all the memory available to it. If you want to see it use all the memory the OS can give it, you need to change its memory setting from dynamic to fixed.

I set it to fixed only, not dynamic.

What are your min and max sizes?

Quoting "The question was 'How to allocate 3.2GB for SQL Server using BOOT.INI'": you are correct, and you definitely answered the question as it was literally asked. I tried to read a bit more into it, looking to give the best performance for SQL Server rather than the maximum memory utilization. Based on our testing, the "switching cost" of PAE/AWE usually makes it counterproductive unless you've got over 4 GB.

Sorry if I caused any confusion.

-PatP

As a general rule, I do not advise specifying a fixed memory allocation for any application in a virtual-memory environment. "That is what virtual memory is for."

Most applications work best when they view their memory allocation as essentially limitless. They allocate memory as they need it (recycling memory whenever possible) and rely upon the operating system to provide it. Applications concern themselves with their own affairs and ignore the presence of their neighbors. They rely upon the OS to be aware of those neighbors, and to appropriately allocate available resources among them. If SQL Server is running "without contention" on a particular box, then Windows should allocate the lion's share of resources to that application without being "told" to do so.

Ergo... if you observe that Windows is not giving SQL Server the resources you expect, there must be some contention for those resources of which you are not aware. The correct solution is to find out what those limits are and to correct them at their source, so that Windows, again on its own, makes the "correct" decision.

If you simply exert your Divine Powers to force the issue... well, there's a price. Your Godly Prerogatives will be exercised at the extreme detriment of ... someone. And it may well be that this "someone" is, for example, the networking software upon which SQL Server utterly depends.

Leave the OS to its own devices. Let it do what it does best. If you find that things are not what you desire, find out why.

You have no idea how SQL Server deals with memory, sundial. If they follow your advice, they'll just have a server locked up with nowhere to go.

Ahh, but I do know whereof I speak, good sir or madam. And in fact, it is precisely when you try to give a particular process (no matter what it is) "a fixed amount of memory" that you run the greatest risk of leaving a server "locked up with no place to go."

We encountered exactly the same kinds of problems, oh, forty years ago or so, with the "V=R region" concept (referred to by various three-letter acronyms) in OS/360 and its successors. This allowed you to allocate a specific amount of memory to a specific process, forcing the OS to make do with whatever was left. And the trouble was, and is, that if an OS runs out of memory for its own purposes, the results are never pretty.

It was true then, it is true now, and the reasons really haven't changed.

Initially, don't specify any particular memory-allocation limits for SQL Server. Simply run it for a while, on an uncontested machine, and observe how much memory Windows allows it to have. That's its "comfort zone." If that's smaller than you expect, find out why. (Remember that your own session will use up a fair amount of RAM while you are watching, but you do need to log in interactively sometimes...)
Then, if you want to ensure that swapping doesn't happen, specify an upper limit for SQL Server that is slightly less than the observed "comfort zone." This tells SQL Server to police its own affairs so that it never quite approaches the point where swapping would begin.
Periodically monitor system performance logs to see if there is contention, e.g. at certain times of day or night, that causes the true "comfort zone" to be smaller than you thought; a sketch of such a check follows below.
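
As a rough illustration of that periodic check, under the same SQL Server 2000 assumptions as the earlier snippets (and with a made-up logging table name, memory_samples), you could sample the engine's own counters from a scheduled job and review them by time of day:

    -- Hypothetical logging table; populate it from a scheduled job.
    CREATE TABLE memory_samples (
        sampled_at   datetime    NOT NULL DEFAULT (GETDATE()),
        counter_name nchar(128)  NOT NULL,
        value_kb     bigint      NOT NULL
    )
    GO
    -- Target = what SQL Server would like to hold; Total = what it actually holds.
    -- A Total that sits persistently below Target suggests outside contention.
    INSERT INTO memory_samples (counter_name, value_kb)
    SELECT counter_name, cntr_value
    FROM master.dbo.sysperfinfo
    WHERE counter_name LIKE 'Target Server Memory%'
       OR counter_name LIKE 'Total Server Memory%'
    GO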

Any operating system will, left to its own devices, permit any runnable process to run as long as it needs to and to have as much memory as it wants, absent any contention for those resources. In other words, entirely on its own it will converge, if it can, toward a state where there is near-100% utilization of physical RAM and almost no swapping or paging. (Application startup is always an exception, so give the machine about five minutes or so, with a typical load, to stabilize.)

If you start "thinking too much" (so to speak), and specify limits for a particular process that are what you want them to be, you create the proverbial "million pound elephant." You create a serious problem while endeavouring to do exactly the opposite. The system will tell you what the "comfort zone" will be, just by observing. If it's not what you expected it to be, find out why and adjust.

If you don't take this approach, you cause serious thrashing, and you stipulate by fiat that the system is not allowed to do anything about it. So it goes into "Maytag mode" (visualize a washing machine on the spin-cycle... I've actually seen old-style drives wiggling across the raised floor...) and croaks. This "thrashing" is what I have actually seen transform a three-minute run into nineteen hours.

Always remember that you need to be able to log on to the system to be able to control it. So the "comfort zone" will probably be a smaller percentage of available RAM than you would prefer; however, "chips are cheap," and it's much better for the comfort zone (whatever it turns out to be) to be long-term sustainable in actual practice than for it to be any particular number or percentage.
