
VMware Player Internal Monitor Error


Bomber5000

Registered
Joined
Jan 5, 2009
Location
Maryland, USA
I've gotten two of these VMware Player Internal Monitor Errors so far and I'm trying to figure out what is happening. I was hoping that some of you guys may have run into this or have heard of similar issues.

Here is the info:

- Win 7, 64-bit
- VMPlayer 3.0
- Ubuntu 9.04
- 8 GB RAM assigned to VM
- 7 GB HD space assigned to VM
- bigadv

This is happening on Folding Rig #2 in my sig. The first time I thought that maybe I ran out of RAM and swap space, so I upped it for the 2nd try with a fresh install of Ubuntu on a new VM.

The VM Player log file has this gibberish:

Code:
Feb 26 00:52:38.873: vcpu-4| MONITOR PANIC: vcpu-4:VMM64 fault 14: src=MONITOR rip=0xfffffffffc000000 regs=0xfffffffffc008730
Feb 26 00:52:38.873: vcpu-4| Core dump with build build-203739
Feb 26 00:52:38.873: vcpu-4| Writing monitor corefile "C:\Users\Bryce i7\Documents\Virtual Machines\Ubuntu64 Folder\vmware-core0.gz"
Feb 26 00:52:38.876: vcpu-4| Saving busmem frames
Feb 26 00:52:38.876: vcpu-4| Saving anonymous memory
Feb 26 00:52:38.881: vcpu-4| Beginning monitor coredump
Feb 26 00:52:38.980: vcpu-4| End monitor coredump
Feb 26 00:52:38.980: vcpu-4| Beginning extended monitor coredump
Feb 26 00:52:38.980: vcpu-4| Writing anonymous pages at pos: 401000
Feb 26 00:52:41.213: vcpu-4| Writing monitor corefile "C:\Users\Bryce i7\Documents\Virtual Machines\Ubuntu64 Folder\vmware-core1.gz"
Feb 26 00:52:41.214: vcpu-4| Saving busmem frames
Feb 26 00:52:41.214: vcpu-4| Saving anonymous memory
Feb 26 00:52:41.218: vcpu-4| Beginning monitor coredump
Feb 26 00:52:41.290: vcpu-4| End monitor coredump

And it continues on for pages listing dumps of the other cores and various .dlls in windows.
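If you want to pull just the panic lines out of a long vmware.log instead of scrolling through pages of core-dump chatter, a quick sketch like this works (the `find_panics` helper is hypothetical, not part of VMware; it just matches the log format shown above):

```python
import re

# Matches lines like:
# Feb 26 00:52:38.873: vcpu-4| MONITOR PANIC: vcpu-4:VMM64 fault 14: ...
PANIC_RE = re.compile(
    r"^(?P<time>\w+ \d+ [\d:.]+): (?P<thread>\S+)\| MONITOR PANIC: (?P<detail>.*)$"
)

def find_panics(log_text):
    """Return (timestamp, thread, detail) for each MONITOR PANIC line."""
    hits = []
    for line in log_text.splitlines():
        m = PANIC_RE.match(line)
        if m:
            hits.append((m.group("time"), m.group("thread"), m.group("detail")))
    return hits
```

Point it at the vmware.log in the VM's folder; repeated panics on the same vcpu with similar rip addresses would suggest a consistent trigger rather than random corruption.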

I'm not sure if this is instability manifesting itself or just some issue with VMware Player?

I have a GPU folding as well and it hasn't missed a beat, and Windows itself is fine. I just get a popup reporting a VMware error, and the VM shuts down.

Any ideas?
 
I've bumped up Vcore and QPI/DRAM a few notches, as it's the only thing I can think of right now, and turned her loose on -bigadv again.

I've also been keeping an eye on the Ubuntu and Win 7 resource monitors and everything is looking good. The VM is using 3.6GB of RAM, 0% swap.
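Instead of eyeballing the resource monitor, the swap check can be scripted from inside the Ubuntu guest. A minimal sketch, assuming a Linux-style /proc/meminfo (the helper names here are hypothetical, just for illustration):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:   value kB' lines into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[key.strip()] = int(parts[0])
    return info

def swap_used_percent(info):
    """Percent of swap currently in use; 0.0 when no swap is configured."""
    total = info.get("SwapTotal", 0)
    free = info.get("SwapFree", 0)
    if total == 0:
        return 0.0
    return 100.0 * (total - free) / total

# On the guest you would feed it the real file:
# info = parse_meminfo(open("/proc/meminfo").read())
```

Logging that number once a minute would show whether the guest ever touches swap right before a panic, which would point back at memory pressure rather than the overclock.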

If this has issues, I'll go back and rerun stress tests. Still looking for anyone with any ideas about things I can check out or test here, I'm completely lost.

Damn you VM :bang head:
 
Stability with your OC could be the culprit...
Folding is the TRUE ultimate OC stability test for your rig. I say that because when I first OC'd my rig my voltages were 1.31 Vcore and 1.28 QPI/DRAM and 24hr Prime95 stable, then when I started folding I got errors. By the time I finished with the voltage bumps my QPI had to jump to 1.37V, and my Vcore is around 1.34V. Also, once a week or so I may still get an error, but to prevent that, every other bigadv or so I will shut it down for a few minutes and fire it back up or restart it.
 
Interesting. What happened to your VM when you had OC errors? Did your whole system bluescreen, or did just your VM crash?
 
Actually, I noticed the only instability was within the VM; it would display a long error message. However, after hitting "OK" on the error message, or sometimes just when the error appeared, the entire system would lock up... not just the VM. Even with no blue screen, when I got the system back up I would sometimes have an error code in the BSoD report... 7E's and 124/121's (the last two typically call for a Vcore/QPI bump).

EDIT: especially with you running at 4.2GHz, you may need extra volts for folding. You should try lowering your clock to 4.1 or 4.0 with the volts you use for 4.2... then slowly bump the volts down by a notch each day or two (preferably two days); this will truly determine stability...
 
Thanks for the feedback, Norcalsteve! That's essentially what was happening to me, but I didn't see the problem for what it was.

I've got my Vcore up to 1.41V and QPI up to 1.4V and all is well in the world. I knew folding would take a few more notches on the voltage dials, but I didn't realize it would require THAT much more voltage to stay stable!

50k ppd, here I come!
 
True that! Glad I could help... Funny, they say P95 stability for 24hrs is good, but you are not "fully" stable until you fold. That's just my opinion though ;-)
 