
VMware Player 3.0 beta: 4 CPU support & can create VMs

I had both

priority.grabbed = "normal"
priority.ungrabbed = "normal"

already present in my .vmx file.

I changed both to "idle" and now it runs at low priority, as FAH used to do automatically. It still uses 100% of all available CPU cycles, but it will get out of the way if I need to do something else. This is not a dedicated folder but my workstation, and I frequently need it for other things.
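For reference, a minimal sketch of the two lines as edited (the rest of the .vmx is untouched; whether "idle" is actually accepted for the grabbed setting is disputed later in the thread):

```
priority.grabbed = "idle"
priority.ungrabbed = "idle"
```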

I'm also using about 1GB less memory now.
 
"idle" is not a valid argument for grabbed priority. Normal is as low as it goes. If you leave the VM window open, it'll run at normal priority and your GPUs will suffer.
 
Nothing was manually changed in Task Manager. This is the priority it runs at naturally, whether the window is open or closed. Is this what you meant by window open?
 

Attachments

  • VM Player 3.0.jpg · 192.6 KB
One thing I've noticed with Player 3.0 is that I can no longer access VMs created in Server or Workstation over the network. The VMs can see the internet and can access all the other network computers, but I can't get into them, so HFM can't monitor them. I'm working on what changed and how to change it back.

Audio,
If you read up in vmware community forums, you will find that what I said is true. Grabbed priority default is normal and this is also as low as it will go. It makes sense that you need normal priority to gain mouse and keyboard context in the VM. I think you'll find that with v2.10 of the a2 core, if you leave the VM window open and grabbed (input devices attached to the VM) your gpu production will plummet because there are no left over cycles to run the GPU client. THis didn't happen with v2.08 and earlier because they always left enough idle cycles for the GPU.
 
As long as I can look at them without grabbing mouse/input control and they stay low/idle in priority, I'm happy that I don't have to manually set it to low in Task Manager like I was doing with the last version of VMware Server, 1.0.9.
 
Hey ChasR, have you tried "Replicate physical network connection state"?
 

Attachments

  • vmnic.JPG · 47.2 KB
I've tried every network setting but NAT. The Ubuntu VMs can see and access all the other machines on the network. Other machines can see the VM guest and ping the VM guest, but can't access the VM.
 
Woot! :D This is the cat's meow! :beer:

I have two rigs converted... sooo much nicer on a Win7 host. No need to disable driver signing on boot. Boot times on an XP64 host are much faster... no more long waits for the network to become available. VMs boot much, much, much faster... only 1 VM == less memory being hogged. Better PPD!!!! The list goes on and on... and if this is the RC, I can't wait to see the final product. :)

I haven't had any trouble getting network access to my VMs, although I've been using Voidn's suggestion on both that I've set up so far. I'm waiting on a unit to finish, then I'm setting up on my Q9550. I'll try leaving that option unchecked and see if it makes any difference when accessing from a remote machine.
 

Absolutely, harlam! I've lost some 400 PPD from my GPU folding, but I'm getting 3K back (9 min. TPF). Maybe not what some got from running a pair of VMs, but I prefer this for the ease of managing it.

I also had trouble getting anything to recognize the VM in FahMon or HFM; I even installed Samba, but no joy. I'll try Voidn's suggestion.

At any rate, VMware is now stable for me on my quad W7 x64 RTM. I can do what I want in Ubuntu now without worrying that 1.08 will crash back to 1 core.

I'm going to have a beer now.:clap::beer:
 
Well... I just started upgrading. ;) So I'm bound to run into some issues... had a notfred VM give up the ghost... well, at least the client itself did. The output in the VM window suggested lack of memory. So I refired it with 1024MB and we'll see how that goes. So far, all others seem to be running fine.

Now I've run into something similar to ChasR... on my rig where I run HFM, I cannot access the VM that is running on it from that rig (Edit: XP64 host). Other rigs can access the VM just fine. :shrug: Oh yeah, and Voidn's suggestion seems to have no effect.

The only rig I've found where I can run a VM and access it via its network name is my Win7 box where I have a notfred VM running. Going to try replicating the vmx settings of the notfred VM to see if I have any luck... otherwise, maybe the VM needs a VMWare Tools Upgrade (or removal- since notfred's has no VMWare Tools). I didn't want to do that yet in case I need to go back to v2.5.x. If I try it... I'm going to replicate the VM first so I don't destroy the original.
 
OK, updating VMware Tools had no effect... nor did replicating the notfred VM's settings. So, I've gone back to v2.5.3 on my main rig where I'm running HFM (just so I can monitor this machine's clients). I guess in this case I'll just have to wait for the next Player release to see if the issue is corrected. My guess is that the problem is related to the host OS in combination with the virtual network driver.

Since this Player release brings new support for Windows 7, it wouldn't surprise me if a lot of the testing was done to validate functionality on Windows 7... which, again, seems to have no problem accessing samba shares on guest VMs from the host OS. I'll have to try an Ubuntu VM to be sure, though... that way I'm comparing apples to apples vs. my other boxes.

All in all... I gained a few gigs of memory back... and ~3000 PPD to boot. :)
 
I've never been able to see my VMs on my network or get access to them unless they were running a Windows guest OS.

They still have access to the internet and run WUs.
 
I haven't had any trouble accessing the VM with any flavor of VMware with Windows or Linux as host with Linux as a guest using bridged networking until Player 3.0. I'm going to try it in Vista x32 and see if it works there.
 
I've narrowed it down to this:
The VM is accessible over the network by all computers except the host. I've only installed Player on the hosts with HFM running so I have 3 failures in 3 attempts. If it weren't for HFM, it wouldn't matter much.
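One quick way to narrow down a "host can't see its own guest" symptom like this is to test TCP reachability of the guest's SMB port directly from the host. A minimal sketch (the hostname "ubuntu-vm" is a made-up example; substitute your guest's name or IP):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusals, timeouts, and DNS failures
        return False

# "ubuntu-vm" is a hypothetical guest hostname. HFM/FahMon read the FAH
# log over an SMB share, so port 445 (or 139) must be reachable from the
# machine doing the monitoring.
print(port_open("ubuntu-vm", 445))
```

If this prints False on the host but True from another machine, it confirms the problem is the host-to-guest network path, not the monitoring software.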
 
For those with network connection issues: did you create a brand new VM with Player 3.0, or did you use a vmdk or complete VM that was created with a different version (Server, Workstation, etc.)?

I created my VM from scratch with Player 3.0, and that may be why it works. I can SSH, share folders, monitor with HFM, etc. Using an old VM might mostly work but could contain invalid settings from a different version of VMware and cause problems.
 
I've upgraded several more and can now say the only issue I'm having is that the host can't see its own guests via the network. Other network connectivity is unaffected. These are all VMs created with earlier versions of Server or Workstation. Right now I have 13 x 4-core VMs at work, and it looks like I'll be able to monitor all of them except the one VM on the rig running HFM, until a fix comes around.

I posted a bug report on the VMware site.
 
I've narrowed it down to this:
The VM is accessible over the network by all computers except the host. I've only installed Player on the hosts with HFM running so I have 3 failures in 3 attempts. If it weren't for HFM, it wouldn't matter much.

That's what I said. :D Just not as simply. ;)

Now I've run into something similar to ChasR... on my rig where I run HFM, I cannot access the VM that is running on it from that rig (Edit: XP64 host). Other rigs can access the VM just fine. :shrug: Oh yeah, and Voidn's suggestion seems to have no effect.

I want to point out to others that this is not an HFM issue. It's a networking issue with Player 3.0 (or so it would seem). One would experience the same problem if using FahMon (or any other software that relies on network shares) to monitor.

@Voidn - what Windows OS are you running? I'm having no issue monitoring a notfred guest on a Win7 host (also running HFM).
 