In a few weeks the S2XP will be replaced with the S3XP (at least in my country, don’t know how it is in other countries).
One of the differences between the S2XP and the S3XP is that the graphics adapter will be a GeForce 6200 Go.
Does anyone have any information on whether the GeForce 6200 Go is faster than the Radeon 9700? Judging from some reports I’ve read, the 6200 isn’t really that fast, but I haven’t found any benchmarks that compare the two against each other…
Just wondering if anyone has found out anything about the Nvidia 6200 Go. The new FS models with 15.4” screens are due soon with these cards, and they are very nice machines (amazingly small for such a big screen)... They are supposed to have 128MB of shared RAM, but since the 9700 only has 64MB of RAM, I wonder if there would be much difference.
128MB Intel Extreme 3 vs. a 64MB 9700 is a HUGE difference. I’ve used the FS (my fiancée’s machine), and it seems about on par (graphics-wise) with my 64MB MX440 (a 3-year-old graphics solution). You’d be able to play HL2 on the Extreme 3, but you’d support far higher graphics settings on the Radeon chipset. You can’t just compare 64MB vs. 128MB, just as you can’t compare systems by RAM specs or processor speed numbers alone.
Even the 128MB Radeon 9700 model vs. the 64MB model shows very limited gains.
I posted this in another thread but it’s appropriate for this one. It basically tests the VGN-Sxxx series (855 chipset + Mobility Radeon 9700 64MB) against a VGN-FSxxx series (915 chipset + Nvidia 6200TC 128MB). The CPU, memory and everything else are essentially the same or close enough.
Out of curiosity, and based on benchmarks in this article, I decided to see how well the Sonoma chipset performs with the new Nvidia 6200.
Mind you, this is an F/FS model, but it uses the same CPU, chipset, and graphics chip as the High-Spec S92 model, so I think it’s a fair comparison.
I’m testing my VGN-S170P (at 1.7GHz) with the Radeon 9700 against the numbers in that review of the Type F/FS with a Pentium M 740 (1.73GHz) and the GeForce Go 6200 TC. I figure these systems are fairly close in terms of clock speed. Technically, the F/FS should be faster with its faster memory and tiny clock advantage. I used the stock S drivers on my S170P, so there’s no funny business involved.
I don’t have all the benchmarks since I don’t have all of the programs they used.
Quake III Arena (fps)
I’m assuming they ran the standard timedemos. I’m not sure if it was demo 1 or 2, but regardless, the scores were about the same for each timedemo.

Resolution    VGN-FS70B    VGN-S170P
640x480         293.3        325.4
800x600         215.6        253.5
1024x768        141.4        202.1
So, one could argue that the good old ATI Mobility Radeon 9700 is the superior part and that the High-Spec S92 really isn’t all that high-spec. The 9700 has the advantage of extra “real” memory, and that seems evident in the 1024x768 Quake III benchmark, where the margin grows markedly. So, even with the newer chipset’s faster memory subsystem and a marginally faster CPU, the S series is still at least equally capable despite using “older” technology. Current S owners probably won’t have much envy for the new Sonoma-based S’s, other than the fact that they can come in a silver finish.
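For what it’s worth, here’s a quick calculation of the S170P’s lead at each resolution, using nothing beyond the scores posted above:

[code]
# Percentage lead of the VGN-S170P (Radeon 9700) over the VGN-FS70B
# (GeForce Go 6200 TC) in the Quake III scores posted above.
scores = {
    "640x480":  (293.3, 325.4),   # (FS70B, S170P)
    "800x600":  (215.6, 253.5),
    "1024x768": (141.4, 202.1),
}

for resolution, (fs, s) in scores.items():
    lead = (s - fs) / fs * 100
    print(f"{resolution}: S170P leads by {lead:.1f}%")

# Prints 10.9% at 640x480, 17.6% at 800x600, and 42.9% at 1024x768,
# so the gap widens as resolution (and memory pressure) goes up.
[/code]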
The S92 does bring SO-DIMM DDR2 to the table, as well as SATA 2.5” hard disk support. However, those are two technologies that aren’t yet readily available or affordable.
gr00vy, do you surmise that the new 915PM chipset with integrated graphics will consume noticeably less battery power versus the dedicated 9200 and 9700 parts?
[quote author=“Anonymous”]tomshardware.com has a good article on the 915 and the GeForce 6200 and the impact of battery consumption versus performance
Thank you, guest poster. I almost forgot about that article, as I read it over a month ago. It looks like the upgraded S380 is actually a downgrade from a maxed-out S270 or even a maxed-out S170! I say this mainly because the graphics are onboard instead of dedicated. Not only that, but battery times are worse because the PCIe GPU draws more power; battery times are down almost 20%.
Here is the article in case anyone missed it:
This looks like a good time to pick up an S270 on clearance or refurbed. I’m tempted. Anyone interested in my TR3A?
[quote author=“TruthSeeker”]gr00vy, do you surmise that the new 915PM chipset with integrated graphics will consume noticeably less battery power versus the dedicated 9200 and 9700 parts?
You probably answered this yourself, but it’s likely that the integrated graphics will use less power. I wouldn’t say it translates to noticeably more battery life, though.
The coolbits tweak does make all the difference; basically it overclocks the GPU to a safe level. From what I can tell, Sony really underclocked this card. The built-in clocking feature in the Nvidia drivers (available after the coolbits tweak) automatically detects a safe operating level, which was 3 times faster than the Sony clock speed. All I did was select 3D performance and click “Detect optimal frequencies”, and it set the core clock to 390MHz (3x faster) and the RAM to 790MHz (2x faster).
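For anyone wondering what the coolbits tweak actually is: it’s just a registry value that unlocks the hidden clock controls in the Nvidia control panel. Here’s a minimal Python sketch, assuming the NVTweak key location commonly cited for ForceWare-era drivers (the exact path can vary by driver version, so verify it before running; needs admin rights):

[code]
# Minimal sketch of the "coolbits" registry tweak (Windows only).
# Assumes the NVTweak key location used by ForceWare-era drivers;
# verify the path against your driver version before running.
import winreg

KEY_PATH = r"SOFTWARE\NVIDIA Corporation\Global\NVTweak"  # assumed location

# Create/open the key and set CoolBits = 3, the value usually quoted
# for exposing the clock-frequency controls in the control panel.
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    winreg.SetValueEx(key, "CoolBits", 0, winreg.REG_DWORD, 3)

print("CoolBits set; reopen the Nvidia control panel for the clock sliders.")
[/code]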
I have been running with these settings for about 4 weeks now, playing games like Half-Life 2, Splinter Cell CT, Doom II, etc., with no problems whatsoever: no display corruption, no extra fan noise, no overheating. In fact, the unit doesn’t get any louder or warmer than it did before. You can really notice the difference in performance, as the frame rate and quality have increased dramatically in all the games I have tried.
I haven’t heard of anyone else with an FS having problems either. But do take care if you decide to try it out!
That’s good to know that there’s some extra headroom in the 6200 chipset. I’m also sure they underclock it to squeeze more battery life out of the notebook and to keep fan speeds low.
FWIW, the latest Omega Radeon drivers have boosted my 3DMark03 score to over 2850 points on my lowly 1.7GHz S170P (on Adaptive Level 5), which isn’t too shabby. I’m not sure why your 2GHz score is so much lower than mine. I figure an overclock could easily bump me over 3000. It’s still shy of what you’re getting on the FS, but closer. I don’t think I can get as good a 3DMark05 score, since the 6200 has more DX9 hardware than the Mobility 9700 does and the PCI Express bus gets better utilized than the AGP bus.
So all that bitching and flaming about the Nvidia Go 6200 should be laid to rest now? Can’t blame Sony; they had to deliver impressive figures in the battery column.
I sold my S last week, so unfortunately I can’t test out the newer Omega drivers. I tried a couple of times to overclock the 9700, but it was very frustrating and the increase was minimal (maybe 30MHz or less before corruption). I noticed that the current version of the Nvidia Omega drivers is supposed to support the 6200 Go, but I can’t get them to install. I’m sure the stock Sony drivers still aren’t providing the best performance, since they are dated Nov 04. If I can get the Omegas working, I’m sure they will improve on that score.
Also, on another quick point, I guess battery life on the FS is a big issue. I am using the double-capacity battery, and when playing games I only get 1.5 hrs max (with the processor running full pelt, Wi-Fi on, screen at 100%, etc.). Just using the internet, Word, etc. with the processor scaled back, I’m getting roughly 4.5 hours (screen at 100%, Wi-Fi, etc.).
Using that same battery on the S with the 9700, I would usually get 4 hours full pelt running games, and up to 8 hours just surfing the net.
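Since both of those figures are on the same double-capacity battery, the runtimes alone tell you the relative power draw. A quick back-of-the-envelope in Python (the numbers are just the runtimes from the two posts above; absolute wattage would need the battery’s Wh rating, which wasn’t given):

[code]
# Power draw is inversely proportional to runtime on the same battery,
# so the reported hours imply relative draw without knowing the Wh rating.
runtimes_hrs = {
    ("FS", "gaming"): 1.5,   # "1.5 hrs max ... full pelt"
    ("FS", "light"):  4.5,   # internet/Word with the processor scaled back
    ("S",  "gaming"): 4.0,   # "4 hours full pelt"
    ("S",  "light"):  8.0,   # "up to 8 hours just surfing"
}

baseline = runtimes_hrs[("S", "light")]   # lowest-draw case as 1.0x
for (machine, load), hours in runtimes_hrs.items():
    print(f"{machine} {load:6}: ~{baseline / hours:.1f}x the S light-use draw")

# Gaming head-to-head: 4.0 / 1.5 means the FS pulls roughly 2.7x the
# power the S needed for the same workload.
[/code]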
I guess I hardly ever use my FS without it plugged in though, so it’s not an issue for me. I was getting sick of the smaller S screen, as I used it 8 hrs every day between clients, and found the FS to be a nice compromise.
I’ll try to get the Omegas working and see if they make a difference to the scores I’m getting.
Right, I tried to overclock my 6200 Go, but the results were strange. First of all, does anybody have any clue how far the slider in the overclocking tab can be stretched?
I tried to increase the core clock frequency to 300 from 100 and there was a weird distortion on the screen; I left the memory clock frequency settings untouched.
I think I read on the website that you just have to slowly increase the sliders, then try a game and look for any ‘anomalies’. There should be a point where they start; move the sliders down a little from there!
[quote author=“Mighty Matt”]I think I read on the website that you just have to slowly increase the sliders, then try a game and look for any ‘anomalies’. There should be a point where they start; move the sliders down a little from there!
Well, I tried using the “Detect optimal frequencies” setting in the tweak and I haven’t noticed any difference (I played Half-Life 2 for about half an hour before and after the tweak). I’ll run 3DMark and see if there is any increase in performance.
The values shoot up to 354MHz (from 100MHz) for the core clock frequency and 709MHz (from 332MHz) for the memory clock frequency.
I would be very careful when tweaking the speeds, especially the core speeds. Going from 100 to 300 is a drastic change that could potentially damage the chip and memory. Visual artifacts (distortion) are the result of trying to run the chip too hard, and that’s not good.
You’ll probably want to try much smaller increments (e.g., 5-10MHz) and get it to a point where there’s no distortion. Again, it may not make a huge difference in performance (and will likely drain the battery faster), so you’ll need to balance it accordingly. In some cases the performance bottleneck may not be the video chipset but the CPU (which you really can’t do anything about).
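To make that concrete, here’s a rough sketch of the step-and-test loop being described. It’s illustrative only: set_core_clock() and looks_stable() are hypothetical placeholders for whatever overclocking utility and artifact check you actually use, and the stock/ceiling figures are just the numbers mentioned in this thread:

[code]
# Sketch of the incremental approach described above: raise the core
# clock in small steps and back off at the first sign of artifacts.
# set_core_clock() and looks_stable() are hypothetical placeholders;
# substitute your real overclocking tool and your own visual check.

STOCK_MHZ = 100     # stock core clock reported in this thread
STEP_MHZ = 10       # small increments, per the advice above
CEILING_MHZ = 354   # the driver-detected "optimal" core clock

def set_core_clock(mhz: int) -> None:
    """Placeholder: apply the clock via your overclocking utility."""
    print(f"core clock -> {mhz}MHz")

def looks_stable() -> bool:
    """Placeholder: run a game or benchmark and report True if clean."""
    return input("Any distortion or artifacts? [y/N] ").strip().lower() != "y"

clock = STOCK_MHZ
while clock + STEP_MHZ <= CEILING_MHZ:
    set_core_clock(clock + STEP_MHZ)
    if not looks_stable():
        break               # artifacts appeared: this step went too far
    clock += STEP_MHZ

set_core_clock(clock)       # settle on the last known-stable speed
print(f"Last stable core clock: {clock}MHz")
[/code]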
I tried the “Detect optimal frequencies” option first too, and haven’t had any probs, but that is on an FS series; I can’t guarantee that your card is the same.
I noticed a great performance jump when I changed my settings: a major increase in 3DMark, etc. I still haven’t had any problems with it, but I would take gr00vy’s advice and ramp up the speed slowly!