I was discussing clock speeds with another member and something interesting came up. As you know, the Prime's normal performance-mode top speeds are 1.4GHz on a single core and 1.3GHz with all four cores active. Well, NVIDIA's Tegra 3 spec page now says something different: it lists a top speed of 1.5GHz on a single core and 1.4GHz on multiple cores.
Can anyone who is stock, or rooted but not overclocked, check this out? You can use CPU Spy: put your Prime in Performance mode, then use CPU Spy to see what the top enabled speed actually is. Don't report the unused speeds listed at the bottom; only look at the top speed actually in use while awake, not in deep sleep. Another way to tell is to use an app like System Tuner or SystemPanel Lite and watch the maximum speed the bar meter hits; it will fluctuate, though, so CPU Spy is the more reliable way.
I want to see if this new update might have actually increased the top speeds on stock devices. Here is the link to the updated Tegra 3 spec showing the increased speeds:
http://www.nvidia.com/object/tegra-3-processor.html
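For the curious, CPU Spy isn't doing anything magical: it reads the kernel's cpufreq residency stats and reports which speeds have actually been entered. A minimal sketch of that logic in Python (the residency numbers below are made up for illustration, not real Prime data):

```python
# time_in_state lists "frequency_in_kHz  time_in_ticks" pairs; CPU Spy's
# "top speed in use" is just the highest frequency with nonzero time.

def top_used_freq_khz(time_in_state: str) -> int:
    """Return the highest frequency (kHz) that has nonzero residency."""
    used = [
        int(freq)
        for freq, ticks in (line.split() for line in time_in_state.strip().splitlines())
        if int(ticks) > 0
    ]
    return max(used)

# Illustrative data resembling a stock Prime in Performance mode:
sample = """\
51000 12000
1000000 90000
1300000 45000
1400000 30000
1500000 0
1600000 0
"""
print(top_used_freq_khz(sample) // 1000, "MHz")  # -> 1400 MHz
```

On a real device the data comes from `/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state` (the exact path can vary by kernel).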
Ok buddy. Trying to work with ya here. I am completely stock, and not rooted.
Just installed CPU Spy. I wasn't sure if I needed to be rooted to run it, but it installed fine and seems to be working.
Anything you need me/us to run and get some clock speeds?
-Jason
Here are my results on Performance; it doesn't appear to break 1400.
Sent from my Transformer Prime TF201 using XDA Premium HD app
David Dee said:
Here are my results on Performance; it doesn't appear to break 1400.
Sent from my Transformer Prime TF201 using XDA Premium HD app
Click to expand...
Click to collapse
Wow, sure would love me some pie after seeing your attachment.....
Why does it say unused states: 1600 and 1500MHz? Are these available but not used?
Mcoupe said:
Wow, sure would love me some pie after seeing your attachment.....
Why does it say unused states: 1600 and 1500MHz? Are these available but not used?
Click to expand...
Click to collapse
This is using System Tuner. You just asked the million-dollar question!
Those are the specs for the Tegra 3 chip; most likely Asus chose not to use the 1.5GHz state and clocked it down to 1.4/1.3GHz to save battery life.
Mcoupe said:
Wow, sure would love me some pie after seeing your attachment.....
Why does it say unused states: 1600 and 1500MHz? Are these available but not used?
Click to expand...
Click to collapse
It says unused states because, although those speeds exist in the stock kernel, Asus has them disabled by default. The only way to activate them is to root and use an app like EzOverclock, ATP Tweaks, or ViperMOD to enable the overclocked speeds. The best one is EzOverclock; all you need is root. You install the app and you're instantly overclocked to a top speed of 1.6GHz. It changes the speeds in the Asus quick settings: after install, Performance mode becomes 1.6GHz, Balanced 1.4GHz, and Power Saving stays the same at 1GHz. The other great thing about this app is that you can manually set the speed you want for each power mode; by default it's set to what I listed above. All of those apps are free and in the Prime's development section.
Edit: rooting, unlocking the device, and running a custom ROM from the development section can also get you overclocked.
Based on what people have posted so far, it seems NVIDIA is just listing the top speed the chip is capable of, period. That top-speed spec is actually for the new Infinity pad, which will ship stock with those top speeds: 1.5GHz on a single core or 1.4GHz on four cores.
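To illustrate where those "unused states" come from: the kernel advertises a frequency table, but the governor never exceeds the configured ceiling, so the higher entries show up in the stats with zero time. A toy model (frequencies in kHz; the 1.4GHz cap matches the stock behavior described above):

```python
# States above the ceiling are listed by the kernel but never entered --
# these are exactly what CPU Spy reports as "unused states".

TABLE_KHZ = [1000000, 1200000, 1300000, 1400000, 1500000, 1600000]
CEILING_KHZ = 1400000  # stock cap; overclock tools raise this to 1600000 (root required)

usable = [f for f in TABLE_KHZ if f <= CEILING_KHZ]
unused = [f for f in TABLE_KHZ if f > CEILING_KHZ]
print("usable:", usable)
print("unused:", unused)  # the 1500MHz and 1600MHz states people are seeing
```

On the device itself the ceiling lives in the cpufreq sysfs interface (`scaling_max_freq`), which is effectively what the root overclock apps adjust.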
My max is also at 1.4 GHz. Nothing above that has ever been reached.
Obviously I'm running stock.
Had also seen that they bumped it to 1.5GHz.
And the specifications for the HTC One X with Tegra 3 also show 1.5GHz.
http://www.htc.com/www/smartphones/htc-one-x/#specs
I wonder if the devices coming out now have a slightly updated Tegra 3 chip. Could it be a Tegra 3+?
Asus Prime & Tapatalk
Ok, got ya.
Hope we helped with the discussion.
So at 1.6 are people noticing a big improvement in performance?
I guess the developed ROMs we have available come with custom kernels? I see people pushing 1.8 and hoping for 2.0. How stable are such numbers, and do these modes raise temps and crush battery life?
---------- Post added at 11:27 AM ---------- Previous post was at 11:24 AM ----------
Andreas527 said:
Had also seen that they bumped it to 1.5GHz.
And the specifications for the HTC One X with Tegra 3 also show 1.5GHz.
http://www.htc.com/www/smartphones/htc-one-x/#specs
I wonder if the devices coming out now have a slightly updated Tegra 3 chip. Could it be a Tegra 3+?
Asus Prime & Tapatalk
Click to expand...
Click to collapse
As far as I understand, on this side of the pond we won't have the Tegra 3 in the One X. It looks like we will get a dual-core chip.
Andreas527 said:
Had also seen that they bumped it to 1.5GHz.
And the specifications for the HTC One X with Tegra 3 also show 1.5GHz.
http://www.htc.com/www/smartphones/htc-one-x/#specs
I wonder if the devices coming out now have a slightly updated Tegra 3 chip. Could it be a Tegra 3+?
Asus Prime & Tapatalk
Click to expand...
Click to collapse
It's not an updated Tegra 3 chip; it's the exact same one as in the Prime, except on the software side they unlocked the higher speeds by default, out of the box. On the Prime we're already seeing those speeds, and even higher, when rooted and/or unlocked running custom ROMs or kernels; we've even achieved a 1.8GHz overclock now. Really, all you need is root to reach 1.6GHz. So technically the Prime is already more powerful than these new Tegra 3 chips in future tablets, if rooted of course. NVIDIA is skipping a Tegra 3+ chip and going straight to Tegra 4, which is expected to be Kepler-based and built on a 28nm or 22nm process, meaning more power and better battery efficiency, plus whatever bells and whistles NVIDIA adds or tweaks. Tech sites expect Tegra 4 to release in Q4 of this year; it was already sent out to all major manufacturers back in December, so they're testing it now. Tegra 4 will definitely shake the market up again and is something to be excited about.
Mcoupe said:
Ok, got ya.
Hope we helped with the discussion.
So at 1.6 are people noticing a big improvement in performance?
I guess the developed ROMs we have available come with custom kernels? I see people pushing 1.8 and hoping for 2.0. How stable are such numbers, and do these modes raise temps and crush battery life?
---------- Post added at 11:27 AM ---------- Previous post was at 11:24 AM ----------
As far as I understand, on this side of the pond we won't have the Tegra 3 in the One X. It looks like we will get a dual-core chip.
Click to expand...
Click to collapse
Yes, thanks for the quick tests from everyone; they were helpful in trying to figure out whether the new specs were included in the new update.
And yes, 1.6GHz gives it a very good boost in performance. The thing is, the Prime performs great at stock speeds too, but overclocking to 1.6GHz makes it that much faster: web pages load a lot faster and the UI is a lot snappier.
Another trick to speed up your Prime is to go into Settings > Developer options and enable Force GPU rendering. I've been running this for a while and have seen a big boost in performance and fluidity in the UI and elsewhere, even in web browsing. You can also turn off the window and transition animations; turning those off makes everything nearly instant, like switching between apps or going from an app to the home screen. All transitions become instant instead of a slight pause or gradual transition; screens will instantly pop to the next.
tylermaciaszek said:
Those are the specs for the Tegra 3 chip; most likely Asus chose not to use the 1.5GHz state and clocked it down to 1.4/1.3GHz to save battery life.
Click to expand...
Click to collapse
Or to improve yields; this is the first Tegra 3 product ever. I'm sure there were fabrication teething issues.
I thought those were a newer revision/stepping of the same Tegra 3 chip. ARM recently started offering a "Processor Optimization Pack" for A9s.
htcplussony said:
I thought those were a newer revision/stepping of the same Tegra 3 chip. ARM recently started offering a "Processor Optimization Pack" for A9s.
Click to expand...
Click to collapse
Any links? I've never seen any tech articles mention an upgraded Tegra 3 chip. All the talk has been about Tegra 3 itself, or more recently Tegra 4, which has been popping up a lot lately.
demandarin said:
It says unused states because, although those speeds exist in the stock kernel, Asus has them disabled by default. The only way to activate them is to root and use an app like EzOverclock, ATP Tweaks, or ViperMOD to enable the overclocked speeds. The best one is EzOverclock; all you need is root. You install the app and you're instantly overclocked to a top speed of 1.6GHz. It changes the speeds in the Asus quick settings: after install, Performance mode becomes 1.6GHz, Balanced 1.4GHz, and Power Saving stays the same at 1GHz. The other great thing about this app is that you can manually set the speed you want for each power mode; by default it's set to what I listed above. All of those apps are free and in the Prime's development section.
Edit: rooting, unlocking the device, and running a custom ROM from the development section can also get you overclocked.
Based on what people have posted so far, it seems NVIDIA is just listing the top speed the chip is capable of, period. That top-speed spec is actually for the new Infinity pad, which will ship stock with those top speeds: 1.5GHz on a single core or 1.4GHz on four cores.
Click to expand...
Click to collapse
Thanks for the info on overclocking. We had talked about it before, when I first saw the higher (disabled) speeds, and we talked about battery life: good if you only run the overclock for games and such, if I recall correctly. I have wanted to overclock for some time but have been resistant to leaving it OC'd. What is the best way to toggle the OC on and off, versus removing the app? Should I just run Balanced for most use, then flip to Performance for gaming, etc.?
EDIT: Also, if NVIDIA is stating those are the top speed capabilities, does that dash hopes of cranking these up to 1.7/1.8 and beyond? (To the Asus Eee Pad Infinity... and beyond!) If so, that kinda sucks, though the tablet is plenty fast as it stands... we all like to see how far we can take stuff, though; it's our nature. I have a 12-cylinder Jaguar that was hopelessly undertuned and oversmogged, putting out just over 300 hp. Not any more, hehe...
Mcoupe said:
As far as I understand, on this side of the pond we won't have the Tegra 3 in the One X. It looks like we will get a dual-core chip.
Click to expand...
Click to collapse
ACTUALLY, the AT&T HTC One X with the S4 Snapdragon chip just had its release date pushed back to May 6th. http://www.talkandroid.com/103253-a...idForums+(Android+News,+Rumours,+and+Updates)
A GOOD REASON for that could be the various reports from a few weeks back that Qualcomm was having manufacturing issues with the S4 Snapdragon chip. Production had completely stopped for some reason, a manufacturing problem that was said to be delaying most devices announced with the S4 chip. The source on the manufacturing issue was posted in that thread a few weeks back when I posted about it. I don't think the S4 chip has been produced in any real bulk yet, especially with the issues in the manufacturing process that stopped production.
So the longer the S4 chip is delayed, the better for Tegra 3: that much longer for Tegra 3 to hold superiority over other Android chips. If the first S4 tablet doesn't show until May or June at the earliest, Tegra 3 will have held the top spot in Android for six months. For a chip to be successful in the mobile market it really only needs to be dominant for 4-6 months, given how fast the mobile tech industry moves.
I ran a benchmark on Antutu and it showed a CPU frequency of 1600.. See attached.
Sent from my Transformer Prime TF201 using XDA Premium HD app
demandarin said:
So the longer the S4 chip is delayed, the better for Tegra 3: that much longer for Tegra 3 to hold superiority over other Android chips. If the first S4 tablet doesn't show until May or June at the earliest, Tegra 3 will have held the top spot in Android for six months. For a chip to be successful in the mobile market it really only needs to be dominant for 4-6 months, given how fast the mobile tech industry moves.
Click to expand...
Click to collapse
I had been meaning to start a thread about something like this......
The Galaxy S3... what is NVIDIA's answer to the Exynos quad-core chip? Lots of rumors have it clocked at 1.8-2.0GHz and with 2GB of RAM.
If this holds true, it looks like this phone will be more powerful than our Prime.
David Dee said:
I ran a benchmark on Antutu and it showed a CPU frequency of 1600.. See attached.
Sent from my Transformer Prime TF201 using XDA Premium HD app
Click to expand...
Click to collapse
OK, what exactly is going on here....
SmartAs$Phone said:
Thanks for the info on overclocking. We had talked about it before, when I first saw the higher (disabled) speeds, and we talked about battery life: good if you only run the overclock for games and such, if I recall correctly. I have wanted to overclock for some time but have been resistant to leaving it OC'd. What is the best way to toggle the OC on and off, versus removing the app? Should I just run Balanced for most use, then flip to Performance for gaming, etc.?
EDIT: Also, if NVIDIA is stating those are the top speed capabilities, does that dash hopes of cranking these up to 1.7/1.8 and beyond? (To the Asus Eee Pad Infinity... and beyond!) If so, that kinda sucks, though the tablet is plenty fast as it stands... we all like to see how far we can take stuff, though; it's our nature. I have a 12-cylinder Jaguar that was hopelessly undertuned and oversmogged, putting out just over 300 hp. Not any more, hehe...
Click to expand...
Click to collapse
The best and easiest overclocking method is simply installing the EzOverclock app; all you have to be is rooted. As soon as you install it you're instantly overclocked, and your Asus power-mode quick settings are changed immediately: Performance becomes 1.6GHz, Balanced 1.4GHz, and Power Saving stays the same at 1GHz. You can easily change any of those modes within the app if you choose. My setup is Performance at 1.6GHz, Balanced at 1.2GHz, and Power Saving at 1GHz, so I normally run in Balanced mode (1.2GHz); whenever I want 1.6GHz I just tap Performance in the Asus quick settings. Remember that whenever the tablet restarts, it defaults back to Balanced mode (1.2GHz for me).
So this app makes overclocking seamless and simple: you use the regular Asus power modes like you always do, and the ability to change the speeds of those modes if you choose is priceless.
I have been using EzOverclock since day one with my Prime: Eco mode at 1GHz, Balanced at 1GHz too, and Performance at 1.6GHz.
I hate how the screen looks in Eco mode, so I'm almost always in Balanced @ 1GHz. So far I can play anything, even 1080p video through HDMI, without lag, and still get great battery life.
For heavier stuff, multitasking and so on, it goes into super 1.6GHz mode.
Related
I have a question about the 3D's dual-core chip, and I'd like clarification on the vague answers I'm getting by searching this site and Google. I've read that the cores are asynchronous, basically meaning the second core doesn't do much work unless needed, whereas others like the Tegra 2 and Exynos have both cores running (or something similar), and that this is affecting the benchmark scores. I also read that you'd basically double the 3D's score to get a more accurate reading. Can anyone confirm or further explain this?
Yes, asynchronous means something operates on another thread while the main thread remains available, which allows tasks to be managed more efficiently. Just because it doesn't score high on a benchmark doesn't mean it won't perform. It also allows for better battery performance.
I haven't slept for the past 12 hours, so if this doesn't help you, just let me know and I will fully elaborate on how the processor operates in the phone. Now time for bed :'(
In short, asynchronous operation means that a process operates independently of other processes.
Think of transferring a file: a separate thread will be utilized for doing so. You can then do other things, such as playing with the UI (Sense, say), since you will be using the main thread. If anything happens to the transferring file (such as it failing), you can cancel it, because it is independent, on another thread.
I hope this makes sense man, kind of tired. Now I'm really going to bed.
Sent from my PC36100 using XDA App
To be more specific, by asynchronous they mean that each core can run at a different clock speed: core 1 could be at 1.2GHz while core 2 is at 200MHz. Most multi-core processors are synchronous, meaning all the cores run at the same speed.
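A toy model of that difference, assuming a simple demand-driven governor (the frequency table and per-core loads below are invented for illustration):

```python
# Asynchronous vs synchronous scaling: with async (per-core) clocking, each
# core picks the lowest table frequency that covers its own demand; with
# synchronous clocking, every core is dragged up to the busiest core's speed.

FREQS_MHZ = [200, 500, 1000, 1200]  # example cpufreq table

def pick_freq(load_mhz: int) -> int:
    """Lowest available frequency that satisfies this core's demand."""
    return next(f for f in FREQS_MHZ if f >= load_mhz)

loads = [1100, 150]  # core 0 busy, core 1 nearly idle
async_freqs = [pick_freq(load) for load in loads]
sync_freqs = [max(async_freqs)] * len(loads)
print("async:", async_freqs)  # the idle core stays at its floor
print("sync: ", sync_freqs)   # both cores forced to the top speed together
```

The idle core sitting at its floor frequency instead of being dragged up is where the battery savings come from.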
donatom3 said:
To be more specific, by asynchronous they mean that each core can run at a different clock speed: core 1 could be at 1.2GHz while core 2 is at 200MHz. Most multi-core processors are synchronous, meaning all the cores run at the same speed.
Click to expand...
Click to collapse
^This too
Sent from my PC36100 using XDA App
I was also very curious to learn a little more about the async cores and how they differ from a standard always-on dual-core architecture.
The first page/video I found talks about the Snapdragon core specifically.
http://socialtimes.com/dual-core-snapdragon-processor-qualcomm-soundbytes_b49063
From what I've gathered, it comes down to using the second core, and thus more power, only when needed, minimizing voltage and heat to preserve battery life.
The following video goes into similar and slightly deeper detail about the processor found in the EVO 3D. The demo runs a processor benchmark with a real-time visual of the two cores' usage; you can briefly see the two cores trading the workload off between each other. As mentioned elsewhere on this forum, by splitting a workload between two cores the chip can use less total power than putting the same workload on a single core. I'm sure someone else will chime in with additional detail. Also, after seeing some of these demos, I'm inclined to think the processor in the EVO 3D is actually stable at 1.5GHz but has been underclocked to 1.2GHz to conserve battery. Only time spent with it in our hands will tell.
Another demo of the MSM8660 and Adreno 220 GPU found in the EVO 3D. It's crazy to think we've come this far in mobile phone technology.
What occurred to me is how complex community ROMs for such a device may become with the addition of video drivers that may continue to be upgraded and improved (think early video-card tweaks for the PC). I wonder how easy or difficult it will be to get our hands on them, possibly through extraction from updated stock ROMs.
EDIT: As far as benchmarks are concerned, I blame the inability of today's benchmarking apps to account for async cores or properly utilize them during testing. Because the current tests are likely spread across cores in a way that favors efficiency, the scores come out much lower than the true power and performance the chips can produce. I think of it as putting a horsepower governor on a Ferrari.
Thanks for the explanation, everyone.
The best demonstration is in the first video posted; notice when Charbax looks at the monitor. There on the top right are the frequencies of the two cores, and you'll see both of them jumping around a lot, independent of each other. Using the cores on demand, only when needed, ends up saving a lot of battery power without costing any performance.
Harfainx said:
The best demonstration is in the first video posted; notice when Charbax looks at the monitor. There on the top right are the frequencies of the two cores, and you'll see both of them jumping around a lot, independent of each other. Using the cores on demand, only when needed, ends up saving a lot of battery power without costing any performance.
Click to expand...
Click to collapse
Actually, I was thinking there could be a performance gain, not just battery savings. If the manufacturer knows they only have to clock one core up when needed, they can be more aggressive with their timings and have that core clock up faster than a normal dual-core would, since they don't have to ramp up both cores when only one needs full speed.
I wonder if the drop to 1.2GHz also serves to keep heat under control. It might not just be battery savings; maybe the small case of a phone doesn't allow for proper cooling to run 1.5GHz safely.
I'd love to see some confirmation that the asynchronous nature of this chipset is what's responsible for the seemingly lackluster benchmarking.
mevensen said:
I wonder if the drop to 1.2GHz also serves to keep heat under control. It might not just be battery savings; maybe the small case of a phone doesn't allow for proper cooling to run 1.5GHz safely.
I'd love to see some confirmation that the asynchronous nature of this chipset is what's responsible for the seemingly lackluster benchmarking.
Click to expand...
Click to collapse
The "horrible" benchmark scores are simply due to the tests' inability to account for async core performance. Wait till the tests can take this into consideration.
Sent from my HERO200 using XDA Premium App
RVDigital said:
The "horrible" benchmark scores are simply due to the tests' inability to account for async core performance. Wait till the tests can take this into consideration.
Sent from my HERO200 using XDA Premium App
Click to expand...
Click to collapse
I went through all of your links and didn't see anything that confirms the benches are affected by the asynchronous nature of the chipset. It's not that I don't believe you; I actually had the same theory when the benches first came out, I just don't have any proof or explanation of it. Do you have a link with more solid evidence that this is the case?
NVIDIA actually tells a different story (of course)
http://www.intomobile.com/2011/03/24/nvidia-tegra-2-outperforms-qualcomm-dualcore-1015/
AnandTech's article does explain some of the differences
http://www.anandtech.com/show/4144/...gra-2-review-the-first-dual-core-smartphone/4
It appears that Snapdragon (Scorpion) will excel in some tasks (FPU, non-bandwidth-constrained applications) but fall short in others.
I'm pretty sure none of the benchmark apps have been updated since the Sensation's release, so yeah... how could they update the apps to use asynchronous processors when the only phones using them have only recently been released?
Sent from my zombified gingerbread hero using XDA Premium App
I had the G2x for like 3 days and never got to root it (poor service where I live). But could the cores be set to specific frequencies independently when rooted, like on computers?
tyarbro13 said:
I had the G2x for like 3 days and never got to root it (poor service where I live). But could the cores be set to specific frequencies independently when rooted, like on computers?
Click to expand...
Click to collapse
Yeah, if someone were to develop an app for that, I don't see why not.
Sent from my PC36100 using XDA App
Hmm...
If a program such as Smartbench (which takes advantage of dual cores) is stressing both cores to 1.2GHz, then regardless of whether both cores are active the bench will be accurate.
I would rather NOT have asynchronous cores, as there would be lag during frequency changes...
Ex:
2 cores running at 500MHz vs. 1 core @ 1GHz with the other inactive.
The 2 cores will produce less heat and use less energy...
Maedhros said:
Hmm...
If a program such as Smartbench (which takes advantage of dual cores) is stressing both cores to 1.2GHz, then regardless of whether both cores are active the bench will be accurate.
I would rather NOT have asynchronous cores, as there would be lag during frequency changes...
Ex:
2 cores running at 500MHz vs. 1 core @ 1GHz with the other inactive.
The 2 cores will produce less heat and use less energy...
Click to expand...
Click to collapse
They're dual, so it would be better for them to run asynchronously. Besides, it's a phone, so there will be no noticeable lag from frequency changes. Two cores running at 500MHz will perform better than one core at 1GHz.
Sent from my PC36100 using XDA App
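The heat and energy side of this is easy to sanity-check with the standard dynamic-power relation P ≈ C·V²·f: splitting work across two slower cores wins mainly because lower frequencies permit lower voltage, and voltage enters squared. The voltage figures below are illustrative assumptions, not actual Snapdragon specs:

```python
# Dynamic CMOS power scales with switched capacitance * voltage^2 * frequency.
# Two cores at half the frequency, each at a lower voltage, draw less total
# power than one core at full frequency and full voltage.

def dynamic_power(cap_f: float, volts: float, freq_hz: float) -> float:
    return cap_f * volts**2 * freq_hz

C = 1e-9  # arbitrary switched capacitance; it cancels out in the ratio
one_core_1ghz = dynamic_power(C, 1.2, 1.0e9)          # assumed 1.2 V at 1 GHz
two_cores_500mhz = 2 * dynamic_power(C, 1.0, 0.5e9)   # assumed 1.0 V at 500 MHz
print(round(two_cores_500mhz / one_core_1ghz, 3))     # -> 0.694, i.e. ~30% less power
```

Whether two 500MHz cores also *perform* as well as one 1GHz core depends on how parallel the workload is, but the heat/energy claim holds under these assumptions.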
tyarbro13 said:
I had the G2x for like 3 days and never got to root it (poor service where I live). But could the cores be set to specific frequencies independently when rooted, like on computers?
Click to expand...
Click to collapse
This is something the hardware needs to be capable of; software can only do so much. As far as I've seen, Tegra isn't capable of it.
I read the AnandTech article and came to the conclusion that in everyday tasks you might not see the difference between the two, even if Tegra 2 benches higher. The main thing people don't talk about is the GPU: the Adreno 220 is a powerhouse GPU and will probably still stand strong when Tegra 3 comes out.
DDiaz007 said:
They're dual, so it would be better for them to run asynchronously. Besides, it's a phone, so there will be no noticeable lag from frequency changes. Two cores running at 500MHz will perform better than one core at 1GHz.
Sent from my PC36100 using XDA App
Click to expand...
Click to collapse
Huh... what are you saying? Sorry, I don't understand... On one hand you say asynchronous is better, and on the other you're saying 2 cores @ 500MHz will work better?
nkd said:
I read the AnandTech article and came to the conclusion that in everyday tasks you might not see the difference between the two, even if Tegra 2 benches higher. The main thing people don't talk about is the GPU: the Adreno 220 is a powerhouse GPU and will probably still stand strong when Tegra 3 comes out.
Click to expand...
Click to collapse
What?!?
The Adreno 220 is a horrible GPU. At BEST it is equal to the GPU in the original SGS.
The reason the benches are so different is that Qualcomm has made NO improvements to the CPU: the Desire HD's CPU is the same as the Sensation's, while the SGS2 and Tegra have IMPROVED CPUs.
ARM 7 vs. ARM 9?
Maedhros said:
Huh... what are you saying? Sorry, I don't understand... On one hand you say asynchronous is better, and on the other you're saying 2 cores @ 500MHz will work better?
What?!?
The Adreno 220 is a horrible GPU. At BEST it is equal to the GPU in the original SGS.
The reason the benches are so different is that Qualcomm has made NO improvements to the CPU: the Desire HD's CPU is the same as the Sensation's, while the SGS2 and Tegra have IMPROVED CPUs.
ARM 7 vs. ARM 9?
Click to expand...
Click to collapse
Dude, go back to sleep. You have no clue what you're talking about.
Sent from my PC36100 using XDA Premium App
X10 is garbage! This is outrageous!
Yes, really, they got it working. If you want it so badly, try porting it yourself.
Sent from my MB860 using XDA App
cry about it?
If you want it so badly for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
dLo GSR said:
cry about it?
If you want it so badly for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
Click to expand...
Click to collapse
Oh snap. That was awesome.
Sent from my MB860 using XDA App
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
firefox3 said:
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
Click to expand...
Click to collapse
Good news man
Sent from my MB860 using XDA App
Given that there are currently no EGL libs for anything except PowerVR SGX devices under ICS, and that they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit a range of devices.
Sure, the Desire, HD, X10, and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers. But they're just for fun with the framebuffer driver, given how much of ICS's UI rendering is designed around GPU acceleration; you wouldn't want to use one day to day. The browser is surprisingly responsive on the Desire, though (I'd say more so than on GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before): glimmers of hope for ICS's eventual performance on older devices. The keyboard lags like you wouldn't believe, though!
The Atrix should fly under 4.0.1, though, if it ever happens. Bear in mind that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower-resolution screen, and we can overclock past stock speeds.
Javi97100 said:
Good news man
Sent from my MB860 using XDA App
Click to expand...
Click to collapse
It's turning out to be harder than I thought... I think no one will get it until official updates come out for other phones.
Azurael said:
Given that there are currently no EGL libs for anything except PowerVR SGX devices under ICS, and that they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit a range of devices.
Sure, the Desire, HD, X10, and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers. But they're just for fun with the framebuffer driver, given how much of ICS's UI rendering is designed around GPU acceleration; you wouldn't want to use one day to day. The browser is surprisingly responsive on the Desire, though (I'd say more so than on GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before): glimmers of hope for ICS's eventual performance on older devices. The keyboard lags like you wouldn't believe, though!
The Atrix should fly under 4.0.1, though, if it ever happens. Bear in mind that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower-resolution screen, and we can overclock past stock speeds.
Click to expand...
Click to collapse
So EGL = GPU driver? If that's the only setback, would it be possible to get an ICS ROM running with software rendering as a proof of concept, or are there other pieces missing?
GB/CM7 is pretty good on the Atrix; if we don't see ICS for a few months, it doesn't hurt us in any way. I'd like to think most of us can be patient if we lack the skills to help.
I noticed the Captivate got a port of it too, since i9000 ROMs and Captivate ROMs are interchangeable. I think it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure it will soon be on the iPhone too, if it isn't already. It seems like iPhones always get new Android versions kind of early. I'm not sweating it; I love my Atrix in its current state.
According to anandtech, Tegra 2 support is essentially ready, so I think as long as nvidia releases the source for ics (libs?), someone will try to port it. Hell, I have a good 5 weeks during break, I might as well try then.
Sent from my MB860 using XDA App
Azurael said:
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, Desire, HD, X10 and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers, but they're just for fun with the framebuffer driver given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say more so than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind the fact that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and can overclock past its stock speeds.
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Buddy, check out any of the kernels available in the dev thread and you'll see that the GPUs are overclocked.
WiredPirate said:
I noticed the Captivate got a port of it too, since i9000 ROMs and Cap ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it's not already it will soon be on iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it I love my Atrix in its current state.
Doubt the iPhone will see ICS; the newest model that can run Android, as far as I know, is the iPhone 3G, which was incredibly slow under Gingerbread.
mac208x said:
X10 is garbage! this is outrageous!
222 posts and zero thanks? Is this what you do, go around XDA and post useless threads, like the guy complaining about returning home early despite nobody asking him "to get MIUI ported on his grandma's phone"?
Are you guys related by any chance?
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual channel memory controller and high clock (and that's probably the directly relevant part to UI rendering to be honest, though as I said - lower resolution screen ) but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and games on mobiles are starting to move that way now, following the desktop landscape over the past 5 years or so.) Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz which ought to even things out in the GLMark 720p tests somewhat even if they are biased to one architecture or the other. While the GPU in OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus to maybe skip the next generation of mobile hardware and tide it over until Cortex A15-based SoCs on 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU performance bound with a 1.4GHz dual-core A9 - and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: Disappointing early Tegra 3 benchmarks - although it does seem GLMark stacks the odds against NVidia GPUs more than other benchmarks?)
Azurael said:
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual channel memory controller and high clock (and that's probably the directly relevant part to UI rendering to be honest, though as I said - lower resolution screen ) but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and games on mobiles are starting to move that way now, following the desktop landscape over the past 5 years or so.) Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz which ought to even things out in the GLMark 720p tests somewhat even if they are biased to one architecture or the other. While the GPU in OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus to maybe skip the next generation of mobile hardware and tide it over until Cortex A15-based SoCs on 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU performance bound with a 1.4GHz dual-core A9 - and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: Disappointing early Tegra 3 benchmarks - although it does seem GLMark stacks the odds against NVidia GPUs more than other benchmarks?)
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clockspeed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The Geforce just doesn't seem to scale very well at all on mobile platforms. But yea, all Nvidia did with Tegra 3 was slap in 2 extra cores, clocked them higher, threw in the sorely missed NEON instruction set, increased the SIMDs on the GPU by 50% (8 to 12), and then tacked on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm with some bringing in a brand new architecture as well.
edgeicator said:
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clockspeed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The Geforce just doesn't seem to scale very well at all on mobile platforms. But yea, all Nvidia did with Tegra 3 was slap in 2 extra cores, clocked them higher, threw in the sorely missed NEON instruction set, increased the SIMDs on the GPU by 50% (8 to 12), and then tacked on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm with some bringing in a brand new architecture as well.
Don't you get tired of writing those long rants? We understand you know something about CPU architecture, and that Tegra isn't the best one out there, but damn man, it's the same thing in every thread. Just chill out and try to stay on topic for once
Sent from my MB860 using Tapatalk
edgeicator said:
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clockspeed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The Geforce just doesn't seem to scale very well at all on mobile platforms. But yea, all Nvidia did with Tegra 3 was slap in 2 extra cores, clocked them higher, threw in the sorely missed NEON instruction set, increased the SIMDs on the GPU by 50% (8 to 12), and then tacked on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm with some bringing in a brand new architecture as well.
I think you are not seeing the whole picture...
The Tegra 3 (et al.) is not just about its quad core implementation; remember that the GPU will offer 12 cores, which will translate into performance not seen as of yet on any other platform.
Benchmarks don't tell the whole story! Especially those benchmarking tools which are not Tegra 3 optimized yet.
Cheers!
Sent from my Atrix using Tapatalk
WiredPirate said:
I noticed the Captivate got a port of it too, since i9000 ROMs and Cap ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it's not already it will soon be on iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it I love my Atrix in its current state.
LOL I ran all the iDroid ports on my iPhone. Not one was even in alpha stage; I would not even count iDroid as a port since you can't use anything on it.
OK I've got mine on normal mode, and this kind of confirms my original thought that the 500MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently run tab! If my understanding of the way it works is right, these programs are still running in the background? Then it starts kicking in the other 4 and not just running on the 5th at 500MHz! I really think we'd see a speed boost if we can get that 5th core over 500. Yes it's supposed to save battery life, but I really don't think 500 is fast enough to run on its own. Your thoughts and observations?
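For anyone who wants to check what the visible cores are actually doing, here's a rough sketch of reading it straight from cpufreq sysfs. This assumes the standard Linux cpufreq layout (it may differ by kernel build), and on a device you'd run it from a root shell over adb; the `SYSFS_ROOT` override is just there so the logic can be tried against a fake tree.

```shell
#!/bin/sh
# Print each visible core's current and max clock from cpufreq sysfs.
# SYSFS_ROOT is overridable so the logic can be tested against a fake tree.
SYSFS_ROOT="${SYSFS_ROOT:-/sys/devices/system/cpu}"

print_core_freqs() {
    for cpu in "$SYSFS_ROOT"/cpu[0-9]*; do
        f="$cpu/cpufreq/scaling_cur_freq"
        m="$cpu/cpufreq/scaling_max_freq"
        # A core that has been hot-plugged out may not expose cpufreq files
        [ -r "$f" ] && [ -r "$m" ] || continue
        echo "$(basename "$cpu"): cur=$(cat "$f")kHz max=$(cat "$m")kHz"
    done
}

print_core_freqs
```

On a Tegra 3 you'd expect at most four cores listed here; the 500MHz companion core is swapped in by the chip itself and never shows up to the OS.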
markimar said:
OK I've got mine on normal mode, and this kind of confirms my original thought that the 500MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently run tab! If my understanding of the way it works is right, these programs are still running in the background? Then it starts kicking in the other 4 and not just running on the 5th at 500MHz! I really think we'd see a speed boost if we can get that 5th core over 500. Yes it's supposed to save battery life, but I really don't think 500 is fast enough to run on its own. Your thoughts and observations?
I'll check on this when I get home. This issue I'm assuming is with Honeycomb itself. We would assume that ICS would properly use those cores.
Sent from my Samsung Galaxy S II t989
I don't have it yet (mine gets delivered on Wed), but what you observed makes perfect sense. Can they change it to run at, say, a constant 800MHz, stepping down to 500MHz only for the most simple tasks? Obviously I too do not believe that 500MHz will be sufficient at all times to do screen scrolling and such on its own.
I'm really hoping that the few performance issues people are seeing are resolved in firmware updates and a Tegra 3 optimized version of ICS. Maybe Asus/Nvidia need to do more tweaking to HC before the ICS build is pushed, if it will take a while for ICS to arrive on the Prime (past January).
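The "constant 800MHz" idea above maps onto a standard cpufreq knob. A hedged sketch follows: the path is the generic Linux one and may vary by kernel, writing it needs a root shell, and the chip's own companion-core handoff sits below this layer, so the knob may not behave identically on Tegra 3.

```shell
#!/bin/sh
# Raise the frequency floor so the governor can't drop below a chosen clock.
SYSFS_ROOT="${SYSFS_ROOT:-/sys/devices/system/cpu}"

raise_min_freq() {  # usage: raise_min_freq <cpuN> <kHz>
    f="$SYSFS_ROOT/$1/cpufreq/scaling_min_freq"
    [ -e "$f" ] || return 1    # knob absent on this kernel
    echo "$2" > "$f"
}

# e.g. an 800MHz floor on the first core (root shell on the device):
# raise_min_freq cpu0 800000
```

The trade-off is exactly the one debated in this thread: a higher floor means smoother light use but worse idle battery drain.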
The cores are optimized just fine. They kick in when rendering a web page or a game, but go idle and use the 5th core when done. Games always render.
ryan562 said:
I'll check on this when I get home. This issue I'm assuming is with Honeycomb itself. We would assume that ICS would properly use those cores.
Sent from my Samsung Galaxy S II t989
Nothing's changed over HC in the way ICS uses h/w acceleration. And I'd assume apps using h/w acceleration do so via calls to the OS, not to the chip directly. So it appears what you've got is what you're going to get.
---------- Post added at 06:59 PM ---------- Previous post was at 06:55 PM ----------
markimar said:
OK I've got mine on normal mode, and this kind of confirms my original thought that the 500mhz 5th core is clocked to low. I find the pad actually speeds up when I have multiple items in my recently run tab! If my understanding of the way it works these programs are still running in the background right? Then it starts kicking in the other 4 and not just running on the 5th at 500mhz! I really think we'd see a speed boost if we can get that 5th core over 500. Yes its supposed to save battery life but I really don't think 500 is fast enough to run on its own. You're thoughts and observations?
Do you have Pulse installed? A bunch of people using it were reporting stuttering where their lower-powered devices aren't. If you run it at full speed, does it stutter? One of the hypotheses is that it's the cores stepping up and down that's causing the stuttering.
BarryH_GEG said:
Nothing's changed over HC in the way ICS uses h/w acceleration. And I'd assume apps using h/w acceleration do so via calls to the OS, not to the chip directly. So it appears what you've got is what you're going to get.
Also, correct me if I'm wrong, but I don't think that the OS knows about the fifth core? I believe the chip's own scheduler manages the transition between the quad-core and the companion core, not the Android scheduler.
Mithent said:
Also, correct me if I'm wrong, but I don't think that the OS knows about the fifth core? I believe the chip's own scheduler manages the transition between the quad-core and the companion core, not the Android scheduler.
That's the way I'd guess it would work. I don't think Android addresses different chips differently. I'd assume it's up to the SoC to manage the incoming instructions and react accordingly. If Android was modified for dual-core, I don't think it differentiates between the different implementations of dual-core chips. Someone with more h/w experience correct me if I'm wrong. Also, does anyone know if the chip manufacturer can add additional APIs that developers can write to directly, either instead of or in parallel with the OS? I ask because how can a game be optimized for Tegra if to the OS all chips are treated the same?
I tried out the power savings mode for a while. It seemed to perform just fine. The immediate difference is that it lowers the contrast ratio on the display. This happens as soon as you press the power savings tab. The screen will look like the brightness dropped a bit, but if you look closely you'll see it lowered the contrast ratio. The screen still looks good, but not as sharp as in the other 2 modes. The UI still seems to perform just fine. Plus I think the modes don't affect gaming or video playback performance. I read that somewhere, either AnandTech or Engadget. When watching vids or playing games, it goes into normal mode. So those things won't be affected no matter what power mode you're in, I think..lol
I was thinking of starting a performance mode thread, to see different people's results and thoughts on the different power modes. I read some people post that they just use it in power/battery savings mode. Some keep it in normal all the time. Others in balanced mode. Would be good to see how these different modes perform in real life usage, from a user perspective. I've noticed, so far, that in balanced mode the battery drains about 10% an hour. This is with nonstop use including gaming, watching vids, web surfing, etc. Now in battery savings mode, it drains even less per hour. I haven't run normal mode long enough to see how it drains compared to the others. One thing though, web surfing drains battery just as fast as gaming.
BarryH_GEG said:
I ask because how can a game be optimized for Tegra if to the OS all chips are treated the same?
I hate quoting myself but I found the answer on Nvidia's website. Any optimizations are handled through OpenGL. So games written to handle additional calls that Teg2 can support are making those calls through OpenGL, with the OS (I'm guessing) used as a pass-through. It would also explain why Tegra optimized games fail on non-Teg devices, because they wouldn't be able to process the additional requests. So it would appear that Teg optimization isn't being done through the OS. Again, correct me if I'm wrong.
BarryH_GEG said:
That's the way I'd guess it would work. I don't think Android addresses different chips differently. I'd assume it's up to the SoC to manage the incoming instructions and react accordingly. If Android was modified for dual-core, I don't think it differentiates between the different implementations of dual-core chips.
I did some research on it; here's what Nvidia say:
The Android 3.x (Honeycomb) operating system has built-in support for multi-processing and is capable of leveraging the performance of multiple CPU cores. However, the operating system assumes that all available CPU cores are of equal performance capability and schedules tasks to available cores based on this assumption. Therefore, in order to make the management of the Companion core and main cores totally transparent to the operating system, Kal-El implements both hardware-based and low level software-based management of the Companion core and the main quad CPU cores.
Patented hardware and software CPU management logic continuously monitors CPU workload to automatically and dynamically enable and disable the Companion core and the main CPU cores. The decision to turn on and off the Companion and main cores is purely based on current CPU workload levels and the resulting CPU operating frequency recommendations made by the CPU frequency control subsystem embedded in the operating system kernel. The technology does not require any application or OS modifications.
http://www.nvidia.com/content/PDF/t...e-for-Low-Power-and-High-Performance-v1.1.pdf
So it uses the existing architecture for CPU power states, but intercepts that at a low level and uses it to control the companion core/quad-core switch?
Edit: I wonder if that means that tinkering with the scheduler/frequency control would allow the point at which the companion core/quad-core switch happens to be altered? If the OP is correct, this might allow the companion core to be utilised less if an increase in "smoothness" was desired, at the cost of some battery life?
Mithent said:
I wonder if that means that tinkering with the scheduler/frequency control would allow the point at which the companion core/quad-core switch happens to be altered? If the OP is correct, this might allow the companion core to be utilised less if an increase in "smoothness" was desired, at the cost of some battery life?
So what we guessed was right. The OS treats all multi-cores the same and it's up to the chip maker to optimize requests and return them. To your point, what happens between the three processors (1+1x2+1x2) is black-box and controlled by Nvidia. To any SetCPU-type program it's just going to show up as a single chip. People have tried in vain to figure out how to make the Qualcomm dual-cores act independently, so I'd guess Teg3 will end up the same way. And Nvidia won't even publish their drivers, so I highly doubt they'll provide any outside hooks to control something as sensitive as the performance of each individual core in what they're marketing as a single chip.
BarryH_GEG said:
Do you have Pulse installed? A bunch of people using it were reporting stuttering where their lower-powered devices aren't. If you run it at full speed, does it stutter? One of the hypotheses is that it's the cores stepping up and down that's causing the stuttering.
I have been running mine in balanced mode, have had pulse installed since day one, no lag or stuttering in anything. games, other apps work fine.
Well my phones when clocked at 500 so I wouldn't be surprised
Sent from my VS910 4G using xda premium
I have used the cpu5.sh script to overclock the Prime to 1.6GHz, and the benchmark shows a good improvement. I started streaming a 720P movie, mkv format, and everything played perfectly, but after only 5 minutes the whole aluminum frame started to heat up noticeably. So I suggest waiting for some tested overclocked ROMs before overclocking, to be sure you don't blow it up.
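The contents of cpu5.sh aren't shown in this thread, but scripts like it generally just raise the cpufreq ceiling to a step the kernel's frequency table already contains - that's an assumption, not a description of cpu5.sh itself. A minimal sketch (root required, and as the post shows, heat is the trade-off):

```shell
#!/bin/sh
# Raise scaling_max_freq on every visible core to a given step.
SYSFS_ROOT="${SYSFS_ROOT:-/sys/devices/system/cpu}"

set_max_freq() {  # usage: set_max_freq <kHz>
    for f in "$SYSFS_ROOT"/cpu[0-9]*/cpufreq/scaling_max_freq; do
        [ -e "$f" ] && echo "$1" > "$f"
    done
}

# e.g. set_max_freq 1600000   # the 1.6GHz step already in the kernel's table
```

The kernel will reject values outside its frequency table, which is why "unlocking" 1.6GHz this way is possible at all: the step is already defined, just not exposed as the default ceiling.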
You can't say it is the CPU tweak.
1.5 & 1.6 are built into this CPU. So, technically, you're not "overclocking", as you're not taking this chip beyond its design spec. It is a 1.6-capable chip.
Moto did the same thing in the first Droid. They sent the phone out "underclocked" and then later did an update and gave users a claimed speed boost, but all they did was open up the chip to its full capable speed & sold it as a speed improvement. That is all that has been done here; we just beat Asus to it.
Lock-N-Load said:
You can't say it is the CPU tweak.
1.5 & 1.6 are built into this CPU. So, technically, you're not "overclocking", as you're not taking this chip beyond its design spec. It is a 1.6-capable chip.
Moto did the same thing in the first Droid. They sent the phone out "underclocked" and then later did an update and gave users a claimed speed boost, but all they did was open up the chip to its full capable speed & sold it as a speed improvement. That is all that has been done here; we just beat Asus to it.
You're looking at it way too simply. The fact that the Tegra 3 chip is capable of running at a higher speed does not mean that in the Prime it is designed to do so.
Not saying you can't run at those speeds, but in extreme conditions (warm weather), it could be a problem.
Yeah, I don't buy into the "Prime is not designed to" argument, nor that it is the proven issue here for what I said. Given the CPU tweak he is talking about does not ramp all cores, I can't see an issue that would cause one to fret... but we can agree to disagree.
What the OP should have done was use CPU Spy and look and see for how long and IF he was even using 1.5 or 1.6 speeds.
Hey - I do think we need to be careful using these tweaks and things should be watched, but given the chip is 1.6-capable, I fully believe the Prime and the CPUs can take it just fine. And either way, this is not "overclocking", technically speaking.
I would imagine (could be wrong) that this chip has overheat, and quite possibly overvolt, protection. Almost every chip in the last 6-8 years has had this. I remember turning my Athlon X2 on with no heatsink and it actually stayed on for about 10 minutes before turning off. Even at 1.3-1.4 it can overheat. In fact the small overclock will add minimal heat until additional voltage is applied. I would be surprised if this overclock heated this chip up more than 5C vs stock.
Sent from my DROID RAZR using Tapatalk
benefit14snake said:
I would imagine (could be wrong) that this chip has overheat, and quite possibly overvolt, protection. Almost every chip in the last 6-8 years has had this. I remember turning my Athlon X2 on with no heatsink and it actually stayed on for about 10 minutes before turning off. Even at 1.3-1.4 it can overheat. In fact the small overclock will add minimal heat until additional voltage is applied. I would be surprised if this overclock heated this chip up more than 5C vs stock.
Sent from my DROID RAZR using Tapatalk
IT DOES. Viperboy already confirmed this while looking around in the kernel or whatever. He posted that response in his thread in the development section. IT HAS a failsafe mechanism. I would love to know more details about it though.
Plus what the OP said could be said about stock also. The Prime backplate gets hot in regular performance mode. Plus I've run the "real true" overclocking - the combination of Viperboy's control mod and System Tuner where all 4 cores are maxed out to 1.6GHz. Viperboy's control mod alone doesn't do that. The Prime runs fine and does get hot sometimes; it depends on what you're doing. Plus it's always good to have a battery temperature widget so you always see the temp. If the CPU is working like crazy, you will see the battery temp go up a lot. The Prime was actually designed with 1.6GHz in mind. That's why it's part of the kernel. All Asus did was disable it for battery longevity purposes.
Regarding the 1.6GHz and CPU Spy: I had just reset the timer before I started streaming, and afterwards it showed it had spent 99% of the time at 1.6GHz.
Sent from my Transformer Prime TF201 with Tapatalk
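That 99% figure comes from the same place CPU Spy reads: the kernel's cpufreq time-in-state accounting. A minimal sketch of the calculation (the stats path is the usual Linux one and may vary or be absent on a given kernel):

```shell
#!/bin/sh
# Summarise a time_in_state file ("<freq_kHz> <ticks>" per line) as percentages.
STATS="${STATS:-/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state}"

time_in_state_report() {
    awk '{ t[$1] = $2; total += $2 }
         END { for (f in t) printf "%s %.0f%%\n", f, 100 * t[f] / total }' "$1"
}

if [ -r "$STATS" ]; then time_in_state_report "$STATS"; fi
```

Resetting the timer in CPU Spy just snapshots this file and diffs against it later, which is why the percentages only cover the window you care about.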
Guys, look up "speed binning". Not every CPU is capable of max speed.
tinky1 said:
Guys, look up "speed binning". Not every CPU is capable of max speed.
http://www.pcpitstop.com/news/maxpc/overclock.asp
I read it. Long story short, Tegra 3 can easily handle it. It's all about money. Why would Asus/Nvidia release their first quad core device already maxed out? They want people to be more hyped up down the road when they easily enable the 1.6GHz on future devices, which will have the exact same Tegra 3 chip in them. The Prime was their testing device for future speed increases on future devices. Another reason they didn't enable it was to promote battery longevity. If it wasn't possible or supported or tested already, it wouldn't be in the kernel for us to easily enable it with root access.
If speed binning was an issue, most people would need additional voltage to get to 1.6GHz. While this may be the case, in my experience (huge PC overclocker) I don't believe it to be. I have personally achieved a 1.8 to 3.4GHz overclock at -25C temps. So I have a bit of experience.
Sent from my DROID RAZR using Tapatalk
I posted this in another thread, but the Lenovo tablet running the Tegra 3, shown at CES, is being advertised as running at 1.6ghz.
No worries about running at that speed. Once the bootloader is opened up, I'd feel perfectly fine running at 2.0ghz myself.
Sent from my cellular telephone using magic
So I've been lurking on the Prime's forums for a while now and noticed the debate over whether the new Qualcomm dual core will be better or the current Tegra 3 that the Prime has. Obviously if both were clocked the same then the Tegra 3 would be better. Also I understand that the GPU of the Tegra 3 is better. However, for a normal user (surf web, play a movie, songs etc) isn't a dual core at 1.5GHz better, in that an average user will rarely use more than 2 cores? The way I understand it, each core is able to handle 1 task, so in order to activate the 3rd core you would have to have 3 things going on at the same time? Could someone please explain this to me?
First of all, the Tegra 3 can go up to 1.6GHz. Secondly, all 4 cores can be utilized by a multi-threading app. Lastly, battery is great on the Tegra 3 due to the companion core.
jdeoxys said:
First of all, the Tegra 3 can go up to 1.6GHz. Secondly, all 4 cores can be utilized by a multi-threading app. Lastly, battery is great on the Tegra 3 due to the companion core.
But the native clock for that Qualcomm would be 1.5, meaning o/c can take it higher. Also, doesn't being dual core compared to quad core give it an edge in battery? You do bring up a good point with the multi-threading app. Also, to clarify, I am not standing up for the Qualcomm chip or putting down the Tegra 3, just trying to get things straight.
Thanks
Hey I'm the ....idiot aboard here....lol
But the Tegra 3 has a companion core, a fifth core, that takes over when the tablet is not stressed, thus saving the battery.
I am just repeating what I have read, I have no knowledge of how it all works. I guess that is how we can get better battery life.
Just trying to help the OP, maybe some one way smarter can chime in. Shouldn't be hard....lol
Quad core is better by far. On low level tasks, simple things, and screen off/deep sleep, the companion core takes over, meaning it's running on a low powered single core. This companion core only has a max of 500MHz. So in deep sleep or low level tasks, the companion core alone is running everything at only 102MHz-500MHz, most of the time on the lower end. Therefore Tegra 3 has the better battery life, since all its low power tasks are run by a single low powered companion core. That's 1 low powered core compared to 2 high powered cores trying to save battery. Quad core better all around. We haven't even begun real overclocking yet. The 1.6GHz speed was already in the kernel, so if you're rooted and using Vipercontrol or ATP Tweaks or the Virtuous ROM, you can access those speeds at any time. Once we really start overclocking higher than 1.6GHz we will have an even more superior advantage. Anyone knows 4 strong men are stronger than 2..lol. Tegra 3 and Nvidia are the future. Tegra 3 is just the chip that kicked down the door on an evolution of mobile SoCs.
---------- Post added at 10:13 PM ---------- Previous post was at 10:06 PM ----------
If you really want to learn the ins and outs of the Tegra 3 - all the details, and how it's better than any dual core - check out this thread I made. I have a whitepaper attachment in that thread you can download and read. It's made by Nvidia themselves and goes into great detail on the Tegra 3, by the people who created it. Check it out.
http://forum.xda-developers.com/showthread.php?t=1512936
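One concrete way to see the whitepaper's "transparent companion core" point for yourself: Android's core mask only ever lists the main cores. A sketch (standard Linux sysfs path, assumed rather than Prime-verified):

```shell
#!/bin/sh
# Show which cores the OS currently has online. The companion core is
# swapped in and out by the chip itself and never appears in this mask.
SYSFS_ROOT="${SYSFS_ROOT:-/sys/devices/system/cpu}"

online_cores() {
    cat "$SYSFS_ROOT/online" 2>/dev/null || echo "unknown"
}

echo "online cores: $(online_cores)"
```

On a Tegra 3 you'd expect ranges like "0-3" under load or "0" at near-idle, but never a fifth entry, which is exactly why tools like SetCPU can't target the companion core directly.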
aamir123 said:
But the native clock for that Qualcomm would be 1.5, meaning o/c can take it higher. Also, doesn't being dual core compared to quad core give it an edge in battery? You do bring up a good point with the multi-threading app. Also, to clarify, I am not standing up for the Qualcomm chip or putting down the Tegra 3, just trying to get things straight.
Thanks
The maximum clock speed isn't all that important, since during tasks like web browsing, watching videos & movies and listening to music you will never push the processor to its highest available clock speed anyway. All mobile devices will underclock their processors so that you rarely have unused clock cycles eating up battery life. So all things being relatively equal performance would be about the same between both tablets during these types of lightweight tasks.
If you have a lot of background processes running, then the quad-core system might have an edge in performance since theoretically different tasks can be pushed off to different processors. However this use case is rarely found in Android. You might have an app checking weather or syncing photos in the background, or you might have music playing while you web surf, but those are generally fairly lightweight tasks that usually won't test the processor performance of your device.
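That background-task scenario can be sketched in a few lines. This is just an illustration, not Android code: the threads stand in for background services, and the job names are made up. The point is that independent lightweight jobs run concurrently, so the kernel scheduler is free to spread them across cores:

```python
# Sketch: independent lightweight background jobs, like the weather-check /
# photo-sync / music examples above. Because each job is independent, the
# scheduler can place them on different cores. Threads stand in for Android
# background services here; the job names are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def background_job(name):
    # Placeholder for a lightweight task (a sync, a check, a poll).
    return f"{name}: done"

jobs = ["weather", "photo-sync", "music", "email"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(background_job, jobs))

print(results)
```

As the post says, though, jobs this light won't stress any modern chip, quad core or dual.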
In tasks that will stress your processor, such as 3D gaming, quad cores have a very large advantage over dual core systems, despite the slight difference in maximum clock speeds. In addition, the Tegra 3 has a more powerful GPU than the new Qualcomm chip, which will definitely make a noticeable difference in gaming performance.
Now when it comes to ultra-low power tasks or when the tablet is on Standby, the Tegra 3 uses its "companion core" which has incredibly low power requirements, so it can continue to sync your email, twitter and weather updates for days (or weeks) while having very little impact on the Transformer Prime's battery.
So in short, the Tegra 3 is more likely to outperform the Qualcomm in situations where you actually need extra performance. In light tasks, performance between the two should be about the same. Battery life is yet to be definitively determined; however, the Tegra 3's ultra-low power companion core should give it an edge when only doing light tasks or on standby.
Keep in mind, the Tegra 3 in the TF Prime has a maximum clock speed of 1300MHz with all four cores active, and 1400MHz on a single core. If all things were equal, a difference of 100-200MHz on a 1GHz+ processor is practically unnoticeable in daily usage.
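A quick back-of-envelope check puts that clock gap in perspective (using the figures quoted in this thread):

```python
# Back-of-envelope: how big is a 100-200 MHz gap on a ~1.3 GHz chip?
# Clock figures are the ones discussed in this thread.
base_mhz = 1300
for delta_mhz in (100, 200):
    pct = 100.0 * delta_mhz / base_mhz
    print(f"{delta_mhz} MHz on top of {base_mhz} MHz is a {pct:.1f}% bump")
```

A bump in the single-digit to low-teens percent range, on tasks that rarely peg the clock anyway, is hard to feel in daily use.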
almightywhacko said:
The maximum clock speed isn't all that important, since during tasks like web browsing, watching videos & movies and listening to music you will never push the processor to its highest available clock speed anyway. ...
Click to expand...
Click to collapse
Wow! Thanks for taking the time to break it down for me like that! I understand exactly where you're coming from and now have to agree.
demandarin said:
Quad core is better by far.
Click to expand...
Click to collapse
At least that is what Nvidia would like you to think.
The Tegra 3 uses an older ARM core for its quad core design, while Qualcomm uses their own ARM instruction set compatible core for their Krait S4 design. In most current benchmarks the Qualcomm Krait S4 dual core seems to outpace the Tegra 3 by quite a large margin. And of course Krait will be expanded to quad core later this year.
http://www.anandtech.com/show/5563/qualcomms-snapdragon-s4-krait-vs-nvidias-tegra-3
Dave_S said:
At least that is what Nvidia would like you to think.
The Tegra 3 uses an older ARM core for its quad core design, while Qualcomm uses their own ARM instruction set compatible core for their Krait S4 design. ...
Click to expand...
Click to collapse
There's already another thread on what you just mentioned, and the Krait claims were easily shot down. Tegra 3 is still the better chip overall. Plus the Krait GPU was subpar compared to Tegra 3's. We have more links and stuff in the other thread showing the Prime is still right up there.
demandarin said:
There's already another thread on what you just mentioned, and the Krait claims were easily shot down. ...
Click to expand...
Click to collapse
As unlikely as that seems considering the slower cores that Nvidia uses, links to real benchmarks (not self-serving white papers) would be appreciated. I have glanced at your Tegra 3 thread but have not read it all the way through, since it seemed to depend a lot on a white paper and not real comparison tests. It is true that the current GPU the Krait uses is not as fast as the one in the Tegra 3, but graphics is only one element of overall performance. The only benchmarks I have seen Tegra beat Krait on were benchmarks that emphasized more than two threads, and then not by much.
Dave_S said:
As unlikely as that seems considering the slower cores that Nvidia uses, links to real benchmarks (not self-serving white papers) would be appreciated. ...
Click to expand...
Click to collapse
It's not my Tegra 3 thread I'm talking about. I think it's the Prime alternatives thread created by shinzz. We had a huge debate over it. More benchmarks and supporting arguments are in that thread. Check it out if you get the chance.
demandarin said:
It's not my Tegra 3 thread I'm talking about. I think it's the Prime alternatives thread created by shinzz. ...
Click to expand...
Click to collapse
Thanks, will do. Gotta run to a doctor appointment right now though.
I frankly think the power savings from the fifth core are mostly hype. According to many battery tests I've read online, and my own experience with my Prime, it doesn't get much different battery life from dual core tablets.
Quad core is better for the future, but there's a problem with backwards compatibility... it's definitely good for a tablet.
jedi5diah said:
Quad core is better for the future, but there's a problem with backwards compatibility... it's definitely good for a tablet.
Click to expand...
Click to collapse
Here is another benchmark that shows there is at least one current dual core that can soundly beat the Nvidia quad core at benchmarks that are not heavily multithreaded.
http://www.extremetech.com/computin...ragon-s4-has-the-competition-on-the-defensive
Buddy Revell said:
I frankly think the power savings from the fifth core are mostly hype. ...
Click to expand...
Click to collapse
No dual core Android tablet's battery lasts longer than an iPad 1's. My Prime easily outlasts my iPad in battery life. The battery hype is real. Tons of people here are seeing 9-11+ hours on a single charge with moderate to semi-heavy use on balanced mode. Even longer on power saving mode.
demandarin said:
No dual core Android tablet's battery lasts longer than an iPad 1's. ...
Click to expand...
Click to collapse
Really? I get 9-12 hours constant use on balanced. Plus 6 more with the dock.
Sent from my PG8610000 using xda premium
I think if Krait were to come out in a quad core then it would beat Tegra 3, but otherwise no. Also, they are supposed to improve the chip with an updated GPU to 3.xx in future releases. And benchmarks have been proven wrong in the past, so who knows? It's not like benchmarks can determine real life performance, nor does the average user need that much power.
Companion core really does work
jdeoxys said:
Really? I get 9-12 hours constant use on balanced. Plus 6 more with the dock. ...
Click to expand...
Click to collapse
Strange, we just started uni here (Australia) and I've been using my Prime all day, showing it off to friends (to their absolute amazement!): showing off Glowball, camera effects with eyes, mouth etc., 2 hours of lecture typing, gaming on the train, a few videos and an episode of Community, music on the speaker for about 40 mins, web browsing, etc. I started using it lightly at 9 am (only properly at say 1:30 pm), and it's 10:00 pm now and GET THIS!!:
72% battery on the tablet and 41% on the dock. It's just crazy, man. No joke, it just keeps going. I can't help but admit the power saving must be real :/
Edit: Whoops, I quoted the wrong guy, but you get the idea.
That's what I'm saying. Battery life on the Prime is great. Add a dock and battery life is sick!
I do agree a quad core variant of Krait or the S4 will give Tegra 3 a really good battle. Regardless, I'm more than satisfied with the power of Tegra 3. I'm not the type who feels like mine is useless or outdated as soon as I see a newer or higher spec tab. We have development going hard now for this device. Just wait till the 1.8-2GHz+ overclock ROMs and kernels drop. Then we'd give even the new higher-clocked quad core chips a good run.
Above all of that, Android needs more apps developed to take advantage of the more powerful chips like Tegra 3 and those upcoming. Software is still trying to catch up to hardware specs. Android apps haven't even all been made yet to take advantage of Tegra 2's power..lol. With Nvidia/Tegra 3 we have an advantage because developers are encouraged to make apps and games that take advantage of Tegra 3's power.
Regardless, we're all Android. We need to focus more on the bigger enemy, Apple and iOS.