I have a question about the EVO 3D's dual core, and I'd like more clarification than the vague answers I'm getting by searching this site and Google. I've read that the cores are asynchronous, basically meaning the second core doesn't do much work unless it's needed, whereas others like the Tegra 2 and Exynos have both cores running at the same speed (or something similar to that), and that this is affecting the benchmark scores. I also read that you would basically double the 3D's score to get a more accurate reading. Can anyone confirm or further explain this?
Yes, asynchronous is when something operates on another thread while the main thread is still available for other work. This allows for better performance in terms of managing tasks. Now, just because it doesn't score high on a benchmark, that doesn't mean it won't perform well in real use. It also allows for better battery life.
I haven't slept for the past 12 hours so if this doesn't help you, just let me know and I will fully elaborate on how the processor will operate on the phone. Now time for bed :'(
In short, asynchronous operation means that a process operates independently of other processes.
Think of transferring a file. A separate thread will be utilized for doing so. You will then be able to keep doing things such as playing with the UI (Sense, for example), since you will still be using the main thread. If anything were to happen to the file transfer (such as it failing), you would be able to cancel it, because it is running independently on another thread.
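If it helps to picture it in code, here's a rough sketch of the idea in plain Java (a hypothetical example, not tied to any particular app or API):
Code:
import java.io.*;

public class BackgroundCopy {
    public static void main(String[] args) {
        // The transfer runs on its own worker thread...
        Thread transfer = new Thread(() -> {
            try (InputStream in = new FileInputStream("big_file.zip");
                 OutputStream out = new FileOutputStream("copy.zip")) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } catch (IOException e) {
                System.err.println("Transfer failed: " + e.getMessage());
            }
        });
        transfer.start();

        // ...while the main thread stays free (on a phone this would be the UI thread).
        System.out.println("Main thread is still responsive while the copy runs.");
        // If something goes wrong you can abandon the worker without blocking the UI:
        // transfer.interrupt();
    }
}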
I hope this makes sense man, kind of tired. Now I'm really going to bed.
Sent from my PC36100 using XDA App
To be more specific by asynchronous they mean that each core can run at different clock speeds. Core 1 could be at 1.2 ghz while core 2 is at 200 mhz. Most multi core processors are synchronous meaning all the cores are running at the same speed.
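If you're rooted and curious, you can actually watch this happen by polling each core's cpufreq entry in sysfs. Rough sketch below in plain Java (it assumes the kernel exposes the standard Linux cpufreq directories per core; exact paths and behaviour vary by device, and a core that has been hotplugged off may have no readable entry at all):
Code:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CoreFreqWatch {
    public static void main(String[] args) throws InterruptedException {
        // Poll both cores of a dual-core SoC once per second.
        while (true) {
            for (int core = 0; core <= 1; core++) {
                String path = "/sys/devices/system/cpu/cpu" + core
                        + "/cpufreq/scaling_cur_freq";
                try (BufferedReader r = new BufferedReader(new FileReader(path))) {
                    // scaling_cur_freq reports the current frequency in kHz.
                    System.out.println("cpu" + core + ": " + r.readLine() + " kHz");
                } catch (IOException e) {
                    // On an asynchronous design the idle core may be offline entirely.
                    System.out.println("cpu" + core + ": offline / not readable");
                }
            }
            Thread.sleep(1000);
        }
    }
}
On a fully synchronous dual core you'd expect both lines to show the same value; on an asynchronous design they should wander independently.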
donatom3 said:
To be more specific by asynchronous they mean that each core can run at different clock speeds. Core 1 could be at 1.2 ghz while core 2 is at 200 mhz. Most multi core processors are synchronous meaning all the cores are running at the same speed.
Click to expand...
Click to collapse
^This too
Sent from my PC36100 using XDA App
I was also very curious to learn a little more about the async cores and how they differ from a standard "always-on" dual core architecture.
The first page/video I found talks about the Snapdragon core specifically.
http://socialtimes.com/dual-core-snapdragon-processor-qualcomm-soundbytes_b49063
From what I've gathered, it comes down to using the second core, and thus more power, only when needed, minimizing voltage and heat to preserve battery life.
The following video goes into similar and slightly deeper detail about the processor specifically found in the EVO 3D. The demo runs a processor benchmark with a real-time visual of the two cores' usage. You can briefly see how the two cores are trading the workload off between each other. It was previously mentioned somewhere else on this forum, but I believe that by separating a workload between two cores, the chip will use less power than putting the same workload on a single core. I'm sure someone else will chime in with some additional detail. Also, after seeing some of these demos, I'm inclined to think that the processor found in the EVO 3D is actually stable at 1.5 GHz but has been underclocked to 1.2 GHz to conserve battery. Only time spent with it in our hands will tell.
Another demo of the MSM8660 and Adreno 220 GPU found in the EVO 3D. It's crazy to think we've come this far in mobile phone technology.
What occurred to me is how complex Community ROMs for such a device may become with the addition of Video Drivers that may continue to be upgraded and improved (think early Video Card tweaks for PC). Wondering how easy/difficult it will be to get our hands on them, possibly through extraction of updated stock ROMs.
EDIT: As far as benchmarks are concerned, I blame the inability of today's benchmarking apps to consider async cores, or to properly utilize them during testing, when computing the overall score. Because the current tests' load is most likely spread across the cores in a way that favors efficiency, the scores are going to be much lower than the true power and performance the chips can produce. I think of it as putting a horsepower governor on a Ferrari.
thanks for the explanation everyone
The best demonstration is in the first video posted; notice when Charbax looks at the monitor. There on the top right are the frequencies of the two cores, and you'll notice both of them jumping around a lot, independently of each other. Using the cores "on-demand" only when needed ends up saving a lot of battery power, but doesn't give you any performance loss.
Harfainx said:
The best demonstration is in the first video posted, notice when Charbax looks at the monitor. There on the top right are the frequencies of the two cores, and you'll notice the both of them jumping around a lot, independent of the other. Using the cores "on-demand" only when needed ends up saving a lot of battery power, but doesn't give you any performance loss.
Click to expand...
Click to collapse
Actually, I was thinking it's not just the battery savings; there could be a performance gain. Think of it this way: if the manufacturer knows they only have to clock one core up to speed when needed, they can be more aggressive about their timings and have that core clock up faster than a normal dual core would, since they know they don't have to clock up both cores when only one needs the full speed.
I wonder if the drop to 1.2 GHz also serves to keep heat under control. It might not just be battery savings, maybe the small case of a phone doesn't allow for proper cooling to hit 1.5 safely.
I'd love to see some confirmation that the asynchronous nature of this chipset is what's responsible for the seemingly lackluster benchmarking.
mevensen said:
I wonder if the drop to 1.2 GHz also serves to keep heat under control. It might not just be battery savings, maybe the small case of a phone doesn't allow for proper cooling to hit 1.5 safely.
I'd love to see some confirmation that the asynchronous nature of this chipset is what's responsible for the seemingly lackluster benchmarking.
Click to expand...
Click to collapse
The "horrible" benchmark scores are simply due to the tests inability to consider async core performance. Wait till the tests are able to take this into consideration.
Sent from my HERO200 using XDA Premium App
RVDigital said:
The "horrible" benchmark scores are simply due to the tests inability to consider async core performance. Wait till the tests are able to take this into consideration.
Sent from my HERO200 using XDA Premium App
Click to expand...
Click to collapse
I went through all of your links, and I didn't see anything that confirms that the benches are somehow affected by the asynchronous nature of the chipset. It's not that I don't believe you; I actually had that same theory when the benches first came out. I just don't have any proof or explanation of it. Do you have a link that provides more solid evidence that this is the case?
NVIDIA actually tells a different story (of course)
http://www.intomobile.com/2011/03/24/nvidia-tegra-2-outperforms-qualcomm-dualcore-1015/
AnandTech's article does explain some of the differences
http://www.anandtech.com/show/4144/...gra-2-review-the-first-dual-core-smartphone/4
It appears that Snapdragon (Scorpion) will excel in some tasks (FPU, non-bandwidth-constrained applications), but will fall short in others.
I'm pretty sure none of the benchmark apps have even been updated past the release of the Sensation, so yeah... How could they update the app to use the asynchronous processors if the only phones to use them have only recently been released?
Sent from my zombified gingerbread hero using XDA Premium App
I had the G2x for like 3 days and never got to root. Poor service where I live. But could the cores be set to a specific frequency independently when rooted like computers?
tyarbro13 said:
I had the G2x for like 3 days and never got to root. Poor service where I live. But could the cores be set to a specific frequency independently when rooted like computers?
Click to expand...
Click to collapse
Yea, if someone were to develop an app for that. I do not see why not.
Sent from my PC36100 using XDA App
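For anyone wanting to try, here's the kind of thing such an app would do under the hood - a hedged sketch, since it assumes root and a kernel that exposes an independent cpufreq policy for each core (many SoCs share a single policy, in which case writing to cpu1 does nothing useful), and the value must be one listed in that core's scaling_available_frequencies:
Code:
import java.io.IOException;

public class CapSecondCore {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Example: cap core 1 at 540 MHz while leaving core 0 untouched.
        String cmd = "echo 540000 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq";
        Process p = Runtime.getRuntime().exec(new String[] { "su", "-c", cmd });
        // A non-zero exit code usually means no root, or no per-core policy on this kernel.
        System.exit(p.waitFor());
    }
}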
Hmm...
If a program such as Smartbench (which takes advantage of dual cores) is stressing both cores to 1.2 GHz, then regardless of whether both cores are active or not, the bench will be accurate.
I would rather NOT have asynchronous cores, as there would be lag during frequency changes...
Ex:
2 cores running at 500 MHz vs 1 core @ 1 GHz and the other not active.
The 2 cores will produce less heat and use less energy...
Maedhros said:
Hmm...
If a program such as Smart bench (which takes advantage of dual cores) is stressing both cores to 1.2ghz then regardless of if both cores are active or not the bench will be accurate.
I would rather NOT have asyncronus cores as there would be lag during frequency changes...
Ex:
2 cores running at 500mhz vs 1 core @ 1ghz and other not active.
The 2 cores will produce less heat and use less energy...
Click to expand...
Click to collapse
They're dual, so it would be better for them to run asynchronously. Not only that, but it is a phone, so there will be no noticeable lag from frequency changes. 2 cores running at 500 MHz will perform better than 1 core at 1 GHz.
Sent from my PC36100 using XDA App
tyarbro13 said:
I had the G2x for like 3 days and never got to root. Poor service where I live. But could the cores be set to a specific frequency independently when rooted like computers?
Click to expand...
Click to collapse
This is something that the hardware needs to be capable of. Software can only do so much. As far as I've seen Tegra isn't capable of it.
I read the AnandTech article and came to the conclusion that in everyday tasks you might not see the difference between the two, even if the Tegra 2 might bench higher. The main thing people don't talk about is the GPU. The Adreno 220 is a powerhouse GPU; it will probably stand strong when Tegra 3 comes out.
DDiaz007 said:
There dual, it would be better for them to run asynchronous. Not only that, but it is a phone so there will be no lag between frequency changing. 2 Cores running at 500mhz will perform better than 1 core at 1ghz.
Sent from my PC36100 using XDA App
Click to expand...
Click to collapse
Huh... what are you saying? Sorry, I don't understand... On one hand you say asynchronous is better, and on the other you're saying 2 cores @ 500 will work better?
nkd said:
I read the anandtech article and I came with conclusion that everyday task you might not see the difference between the two and while tegra2 might bench higher. The main thing people dont talk about is the GPU. Adreno 220 is a powerhouse GPU, it will probably stand strong when tegra 3 comes out.
Click to expand...
Click to collapse
What?!?
Adreno 220 is a horrible GPU. AT BEST it is equal to the GPU in the original SGS.
The reason benches are so different is because Qualcomm has made NO improvements in the CPU. The Desire HD's CPU is the same as the Sensation's. While... the SGS2 + Tegra have IMPROVED CPUs.
Arm 7 vs arm 9?
Maedhros said:
Huh... what are u saying? Sorry dont understand... On one hand you say asynchronous is better and on the other ur saying 2 cores @ 500 will work better?
What?!?
Andreno 220 is a horrible GPU. AT BEST it is equal to the GPU in the Original SGS.
The reason benches are so different is because Qualcomm has made NO improvements in the CPU. Desire HD CPU is the same as Sensations. While... SGS2 + Tegra have IMPROVED CPUs.
Arm 7 vs arm 9?
Click to expand...
Click to collapse
Dude go back to sleep. You have no clue what you are talking about.
Sent from my PC36100 using XDA Premium App
X10 is garbage! this is outrageous!
Yes, really, they got it working. If you want it so bad, try porting it yourself.
Sent from my MB860 using XDA App
cry about it?
If you want it so bad for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
dLo GSR said:
cry about it?
if you want it so bad for your phone, learn to port it yourself. until then, since you rely solely on other peoples' hard work and sweat, shut up and be patient.
Click to expand...
Click to collapse
Oh snap. That was awesome.
Sent from my MB860 using XDA App
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
firefox3 said:
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
Click to expand...
Click to collapse
Good news man
Sent from my MB860 using XDA App
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, the Desire, HD, X10 and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers. But they're just for fun with the framebuffer driver, given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say more so than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and we can overclock past its stock speeds.
Javi97100 said:
Good news man
Sent from my MB860 using XDA App
Click to expand...
Click to collapse
It's turning out to be harder than I thought... I think no one will get it until official updates come out for other phones.
Azurael said:
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, Desire, HD, X10, N1 have ports of a sort at the moment, in fact there shouldn't be too many problems getting them working aside from the graphics drivers but they're just for fun with the framebuffer driver given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say moreso than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind the fact that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and can overclock past the its stock speeds.
Click to expand...
Click to collapse
So EGL = GPU driver? If that's the only setback, would it be possible to get an ICS ROM with software rendering as a proof of concept, or are there other pieces missing?
GB/CM7 is pretty good on the Atrix, so if we don't see ICS for a few months it doesn't hurt us in any way. I'd like to think most of us can be patient if we lack the skills to help.
I noticed the Captivate got a port of it too, since i9000 ROMs and Cap ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure that if it's not already, it will soon be on the iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it; I love my Atrix in its current state.
According to AnandTech, Tegra 2 support is essentially ready, so I think as long as NVIDIA releases the source for ICS (libs?), someone will try to port it. Hell, I have a good 5 weeks during break, I might as well try then.
Sent from my MB860 using XDA App
Azurael said:
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, Desire, HD, X10, N1 have ports of a sort at the moment, in fact there shouldn't be too many problems getting them working aside from the graphics drivers but they're just for fun with the framebuffer driver given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say moreso than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind the fact that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and can overclock past the its stock speeds.
Click to expand...
Click to collapse
Actually, no. Despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2, thanks to its higher clock rate, by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Click to expand...
Click to collapse
Buddy, check out any of the kernels available in the dev thread and you'll see that the GPUs are overclocked.
WiredPirate said:
I noticed the Captivate got a port of it too since i9000 ROMs and Cap ROMs are interchangeable. I thought its funny that it's running on the HD a Windows Mobile 6.5 phone lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it's not already it will soon be on iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it I love my Atrix in its current state.
Click to expand...
Click to collapse
Doubt the iPhone will see ICS; the newest model that can run Android, as far as I know, is the iPhone 3G, which was incredibly slow under Gingerbread.
mac208x said:
X10 is garbage! this is outrageous!
Click to expand...
Click to collapse
222 posts and zero thanks? Is this what you do, go around XDA and post useless threads, like the guy complaining about returning home early, despite nobody asking him to, "to get MIUI ported on his grandma's phone"?
Are you guys related by any chance?
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Click to expand...
Click to collapse
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual-channel memory controller and high clock (and that's probably the directly relevant part to UI rendering to be honest, though as I said - lower resolution screen), but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and games on mobiles are starting to move that way now, following the desktop landscape over the past 5 years or so). Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz, which ought to even things out in the GLMark 720p tests somewhat even if they are biased to one architecture or the other. While the GPU in the OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus to maybe skip the next generation of mobile hardware and tide it over until Cortex A15-based SoCs on a 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU performance bound with a 1.4GHz dual-core A9 - and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: disappointing early Tegra 3 benchmarks - although it does seem GLMark stacks the odds against NVidia GPUs more than other benchmarks?)
Azurael said:
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to it's dual channel memory controller and high clock (and that's probably the directly relevant part to UI rendering to be honest, though as I said - lower resolution screen ) but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and games on mobiles are starting to move that way now, following the desktop landscape over the past 5 years or so.) Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz which ought to even things out in the GLMark 720p tests somewhat even if they are biassed to one architecture or the other. While the GPU in OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus to maybe skip the next generation of mobile hardware and tide it over until Cortex A15-based SoCs on 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU performance bound with a 1.4GHz dual-core A9 - and increasing the number of equivalent cores without a really substantial boost in GPU horesepower seems worthless right now, even if ICS takes better advantage of SMP (re: Disappointing early Tegra 3 benchmarks - although it does seem GLMark stacks the odds against NVidia GPUs more than other benchmarks?)
Click to expand...
Click to collapse
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clock speed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well at all on mobile platforms. But yea, all Nvidia did with Tegra 3 was slap in 2 extra cores, clocked them higher, threw in the sorely missed NEON instruction set, increased the SIMDs on the GPU by 50% (8 to 12), and then tacked on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm, with some bringing in a brand new architecture as well.
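For what it's worth, those GFLOPS figures line up if you assume each shader ALU retires one multiply-add (2 FLOPs) per clock - that's my assumption about how the numbers are usually quoted, and the ALU counts below are back-calculated from the figures above on that basis rather than taken from official specs:
Code:
public class PeakGflops {
    // Peak GFLOPS = ALUs * FLOPs per ALU per clock * clock in GHz.
    // Assumes 2 FLOPs per clock (one multiply-add); real-world throughput is lower.
    static double gflops(int alus, double clockGhz) {
        return alus * 2 * clockGhz;
    }

    public static void main(String[] args) {
        System.out.printf("12 ALUs @ 300 MHz: %.1f GFLOPS%n", gflops(12, 0.3)); // 7.2
        System.out.printf("32 ALUs @ 300 MHz: %.1f GFLOPS%n", gflops(32, 0.3)); // 19.2
    }
}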
edgeicator said:
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 gpu pulls 7.2GFLOPs @300mhz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clockspeed. I honestly have no idea what the engineers are thinking over atNnvidia. It's almost as bad as AMD's latest bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The Geforce just doesn't seem to scale very well at all on mobile platforms. But yea, all Nvidia did with Tegra 3 was slap in 2 extra cores, clocked them higher, threw in the sorely missed NEON instruction set, increased the SIMDs on the GPU by 50% (8 to 12), and then tacked on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm with some bringing in a brand new architecture as well.
Click to expand...
Click to collapse
Don't you get tired of writing those long rants? We understand you know something about CPU architecture, and that Tegra isn't the best one out there, but damn man, it's the same thing in every thread. Just chill out and try to stay on topic for once.
Sent from my MB860 using Tapatalk
edgeicator said:
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 gpu pulls 7.2GFLOPs @300mhz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clockspeed. I honestly have no idea what the engineers are thinking over atNnvidia. It's almost as bad as AMD's latest bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The Geforce just doesn't seem to scale very well at all on mobile platforms. But yea, all Nvidia did with Tegra 3 was slap in 2 extra cores, clocked them higher, threw in the sorely missed NEON instruction set, increased the SIMDs on the GPU by 50% (8 to 12), and then tacked on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm with some bringing in a brand new architecture as well.
Click to expand...
Click to collapse
I think you are not seeing the whole picture...
The Tegra 3 (et al.) is not just about its quad core implementation; remember that the GPU will offer 12 cores, which will translate into performance not seen as of yet on any other platform.
Benchmarks don't tell the whole story! Especially those benchmarking tools which are not Tegra 3 optimized yet.
Cheers!
Sent from my Atrix using Tapatalk
WiredPirate said:
I noticed the Captivate got a port of it too since i9000 ROMs and Cap ROMs are interchangeable. I thought its funny that it's running on the HD a Windows Mobile 6.5 phone lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it's not already it will soon be on iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it I love my Atrix in its current state.
Click to expand...
Click to collapse
LOL, I ran all the iDroid ports on my iPhone. Not one was even in alpha stage; I would not even count iDroid as a port since you can't use anything on it.
So I've been lurking on the Prime's forums for a while now and noticed the debate over whether the new Qualcomm dual core will be better than the current Tegra 3 that the Prime has. Obviously if both were clocked the same then the Tegra 3 would be better. Also, I understand that the GPU of the Tegra 3 is better. However, for a normal user (surfing the web, playing a movie, songs, etc.), isn't a dual core at 1.5 GHz better, in that an average user will rarely use more than 2 cores? The way I understand it, each core is able to handle 1 task, so in order to activate the 3rd core you would have to have 3 things going on at the same time? Could someone please explain this to me?
First of all, the Tegra 3 can go up to 1.6 GHz. Secondly, all 4 cores can be utilized by a multi-threaded app. Lastly, battery is great on Tegra 3 due to the companion core.
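"Utilized by a multi-threaded app" just means the app spawns enough worker threads for the OS scheduler to spread across the cores. A minimal sketch in plain Java (nothing Tegra-specific, purely illustrative):
Code:
import java.util.concurrent.*;

public class AllCoresDemo {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors(); // 4 on Tegra 3 when all cores are online
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        Future<?>[] work = new Future<?>[cores];
        for (int i = 0; i < cores; i++) {
            work[i] = pool.submit(() -> {
                double x = 0;
                for (int n = 1; n < 50_000_000; n++) x += Math.sqrt(n); // busy work to load one core
                return x;
            });
        }
        for (Future<?> f : work) f.get(); // wait for every worker to finish
        pool.shutdown();
    }
}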
jdeoxys said:
First of all, the tegra 3 can go up to 1.6 ghz. Secondly, all 4 cores can be utilized by a multi threading app. Lastly, battery is great on tegra III due to teh companion core.
Click to expand...
Click to collapse
But the native clock for that Qualcomm would be 1.5 GHz, meaning OC can take it higher. Also, doesn't being dual core, compared to quad core, give it an edge in battery life? You do bring up a good point with the multi-threading app. Also, to clarify, I am not standing up for the Qualcomm chip or putting down the Tegra 3, just trying to get things straight.
Thanks
Hey I'm the ....idiot aboard here....lol
But the tegra 3 has a companion core, being a fifth core, to take over when the tablet is not stressed. Thus saving the battery.
I am just repeating what I have read, I have no knowledge of how it all works. I guess that is how we can get better battery life.
Just trying to help the OP, maybe some one way smarter can chime in. Shouldn't be hard....lol
Quad core is better by far. On low-level tasks, simple things, and screen off/deep sleep, the companion core takes over, meaning it's running on a single low-powered core. This companion core only has a max speed of 500 MHz. So in deep sleep or low-level tasks, the companion core alone is running everything at only 102 MHz - 500 MHz, most of the time on the lower end. Therefore Tegra 3 has the better battery life, since all its low-power tasks are run by a single low-powered companion core. That's 1 low-powered core, compared to 2 high-powered cores trying to save battery. Quad core is better all around. We haven't even begun real overclocking yet. The 1.6 GHz speed was already in the kernel, so if you're rooted and using ViperControl or ATP Tweaks or the Virtuous ROM, you can access those speeds at any time. Once we really start overclocking higher than 1.6 GHz we will have an even more superior advantage. Anyone knows 4 strong men are stronger than 2.. lol. Tegra 3 and Nvidia are the future. Tegra 3 is just the chip that kicked down the door on an evolution of mobile SoCs.
If you really want to learn the ins and outs of Tegra 3, all the details, and how it's better than any dual core, check out this thread I made. I have a whitepaper attachment in that thread you can download and read. It's made by Nvidia themselves and goes into great detail on Tegra 3. Check it out.
http://forum.xda-developers.com/showthread.php?t=1512936
aamir123 said:
But the native clock for that qualcomm would be 1.5 meaning o/c can take it higher. Also doesn't being dual core compared to quad core give it an edge in battery? You do bring up a good point with the multi threading app. Also to clarify I am not standing up for the qualcomm chip or putting down the tegra 3 just trying to get things straight.
Thanks
Click to expand...
Click to collapse
The maximum clock speed isn't all that important, since during tasks like web browsing, watching videos & movies and listening to music you will never push the processor to its highest available clock speed anyway. All mobile devices will underclock their processors so that you rarely have unused clock cycles eating up battery life. So all things being relatively equal performance would be about the same between both tablets during these types of lightweight tasks.
If you have a lot of background processes running, then the quad-core system might have an edge in performance since theoretically different tasks can be pushed off to different processors. However this use case is rarely found in Android. You might have an app checking weather or syncing photos in the background, or you might have music playing while you web surf, but those are generally fairly lightweight tasks that usually won't test the processor performance of your device.
In tasks that will stress your processor, such as 3D gaming, quad cores have a very large advantage over dual core systems, despite the slight difference in maximum clock speeds. In addition, the Tegra 3 has a more powerful GPU than the new Qualcomm chip, which will definitely make a noticeable difference in gaming performance.
Now when it comes to ultra-low power tasks or when the tablet is on Standby, the Tegra 3 uses its "companion core" which has incredibly low power requirements, so it can continue to sync your email, twitter and weather updates for days (or weeks) while having very little impact on the Transformer Prime's battery.
So in short, the Tegra 3 is more likely to outperform the Qualcomm in situations where you actually need extra performance. In light tasks, performance between the two should be about the same. Battery life is yet to be definitively determined; however, the Tegra 3's ultra-low-power companion core should give it an edge when only doing light tasks or on standby.
Keep in mind, the Tegra 3 in the TF Prime has a maximum clock speed of 1300 MHz (1400 MHz with only one core active). If all things were equal, a difference of 100-200 MHz on a 1 GHz+ processor is practically unnoticeable in daily usage.
almightywhacko said:
The maximum clock speed isn't all that important, since during tasks like web browsing, watching videos & movies and listening to music you will never push the processor to its highest available clock speed anyway. All mobile devices will underclock their processors so that you rarely have unused clock cycles eating up battery life. So all things being relatively equal performance would be about the same between both tablets during these types of lightweight tasks.
If you have a lot of background processes running, then the quad-core system might have an edge in performance since theoretically different tasks can be pushed off to different processors. However this use case is rarely found in Android. You might have an app checking weather or syncing photos in the background, or you might have music playing while you web surf, but those are generally fairly lightweight tasks that usually won't test the processor performance of your device.
In tasks that will stress you processor, such as 3D gaming, then quad cores have a very large advantage over dual core systems, despite the slight difference in maximum clock speeds. In addition the Tegra 3 has a more powerful GPU than the new Qualcomm chip, which will definitely make a noticeable difference in gaming performance.
Now when it comes to ultra-low power tasks or when the tablet is on Standby, the Tegra 3 uses its "companion core" which has incredibly low power requirements, so it can continue to sync your email, twitter and weather updates for days (or weeks) while having very little impact on the Transformer Prime's battery.
So in short, the Tegra 3 is more likely to outperform the Qualcomm in situations where you actually need extra performance. In light tasks performance between the two should be about the same. Battery life is yet to be definitively determined, however the Tegra's 3 ultra-low power companion core should give it an edge when only doing light tasks or on standb.
Keep in mind, the Tegra 3 in the TF Prime has a maximum clock speed of 1300Mhz. One core has a maximum clock speed of 1400Mhz. If all things were equal, a difference of 100-200 Mhz n a 1Ghz+ processor is practically unnoticeable in daily usage.
Click to expand...
Click to collapse
Wow! Thanks for taking the time to break it down for me like that! I understand exactly where you're coming from and now have to agree.
demandarin said:
Quad core is better by far.
Click to expand...
Click to collapse
At least that is what Nvidia would like you to think.
The Tegra 3 uses an older ARM core for its quad core design, while Qualcomm uses their own ARM-instruction-set-compatible core for their Krait S4 design. For most current benchmarks the Qualcomm Krait S4 dual core seems to outpace the Tegra 3 by quite a large margin. And of course Krait will be expanded to quad core later this year.
http://www.anandtech.com/show/5563/qualcomms-snapdragon-s4-krait-vs-nvidias-tegra-3
Dave_S said:
At least that is what Nvidia would like you to think.
The Tegra 3 uses an older ARM core for it's quad core design while Qualcomm uses their own ARM instruction set compatible core for their Krait S4 design. For most current benchmarks the Qualcomm Krait S4 dual core seems to outpace the Tegra 3 by quite a large margin. And of course Krait will be expanded to quad core later this year.
http://www.anandtech.com/show/5563/qualcomms-snapdragon-s4-krait-vs-nvidias-tegra-3
Click to expand...
Click to collapse
There's already another thread on what you just mentioned, and the Krait claims were easily shot down. Tegra 3 is still a better chip overall. Plus the Krait's GPU was subpar compared to Tegra 3's. We have more links and stuff in the other thread showing the Prime is still right up there.
demandarin said:
There's already another thread on what you just mentioned and the Krait claims were easily shot down. Tegra3 still a better chip overall. Plus krait gpu was subpar to tegra3. We have more links and stuff in other thread showing Prime still right up there
Click to expand...
Click to collapse
As unlikely as that seems considering the slower cores that Nvidia uses, links to real benchmarks ( not self serving white papers ) would be appreciated. I have glanced at your Tegra3 thread but have not read it all the way through after I saw that it seemed to depend a lot on a white paper and not real comparison tests. It is true that the current GPU the Krait uses is not as fast as the one in the Tegra 3, but graphics is only one element of overall performance. The only benchmarks that I have seen Tegra beat out Krait on were benchmarks that emphasized more than two threads, and then not by much.
Dave_S said:
As unlikely as that seems considering the slower cores that Nvidia uses, links to real benchmarks ( not self serving white papers ) would be appreciated. I have glanced at your Tegra3 thread but have not read it all the way through after I saw that it seemed to depend a lot on a white paper and not real comparison tests. It is true that the current GPU the Krait uses is not as fast as the one in the Tegra 3, but graphics is only one element of overall performance. The only benchmarks that I have seen Tegra beat out Krait on were benchmarks that emphasized more than two threads, and then not by much.
Click to expand...
Click to collapse
It's not my Tegra 3 thread I'm talking about. I think it's the Prime alternatives thread created by shinzz. We had a huge debate over it. More benchmarks and supporting arguments in that thread. Check it out if you get the chance.
demandarin said:
Its not my tegra3 thread I'm talking about. I think its the Prime alternatives thread created by shinzz. We had a huge debate over it. More benchmarks n supporting arguments in that thread. Check it out if you get the chance.
Click to expand...
Click to collapse
Thanks, Will do. Gotta run for a doctor appointment right now though.
I frankly think the power savings with the fifth core is mostly hype. According to many battery tests I've read online and my own experiences with my Prime, it doesn't get much different battery life from dual core tablets.
Quad core is better for the future, but it can be a problem for backwards compatibility... it's definitely good for a tablet.
jedi5diah said:
Quad core is better for future but problem for backwards compatibility... it's definitely good for tablet.
Click to expand...
Click to collapse
Here is another benchmark that shows that there is at least one current dual core that can soundly beat the Nvidia quad core at benchmarks that are not heavily multithreaded.
http://www.extremetech.com/computin...ragon-s4-has-the-competition-on-the-defensive
Buddy Revell said:
I frankly think the power savings with the fifth core is mostly hype. According to many battery tests I've read online and my own experiences with my Prime, it doesn't get much different battery life from dual core tablets.
Click to expand...
Click to collapse
No dual core Android tablet's battery lasts longer than an iPad 1's. My Prime easily outlasts my iPad in battery life. The battery hype is real. Tons of people here are seeing 9-11+ hours on a single charge with moderate to semi-heavy use on balanced mode. Even longer on power savings mode.
demandarin said:
No dual core android tablet battery last longer than an ipad1. My prime easily outlasts my Ipad in battery life. The battery hype is real. Tons of people here seeing 9-11hrs+ on a single charge with moderate to semi heavy use on balanced mode. Even longer on power savings mode.
Click to expand...
Click to collapse
Really? I get 9-12 hours constant use on balanced. Plus 6 more with the dock.
Sent from my PG8610000 using xda premium
I think if Krait were to come out with a quad core then it would beat out Tegra 3, otherwise no. Also, they are supposed to improve the chip with an updated GPU to the 3.xx series in future releases. Also, benchmarks have been proven to be wrong in the past, so who knows? It's not like benchmarks can determine real-life performance, nor does the average user need that much power.
Companion core really does work
jdeoxys said:
Really? I get 9-12 hours constant use on balanced. Plus 6 more with the dock.
Sent from my PG8610000 using xda premium
Click to expand...
Click to collapse
Strange, we just started uni here (Australia) and I've been using my Prime all day, showing it off to friends (to their absolute amazement!): showing off Glowball, camera effects with eyes, mouth, etc., 2 hours of lecture typing, gaming on the train, watching a few videos and an episode of Community, playing music on the speaker for about 40 mins, web browsing, etc. I started using it lightly at 9 am (only properly at say 1:30 pm), it's 10:00 pm now, and GET THIS!!:
72% battery on tablet and 41% on the dock. It's just crazy man. No joke, it just keeps going, I can't help but admit the power saving must be real :/
Edit: Whoops, I quoted the wrong guy, but you get the idea.
That's what I'm saying. Battery life on the Prime is great. Add a dock and battery life is sick!
I do agree a quad core variant of Krait or the S4 will give Tegra 3 a really good battle. Regardless, I'm more than satisfied with the power of Tegra 3. I'm not the type who, as soon as I see a newer or higher-spec tab, feels like mine is useless or outdated. We have development going hard now for this device. Just wait till the 1.8-2 GHz+ overclock ROMs and kernels drop. Then we would even give new quad core, higher speed chips a good run.
Above all of that, Android needs more apps developed to take advantage of the more powerful chips like Tegra 3 and those that are upcoming. Software is still trying to catch up to hardware specs. Android apps haven't even all been made yet to take advantage of Tegra 2's power.. lol. With Nvidia/Tegra 3 we have an advantage because developers are encouraged to make apps and games that take advantage of Tegra 3's power.
Regardless, we're all Android. We need to focus more on the bigger enemy, Apple and iOS.
So in 2011 we had Tegra 2, and in 2012 we have Tegra 3, so my question is: what will come in 2013? Octo-core or an improved version of quad core CPUs?
Fasty12 said:
So in 2011 we have Tegra 2, in 2012 we have Tegra 3 so my questions is what will come in 2013? Octo-core or an improved version of quad core cpus?
Click to expand...
Click to collapse
Well, as octo-core desktop CPUs haven't really caught on yet, I would guess just better quad cores, likely with more powerful GPUs.
Tegra 3 is already very powerful; presumably they will increase RAM and make them more battery efficient, or even raise clock speeds. The 12-core Tegra GPU is pretty amazing already and anything better must be godly.
Sent from my HTC Desire using xda app-developers app
If you mean for the mobile platform, will we really need to go beyond quad core? Having seen how smoothly the SGS III runs with it, what more perfection and speed will you need to do your work (though more can still be expected)? Since Android uses the other cores on a need basis, why would you want to see 2-3 of your cores never used? I think curiosity, and wanting the most advanced/latest hardware, will be the only reasons to have such a high-end CPU in your mobile.
What I'd like to see is more RAM installed and lower RAM usage by the system...
Sounds like octo-mom.. the debate lives on.. battery vs performance... but to answer your question, I think it would be hexa-core, which is 6.. let's wait and see what is to come...
Sent from my SGH-T989 using Tapatalk 2
s-X-s said:
If u mean for mobile platform , Will we really need beyond Quad core, having seen how SGSIII is smoothly running with it, beyond that what more perfection ( yaa still more can be expected) and speed u will need to do ur work . As known Android use other cores on need basis , why u need to see ur 2-3 cores never used.. i think its just more curiosity n to have more advaced/latest will be the only reason to have such high cpu on ur mobile..
What I like to see is ups in RAM installed and lows in RAM usage by system...
Click to expand...
Click to collapse
I agree. Cores are at their peak right now. The amount of CPU power we have, especially in the higher-end phones, is enough to accomplish many, many things. RAM is somewhat of an issue, especially since multitasking is a huge part of Android. I really think 2 GB of RAM should be a standard soon. Also, better GPUs won't hurt.
Sent from my HTC T328w using Tapatalk 2
If they decide to keep going on the core upgrade in the next two or so years, I see one of two possibilities happening:
1) Dual Processor phones utilizing either dual or quad cores.
or
2) Hexa-core chips, since on the desktop market there are already a few 6-core chips (though whether or not they would actually be practical in a phone's architecture, no clue).
Generally speaking, whatever they come out with next will either need better battery materials or lower-power processors.
I mean, I'm pretty amazed by what my brother's HTC One X is capable of with the quad core, and here I am still sporting a single-core G2. But yes, I would like to see more advancement in RAM usage; we've got a nice bit of power, but how about a standard 2 GB of RAM for better multitasking?
I believe 2013 will be all about more efficient quad-cores.
May I ask what going from 1 GB to 2 GB will improve? Loading times?
Hello everyone, could you tell me what a quad core is?
Quad core means that a processor has four processing units.
Because there are more, a workload that can be split across them can, theoretically, get executed up to 4 times faster.
Read more about it: http://simple.wikipedia.org/wiki/Multi-core_processor
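Worth the usual caveat on the "up to 4 times faster" part: that's an upper bound that only applies to the portion of the work that can actually run in parallel. Amdahl's law gives the ceiling; a quick sketch of the math:
Code:
public class Amdahl {
    // Speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction of the work
    // and n is the number of cores.
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        System.out.println(speedup(1.0, 4)); // 4.0 -> fully parallel work hits the 4x ceiling
        System.out.println(speedup(0.5, 4)); // 1.6 -> half-serial work barely benefits
    }
}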
Maybe i7 in mobile devices?
I'm sure it will stay at quad core CPUs; anything more is just overkill. They may introduce hyperthreading. It's going to boil down to efficiency.
Sent from my SPH-D700 using xda premium
I'd say the future lies in more efficient use of processors. Right now, Android is still far from optimized on multi-core processor-equipped devices. Project Butter is the start of a great movement by Google to optimize the operating system. Hopefully it spreads out to other OEMs and becomes the main focus for Android development.
Improving and optimizing current processors is the way hardware companies should go.
In my opinion, processor development is outrunning battery development. Optimized processors could reduce power consumption while preserving excellent speed and usability.
Sent from my Transformer TF101 using Tapatalk 2
Building processors on more efficient ARM architectures is going to be the way to go from what I see...... throwing four less efficient cores at a problem is the caveman method of dealing with it..... looking at you, Samsung Exynos Quad, based on tweaked A9 cores.....
The A15-based Qualcomm S4 Krait is more efficient on a clock-for-clock, core-for-core basis, and once the software catches up and starts using the hardware to its full capacity, fewer, more efficient cores will be preferred.
I don't see anything beyond quads, simply because they haven't even scratched the surface of what can be done with a modern dual core processor yet....... throwing more cores at it only makes excuses for poor code..... I can shoot **** faster than water with a big enough pump...... but that doesn't mean that's the better solution.
We don't need more cores! Having more than 2 cores will not make a difference so quad cores are a waste of space in the CPU die.
Hyperthreading, duh.
More ram. Got to have the hardware before the software can be made to use it.
With the convergence of x86 into the Android core and the streamlining of low-power Atom CPUs, the logical step would be to first optimize the current software base for multi-core processors before marketing takes over with their stupid x2 multiplying game...
Not long ago, a senior Intel exec went on record saying that today, a single core CPU Android smartphone is perhaps better overall performing (battery life, user experience, etc) than any dual/quad-core CPU. Mind you, these guys seldom if ever stick out their neck with such bold statements, especially when not pleasing to the ear...
For those interested, you can follow this one (of many) articles on the subject: http://www.zdnet.com/blog/hardware/intel-android-not-ready-for-multi-core-cpus/20746
Android needs to mature, and I think it actually is. With 4.1 we see the focus drastically shifted to optimization, UX and performance with *existing/limited* resources. This will translate to devices beating all else in battery life, performance and graphics but since it was neglected in the first several iterations, it is likely we see 4.0 followed by 4.1 then maybe 4.2 before we hear/see the 5.0 which will showcase maturity and evolution of the experience.
Just my 2c. :fingers-crossed:
Planning to return my N10 because the stuttering on it is driving me insane, and I'm really interested in the Tablet Z currently.
1920x1080 on a 1.5 GHz Qualcomm APQ8064 with an Adreno 320 GPU vs 2560x1600 on the Nexus 10 with an Exynos 5 clocked at 1.7 GHz and a Mali-T604 GPU.
Fasty12 said:
Planning to return my N10 cause the stuttering on it is driving me insane and im really interested in the Tablet Z currently.
1920x1080 on an 1.5GHz Qualcomm APQ8064 with adreno 320GPU VS 2560x1600 nexus 10 with an exynos 5 and a mali t604 GPU clocked at 1.7 GHZ.
Click to expand...
Click to collapse
The S4 is halfway between the Cortex-A9 cores and the new Cortex-A15 core that we have, so it is a decent enough CPU. I am not sure how good of a GPU that is; none of my devices the past couple of years have had Adreno GPUs. At least it won't have to work as hard with the lower resolution.
Fasty12 said:
Planning to return my N10 cause the stuttering on it is driving me insane and im really interested in the Tablet Z currently.
1920x1080 on an 1.5GHz Qualcomm APQ8064 with adreno 320GPU VS 2560x1600 nexus 10 with an exynos 5 and a mali t604 GPU clocked at 1.7 GHZ.
Click to expand...
Click to collapse
What stuttering are you talking about?
Draw your own conclusions.
S4 Pro - http://www.anandtech.com/show/6112/...agon-s4-apq8064adreno-320-performance-preview
Exynos 5 - http://www.anandtech.com/show/6148/samsung-announces-a15malit604-based-exynos-5-dual
From everything I've seen and experienced, the Exynos 5 is the better of the two. The A15 is a more powerful core than the Krait core; that, with the higher clock speeds and the better GPU, makes for a better chip. Personally I have never had my N10 lag at all. Maybe you just got a dud?
Sent from my Nexus 10 using Tapatalk HD
enik_fox said:
From everything I've seen and experienced the exynos 5 is the better of the two. The a15 is a more powerful core than the krait core, that with the higher clock speeds and the better GPU makes for a better chip. Personally I have never had my n10 lag at all. Maybe you just got a dud?
Sent from my Nexus 10 using Tapatalk HD
Click to expand...
Click to collapse
But the Exynos 5 has to run that massive screen res. Also, I think the reason Qualcomm modified the core was power consumption; a stock A15 core consumes quite a lot of power.
Sent from my LG-P990 using xda app-developers app
Fasty12 said:
Planning to return my N10 cause the stuttering on it is driving me insane and im really interested in the Tablet Z currently.
1920x1080 on an 1.5GHz Qualcomm APQ8064 with adreno 320GPU VS 2560x1600 nexus 10 with an exynos 5 and a mali t604 GPU clocked at 1.7 GHZ.
Click to expand...
Click to collapse
Every now and then I read people complaining about lags and stutters... I have not experienced one since I've had the device; can you please explain what you are doing when this happens?
avdaga said:
Every now and then I read ppl complaining about lags and stutters... I have not experienced one since I have the device; can you please explain what you are doing when this happens?
Click to expand...
Click to collapse
Try opening and closing Google Maps after the map has been loaded; there is a NOTICEABLE frame rate drop compared to other apps.
kaspar737 said:
But the Exynos 5 has to run that massive screen res. Also, the reason I think that Qualcomm modified the core was because of the power consumption. Stock A-15 core consumes quite a lot of power.
Sent from my LG-P990 using xda app-developers app
Click to expand...
Click to collapse
The Snapdragon S4 does not use an A15 core or any derivative of an A15. Qualcomm has ALWAYS completely designed their cores custom and has almost nothing to do with the current major core from ARM's reference design. Additionally, the S4 was designed and released before the A15 MP-Core was even finished with its design phase.
The Krait core uses a similar (but not the same) triple-wide decode stage like the A15 core, but it uses a completely different 11-stage execution pipeline compared to the A15's 15-stage pipeline. The higher number of stages in the pipeline allows the A15 design to break things down smaller and achieve higher frequency, but if there were to be a failure in computing then the A15 must wait a longer time before it can start over, where the Krait core doesn't have to wait as long, but also isn't as efficient in "normal" circumstances. Honestly, the integer performance between the two cores is pretty close, but I think I remember seeing that the A15 has a lot stronger floating point performance. So I guess it really depends on your workload.
FYI, the Exynos 5, Tegra 4, and TI OMAP 5 processors are all based on the A15 core design. Qualcomm is the only major player who does not base their processors on the ARM design
Fasty12 said:
Try opening and closing google maps after the map has been loaded there is a NOTICEABLE frame rate drop compare to other apps.
Click to expand...
Click to collapse
Do you mean a drop in framerate during the animation when closing Maps? I notice a minor framerate drop which lasts as long as the animation does, but if that is it, I'm kinda wondering why you bought an Android device in the first place... I have not noticed this before and I cannot imagine anyone would while using the device for its intended purposes. If you take any Android device, you will find an fps drop at some point... Maybe return it and take an iPad? iPads do not have the issue; on the other hand there's a lot that iPads do not have ^^
kaspar737 said:
But the Exynos 5 has to run that massive screen res. Also, the reason I think that Qualcomm modified the core was because of the power consumption. Stock A-15 core consumes quite a lot of power.
Sent from my LG-P990 using xda app-developers app
Click to expand...
Click to collapse
Exynos has higher memory bandwidth so the difference isn't substantial.
Sent from my Galaxy Nexus using Tapatalk 2
EniGmA1987 said:
The Snapdragon S4 does not use an A15 core or any derivative of an A15. Qualcomm has ALWAYS completely designed their cores custom and has almost nothing to do with the current major core from ARM's reference design. Additionally, the S4 was designed and released before the A15 MP-Core was even finished with its design phase.
The Krait core uses a similar (but not the same) triple wide decode stage like the A15 core, but it uses a completely different 11 stage execution pipeline compared to the A15's 15 stage pipeline. The higher stages of the pipeline allow the A15 design to break things down smaller and achieve higher frequency, but if there were to be a failure in computing then the A15 must wait a longer time before it can start over where the Krait core doesnt have to wait as long, but also isnt as efficient in " normal" circumstances. Honestly the integer performance between the two cores is pretty close, but I think I remember seeing that the A15 has a lot stronger floating point performance. So I guess it really depends on your workload.
FYI, the Exynos 5, Tegra 4, and TI OMAP 5 processors are all based on the A15 core design. Qualcomm is the only major player who does not base their processors on the ARM design
Click to expand...
Click to collapse
THANK YOU!! My god, I've had to explain this so many times! Qualcomm licenses ONLY the ARMv7 instruction set, not ARM's core designs. They design their own CPU cores and GPU from the ground up, so please, people, stop calling Qualcomm's chips Cortex-series processors, because they aren't. Samsung and the rest license ARM's design and modify it; in Samsung's case they tend to increase the IPC slightly and give it more memory bandwidth.
Also, to answer the question: the Exynos 5 will do better at higher resolutions, and they will be very close at lower resolutions. The S4 will more than likely be better in multi-threaded workloads, and the Exynos will have better floating-point performance. The Exynos is better for games once the thermal throttling is fixed.
Sent from my Galaxy Nexus using Tapatalk 2
ECOTOX said:
Exynos has higher memory bandwidth so the difference isn't substantial.
Sent from my Galaxy Nexus using Tapatalk 2
THANK YOU!! My god, I've had to explain this so many times! Qualcomm licenses ONLY the ARMv7 instruction set, not ARM's core designs. They design their own CPU cores and GPU from the ground up, so please, people, stop calling Qualcomm's chips Cortex-series processors, because they aren't. Samsung and the rest license ARM's design and modify it; in Samsung's case they tend to increase the IPC slightly and give it more memory bandwidth.
Also, to answer the question: the Exynos 5 will do better at higher resolutions, and they will be very close at lower resolutions. The S4 will more than likely be better in multi-threaded workloads, and the Exynos will have better floating-point performance. The Exynos is better for games once the thermal throttling is fixed.
Sent from my Galaxy Nexus using Tapatalk 2
Click to expand...
Click to collapse
The 50 percent extra memory bandwidth doesn't matter so much considering that the Exynos has to move almost twice as many pixels.
Sent from my LG-P990 using xda app-developers app
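Quick sanity check of the pixel-versus-bandwidth argument, as a small Python sketch. The resolutions are the Nexus 10's 2560x1600 versus a 1080p Adreno 320 phone, and the 50% bandwidth figure is the one quoted above, so the exact ratios are only as good as those assumptions.
Code:
# Pixels each GPU drives per frame, using the resolutions discussed
# in this thread: Nexus 10 (Exynos 5) vs a 1080p Adreno 320 phone.
exynos_pixels = 2560 * 1600       # ~4.10 million
s4pro_pixels = 1920 * 1080        # ~2.07 million

pixel_ratio = exynos_pixels / s4pro_pixels   # ~1.98x -> "almost twice"
bandwidth_ratio = 1.5                        # the "50 percent extra" above

# Bandwidth available per pixel, relative to the 1080p device:
per_pixel = bandwidth_ratio / pixel_ratio    # ~0.76x
print(f"pixel ratio: {pixel_ratio:.2f}x, bandwidth per pixel: {per_pixel:.2f}x")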
kaspar737 said:
The 50 percent extra memory bandwidth doesn't matter so much considering that the Exynos has to move almost twice as many pixels.
Sent from my LG-P990 using xda app-developers app
Click to expand...
Click to collapse
Memory bandwidth makes a pretty big difference when it comes to resolution, e.g. the 8600GT with DDR2 vs. GDDR3. A wider memory bus and faster memory make a big difference in any GPU's performance at higher resolutions.
It will also help with GPU compute performance for future apps that use the Mali T604's compute abilities.
Sent from my Galaxy Nexus using Tapatalk 2
ECOTOX said:
Memory bandwidth makes a pretty big difference when it comes to resolution, e.g. the 8600GT with DDR2 vs. GDDR3. A wider memory bus and faster memory make a big difference in any GPU's performance at higher resolutions.
It will also help with GPU compute performance for future apps that use the Mali T604's compute abilities.
Sent from my Galaxy Nexus using Tapatalk 2
Click to expand...
Click to collapse
But let's say the Exynos uses the whole 12.8 GB/s of bandwidth. Moving half as many pixels would then need about 6.4 GB/s, so memory bandwidth isn't an issue.
Sent from my LG-P990 using xda app-developers app
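For anyone who wants to check those numbers, here's a minimal Python sketch. It assumes 32-bit colour at 60 fps and counts only one GPU write plus one read for composition/scan-out per frame, so real traffic (textures, overdraw, the CPU on the same bus) is much higher; the 12.8 GB/s total is the figure quoted in this thread.
Code:
BYTES_PER_PIXEL = 4   # 32-bit colour
FPS = 60

def framebuffer_gbps(width, height, passes=2):
    # passes=2 ~ one GPU write plus one read for composition/scan-out
    return width * height * BYTES_PER_PIXEL * FPS * passes / 1e9

n10 = framebuffer_gbps(2560, 1600)     # ~2.0 GB/s
fullhd = framebuffer_gbps(1920, 1080)  # ~1.0 GB/s

print(f"Nexus 10 framebuffer traffic: ~{n10:.1f} GB/s of the 12.8 GB/s total")
print(f"1080p framebuffer traffic:    ~{fullhd:.1f} GB/s")
# The framebuffer alone is a small slice of 12.8 GB/s; the pressure comes
# from textures, overdraw and the CPU sharing the same bus.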
kaspar737 said:
But let's say the Exynos uses the whole 12.8 GB/s of bandwidth. Moving half as many pixels would then need about 6.4 GB/s, so memory bandwidth isn't an issue.
Sent from my LG-P990 using xda app-developers app
Click to expand...
Click to collapse
But that bandwidth is shared, unlike on a dedicated GPU where it isn't. The Exynos chip's higher total system bandwidth (not counting buses for the modem or whatever else is on there) is going to give it the edge in any situation, considering how close the two are in performance. It also can't be denied that the Mali T604 has an edge in horsepower over the Adreno 320, because even at the N10's resolution it comes within a couple of fps of the Adreno running at 1080p. Not saying it's a big difference, but the Exynos is the more powerful all-around chip, and that's just in its dual-core form.
Edit: It's also well known that Adreno has poor fill rate compared to Mali or PowerVR. Adreno's strength is geometry performance, so it takes more of a hit as resolution goes up than either the Mali T604 or the SGX 554MP4, both of which have higher fill rate, and the SoCs we're comparing them on have the higher bandwidth to feed that, so they don't get bottlenecked.
Sent from my Galaxy Nexus using Tapatalk 2
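To put the fill-rate point in numbers, here's a rough Python sketch of how much pixel fill a given resolution demands at 60 fps. The overdraw factor is an assumption (2.5x is just a common ballpark for games), and no specific GPU's rated fill rate is quoted, so it only shows how the requirement scales with resolution.
Code:
# Required pixel fill rate = width * height * frame rate * overdraw.
# Overdraw (how many times each screen pixel is shaded per frame) is an
# assumption; 2.5x is just a common ballpark for games.
def required_fill_gpix(width, height, fps=60, overdraw=2.5):
    return width * height * fps * overdraw / 1e9

print(f"2560x1600 @60: ~{required_fill_gpix(2560, 1600):.2f} Gpix/s")
print(f"1920x1080 @60: ~{required_fill_gpix(1920, 1080):.2f} Gpix/s")
print(f"1280x720  @60: ~{required_fill_gpix(1280, 720):.2f} Gpix/s")
# Roughly 2x the fill work at the Nexus 10's resolution compared to 1080p,
# which is why a fill-rate-limited GPU takes the bigger hit there.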
Finally, a lot of experts here about GPUs! I know it's not related to the topic, but my girlfriend and I have a Galaxy Note 2 and an S3. As you all know, they are among the last devices to use the older Mali-400 GPU. I love playing games and I'm getting my girlfriend into them too. So I was wondering: how is our Mali-400 holding up against the new 1080p Adreno 320 devices? It's clear the future is 1080p. I'm planning to either swap our devices for a couple of Nexus 4s or Xperia Zs, because I fear ours are about to be outdated with the next big game right around the corner. So far they are doing just fine with Modern Combat 4 and all the other graphics-intensive games, holding 28-30 FPS. But according to GLBenchmark 2.5 Egypt they are useless against the new Adreno 320. However, I have read that most games were designed around high fill rate, where the Mali-400 can beat the Adreno 320; it's on the triangle tests that it bottlenecks.
So what is your opinion? Will our devices manage another year and a half of new games? Or should I make the trade? Or should I just buy a Nexus 10, set up two user accounts, and continue gaming on that?
Thank you for reading.
Sent from my GT-N7100 using xda app-developers app
_delice_doluca_ said:
Finally, a lot of experts here about GPUs! I know it's not related to the topic, but my girlfriend and I have a Galaxy Note 2 and an S3. As you all know, they are among the last devices to use the older Mali-400 GPU. I love playing games and I'm getting my girlfriend into them too. So I was wondering: how is our Mali-400 holding up against the new 1080p Adreno 320 devices? It's clear the future is 1080p. I'm planning to either swap our devices for a couple of Nexus 4s or Xperia Zs, because I fear ours are about to be outdated with the next big game right around the corner. So far they are doing just fine with Modern Combat 4 and all the other graphics-intensive games, holding 28-30 FPS. But according to GLBenchmark 2.5 Egypt they are useless against the new Adreno 320. However, I have read that most games were designed around high fill rate, where the Mali-400 can beat the Adreno 320; it's on the triangle tests that it bottlenecks.
So what is your opinion? Will our devices manage another year and a half of new games? Or should I make the trade? Or should I just buy a Nexus 10, set up two user accounts, and continue gaming on that?
Thank you for reading.
Sent from my GT-N7100 using xda app-developers app
Click to expand...
Click to collapse
They will hold up; my SGS2 runs all of the current games at the highest settings (I haven't tried GTA, though) without any issues. The Adreno 320 is far better than the Mali-400 MP4, though.
Yeah, I'm pretty sure they will still play games a year from now. Until the market is saturated with devices as powerful as the Nexus 10, we won't really see large jumps in system requirements. That will probably only happen a year or two from now, once all the new phones and tablets are built with A15 processors (or the Qualcomm equivalent) and beefy GPUs.
Fidelator said:
They will hold up; my SGS2 runs all of the current games at the highest settings (I haven't tried GTA, though) without any issues. The Adreno 320 is far better than the Mali-400 MP4, though.
Click to expand...
Click to collapse
The S2 (Mali-400) plays GTA 3 without a hiccup.
The Exynos dual is very power hungry compared to the S4 Pro, but it is also the most powerful ARM processor out today. Nothing else yet released (I said RELEASED) is as powerful or can match its bandwidth. Having said that, I'm sure a normal-resolution 1080p screen in this form factor with the S4 Pro would make a nice, fast tablet. Right now the Exynos dual is pretty much the only thing outside Apple that can push the resolution the N10 has. I think if they had put another gig of DDR3 in this thing there wouldn't be so much stuttering in certain situations. Besides the thermal cutoff, the N10 is starved for memory, since it has to share its RAM between normal duties and the graphical load of pushing all the pixels of this monster resolution. You're lucky to have 300 MB of RAM free at idle on the N10, versus over a gig free with the S4 Pro driving the Nexus 4's 720p screen.
Sent from my often RMA'd Nexus 4, So that I can use the one I'm using now when I get the 6th and hopefully final one.
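A back-of-the-envelope Python sketch of that RAM point. It only estimates full-screen window surfaces, and the layer and buffer counts are assumptions; textures and caches also scale with resolution, so this explains part of the free-RAM gap, not all of it.
Code:
# Very rough estimate of how much RAM full-screen window surfaces eat.
# layers=4 (app + status bar + nav bar + wallpaper) and buffers=3
# (triple buffering) are assumptions, not measured values.
BYTES_PER_PIXEL = 4

def surface_mb(width, height, layers=4, buffers=3):
    return width * height * BYTES_PER_PIXEL * layers * buffers / (1024 ** 2)

print(f"Nexus 10 (2560x1600): ~{surface_mb(2560, 1600):.0f} MB of window buffers")
print(f"Nexus 4 (1280x768):   ~{surface_mb(1280, 768):.0f} MB for the same layers")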