Related
OK, I had my mind set on the Optimus 3D as there was no sign of the Evo 3D coming out in the UK. Now we have a date I'm in two minds again.
I'm still swaying towards the LG. I know the specs of the Evo are a lot higher, but I'm buying for the 3D aspect and I believe that LG will make more 3D content than HTC: they have said there will be a 3D store, they have a 3D menu, there will be 3D games pre-installed and they are working with publishers for more 3D content. I have heard nothing like this from HTC, and they haven't been great with things like this in the past.
However, I may be swayed back to the HTC if the photos are any good.
Does anyone actually have the Evo 3D in their hands like a few have the Optimus?
If so, can anyone let me see a full size .jps file taken by the phone in natural lighting?
The Evo 3D won't be in hands till June, and in terms of specs the Evo wins. In terms of content, the market will provide most of it; a few preinstalled apps isn't enough to justify a weaker phone, especially since you will probably be able to just install the APKs. I'm sure there will be minor differences in the way 3D is displayed on each phone, but both HTC and LG are heavily invested in 3D, so I don't imagine the differences will be earth-shattering.
The only real reason to pick one phone over the other in this case is camera quality, if you intend to take 3D pics.
Cheers Aaron. The much earlier release date of the Optimus is also a plus point. 3D content will be king for me; all other apps will be just about the same. I just can't see HTC having their own app store for 3D games/apps like LG will.
I suppose I could get the Optimus and then sell it for the HTC if I decide the images are far superior
I'm in the same situation, especially now that I know the release date for the Evo 3D. About the 3D, there is one question which decides it: which phone can play back 3D movies? I want to make sure the phone can run 3D movies before I get it. I don't know why there is no information on whether the phone can or cannot play back movies when we put some films on it. That is the priority for me; I'm not so excited about being able to record movies in 3D. The most important thing is: can I play back in 3D, for example, a movie like Avatar 3D or Resident Evil 3D?
I will only get the HTC, and with better specs, whatever the Optimus can do the Evo 3D can do better!
how can you be sure?
1.2GHz doesn't mean it will definitely be quicker than the 1GHz; they're different types of CPU, with different channels for memory etc.
It could be like saying a 12MP camera is better than a 10MP camera just because there are more pixels, without seeing which has the best optics or sensors.
Plus, the Optimus is in people's hands now and is released the 1st week of next month; can the HTC do that better?
I can't be sure, but based upon my knowledge of computers a dual core will definitely run smoother and quicker. It's why it's hard to find a single-core processor anymore. It's nearly safe to assume more = better.
toxicfumes22 said:
I can't be sure, but based upon my knowledge of computers a dual core will definitely run smoother and quicker. It's why it's hard to find a single-core processor anymore. It's nearly safe to assume more = better.
Got to disagree. My 3.6GHz i7 is a lot faster than a 4GHz i5; I very much doubt there's much in it between dual-core 1GHz and 1.2GHz chips made by different manufacturers.
Haha, the i7 is a better processor and still multiple cores. Your i7 has more memory, so it can process more efficiently, which in turn is faster. But understand I am saying more cores = better, and those are both quad-core processors. Also, that i5 is overclocked and the i7 is not. And there are many other things that determine why you can tell a difference.
x7nofate said:
I'm in the same situation, especially now that I know the release date for the Evo 3D. About the 3D, there is one question which decides it: which phone can play back 3D movies? I want to make sure the phone can run 3D movies before I get it. I don't know why there is no information on whether the phone can or cannot play back movies when we put some films on it. That is the priority for me; I'm not so excited about being able to record movies in 3D. The most important thing is: can I play back in 3D, for example, a movie like Avatar 3D or Resident Evil 3D?
I do believe it will play back 3D movies... I have never been able to watch a full movie on my EVO as I just don't see the joy in it. Maybe with some 3D addition it will make it more fun.
The EVO 3D isn't a 1.2GHz phone, it's a 2.4GHz phone. Each processor works independently of the other, so the phone can send commands to each core. No phone or PC on the market or coming to the market can do that.
[email protected] said:
I do believe it will play back 3D movies... I have never been able to watch a full movie on my EVO as I just don't see the joy in it. Maybe with some 3D addition it will make it more fun.
Dude, I watch about 1-2 movies on every flight I take. It's awesome... I use RockPlayer and it can play nearly anything.
Not necessarily. The Evo has two 1.2GHz cores, yes, but that doesn't mean it's 2.4GHz; more like 1.8 with massive battery savings. It would also handle multiple tasks much better. It would only be like 2.4GHz if both cores were at 100%.
sero2012 said:
The EVO 3D isn't a 1.2GHz phone, it's a 2.4GHz phone. Each processor works independently of the other, so the phone can send commands to each core. No phone or PC on the market or coming to the market can do that.
Sent from my HERO200 using XDA App
sero2012 said:
The EVO 3D isn't a 1.2GHz phone, it's a 2.4GHz phone. Each processor works independently of the other, so the phone can send commands to each core. No phone or PC on the market or coming to the market can do that.
Ummmmm, it doesn't quite work like that. The processors cannot work on the same task like two people pushing one cart; it's more like two people pushing two carts (each one on a separate cart) at the same time. They have started having two cores work with each other more in the new i7 processors, but I do not see that happening in this small chip. It's simply having 2 x 1.2GHz processors, to some degree. Now let's say you're opening a program: one processor opens it, and the other keeps all the other junk running in the background. The processors can work on the same task, but not on top of each other as described above. It's described in its name: dual core = 2 processors in 1 chip.
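To put that two-carts picture in code, here is a minimal Java sketch (the tasks are invented for illustration): two threads each take their own independent job, the way two cores each take their own process.

Code:
// Two cores, two independent tasks: each thread pushes its own "cart".
public class TwoCarts {
    public static void main(String[] args) throws InterruptedException {
        Thread opener = new Thread(() ->
                System.out.println("Core 1: opening the program"));
        Thread background = new Thread(() ->
                System.out.println("Core 2: keeping the background junk running"));
        opener.start();      // may be scheduled on one core...
        background.start();  // ...while this one lands on the other
        opener.join();       // wait for both carts to arrive
        background.join();
    }
}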
[email protected] said:
I do believe it will play back 3D movies... I have never been able to watch a full movie on my EVO as I just don't see the joy in it. Maybe with some 3D addition it will make it more fun.
I spoke with two people who have the LG Optimus 3D on this forum and no one confirmed that the LG Optimus 3D runs 3D movies. I tried converting a movie in many ways but it didn't help.
Look at my last post in here >>> http://forum.xda-developers.com/showthread.php?t=985690&page=5
This is HelmuthB's answer to my private message: "For some reasons the "Step Up" clip does not play, neither on my PC (Ubuntu) nor on the LG.
The Avatar clips play but just 2D, it does not combine the two sides into one. :-("
P.S. Does the HTC Evo 3D support Gorilla Glass?
toxicfumes22 said:
Ummmmm, it doesn't quite work like that. The processors cannot work on the same task like two people pushing one cart; it's more like two people pushing two carts (each one on a separate cart) at the same time. They have started having two cores work with each other more in the new i7 processors, but I do not see that happening in this small chip. It's simply having 2 x 1.2GHz processors, to some degree. Now let's say you're opening a program: one processor opens it, and the other keeps all the other junk running in the background. The processors can work on the same task, but not on top of each other as described above. It's described in its name: dual core = 2 processors in 1 chip.
It also doesn't work quite like you've described here.
Anyone really interested should Google CPU threading to get a brief overview of how it works at a conceptual level, and then should look at the specific implementations on each actual CPU architecture to get a deeper understanding, and even further the specific implementations for an instance of that architecture.
For example: Threading theory > Threading in ARM CPU architecture > Threading in ARM Cortex A9 or Tegra 2 or Hummingbird or Qualcomm MSM-series, etc.
Trying to compare how it works for these ARM procs versus x86 procs (like Intel or AMD chips) is not only a waste of time, it's also incorrect.
If ARM procs handled processing and threading the same as x86 chips, Microsoft would not have to specifically release Windows 8 with an ARM-compatible version.
IN GENERAL, the theory of threading includes the concept of CPU affinity for a thread of processing. In the case of multi-core CPUs, in many instances, this just means that there are more available processing cores to which multi-threaded code can send processes.
In the case of more recent dual-core CPUs, the implementation has also included dynamic frequency scaling, even to the core level, such that when not in use, a core can lay dormant at a very low frequency, consuming very little power.
The result in user perception is a saving in power, because 2 (or 3 or 4 or 6 or 8 or whatever) cores can accomplish a set of tasks much more efficiently and with less power usage than a single core. The single core would have to run at max frequency for the entire duration of the processing, whereas with 2 cores, for example, both might run at 100%, but because there are more engines to process the work it might take less than half the time, depending upon how many of those processes are sequentially dependent and how many can be done in parallel.
Parallelism is also a good theory to read up on.
All this said, I'm waiting for the EVO 3D in June to make the leap from TMOUS to Sprint.
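To make the parallelism point concrete, here is a rough Java sketch (the workload and chunking are invented): the same summation split across however many cores the runtime reports. When the chunks are independent, N cores finish in roughly 1/N of the single-core time, which is the efficiency argument above.

Code:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        long n = 100_000_000L;            // sum 1..n
        long chunk = n / cores;           // ignores any remainder, for brevity
        List<Future<Long>> parts = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            final long start = i * chunk + 1;
            final long end = (i + 1) * chunk;
            parts.add(pool.submit(() -> { // each chunk runs on its own thread
                long sum = 0;
                for (long v = start; v <= end; v++) sum += v;
                return sum;
            }));
        }
        long total = 0;
        for (Future<Long> part : parts) total += part.get();
        pool.shutdown();
        System.out.println("Sum = " + total);
    }
}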
maxawesome said:
It also doesn't work quite like you've described here.
Anyone really interested should Google CPU threading to get a brief overview of how it works at a conceptual level, and then should look at the specific implementations on each actual CPU architecture to get a deeper understanding, and even further the specific implementations for an instance of that architecture.
For example: Threading theory > Threading in ARM CPU architecture > Threading in ARM Cortex A9 or Tegra 2 or Hummingbird or Qualcomm MSM-series, etc.
Trying to compare how it works for these ARM procs versus x86 procs (like Intel or AMD chips) is not only a waste of time, it's also incorrect.
If ARM procs handled processing and threading the same as x86 chips, Microsoft would not have to specifically release Windows 8 with an ARM-compatible version.
IN GENERAL, the theory of threading includes the concept of CPU affinity for a thread of processing. In the case of multi-core CPUs, in many instances, this just means that there are more available processing cores to which multi-threaded code can send processes.
In the case of more recent dual-core CPUs, the implementation has also included dynamic frequency scaling, even to the core level, such that when not in use, a core can lay dormant at a very low frequency, consuming very little power.
The result in user perception is a saving in power, because 2 (or 3 or 4 or 6 or 8 or whatever) cores can accomplish a set of tasks much more efficiently and with less power usage than a single core. The single core would have to run at max frequency for the entire duration of the processing, whereas with 2 cores, for example, both might run at 100%, but because there are more engines to process the work it might take less than half the time, depending upon how many of those processes are sequentially dependent and how many can be done in parallel.
Parallelism is also a good theory to read up on.
All this said, I'm waiting for the EVO 3D in June to make the leap from TMOUS to Sprint.
You seem to have missed how I was keeping it simple and always said "kinda" or "nearly", meaning it wasn't exact. Anyway, you said I wasn't right, but then you agreed with what I said. I didn't bring power consumption into the equation, as this was only about the speed.
mmace said:
how can you be sure?
1.2GHz doesn't mean it will definitely be quicker than the 1GHz; they're different types of CPU, with different channels for memory etc.
It could be like saying a 12MP camera is better than a 10MP camera just because there are more pixels, without seeing which has the best optics or sensors.
Plus, the Optimus is in people's hands now and is released the 1st week of next month; can the HTC do that better?
Qualcomm's Snapdragon cores are based on the Cortex-A8, and they've managed to get about 20% better performance per clock cycle. The Optimus 3D has a Cortex-A9, which supposedly gives about 40% better performance per clock cycle. So the Evo 3D's processor clocked @ 1.2GHz versus the Optimus 3D's at 1GHz is basically a wash (basically a tie, impossible to tell at this point). BUT since the true clock speed of the Evo 3D's dual-core Snapdragon is 1.5GHz, that is a great indication that the Evo will be able to overclock much higher (while still being stable).
Adding to that, benchmarks show the Evo 3D has a better GPU than the Optimus 3D; the Evo 3D has twice the RAM and a higher resolution display, and HTC build quality is much better IMO.
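Rough arithmetic on those per-clock claims, taking the quoted percentages at face value (baseline = Cortex-A8 per clock): Evo 3D, 1.2GHz x 1.2 ≈ 1.44; Optimus 3D, 1.0GHz x 1.4 = 1.40. That's only about 3% apart, which is why it's basically a wash.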
The LG Optimus 3D records 3D video at only 15 frames per second with AMR audio. This is ****; now I know I'll NOT BUY THIS PHONE.
Does anyone know how the HTC EVO 3D records 3D movies?
x7nofate said:
The LG Optimus 3D records 3D video at only 15 frames per second with AMR audio. This is ****; now I know I'll NOT BUY THIS PHONE.
Does anyone know how the HTC EVO 3D records 3D movies?
Who told you that lie?
I have a sample video here and it's 24fps and can go up to 30fps
X10 is garbage! this is outrageous!
Yes really, they got it working. If you want it so bad, try porting it yourself.
Sent from my MB860 using XDA App
Cry about it?
If you want it so bad for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
dLo GSR said:
Cry about it?
If you want it so bad for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
Oh snap. That was awesome.
Sent from my MB860 using XDA App
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
firefox3 said:
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
Good news man
Sent from my MB860 using XDA App
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, the Desire, HD, X10 and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers, but they're just for fun with the framebuffer driver, given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say more so than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and we can overclock past its stock speeds.
Javi97100 said:
Good news man
Sent from my MB860 using XDA App
It's turning out to be harder than I thought... I think no one will get it until official updates come out for other phones.
Azurael said:
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, the Desire, HD, X10 and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers, but they're just for fun with the framebuffer driver, given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say more so than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and we can overclock past its stock speeds.
So EGL = GPU driver? If that's the only setback, would it be possible to get an ICS ROM with software rendering as a proof of concept, or are there other pieces missing?
GB/CM7 is pretty good on the Atrix; if we don't see ICS for a few months it doesn't hurt us in any way. I'd like to think most of us can be patient if we lack the skills to help.
I noticed the Captivate got a port of it too, since i9000 ROMs and Captivate ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it isn't already, it will soon be on the iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it; I love my Atrix in its current state.
According to AnandTech, Tegra 2 support is essentially ready, so I think as long as Nvidia releases the source for ICS (libs?), someone will try to port it. Hell, I have a good 5 weeks during break, I might as well try then.
Sent from my MB860 using XDA App
Azurael said:
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, the Desire, HD, X10 and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers, but they're just for fun with the framebuffer driver, given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say more so than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and we can overclock past its stock speeds.
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Buddy, check out any of the kernels available in the dev thread and you'll see that the GPUs are overclocked.
WiredPirate said:
I noticed the Captivate got a port of it too, since i9000 ROMs and Captivate ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it isn't already, it will soon be on the iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it; I love my Atrix in its current state.
Doubt the iPhone will see ICS; the newest model that can run Android, as far as I know, is the iPhone 3G, which was incredibly slow under Gingerbread.
mac208x said:
X10 is garbage! this is outrageous!
222 posts and zero thanks? Is this what you do, go around XDA and post useless threads, like the guy complaining about returning home early, despite nobody asking him to, "to get MIUI ported on his grandma's phone"?
Are you guys related by any chance?
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual-channel memory controller and high clock (and that's probably the part directly relevant to UI rendering, to be honest, though as I said - lower resolution screen), but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and games on mobiles are starting to move that way now, following the desktop landscape over the past 5 years or so). Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz, which ought to even things out in the GLMark 720p tests somewhat, even if they are biased to one architecture or the other. While the GPU in the OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus, to maybe skip the next generation of mobile hardware and tide it over until Cortex A15-based SoCs on a 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU performance bound with a 1.4GHz dual-core A9 - and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: disappointing early Tegra 3 benchmarks - although it does seem GLMark stacks the odds against Nvidia GPUs more than other benchmarks?)
Azurael said:
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual-channel memory controller and high clock (and that's probably the part directly relevant to UI rendering, to be honest, though as I said - lower resolution screen), but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and games on mobiles are starting to move that way now, following the desktop landscape over the past 5 years or so). Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz, which ought to even things out in the GLMark 720p tests somewhat, even if they are biased to one architecture or the other. While the GPU in the OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus, to maybe skip the next generation of mobile hardware and tide it over until Cortex A15-based SoCs on a 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU performance bound with a 1.4GHz dual-core A9 - and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: disappointing early Tegra 3 benchmarks - although it does seem GLMark stacks the odds against Nvidia GPUs more than other benchmarks?)
I would expect the Tegra to beat a nearly 5-year-old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull: the Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz, while the iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clock speed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well at all on mobile platforms. But yeah, all Nvidia did with Tegra 3 was slap in 2 extra cores, clock them higher, throw in the sorely missed NEON instruction set, increase the SIMDs on the GPU by 50% (8 to 12), and then tack on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm, with some bringing in a brand new architecture as well.
edgeicator said:
I would expect the Tegra to beat a nearly 5-year-old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull: the Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz, while the iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clock speed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well at all on mobile platforms. But yeah, all Nvidia did with Tegra 3 was slap in 2 extra cores, clock them higher, throw in the sorely missed NEON instruction set, increase the SIMDs on the GPU by 50% (8 to 12), and then tack on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm, with some bringing in a brand new architecture as well.
Don't you get tired of writing those long rants? We understand you know something about CPU architecture, and that Tegra isn't the best one out there, but damn man, it's the same thing in every thread. Just chill out and try to stay on topic for once.
Sent from my MB860 using Tapatalk
edgeicator said:
I would expect the Tegra to beat a nearly 5-year-old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull: the Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz, while the iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clock speed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well at all on mobile platforms. But yeah, all Nvidia did with Tegra 3 was slap in 2 extra cores, clock them higher, throw in the sorely missed NEON instruction set, increase the SIMDs on the GPU by 50% (8 to 12), and then tack on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm, with some bringing in a brand new architecture as well.
I think you are not seeing the whole picture...
The Tegra 3 (et al.) is not just about its quad-core implementation; remember that the GPU will offer 12 cores, which will translate into performance not seen as of yet on any other platform.
Benchmarks don't tell the whole story! Especially those benchmarking tools which are not Tegra 3 optimized yet.
Cheers!
Sent from my Atrix using Tapatalk
WiredPirate said:
I noticed the Captivate got a port of it too, since i9000 ROMs and Captivate ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it isn't already, it will soon be on the iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it; I love my Atrix in its current state.
LOL, I ran all the iDroid ports on my iPhone. Not one was even in alpha stage; I would not even count iDroid as a port, since you can't use anything on it.
-The Tegra 3 SoC (System on a Chip) is a combo of a microprocessor, a memory controller, an audio processor, a video encoder and a graphics renderer. It's designed and manufactured by Nvidia, a world leader in graphics computing, and makes its first appearance in the Asus Transformer Prime.
-The Tegra 3 SoC has 5 physical cores, but is limited to the performance of its quad cores. The 5th, lower-power core is activated only when the device is idle or handling light tasks, such as syncing and e-mail checking. So power consumption is always kept to a minimum when the performance of the quad core is not needed, ensuring longer battery life. Once you run a normal or higher-demanding task on the tablet, the 5th core shuts off automatically before the 4 main cores are activated. This is all built into the chip and doesn't require the user or the developer to change anything to use the Android OS and applications this way. Android already has the best support for multi-tasking and is multi-threading friendly compared to competing operating systems in the market. So this should be good news for the Asus Transformer Prime's to-be users soon.
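As a rough illustration of that switching policy (the threshold and the load samples here are invented; this is not Nvidia's actual controller logic):

Code:
// A caricature of Tegra 3's core switching. Hypothetical numbers throughout.
public class CoreSwitchDemo {
    static final double LOW_LOAD = 0.10; // below this, only the companion core runs

    static String coreFor(double load) {
        return load < LOW_LOAD
                ? "companion core (low-power, <=500MHz)"
                : "main quad cores";
    }

    public static void main(String[] args) {
        double[] loadSamples = {0.02, 0.05, 0.40, 0.90, 0.03}; // idle, sync, app, game, idle
        for (double load : loadSamples)
            System.out.println("load " + load + " -> " + coreFor(load));
    }
}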
-The GPU (Graphics Processing Unit) in the Tegra 3 SoC has 12 shaders. But because Nvidia has not followed a unified-shader architecture in this ARM SoC, like they've been doing in their PC and Mac discrete graphics cards, 8 of those 12 shaders are reserved for pixel work and the remaining 4 are for vertex. Maybe Nvidia will use a unified-shader architecture in the next-generation Tegra SoC, when ARM-based devices are ready for it. The PowerVR MP2 GPU in the iPad 2 has more raw power than the Tegra 3 GPU (actually, it's the only thing I personally like about the iPad 2, its GPU!), but the Tegra 3 GeForce (the commercial name Nvidia uses for their gaming graphics processors) should give solid 3D performance in games, especially the officially supported ones. Nvidia has a long history in 3D gaming and has been using its solid connections with game developers to bring higher quality gaming to Android, as we've seen with the Tegra 2 SoC's capabilities in the games listed in the TegraZone Android app. Add to that, games are not just GPU bound: Tegra 3's quad cores and 1GB of system RAM (the iPad 2 has 512MB) will pump up gaming quality for sure, and the pixel density of 149ppi displays crisper images than the 132ppi of the iPad 2. Once the Asus Prime is released, it can officially be considered the highest performing Android device in the world, especially for 3D gaming.
Well, I thought I'd have more to type; I paused for a long time and could not think of anything to add. I only wanted to share a few things I know about the Tegra 3. I have a high interest in computer graphics/processors and have been following the Tegra project since 2008.
Some of the Asus Prime to-be owners don't know or care that much about the technical details of the CPU in the device, and I thought of sharing with them.
Thanks and good luck.
Thanks for the info. Very interesting
As I understand it, the use of the lower power 5th core has decreased battery consumption by over 60% when compared to the earlier 2 core design. I am not sure how they are measuring consumption and the task load.
I am most excited about the tablet because of the Tegra 3.
In smartphones I find the idea of putting more than one core quite rubbish.
It is not the best solution for a tablet or any other mobile device either. I would take well-programmed software over overpowered hardware any day.
Yet the Tegra has a nice concept.
I think most of the time I won't use more than that 5th core. I mean, it is even powerful enough to play HD video.
I will primarily use apps that display text and images, like the browser, which is said to utilize 4 cores - but I am sure that's only because of crappy programming.
So if people finally come to their senses and start optimizing their apps, we will have one quite powerful core and 4 in backup for REAL needs. Seems like an investment in the future to me.
Sent from my Nexus One using XDA App
Straight from Wikipedia:
Tegra 3 (Kal-El) series
Processor: quad-core ARM Cortex-A9 MPCore, up to 1.4 GHz single-core mode and 1.3 GHz multi-core mode
12-Core Nvidia GPU with support for 3D stereo
Ultra low power GPU mode
40 nm process by TSMC
Video output up to 2560×1600
NEON vector instruction set
1080p MPEG-4 AVC/h.264 40 Mbps High-Profile, VC1-AP and DivX 5/6 video decode[18]
The Kal-El chip (CPU and GPU) is to be about 5 times faster than Tegra 2[19]
Estimated release date is now to be Q4 2011 for tablets and Q1 2012 for smartphones, after being set back from Nvidia's prior estimated release dates of Q2 2011,[20] then August 2011,[21] then October 2011[22]
The Tegra 3 is functionally a quad-core processor, but includes a fifth "companion" core. All cores are Cortex-A9s, but the companion core is manufactured with a special low power silicon process. This means it uses less power at low clock rates, but more at higher rates; hence it is limited to 500 MHz. There is also special logic to allow running state to be quickly transferred between the companion core and one of the normal cores. The goal is for a mobile phone or tablet to be able to power down all the normal cores and run on only the companion core, using comparatively little power, during standby mode or when otherwise using little CPU. According to Nvidia, this includes playing music or even video content.[23]
Tegra 3 officially released on November 9, 2011
Tegra 2's maximum RAM limit was 1GB. Tegra 3's could be 2GB.
xTRICKYxx said:
Straight from Wikipedia:
Tegra 2's maximum RAM limit was 1GB. Tegra 3's could be 2GB.
The rumor mill is churning out some specs on an upcoming Lenovo tablet with some funky specs, like 2GB DDR3... so it's possible. However, the same leak/article also says its chip is clocked at 1.6GHz, which is quite a bit out of spec, so I would take it with the usual grain of salt.
jerrykur said:
As I understand it, the use of the lower power 5th core has decreased battery consumption by over 60% when compared to the earlier 2 core design. I am not sure how they are measuring consumption and the task load.
You can read the white papers on the Tegra 3 over on Nvidia's website. But the chip has a controller built in that activates either the 4 main cores or the 1 companion core based on the power demand of a given processing activity.
The quad cores and the single core are made out of different silicon processes, but share the same design structure, in order to maximize energy efficiency along the performance curve. Each process is more efficient at a different point on the power curve, so the 5th core is very efficient at the low processing levels where it is actively used.
It's pretty cool stuff
RussianMenace said:
The rumor mill is churning out some specs on an upcoming Lenovo tablet with some funky specs, like 2GB DDR3... so it's possible. However, the same leak/article also says its chip is clocked at 1.6GHz, which is quite a bit out of spec, so I would take it with the usual grain of salt.
*Correction: Tegra 3 supports DDR2 AND DDR3. The original Transformer had 1GB of DDR2 @ 667MHz. The Prime has 1GB of LPDDR2 @ 1066MHz, a considerable bump in speed. Also, Tegra 3 supports up to DDR3 @ 1500MHz!
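Back-of-the-envelope, assuming those figures are data rates on the single 32-bit bus discussed below: 667 MT/s x 4 bytes ≈ 2.7 GB/s peak for the TF101, versus 1066 MT/s x 4 bytes ≈ 4.3 GB/s for the Prime - roughly 60% more peak bandwidth, if those assumptions hold.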
xTRICKYxx said:
I think the only compatible RAM would be DDR2. Clock speeds don't matter, as the Tegra 3 can be OC'd to 2GHz no problem.
I'm sure it can; hopefully they increase the battery capacity to compensate for the increased power use. As far as the memory, Nvidia's site on Tegra 3 lists DDR3 (though it's still running on a 32-bit bus, which may or may not be an issue with 3D games), up to 2GB. However, every bit of spec info on the Prime I can find lists DDR2... so I don't know.
RussianMenace said:
I'm sure it can; hopefully they increase the battery capacity to compensate for the increased power use. As far as the memory, Nvidia's site on Tegra 3 lists DDR3 (though it's still running on a 32-bit bus, which may or may not be an issue with 3D games), up to 2GB. However, every bit of spec info on the Prime I can find lists DDR2... so I don't know.
The Prime's RAM speed is considerably faster than the TF101.
If it does have room to expand, could we expand or upgrade the RAM?
doeboy1984 said:
If it does have room to expand, could we expand or upgrade the RAM?
Judging by the pictures, it doesn't look like the RAM will be removable or upgradeable (the RAM is the Elpida chip right next to the processor).
xTRICKYxx said:
The Prime's RAM speed is considerably faster than the TF101.
I never said it wasn't.
What I said is that both Tegra 2 and now Tegra 3 have a single 32-bit wide memory interface, compared to the two on the A5, Exynos, Qualcomm and OMAP4 chips. What that means is that theoretically it will have lower bandwidth, which may cause problems with upcoming games, especially considering that you now have to feed extra cores and a beefier GPU. Now, whether or not it will actually be an issue... we will have to see.
Sad that the SGX543MP2 in the iPad 2 is still faster than the Tegra 3's GPU. Apple is always ahead of the curve... Just when Android devices started becoming as fast as the iPad 1, the iPad 2 was released, and it remains one of the strongest SoCs out in the field.
Even for pure CPU benches, the 1GHz dual-core A5 smokes most chips running faster clocks in dual-core configs.
Regardless, this is still the most powerful Android device to date. Just disappointed that Nvidia, one of the kings of GPU making, can't even compete with PowerVR, a much smaller company with a lot less money.
Diversion said:
Sad that the SGX543MP2 in the iPad 2 is still faster than the Tegra 3's GPU. Apple is always ahead of the curve... Just when Android devices started becoming as fast as the iPad 1, the iPad 2 was released, and it remains one of the strongest SoCs out in the field.
Even for pure CPU benches, the 1GHz dual-core A5 smokes most chips running faster clocks in dual-core configs.
Regardless, this is still the most powerful Android device to date. Just disappointed that Nvidia, one of the kings of GPU making, can't even compete with PowerVR, a much smaller company with a lot less money.
I would have to agree with you that Nvidia dropped the ball on their new GPU, at least on paper.
However, it's not as simple as having "omg wtf i > you" hardware that's the source of the performance. What Apple really has going for them is uniformity of hardware/software. Apple software is designed to work on a very specific and strictly controlled hardware setup, which allows for an incredible level of optimization of the software. This "closed loop" of software/hardware is what really drives the performance of the iProducts. Simply put, probably way over-simplified, but it lets them do more with less.
Diversion said:
Sad that the SGX543MP2 in the iPad 2 is still faster than the Tegra 3's GPU. Apple is always ahead of the curve... Just when Android devices started becoming as fast as the iPad 1, the iPad 2 was released, and it remains one of the strongest SoCs out in the field.
Even for pure CPU benches, the 1GHz dual-core A5 smokes most chips running faster clocks in dual-core configs.
Regardless, this is still the most powerful Android device to date. Just disappointed that Nvidia, one of the kings of GPU making, can't even compete with PowerVR, a much smaller company with a lot less money.
Very good point. Also, Apple has the apps and games that showcase and utilize all this extra power. Even my original iPad has apps/games that I haven't seen Android dual-core equivalents of. I love my iPad, but I also own the Atrix, a dual-core Tegra 2 phone. I know the open-sourced Android will win out in the end.
I came across a good comment in the Lenovo specs link that a member here posted in this thread.
"Google and NVidia need to seriously subsidize 3rd party app development to show ANY value and utility over iPad. Apple won't rest on its laurels as their GPU performance on the A5 is already ahead with games and APPs to prove it".
What do you all think about this? Not trying to thread-jack, as I see it's relevant to this thread also. What apps/games does Android have up its sleeve to take advantage of this new Tegra 3? The majority of Android apps/games don't even take advantage of Tegra 2 and similar SoCs yet. Are we going to have all this extra power for a while without it ever really being used to its potential? Android needs some hardcore apps and games. The iPad has all the b.s. stuff also, BUT it has very hardcore apps and games too, which use it close to its full potential. IMO my iPad 1, jailbroken, still trumps most of these Tegra 2 tablets out now. Not because of hardware specs, but because of the quality of the apps and games I have. I've noticed Android is finally starting to get more hardcore games like ShadowGun, Gameloft games, etc. I can't overclock or customize my iPad as extensively as Android, but the software/apps/games I have are great. No, I don't want an iPad 2 or iPad 3. I want an Android tablet now because of the greater potential. Just like with anything in life, potential doesn't mean sh$& if it's not utilized and made a reality.
I was a Windows Mobile person first. Then I experienced dual booting with XDAndroid on my Tilt 2, and I loved it. Then I knew I wanted a real Android phone or tablet. The first Android tablet I owned, for only a day, was the Archos 7 IT. It was cool, but I returned it since it couldn't connect to my WMWifiRouter, which uses an ad-hoc network. So I researched and finally settled on taking a chance with the Apple iPad. I used to be an Apple hater to the max, lol. My iPad changed all of that. I still hate the closed system of Apple, but I had to admit the iPad worked great for what I needed and wanted to do. This iPad, which I'm writing this post on now, still works flawlessly after almost 2 years, and its specs are nowhere near the iPad 2 or all these new dual-core tablets out now. I'm doing amazing stuff with only 256MB of RAM, SMH. I hated having to hook the iPad up to iTunes for everything like music and videos, so I jailbroke it and got iFiles, which is basically a very detailed root file explorer. I also have the USB and SD card adapter. So now I can put my content on my iPad myself without needing to be chained to iTunes; iTunes is only good for software updates. I'm still on the 4.2.1 jailbroken firmware on the iPad. I never bothered or really wanted to upgrade to the new iOS 5.0.1 out now. With all my jailbreak mods/tweaks, I've been doing most of the new stuff people are only now able to do. All Apple did was implement jailbreak tweaks into their OS, for the most part.
Sorry for the long rant. I'm just excited about getting the new Prime Tegra 3 tablet. I just hope the apps/games that really take advantage of this power start rolling out fast. And I don't just mean TegraZone stuff, lol. Android developers are going to have to really step their game up once these new quad cores come out; really, even now with dual cores too. I'm a fan of technology in general. Competition only makes things better. Android is starting to overtake Apple in sales and similar categories. The only thing is, Android hasn't gotten on par with Apple-quality apps yet. The iPad tablet-only apps are very numerous; lots are b.s., but tons are very great also. I'm just hoping Android tablet-only apps will be at least the same quality or better. I'm not looking to get a new quad-core tablet to play Angry Birds or other kiddy-type games. I'm into productivity, media apps, and hardcore games, like Rage HD, NOVA 2, Modern Combat 3, Order & Chaos, Infinity Blade, ShadowGun, etc., all of which I have, and more, on my almost 2-year-old iPad 1.
Asus, being the first manufacturer to come out with a quad-core tablet and a Super IPS+ display, might just be the last push needed to get things really rolling for Android, as far as high quality software and a tablet-optimized OS go. Can't wait to see how this plays out.
RussianMenace said:
I would have to agree with you that Nvidia dropped the ball on their new GPU, at least on paper.
However, it's not as simple as having "omg wtf i > you" hardware that's the source of the performance. What Apple really has going for them is uniformity of hardware/software. Apple software is designed to work on a very specific and strictly controlled hardware setup, which allows for an incredible level of optimization of the software. This "closed loop" of software/hardware is what really drives the performance of the iProducts. Simply put, probably way over-simplified, but it lets them do more with less.
Great point, just as I was saying basically in my long post, lol.
nook-color said:
You can read the white papers on the Tegra 3 over on Nvidia's website. But the chip has a controller built in that activates either the 4 main cores or the 1 companion core based on the power demand of a given processing activity.
The quad cores and the single core are made out of different silicon processes, but share the same design structure, in order to maximize energy efficiency along the performance curve. Each process is more efficient at a different point on the power curve, so the 5th core is very efficient at the low processing levels where it is actively used.
It's pretty cool stuff
That is correct. Actually, the "5th" core is licensed with the ARM A7 instruction set; the quads are A9.
RussianMenace said:
I would have to agree with you that Nvidia dropped the ball on their new GPU, at least on paper.
However, it's not as simple as having "omg wtf i > you" hardware that's the source of the performance. What Apple really has going for them is uniformity of hardware/software. Apple software is designed to work on a very specific and strictly controlled hardware setup, which allows for an incredible level of optimization of the software. This "closed loop" of software/hardware is what really drives the performance of the iProducts. Simply put, probably way over-simplified, but it lets them do more with less.
Again, I agree. It's just like asking why the Xbox 360 and PS3 consoles can still push high quality graphics compared to a new high-end PC: uniformity of hardware plays a big role there.
I have a $4000 custom PC. Sometimes I see my brother play the same games on his $250 PlayStation 3 with performance and graphics very similar to my PC.
CyberPunk7t9 said:
I have a $4000 custom PC. Sometimes I see my brother play the same games on his $250 PlayStation 3 with performance and graphics very similar to my PC.
That's because these days, most PC games are console ports.
GPU specs don't matter. The iPad has more and better games than Android tabs, and that won't change for the (1-yr) lifespan of the Teg3. Not to be a downer, but it's just reality.
The Prime is better at certain things. HDMI-out and USB host (NTFS) support make it a pretty good HTPC, for one. But I wouldn't get into a pissing contest over games--unless of course you're talking about emus.
e.mote said:
GPU specs don't matter. The iPad has more and better games than Android tabs, and that won't change for the (1-yr) lifespan of the Teg3. Not to be a downer, but it's just reality.
The Prime is better at certain things. HDMI-out and USB host (NTFS) support make it a pretty good HTPC, for one. But I wouldn't get into a pissing contest over games--unless of course you're talking about emus.
Is that true? NTFS support? Are you sure? Can you link me to a spec for that? If so, then I can transfer files from my SD card to an external NTFS drive without using Windows! That would be great for trips when I need to dump digital pics.
Hi all,
I have been developing a small Android app which is sort of a reader for results coming from a desktop application. Some of these results are in the shape of a 3-dimensional structure made of a number of basic geometries, which I have been generating using a library I coded in C++ using OpenSceneGraph and compiled with the NDK. I have tested my app on my HTC EVO 3D (first on the stock ROM, then on a few Gingerbread custom ROMs and finally on a few ICS ROMs too) and also on a crappy 7'' Chinese tablet which I bought really cheap a while ago. This tablet has a pretty basic AllWinner A10 single-core 1GHz processor, 512MB RAM and a Mali 400 GPU. So nothing fancy at all. However, in all my tests I get about 2 to 3 times as many FPS from the tablet compared to the EVO. The structure can be moved, zoomed in and out much more smoothly. Remarkably so!
Am I missing something obvious here? Is there a "turn on graphics acceleration, you idiot!" button which I have not found yet? I mean, just in terms of specs I would have expected the EVO to run circles around that tablet.
Has anyone got any idea?
Cheers.
Have you tried forcing HW acceleration through your build.prop to see if it makes a difference on your setup?
#Root-Hack_Mod*Always\
debug.sf.hw = 1 already. Anything else in the build.prop file that may improve this? Could it be a drivers issue, or is it just me expecting more than I should from this phone?
Are you hitting the frame limit cap?
What would the value of this cap be? I barely go above 15-20fps on the smallest structures. Anyway, don't get me wrong: I can live with it.
It was just curiosity, because I expected much better performance from the EVO, and so I was wondering where/what the bottleneck is.
From what I understand, HTC shipped the EVO 3D with terrible drivers; I think they fixed this problem with the ICS update. With the fixed drivers, the Adreno 220 is able to surpass the Mali 400 MP4 (Galaxy S2 version) in some situations.
Last summer, I decided to buy a Nexus 7, mainly to use it as an ebook reader. It's perfect for that with its very sharp 1280x800 screen. It was my first Android device and I love this little tablet.
I'm a fan of retro gaming and I have installed emulators on every device I own: Pocket PC, Xbox, PSP Go, iPhone, iPad 3, PS3. So I discovered that the Android platform has one of the most active communities for emulation fans like me, and I bought many emulators, including all those made by Robert Broglia (the .EMU series). They ran great on the N7, but I found that 16GB was too small, as was the screen.
I waited and waited until the 32GB Nexus 10 became available here in Canada and bought it soon after (10 days ago). With its A15 cores, I was expecting the N10 to be a great device for emulation, but I am now a little disappointed. When buying the N10, I expected everything to run faster than on the N7 by a noticeable margin.
Many emulators run slower on the N10 than on the N7. MAME4droid and MAME4droid Reloaded are no longer completely smooth with more demanding ROMs; Omega 500, Colleen, UAE4Droid and SToid are slower, and some others needed much more tweaking than on the N7. I'm a little extreme on accuracy of emulation and I like everything to be as close to the real thing as possible. A solid 60 fps for me is a must (or 50 fps for PAL machines).
On the other side, there are emus that run very well: the .EMU series and RetroArch, for example. These emulators are much more polished than the average quick port and they run without a flaw. They're great on the 10-inch screen and I enjoy them very much. The CPU-intensive emulators (Mupen64Plus AE and FPse) gained some speed, but less than I anticipated.
So is this because of the monster Nexus 10's 2560x1600 resolution? Or is it because of limited memory bandwidth? Maybe some emulators are not tweaked for the N10 yet. I wish some emulators had the option to set a lower resolution for rendering and then upscale the output. I think that many Android apps just try to push the frames at the native resolution without checking first if there is a faster way.
The N7 has a lower-clocked 4-core CPU but only 1/4 the resolution. I think it's a more balanced device than the N10, which may have a faster dual-core CPU but has too many pixels to push. It's much like the iPad 3, which was twice as fast as the iPad 2 but had a 4x increase in resolution.
I am now considering going for a custom ROM on the N10 but I wonder if I will see an increase in emulation speed. Maybe those of you who did the jump can tell me. I'm thinking about AOKP maybe.
Any suggestion on that would be appreciated, thanks!
The emulators just need to be tweaked a bit to perform better on the completely different processor architecture. Really, our processor is far more powerful than the Nexus 7's, so the emulators should run faster. I too am a fan of the old games, and I play Super Nintendo and Game Boy Advance (and some Color) games quite often. I find performance to be perfect with no issues at all, but then again those aren't exactly "demanding" emulators.
We do not have any sort of memory bandwidth limitation on the Nexus 10. The tablet has been designed to give the full 12.8 GB/s of memory bandwidth that is required for the 2560x1600 resolution.
EniGmA1987 said:
The emulators just need to be tweaked a bit to perform better on the completely different processor architecture. Really, our processor is far more powerful than the Nexus 7's, so the emulators should run faster. I too am a fan of the old games, and I play Super Nintendo and Game Boy Advance (and some Color) games quite often. I find performance to be perfect with no issues at all, but then again those aren't exactly "demanding" emulators.
We do not have any sort of memory bandwidth limitation on the Nexus 10. The tablet has been designed to give the full 12.8 GB/s of memory bandwidth that is required for the 2560x1600 resolution.
Hmm, if no memory bandwidth limitation exists on the N10, wouldn't I be able to run GTA 3 at 100% screen resolution and not have significantly lower FPS, as compared to 50% resolution?
Even Beat Hazard Ultra seems to be a bit laggy on the N10. When I inquired about it to the developer, he said:
Having to render to that size of screen [2560x1600] will slow the game down. It’s called being ‘fill rate bound’. Even for a good processor it's a lot of work as the game uses quite a lot of overdraw.
The solution is to draw everything to a smaller screen (say half at 1280x800) and then stretch the final image to fill the screen.
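For what it's worth, Android does expose exactly that trick: a GLSurfaceView can be given a smaller fixed-size surface and the hardware scaler stretches it to the panel. A minimal sketch (the activity name and the 1280x800 choice are mine):

Code:
import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class HalfResActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GLSurfaceView view = new GLSurfaceView(this);
        // Render into a half-size buffer; the compositor scales it up to the
        // full 2560x1600 panel, so the GPU only shades a quarter of the pixels.
        view.getHolder().setFixedSize(1280, 800);
        view.setRenderer(new GLSurfaceView.Renderer() {
            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                gl.glClearColor(0f, 0f, 0f, 1f);
            }
            public void onSurfaceChanged(GL10 gl, int w, int h) {
                gl.glViewport(0, 0, w, h); // w,h will be 1280x800 here
            }
            public void onDrawFrame(GL10 gl) {
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT); // a real scene would draw here
            }
        });
        setContentView(view);
    }
}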
Sadly true. My Nexus 10 gets damn hot and I have to play games at 1.4 or 1.2GHz; that sucks.
Sent from my XT925 using xda app-developers app
espionage724 said:
Hmm, if no memory bandwidth limitation exists on the N10, wouldn't I be able to run GTA 3 at 100% screen resolution and not have significantly lower FPS, as compared to 50% resolution?
Even Beat Hazard Ultra seems to be a bit laggy on the N10. When I asked the developer about it, he said:
Click to expand...
Click to collapse
But fillrate isn't memory bandwidth. We need both more MHz and more raster operation units (ROPs) to get a higher fill rate in pixels per second. We can overclock the GPU to get the MHz, and that will help, but we also have to solve the higher heat output that comes with it. More ROPs are impossible, as how many we have is fixed in the hardware design. If we ever get to overclock up to around 750 MHz, then we should see a 30-40% improvement in fill rate. At that point we may hit memory bandwidth problems, but we won't know for sure until we get there. But the 12.8 GB/s of bandwidth that we currently have is enough to support 2560x1600 at our current GPU power. Our Nexus 10 also has the highest fillrate of any Android phone or tablet to date, about 1.4 Gtexels/s. And if we had a memory bandwidth limitation, we would see no improvement at all from the current overclock we do have, up to 612-620 MHz, because the clock speed wouldn't be where the bottleneck is. Yet we can clearly see in benchmarks and real gaming that we get FPS increases with higher MHz, thus our current problem is fillrate and not memory bandwidth.
Also, the solution is not to render the game at half the resolution; that is a band-aid over the real problem. If the developer of a game coded it properly we wouldn't have this problem, and if they don't feel like doing that, then they should at least stop trying to put more into the game than their un-optimized, lazy project is capable of running nicely.
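If it helps, here's a back-of-the-envelope version of that fillrate argument. The 4 pixels/clock and 533 MHz stock clock are assumptions pulled from public Mali-T604 figures, not measured values:

```java
// Rough fillrate check for the N10's GPU (all constants are assumptions)
public class FillrateSketch {
    static final long PIXELS_PER_CLOCK = 4;   // assumed ROP throughput
    static final long STOCK_MHZ = 533;        // assumed stock GPU clock
    static final long OC_MHZ = 750;           // hoped-for overclock

    public static void main(String[] args) {
        double stockFill = PIXELS_PER_CLOCK * STOCK_MHZ * 1e6;  // pixels/s
        double ocFill    = PIXELS_PER_CLOCK * OC_MHZ * 1e6;
        double needed    = 2560.0 * 1600 * 60;  // one full-screen pass at 60 fps

        System.out.printf("Stock fillrate: %.2f Gpix/s%n", stockFill / 1e9);
        System.out.printf("OC fillrate:    %.2f Gpix/s (+%.0f%%)%n",
                ocFill / 1e9, (ocFill / stockFill - 1) * 100);
        // How many times the screen can be overdrawn per frame at stock clocks
        System.out.printf("Overdraw budget at stock: %.1fx%n", stockFill / needed);
    }
}
```

Under those assumptions the 533→750 MHz jump works out to about +41%, which lines up with the 30-40% improvement claimed above.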
espionage724 said:
Hmm, if no memory bandwidth limitation exists on the N10, wouldn't I be able to run GTA 3 at 100% screen resolution without significantly lower FPS compared to 50% resolution?
Even Beat Hazard Ultra seems to be a bit laggy on the N10. When I asked the developer about it, he said:
Click to expand...
Click to collapse
With that logic you could buy any video card for a PC and it would run any game at the resolution the card supports. That isn't the case, because rendering involves more than just raw fill rate: there are textures, polygons, multiple rendering passes, filtering, and so on. As EniGmA1987 mentioned, nothing has been optimized to take advantage of this hardware yet; developers were literally crossing their fingers hoping their games would run 'as is'. Thankfully the A15 CPU cores in the Exynos will be used in the Tegra 4 as well, so we can look forward to CPU optimizations soon, which will definitely help.
Emulators are more CPU-intensive than anything else; give it a little time and you won't have any problems with your old-school games. Run the new 3DMark bench to see what this tablet can do: it runs at native resolution and it's not even fully optimized for this architecture yet.
2560*1600*4*60/1024/1024 = 937.3 MB/s for a 60 fps game at 32-bit depth. Most emulators don't use 3D functions, so fillrate, rendering, and overdraw won't be a factor. Most emulators are single-threaded (correct me if I'm wrong), and the A15 should shine in this particular situation, and even more so in multi-threaded scenarios. With its out-of-order pipeline and greatly enhanced efficiency it should be perfectly suited for the job.
We have the fillrate, we have enough CPU power, and I'm still wondering why simple apps like emulators aren't much faster than that. Is it Android? Is it the Dalvik VM? Or is it because some emulators would need to be written in native code instead of running in the Java VM? I'm not a developer and I have only minimal knowledge in this department. I can only speculate, but I'm curious enough about it that I started googling around to find out why.
Lodovik said:
2560*1600*4*60/1024/1024 = 937.3 MB/s for a 60 fps game at 32-bit depth
Click to expand...
Click to collapse
Just curious, but what is that calculation supposed to be? The total bandwidth needed? Because I don't see your bit depth in there, unless the 4 is supposed to be that? If that is true, then you are calculating at 4-bit color depth?
And then the result would just be the bandwidth required for pixel data to memory, wouldn't it? It wouldn't include texture data moving in and out of memory, or other special functions like post-processing.
2560*1600 = number of pixels on the screen
4 = bytes per pixel at 32-bit depth
60 = frames per second
/1024/1024 = divide twice to get the result in MB/s
Actually, I made a typo; the result is 937.5 MB/s, or 0.92 GB/s. This is just a rough estimate to get an idea of what is needed at this resolution just to push all the pixels to the screen in flat 2D at 60 fps, assuming that emulators don't use accelerated functions.
My point was that with 12.8 GB/s of memory bandwidth, we should have more than enough, even if this estimate isn't very accurate.
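For anyone who wants to replay the arithmetic, here's a throwaway sketch of the same estimate (same assumptions as above: flat 2D, 32-bit color, no textures or overdraw):

```java
// Framebuffer-bandwidth estimate for the Nexus 10 panel, mirroring the
// calculation above. This is a rough 2D-only figure, not a measurement.
public class BandwidthSketch {
    public static void main(String[] args) {
        long width = 2560, height = 1600;
        long bytesPerPixel = 4;   // 32-bit color
        long fps = 60;

        double bytesPerSecond = width * height * bytesPerPixel * fps;
        double mbPerSecond = bytesPerSecond / 1024 / 1024;

        // Prints 937.5 MB/s -- well under the 12.8 GB/s quoted for the N10
        System.out.printf("%.1f MB/s (%.2f GB/s)%n",
                mbPerSecond, mbPerSecond / 1024);
    }
}
```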
Thanks for the explanation
If there really were a memory bandwidth limitation, the newer Trinity kernels and the newest KTManta should help. In addition to the higher GPU speeds they both allow (up to 720 MHz on KTManta), both also run increased memory speeds, which raise memory bandwidth to 13.8 GB/s, up from 12.8 GB/s on stock.
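A quick sketch of where those bandwidth figures come from, assuming a 64-bit memory interface (the bus width and transfer rates are assumptions chosen to reproduce the quoted numbers, not confirmed specs):

```java
// Memory bandwidth = bus width (bytes) x transfer rate (MT/s); assumptions only
public class MemClockSketch {
    public static void main(String[] args) {
        double busBytes = 8;        // assumed 64-bit bus = 8 bytes per transfer
        double stockMTps = 1600;    // transfer rate that yields 12.8 GB/s
        double ocMTps = stockMTps * 13.8 / 12.8;  // ~1725 MT/s for 13.8 GB/s

        System.out.printf("Stock: %.1f GB/s%n", busBytes * stockMTps / 1000);
        System.out.printf("OC:    %.1f GB/s at %.0f MT/s%n",
                busBytes * ocMTps / 1000, ocMTps);
    }
}
```

The point being that the kernel's memory overclock buys bandwidth linearly: roughly an 8% higher memory clock gives roughly 8% more GB/s.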
Thanks for the info. There are so many configuration options available for the Nexus 10. I really enjoy having all those possibilities.
EniGmA1987 said:
If there really were a memory bandwidth limitation, the newer Trinity kernels and the newest KTManta should help. In addition to the higher GPU speeds they both allow (up to 720 MHz on KTManta), both also run increased memory speeds, which raise memory bandwidth to 13.8 GB/s, up from 12.8 GB/s on stock.
Click to expand...
Click to collapse
Lodovik said:
2560*1600*4*60/1024/1024 = 937.3 MB/s for a 60 fps game at 32-bit depth. Most emulators don't use 3D functions, so fillrate, rendering, and overdraw won't be a factor. Most emulators are single-threaded (correct me if I'm wrong), and the A15 should shine in this particular situation, and even more so in multi-threaded scenarios. With its out-of-order pipeline and greatly enhanced efficiency it should be perfectly suited for the job.
We have the fillrate, we have enough CPU power, and I'm still wondering why simple apps like emulators aren't much faster than that. Is it Android? Is it the Dalvik VM? Or is it because some emulators would need to be written in native code instead of running in the Java VM? I'm not a developer and I have only minimal knowledge in this department. I can only speculate, but I'm curious enough about it that I started googling around to find out why.
Click to expand...
Click to collapse
You are taking what I said out of context. I was responding to someone else, thus the "quote" above my post.
Since you posted, I loaded up some Super Nintendo, N64, and PlayStation games on my N10 without any issues. It may just be your setup. There are a lot of tweaks out there that could easily increase performance; one great and very simple one is enabling 2D GPU rendering, which is in Developer Options. Just do some searching. GPU overclocking won't help much since, as you said above, your games are only 2D. I am sure you can get them running just fine.
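For reference, the Developer Options toggle ("Force GPU rendering") applies system-wide, but an app can opt in per view with setLayerType. A minimal sketch (the emulatorSurface parameter name is hypothetical, not from any real emulator's source):

```java
import android.view.View;

// Ask Android to composite a view through the GPU instead of the
// software drawing path -- the per-app version of the 2D GPU tweak.
public class RenderTweaks {
    public static void enableHardwareLayer(View emulatorSurface) {
        // Cache and composite this view's contents as a hardware layer
        emulatorSurface.setLayerType(View.LAYER_TYPE_HARDWARE, null);
    }
}
```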