OK, I had my mind set on the Optimus 3D as there was no sign of the Evo 3D coming out in the UK. Now that we have a date, I'm in two minds again.
I'm still swaying towards the LG. I know the Evo's specs are a lot higher, but I'm buying for the 3D aspect, and I believe LG will make more 3D content than HTC: they have said there will be a 3D store, they have a 3D menu, there will be 3D games pre-installed, and they are working with publishers for more 3D content. I have heard nothing like this from HTC, and they haven't been great with things like this in the past.
However, I may be swayed back to the HTC if the photos are any good.
Does anyone actually have the Evo 3D in their hands like a few have the Optimus?
If so, can anyone let me see a full size .jps file taken by the phone in natural lighting?
The Evo 3D won't be in hands till June, and in terms of specs the Evo wins. In terms of content, the Market will provide most of it; a few pre-installed apps isn't enough to justify a weaker phone, especially since you'll probably be able to just install the APK. I'm sure there will be minor differences in the way 3D is displayed on each phone, but both HTC and LG are heavily invested in 3D, so I don't imagine the differences will be earth-shattering.
The only real reason to pick one phone over the other in this case is camera quality, if you intend to take 3D pics.
Cheers Aaron. The much earlier release date of the Optimus is also a plus point. 3D content will be king for me; all other apps will be just about the same. I just can't see HTC having their own app store for 3D games/apps like LG will.
I suppose I could get the Optimus and then sell it for the HTC if I decide the Evo's images are far superior.
I'm in the same situation, especially now that I know the release date for the Evo 3D. On the 3D side there is one question that will decide it: which phone can play back 3D movies? I want to be sure the phone can play 3D movies before I get it. I don't know why there is no information on whether these phones can play back films you copy onto them. That is the priority for me; I'm not so excited about recording movies in 3D. The most important thing is: can I play back a 3D movie such as Avatar 3D or Resident Evil 3D?
I will only get the HTC, and with its better specs, whatever the Optimus can do the Evo 3D can do better!
how can you be sure?
1.2GHz doesn't mean it will definitely be quicker than the 1GHz; they're different types of CPU, with different memory channels etc.
It could be like saying a 12MP camera is better than a 10MP camera just because there are more pixels, without looking at which has the better optics or sensor.
Plus, the Optimus is in people's hands now and is released the first week of next month. Can the HTC do that better?
I can't be sure, but based on my knowledge of computers, a dual core will definitely run smoother and quicker. It's why it's hard to find a single-core processor anymore. It's nearly safe to assume more = better.
toxicfumes22 said:
I can't be sure, but based on my knowledge of computers, a dual core will definitely run smoother and quicker. It's why it's hard to find a single-core processor anymore. It's nearly safe to assume more = better.
Got to disagree. My 3.6GHz i7 is a lot faster than a 4GHz i5, and I very much doubt there's much in it between dual cores at 1GHz and 1.2GHz made by different manufacturers.
Haha, the i7 is the better processor and still has multiple cores. Your i7 has more cache memory, so it can process more efficiently, which in turn is faster. But understand I am saying more cores = better, and those are both quad-core processors. Also, that i5 is overclocked and the i7 is not. And there are many other things that determine why you can tell a difference.
x7nofate said:
I'm in the same situation, especially now that I know the release date for the Evo 3D. On the 3D side there is one question that will decide it: which phone can play back 3D movies? I want to be sure the phone can play 3D movies before I get it. I don't know why there is no information on whether these phones can play back films you copy onto them. That is the priority for me; I'm not so excited about recording movies in 3D. The most important thing is: can I play back a 3D movie such as Avatar 3D or Resident Evil 3D?
I do believe it will play back 3D movies... I have never been able to watch a full movie on my EVO, as I just don't see the joy in it. Maybe the 3D addition will make it more fun.
The EVO 3D isn't a 1.2GHz phone, it's a 2.4GHz phone. Each core works independently of the other, so the phone can send commands to each core. No phone or PC on the market, or coming to market, can do that.
[email protected] said:
I do believe it will play back 3D movies... I have never been able to watch a full movie on my EVO, as I just don't see the joy in it. Maybe the 3D addition will make it more fun.
Dude, I watch 1-2 movies on every flight I take. It's awesome. I use RockPlayer and it can play nearly anything.
Not necessarily. The Evo has two 1.2GHz cores, yes, but that doesn't mean it's a 2.4GHz phone; more like the equivalent of 1.8GHz, with massive battery savings. It will also handle multiple tasks much better. It would only be like 2.4GHz if both cores were at 100%.
sero2012 said:
The EVO 3D isn't a 1.2GHz phone, it's a 2.4GHz phone. Each core works independently of the other, so the phone can send commands to each core. No phone or PC on the market, or coming to market, can do that.
Sent from my HERO200 using XDA App
sero2012 said:
The EVO 3D isn't a 1.2GHz phone, it's a 2.4GHz phone. Each core works independently of the other, so the phone can send commands to each core. No phone or PC on the market, or coming to market, can do that.
Ummmmm, it doesn't quite work like that. The two cores can't work on the same task like two people pushing one cart; it's more like two people pushing two carts (each one pushing their own cart) at the same time. Intel has started having cores work with each other more in the new i7 processors, but I don't see that happening in this small chip. It's simply two 1.2GHz processors... to some degree. Now let's say you're opening a program: one core opens it, and the other keeps all the other junk running in the background. The cores can work on the same task, but not on top of each other as described above. It's described in its name: dual core = two processor cores in one chip.
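The two-carts picture can be sketched in a few lines of Python. This is illustrative only: `open_app` and `background_junk` are made-up stand-ins for a foreground launch and background housekeeping, and whether the two threads actually land on different cores is up to the OS scheduler (in CPython the GIL also serializes bytecode), so it shows the programming model, not a guaranteed speedup.

```python
import threading
import queue

results = queue.Queue()

def open_app():
    # foreground task (one "cart"): pretend launch work
    results.put(("app", sum(range(1000))))

def background_junk():
    # background task (the other "cart"): pretend housekeeping
    results.put(("background", "synced"))

threads = [threading.Thread(target=open_app),
           threading.Thread(target=background_junk)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both independent tasks completed; neither waited for the other.
tags = sorted(tag for tag, _ in (results.get(), results.get()))
print(tags)  # ['app', 'background']
```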
[email protected] said:
I do believe it will play back 3D movies... I have never been able to watch a full movie on my EVO, as I just don't see the joy in it. Maybe the 3D addition will make it more fun.
I spoke with two people on this forum who have the LG Optimus 3D, and neither confirmed that it plays 3D movies. I tried converting a movie in many ways, but it didn't help.
See my last post here >>> http://forum.xda-developers.com/showthread.php?t=985690&page=5
This is HelmuthB's answer to my private message: "For some reason the 'Step Up' clip does not play, neither on my PC (Ubuntu) nor on the LG. The Avatar clips play, but only in 2D; it does not combine the two sides into one. :-("
P.S. Does the HTC Evo 3D have Gorilla Glass?
toxicfumes22 said:
Ummmmm, it doesn't quite work like that. The two cores can't work on the same task like two people pushing one cart; it's more like two people pushing two carts (each one pushing their own cart) at the same time. Intel has started having cores work with each other more in the new i7 processors, but I don't see that happening in this small chip. It's simply two 1.2GHz processors... to some degree. Now let's say you're opening a program: one core opens it, and the other keeps all the other junk running in the background. The cores can work on the same task, but not on top of each other as described above. It's described in its name: dual core = two processor cores in one chip.
It also doesn't work quite like you've described here.
Anyone really interested should Google CPU threading to get a brief overview of how it works at a conceptual level, and then should look at the specific implementations on each actual CPU architecture to get a deeper understanding, and even further the specific implementations for an instance of that architecture.
For example: threading theory > threading in the ARM CPU architecture > threading in the ARM Cortex-A9, Tegra 2, Hummingbird, Qualcomm MSM series, etc.
Trying to compare how it works for these ARM procs versus x86 procs (like Intel or AMD chips) is not only a waste of time, it's also incorrect.
If ARM procs handled processing and threading the same as x86 chips, Microsoft would not have to specifically release Windows 8 with an ARM-compatible version.
IN GENERAL, the theory of threading includes the concept of CPU affinity for a thread of processing. In the case of multi-core CPUs, in many instances, this just means that there are more available processing cores to which multi-threaded code can send processes.
In the case of more recent dual-core CPUs, the implementation has also included dynamic frequency scaling, even to the core level, such that when not in use, a core can lay dormant at a very low frequency, consuming very little power.
The result, in user perception, is a savings in power, because 2 (or 3 or 4 or 6 or 8 or whatever) cores can accomplish a set of tasks much more efficiently and with less power than a single core. The single core would have to run at max frequency for the entire duration of the processing, whereas with 2 cores, for example, both might run at 100%, but because there are more engines to process the work it might take less than half the time, depending on how many of those processes are sequentially dependent and how many can be done in parallel.
Parallelism is also a good theory to read up on.
All this said, I'm waiting for the EVO 3D in June to make the leap from TMOUS to Sprint.
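The parallelism point above can be sketched with Python's standard library. This is a sketch under stated assumptions: `busy_sum` is a made-up CPU-bound stand-in, and the OS decides which core each worker process lands on (thread/process affinity); sequentially dependent work would not scale this way.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def busy_sum(n):
    # CPU-bound stand-in: sum of squares below n
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Four independent chunks of work; with multiple cores the pool
    # can run them in parallel, each worker on whichever core is free.
    workloads = [50_000] * 4
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        totals = list(pool.map(busy_sum, workloads))
    print(len(totals))  # 4
```

The speedup you actually see depends on how much of the job is parallelizable (Amdahl's law), which is the "sequentially dependent" caveat in the post.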
maxawesome said:
It also doesn't work quite like you've described here.
Anyone really interested should Google CPU threading to get a brief overview of how it works at a conceptual level, and then should look at the specific implementations on each actual CPU architecture to get a deeper understanding, and even further the specific implementations for an instance of that architecture.
For example: threading theory > threading in the ARM CPU architecture > threading in the ARM Cortex-A9, Tegra 2, Hummingbird, Qualcomm MSM series, etc.
Trying to compare how it works for these ARM procs versus x86 procs (like Intel or AMD chips) is not only a waste of time, it's also incorrect.
If ARM procs handled processing and threading the same as x86 chips, Microsoft would not have to specifically release Windows 8 with an ARM-compatible version.
IN GENERAL, the theory of threading includes the concept of CPU affinity for a thread of processing. In the case of multi-core CPUs, in many instances, this just means that there are more available processing cores to which multi-threaded code can send processes.
In the case of more recent dual-core CPUs, the implementation has also included dynamic frequency scaling, even to the core level, such that when not in use, a core can lay dormant at a very low frequency, consuming very little power.
The result, in user perception, is a savings in power, because 2 (or 3 or 4 or 6 or 8 or whatever) cores can accomplish a set of tasks much more efficiently and with less power than a single core. The single core would have to run at max frequency for the entire duration of the processing, whereas with 2 cores, for example, both might run at 100%, but because there are more engines to process the work it might take less than half the time, depending on how many of those processes are sequentially dependent and how many can be done in parallel.
Parallelism is also a good theory to read up on.
All this said, I'm waiting for the EVO 3D in June to make the leap from TMOUS to Sprint.
You seem to have missed that I was keeping it simple and always said "kinda" or "nearly", meaning it wasn't exact. Anyway, you said I wasn't right but then agreed with what I said. I didn't bring power consumption into the equation, as this was only about speed.
mmace said:
how can you be sure?
1.2GHz doesn't mean it will definitely be quicker than the 1GHz; they're different types of CPU, with different memory channels etc.
It could be like saying a 12MP camera is better than a 10MP camera just because there are more pixels, without looking at which has the better optics or sensor.
Plus, the Optimus is in people's hands now and is released the first week of next month. Can the HTC do that better?
Qualcomm's Snapdragon cores are based on the Cortex-A8, and they've managed about 20% better performance per clock cycle. The Optimus 3D has a Cortex-A9, which supposedly gives about 40% better performance per clock cycle. So the Evo 3D's processor clocked at 1.2GHz versus the Optimus 3D's at 1GHz is basically a wash (basically a tie; impossible to tell at this point). BUT since the true clock speed of the Evo 3D's dual-core Snapdragon is 1.5GHz, that is a good indication that the Evo will be able to overclock much higher (while still being stable).
Adding to that, benchmarks show the Evo 3D has a better GPU than the Optimus 3D; the Evo 3D also has twice the RAM and a higher-resolution display, and HTC's build quality is much better IMO.
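The per-clock argument above reduces to simple arithmetic. The numbers are the illustrative ones from the post (plain Cortex-A8 taken as a 1.0 baseline), not measured figures:

```python
# Relative throughput ~= clock (GHz) x per-clock factor vs. a Cortex-A8 baseline
evo_3d     = 1.2 * 1.20   # Snapdragon: 1.2 GHz, ~20% better per clock
optimus_3d = 1.0 * 1.40   # Cortex-A9:  1.0 GHz, ~40% better per clock

print(round(evo_3d, 2), round(optimus_3d, 2))  # 1.44 1.4
```

Within a few percent of each other, which matches the "basically a wash" conclusion.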
The LG Optimus 3D records 3D video at only 15 frames per second with AMR audio. That is ****; now I know I will NOT BUY THIS PHONE.
Does anyone know how the HTC Evo 3D records 3D video?
x7nofate said:
The LG Optimus 3D records 3D video at only 15 frames per second with AMR audio. That is ****; now I know I will NOT BUY THIS PHONE.
Does anyone know how the HTC Evo 3D records 3D video?
Who told you that lie?
I have a sample video here and it's 24fps and can go up to 30fps
OK, so we have been told the Atrix and ALL Tegra 2 devices cannot play back High Profile MKV. Most people believe it.
It's not true whatsoever. Single-core 800MHz CPUs can do it, so this one can too. We are looking at a codec or driver issue here. I know it's almost entirely GPU related, so CPU speed doesn't matter much. I am SURE the Atrix can play 720p MKV decoded on the CPU.
My 1.6GHz Atom can do that, for the most part, and a Cortex-A9 smashes the Atom N450, so it shouldn't be an issue. From what I've seen we cannot even access our other core yet; I have seen NO apps that benefit from it. Looks like it's time. If this can be fixed, Tegra 2 will be much better off. At this point I am not proud of my phone anymore; inferior, older phones can play videos my Atrix cannot. Umm, what?
We need to fix this now; it's pathetic. Can anyone shed some light on this?
*edit*
Nvidia is such trash for doing this. This is the last Nvidia product I'll buy. After the GTX 400 series GPUs and now this, I've lost ALL faith in them.
I agree, it's pathetic...
To be fair, on a small screen Baseline profile is perfectly fine in terms of quality, but it's more about the convenience of being able to drag and drop an x264 rip without having to re-encode.
X10 is garbage! this is outrageous!
Yes, really, they got it working. If you want it so bad, try porting it yourself.
Sent from my MB860 using XDA App
cry about it?
If you want it so bad for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
dLo GSR said:
cry about it?
If you want it so bad for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
Oh snap. That was awesome.
Sent from my MB860 using XDA App
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
firefox3 said:
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
Good news man
Sent from my MB860 using XDA App
Given that there are currently no EGL libs for anything except PowerVR SGX devices under ICS, and that they're closed source and tightly dependent on the kernel, there doesn't seem to be much point until the official updates start to hit for a range of devices.
Sure, the Desire, HD, X10 and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers, but they're just for fun with the framebuffer driver, given how much of ICS's UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say more so than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before): glimmers of hope for ICS's eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens, bearing in mind that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower-resolution screen, and we can overclock past its stock speeds.
Javi97100 said:
Good news man
Sent from my MB860 using XDA App
It's turning out to be harder than I thought... I think no one will get it until official updates come out for other phones.
Azurael said:
Given that there are currently no EGL libs for anything except PowerVR SGX devices under ICS, and that they're closed source and tightly dependent on the kernel, there doesn't seem to be much point until the official updates start to hit for a range of devices.
Sure, the Desire, HD, X10 and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers, but they're just for fun with the framebuffer driver, given how much of ICS's UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say more so than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before): glimmers of hope for ICS's eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens, bearing in mind that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower-resolution screen, and we can overclock past its stock speeds.
So EGL = GPU driver? If that's the only setback, would it be possible to get an ICS ROM with software rendering as a proof of concept, or are there other pieces missing?
GB/CM7 is pretty good on the Atrix; if we don't see ICS for a few months it doesn't hurt us in any way. I'd like to think most of us can be patient if we lack the skills to help.
I noticed the Captivate got a port of it too, since i9000 ROMs and Captivate ROMs are interchangeable. I think it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure that, if it isn't already, it will soon be on the iPhone too. It seems like iPhones always get new Android versions kind of early. I'm not sweating it; I love my Atrix in its current state.
According to AnandTech, Tegra 2 support is essentially ready, so as long as Nvidia releases the source for ICS (libs?), someone will try to port it. Hell, I have a good 5 weeks during break; I might as well try then.
Sent from my MB860 using XDA App
Azurael said:
Given that there are currently no EGL libs for anything except PowerVR SGX devices under ICS, and that they're closed source and tightly dependent on the kernel, there doesn't seem to be much point until the official updates start to hit for a range of devices.
Sure, the Desire, HD, X10 and N1 have ports of a sort at the moment; in fact there shouldn't be too many problems getting them working aside from the graphics drivers, but they're just for fun with the framebuffer driver, given how much of ICS's UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say more so than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before): glimmers of hope for ICS's eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens, bearing in mind that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower-resolution screen, and we can overclock past its stock speeds.
Actually, no: despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate, by 7% or 45% depending on which GLBenchmark is run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
edgeicator said:
Actually, no: despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate, by 7% or 45% depending on which GLBenchmark is run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Buddy, check out any of the kernels available in the dev thread and you'll see that the GPUs are overclocked.
WiredPirate said:
I noticed the Captivate got a port of it too, since i9000 ROMs and Captivate ROMs are interchangeable. I think it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure that, if it isn't already, it will soon be on the iPhone too. It seems like iPhones always get new Android versions kind of early. I'm not sweating it; I love my Atrix in its current state.
I doubt the iPhone will see ICS; the newest model that can run Android, as far as I know, is the iPhone 3G, which was incredibly slow under Gingerbread.
mac208x said:
X10 is garbage! this is outrageous!
222 posts and zero thanks? Is this what you do, go around XDA posting useless threads, like the guy complaining about returning home early, despite nobody asking him, "to get MIUI ported on his grandma's phone"?
Are you guys related by any chance?
edgeicator said:
Actually, no: despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate, by 7% or 45% depending on which GLBenchmark is run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Depends on the benchmark, yes. Texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual-channel memory controller and high clock (and that's probably the part directly relevant to UI rendering, to be honest, though as I said, it has a lower-resolution screen), but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and mobile games are starting to move that way now, following the desktop landscape of the past 5 years or so). Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz, which ought to even things out in the GLMark 720p tests somewhat, even if they are biased toward one architecture or the other. While the GPU in the OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus, so I can maybe skip the next generation of mobile hardware and tide myself over until Cortex-A15-based SoCs on a 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU-bound with a 1.4GHz dual-core A9, and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: disappointing early Tegra 3 benchmarks, although it does seem GLMark stacks the odds against Nvidia GPUs more than other benchmarks do).
Azurael said:
Depends on the benchmark, yes. Texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual-channel memory controller and high clock (and that's probably the part directly relevant to UI rendering, to be honest, though as I said, it has a lower-resolution screen), but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and mobile games are starting to move that way now, following the desktop landscape of the past 5 years or so). Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz, which ought to even things out in the GLMark 720p tests somewhat, even if they are biased toward one architecture or the other. While the GPU in the OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus, so I can maybe skip the next generation of mobile hardware and tide myself over until Cortex-A15-based SoCs on a 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU-bound with a 1.4GHz dual-core A9, and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: disappointing early Tegra 3 benchmarks, although it does seem GLMark stacks the odds against Nvidia GPUs more than other benchmarks do).
I would expect the Tegra to beat a nearly five-year-old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull: the Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz, while the iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clock speed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more Tegra's shortcomings than GLMark stacking the odds; PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well on mobile platforms. But yeah, all Nvidia did with Tegra 3 was slap in 2 extra cores, clock them higher, throw in the sorely missed NEON instruction set, increase the SIMDs on the GPU by 50% (8 to 12), and tack on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process, whereas every other SoC is dropping down to 28nm, with some bringing in a brand-new architecture as well.
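The 7.2 GFLOPS figure quoted above is consistent with a simple peak-throughput estimate. This is a sketch under an assumption of mine, not a vendor formula: each SIMD lane is assumed to retire one multiply-add (2 FLOPs) per cycle.

```python
def peak_gflops(lanes, clock_mhz, flops_per_lane_per_cycle=2):
    # peak GFLOPS = lanes x FLOPs per lane per cycle x clock in GHz
    return lanes * flops_per_lane_per_cycle * clock_mhz / 1000

print(peak_gflops(12, 300))  # Tegra 3's 12 lanes at 300 MHz -> 7.2
```

By the same back-of-the-envelope math, 19.2 GFLOPS at 300MHz implies roughly 32 lane-equivalents, which is the gap the post is complaining about.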
edgeicator said:
I would expect the Tegra to beat a nearly five-year-old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull: the Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz, while the iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clock speed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more Tegra's shortcomings than GLMark stacking the odds; PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well on mobile platforms. But yeah, all Nvidia did with Tegra 3 was slap in 2 extra cores, clock them higher, throw in the sorely missed NEON instruction set, increase the SIMDs on the GPU by 50% (8 to 12), and tack on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process, whereas every other SoC is dropping down to 28nm, with some bringing in a brand-new architecture as well.
Don't you get tired of writing those long rants? We understand you know something about CPU architecture, and that Tegra isn't the best one out there, but damn man, it's the same thing in every thread. Just chill out and try to stay on topic for once.
Sent from my MB860 using Tapatalk
edgeicator said:
I would expect the Tegra to beat a nearly five-year-old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull: the Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz, while the iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clock speed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more Tegra's shortcomings than GLMark stacking the odds; PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well on mobile platforms. But yeah, all Nvidia did with Tegra 3 was slap in 2 extra cores, clock them higher, throw in the sorely missed NEON instruction set, increase the SIMDs on the GPU by 50% (8 to 12), and tack on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process, whereas every other SoC is dropping down to 28nm, with some bringing in a brand-new architecture as well.
I think you are not seeing the whole picture...
The Tegra 3 (et al.) is not just about its quad-core implementation; remember that the GPU will offer 12 cores, which will translate into performance not yet seen on any other platform.
Benchmarks don't tell the whole story, especially those benchmarking tools which are not yet optimized for Tegra 3.
Cheers!
Sent from my Atrix using Tapatalk
WiredPirate said:
I noticed the Captivate got a port of it too, since i9000 ROMs and Captivate ROMs are interchangeable. I think it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure that, if it isn't already, it will soon be on the iPhone too. It seems like iPhones always get new Android versions kind of early. I'm not sweating it; I love my Atrix in its current state.
LOL, I ran all the iDroid ports on my iPhone. Not one was even at alpha stage; I would not even count iDroid as a port, since you can't use anything on it.
I was trying to load Bejeweled Blitz from Facebook's desktop site and it looks choppy and unsmooth. I tried it on my friend's SGS2 and it's smooth as butter.
It makes me wonder if the SGS2 is much better than the Prime. I have a G2x and it's nowhere near the speed of the SGS2, and I really thought the Prime would be able to load Flash much better, but that's not the case.
Nvidia always!
I can't reproduce it with Flash games, but my experience so far is that the Galaxy S2 is MUCH smoother than the Prime in EVERY respect (with the exception of hardware video decoding, but hey...).
HOWEVER, I definitely hope that is because of Honeycomb rather than Tegra 3. But I'll have to wait another 2.5 weeks for ICS to decide that...
The bizarre thing is that on pure specs even the Tegra 2 is a faster/more powerful SoC than the Exynos in the SGS2, but it lacks the NEON instructions, which the SGS2 has. Also remember the SGS2 has to drive a much smaller screen. Also, Flash can probably not use the quad core (?) well.
So far I have to say that in general smoothness Tegra 3/HC has been very underwhelming. And the 3-week delay of ICS in Germany does not bode well.
I am on ICS, and it's not that much of a difference in resolution for it to lag so much. All I can think is that Tegra doesn't do well with Flash.
Nvidia always!
Wow, so I installed Chainfire3D on my G2x and installed the Nvidia plugin, and guess what? It seems as fast as the SG2. I take it Flash just isn't optimized for Nvidia. I'm gonna try this on the Prime and see what the results are.
Nvidia always!
A lot smoother, and just with Chainfire. I wonder how or why?
Nvidia always!
A member of the Android team posted this about smoothness comparisons between the GS2 and the Galaxy Nexus. I think this may apply to the Prime as well.
"Some have raised points along the lines of Samsung Galaxy S2 phones already having a smoother UI and indicating that they are doing something different vs. the Galaxy Nexus. When comparing individual devices though you really need to look at all of the factors. For example, the S2's screen is 480x800 vs. the Galaxy Nexus at 720x1280. If the Nexus S could already do 60fps for simple UIs on its 480x800, the CPU in the S2's is even better off.
The real important difference between these two screens is just that the Galaxy Nexus has 2.4x as many pixels that need to be drawn as the S2. This means that to achieve the same efficiency at drawing the screen, you need a CPU that can run a single core at 2.4x the speed (and rendering a UI for a single app is essentially not parallelizable, so multiple cores isn't going to save you).
This is where hardware accelerated rendering really becomes important: as the number of pixels goes up, GPUs can generally scale much better to handle them, since they are more specialized at their task. In fact this was the primary incentive for implementing hardware accelerated drawing in Android -- at 720x1280 we are well beyond the point where current ARM CPUs can provide 60fps. (And this is a reason to be careful about making comparisons between the Galaxy Nexus and other devices like the S2 -- if you are running third party apps, there is a good chance today that the app is not enabling hardware acceleration, so your comparison is doing CPU rendering on the Galaxy Nexus which means you almost certainly aren't going to get 60fps out of it, because it needs to hit 2.4x as many pixels as the S2 does.)"
A link to the rest of the article is in my sig.
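The pixel-count ratio in the quoted post is easy to verify; a quick check using the resolutions given there:

```python
# Pixel counts from the quoted comparison (figures taken from the post above)
s2_pixels = 480 * 800        # Galaxy S2
gnex_pixels = 720 * 1280     # Galaxy Nexus

ratio = gnex_pixels / s2_pixels
print(f"The Galaxy Nexus pushes {ratio:.1f}x the pixels of the S2")  # 2.4x
```

That 2.4x is exactly the single-core speedup the post says CPU rendering would need to keep the same frame rate.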
Yeah, but why does it work so smoothly when applying Chainfire3D? I mean, it's a world of difference there on both my devices.
Besides, the Prime has a quad core with a much more powerful GPU than the S2.
Nvidia always!
Try going to Blitz on Facebook or G+ and playing without Chainfire and then with it.
Nvidia always!
Got an S3 now and Flash is awesome.
Sent from my GT-I9300 using XDA Premium HD app
Last summer, I decided to buy a Nexus 7 to use mainly as an ebook reader. It's perfect for that with its very sharp 1280x800 screen. It was my first Android device and I love this little tablet.
I'm a fan of retro gaming and I've installed emulators on every device I have: Pocket PC, Xbox, PSP Go, iPhone, iPad 3, PS3. I discovered that Android has one of the most active communities for emulation fans like me, and I bought many emulators, including all those made by Robert Broglia (the .EMU series). They ran great on the N7, but I found that 16 GB was too small, as was the screen.
I waited and waited until the 32 GB Nexus 10 became available here in Canada and bought it soon after (10 days ago). With its A15 cores, I expected the N10 to be a great device for emulation, but I am now a little disappointed. When buying the N10, I expected everything to run noticeably faster than on the N7.
Many emulators run slower on the N10 than on the N7. MAME4droid and MAME4droid Reloaded are no longer completely smooth with more demanding ROMs; Omega 500, Colleen, UAE4droid and SToid are slower, and some others needed much more tweaking than on the N7. I'm a little extreme about emulation accuracy and I like everything to be as close to the real thing as possible. A solid 60 fps is a must for me (or 50 fps for PAL machines).
On the other hand, some emulators ran very well: the .EMU series and RetroArch, for example. These emulators are much more polished than the average quick port and they run without a flaw. They're great on the 10-inch screen and I enjoy them very much. The CPU-intensive emulators (Mupen64Plus AE and FPse) gained some speed, but less than I anticipated.
So is this because of the Nexus 10's monster 2560x1600 resolution? Or is it because of limited memory bandwidth? Maybe some emulators are not tweaked for the N10 yet. I wish some emulators had the option to render at a lower resolution and then upscale the output. I think many Android apps just push frames at the native resolution without first checking whether there is a faster way.
The N7 has a lower-clocked quad-core CPU but only a quarter of the resolution. I think it's a more balanced device than the N10, which may have a faster dual-core CPU but too many pixels to push. It's much like the iPad 3, which was twice as fast as the iPad 2 but had a 4x increase in resolution.
I am now considering a custom ROM on the N10, but I wonder if I will see an increase in emulation speed. Maybe those of you who made the jump can tell me. I'm thinking about AOKP, maybe.
Any suggestion on that would be appreciated, thanks!
The emulators just need to be tweaked a bit to perform better on the completely different processor architecture. Really, our processor is far more powerful than the Nexus 7's, so the emulators should run faster. I too am a fan of the old games, and I play Super Nintendo and Game Boy Advance (and some Game Boy Color) games quite often. I find performance to be perfect with no issues at all, but then again those aren't exactly "demanding" emulators.
We do not have any sort of memory bandwidth limitation on the Nexus 10. The tablet has been designed to provide the full 12.8 GB/s of memory bandwidth that is required for its 2560x1600 resolution.
EniGmA1987 said:
The emulators just need to be tweaked a bit to perform better on the completely different processor architecture. Really, our processor is far more powerful than the Nexus 7's, so the emulators should run faster. I too am a fan of the old games, and I play Super Nintendo and Game Boy Advance (and some Game Boy Color) games quite often. I find performance to be perfect with no issues at all, but then again those aren't exactly "demanding" emulators.
We do not have any sort of memory bandwidth limitation on the Nexus 10. The tablet has been designed to provide the full 12.8 GB/s of memory bandwidth that is required for its 2560x1600 resolution.
Click to expand...
Click to collapse
Hmm, if no memory bandwidth limitation exists on the N10, wouldn't I be able to run GTA 3 at 100% screen resolution and not have significantly lower FPS, as compared to 50% resolution?
Even Beat Hazard Ultra seems to be a bit laggy on the N10. When I inquired about it to the developer, he said:
Having to render to that size of screen [2560x1600] will slow the game down. It’s called being ‘fill rate bound’. Even for a good processor it's a lot of work as the game uses quite a lot of overdraw.
The solution is to draw everything to a smaller screen (say half at 1280x800) and then stretch the final image to fill the screen.
Click to expand...
Click to collapse
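The developer's half-resolution suggestion cuts per-frame fill cost by the square of the scale factor; a quick sketch of the arithmetic, using the resolutions from the quote:

```python
# Values from the developer's quote: native screen vs. half-size render target
native_w, native_h = 2560, 1600
scale = 0.5    # render at half resolution per axis, then stretch to fill

target = (int(native_w * scale), int(native_h * scale))
savings = 1 - scale ** 2
print(f"render target: {target}")                      # (1280, 800)
print(f"{savings:.0%} fewer pixels to fill per frame")  # 75%
```

Halving each axis quarters the pixel count, which is why a fill-rate-bound game speeds up so dramatically with this trick.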
A sad truth: my Nexus 10 gets damn hot, and I have to play games at 1.4 or 1.2 GHz. That sucks.
Sent from my XT925 using xda app-developers app
espionage724 said:
Hmm, if no memory bandwidth limitation exists on the N10, wouldn't I be able to run GTA 3 at 100% screen resolution and not have significantly lower FPS, as compared to 50% resolution?
Even Beat Hazard Ultra seems to be a bit laggy on the N10. When I inquired about it to the developer, he said:
Click to expand...
Click to collapse
But fillrate isn't memory bandwidth. We need both more MHz and more raster operations to get a higher fill rate in pixels per second. We can overclock the GPU to get the MHz, and that will help, but we have to find a way to deal with the higher heat output from that too. More ROPs are impossible, since the number we have is fixed in hardware. If we ever get to overclock up to around 750 MHz, we should see a 30-40% improvement in fill rate. At that point we may run into memory bandwidth problems, but we won't know for sure until we get there. But the 12.8 GB/s of bandwidth we currently have is enough to support 2560x1600 at our current GPU power. Our Nexus 10 also has the highest fillrate of any Android phone or tablet to date, about 1.4 Gtexel/s. And if we had a memory bandwidth limitation, we would see no improvement at all from the current overclock up to 612-620 MHz, because the speed wouldn't be where the bottleneck is. Yet we can clearly see in benchmarks and real gaming that we get FPS increases with higher MHz, so our current problem is fillrate, not memory bandwidth.
Also, rendering the game at half resolution is not a solution; it's a band-aid over the real problem. If the developer of a game coded it properly we wouldn't have this problem, or if they don't feel like doing that, they should at least stop trying to put more into the game than their un-optimized, lazy project is capable of running nicely.
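The 30-40% figure above is consistent with a simple linear model in which fill rate scales with GPU clock at a fixed ROP count. A quick sketch, assuming a stock Mali-T604 clock of ~533 MHz (my assumption; the post doesn't state the stock clock):

```python
# Rough model: fill rate scales linearly with GPU clock at a fixed ROP count.
stock_mhz = 533       # assumed stock Mali-T604 clock; not stated in the post
overclock_mhz = 750   # the hypothetical overclock discussed above

gain = overclock_mhz / stock_mhz - 1
print(f"~{gain:.0%} more fill rate")   # ~41%, in line with the 30-40% estimate
```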
espionage724 said:
Hmm, if no memory bandwidth limitation exists on the N10, wouldn't I be able to run GTA 3 at 100% screen resolution and not have significantly lower FPS, as compared to 50% resolution?
Even Beat Hazard Ultra seems to be a bit laggy on the N10. When I inquired about it to the developer, he said:
Click to expand...
Click to collapse
By that logic you could buy any video card for a PC and it would run any game at the resolution the card supports. That isn't the case, because rendering involves more than just fill rate: there are textures, polygons, multiple rendering passes, filtering, and so on. As EniGmA1987 mentioned, nothing has been optimized to take advantage of this hardware yet; developers were literally crossing their fingers hoping their games would run 'as is'. Thankfully the A15 CPU cores in the Exynos will be used in the Tegra 4 as well, so we can look forward to the CPU optimizations soon, which will definitely help.
Emulators are more CPU-intensive than anything else; give it a little time and you won't have any problems with your old-school games. Run the new 3DMark bench to see what this tablet can do: it runs at native resolution and it's not even fully optimized for this architecture yet.
2560*1600*4*60/1024/1024 = 937,3 MB/s for a 60 fps game at 32-bit depth. Most emulators don't use 3D functions so fillrate, rendering, overdraw won't be a factor. Most emulators are single-threaded (correct me if I'm wrong) and the A15 should shine in this particular situation and even more so in multi-threaded scenarios. With its out-of-order pipeline and greatly enhanced efficiency it should be perfectly suited for the job.
We have the fillrate, we have enough CPU power, and I'm still wondering why simple apps like emulators aren't much faster than that. Is it Android? Is it the Dalvik VM? Or is it because some emulators need to be written in native code instead of running in the Java VM? I'm not a developer and I have only minimal knowledge in this department. I can only speculate, but I'm curious enough that I started googling around to find out why.
Lodovik said:
2560*1600*4*60/1024/1024 = 937,3 MB/s for a 60 fps game at 32-bit depth
Click to expand...
Click to collapse
Just curious, but what is that calculation supposed to be? Total bandwidth needed? Cause I don't see your bit depth in there, unless the 4 is supposed to be that? If that's true, then you are calculating with 4-bit color depth?
And then the result would just be the bandwidth required to write pixel data to memory, wouldn't it? It wouldn't include texture data in and out of memory and other special functions like post-processing.
2560*1600 = number of pixels on the screen
4 = bytes / pixels for 32-bits depth
60 = frames / second
/1024/1024 = divide twice to get the result in MB
Actually, I made a typo: the result is 937,5 MB/s, or 0.92 GB/s. This is just a rough estimate of what is needed at this resolution just to push all the pixels to the screen in flat 2D at 60 fps, assuming that emulators don't use accelerated functions.
My point was that with 12.8 GB/s of memory bandwidth, we should have more than enough, even if this estimate isn't very accurate.
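The calculation above, spelled out in a quick script (numbers from the post; 12.8 GB/s is the stock Nexus 10 bandwidth mentioned earlier in the thread):

```python
# Raw framebuffer write bandwidth for flat 2D at native resolution
width, height = 2560, 1600   # Nexus 10 native resolution
bytes_per_pixel = 4          # 32-bit color depth
fps = 60

mb_per_s = width * height * bytes_per_pixel * fps / 1024 / 1024
print(f"{mb_per_s:.1f} MB/s")                                 # 937.5 MB/s
print(f"{mb_per_s / 1024:.2f} GB/s of 12.8 GB/s available")   # 0.92 GB/s
```

So pure pixel pushing uses well under a tenth of the stock memory bandwidth, which supports the point that 2D emulators shouldn't be bandwidth-bound.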
Thanks for the explanation
If there really were a memory bandwidth limitation, the newer Trinity kernels and the newest KTManta should help. In addition to the higher GPU speeds they both allow (KTManta up to 720 MHz), both ROMs have increased memory speeds, which raise memory bandwidth to 13.8 GB/s, up from 12.8 GB/s on stock.
Thanks for the info. There are so many configuration options available for the Nexus 10. I really enjoy having all those possibilities.
EniGmA1987 said:
If there really were a memory bandwidth limitation the newer Trinity kernels and newest KTManta should help. In addition to the higher GPU speed they both allow (KTManta up to 720MHz) both ROM's have increased memory speeds which increase memory bandwidth to 13.8GB/s, up from 12.8 on stock.
Click to expand...
Click to collapse
Lodovik said:
2560*1600*4*60/1024/1024 = 937,3 MB/s for a 60 fps game at 32-bit depth. Most emulators don't use 3D functions so fillrate, rendering, overdraw won't be a factor. Most emulators are single-threaded (correct me if I'm wrong) and the A15 should shine in this particular situation and even more so in multi-threaded scenarios. With its out-of-order pipeline and greatly enhanced efficiency it should be perfectly suited for the job.
We have the fillrate, we have enough CPU power and I'm still wondering why simple app like emulators aren't much faster than that. Is it Android? Is it the Dalvik VM? Or is it because some emulators need to be written in native code instead of using Java VM? I'm not a developer and I have only minimal knowledge in this department. I can only speculate but I'm curious enough about it that I started googling around to find why.
Click to expand...
Click to collapse
You are taking what I said out of context. I was responding to someone else, hence the quote above my post.
Since you posted, I loaded up some Super Nintendo, N64, and PlayStation games on my N10 without any issues. It may just be your setup. There are a lot of tweaks out there that could easily increase performance. One great and very simple one is enabling 2D GPU rendering, which is in developer options. Just do some searching. GPU overclocking won't help much since, as you said above, your games are only 2D. I am sure you can get them running just fine.