[Q] Why are desktop browsers so much faster than mobile browsers? - Android Q&A, Help & Troubleshooting

Run Speed Battle. What's your score?
My score using my MacBook Pro (i7, 8 GB of RAM) with Firefox 10 is in the 400-500 range. My score using my HTC Flyer tablet (1.5 GHz Scorpion single core, 1 GB of RAM) with Opera Mobile is in the 25-35 range. When using Opera Mini or the stock Honeycomb browser, it's even lower, somewhere in the 10-20 range.
I ran this test because I was confused why my desktop browser is so much faster than browsing on a tablet. From these numbers, you would conclude that it's on the order of 10x faster. Naively, I just assumed that the biggest factor behind loading a website was the connection speed. But since the same connection is used for both my tests, I guess this can't be the reason. Certainly, CPU matters when it comes to processing the webpage, but does it matter this much?
So perhaps someone else can explain: why is mobile browsing on the order of ten times slower than desktop browsing?

TSGM said:
Run Speed Battle. What's your score?
My score using my MacBook Pro (i7, 8 GB of RAM) with Firefox 10 is in the 400-500 range. My score using my HTC Flyer tablet (1.5 GHz Scorpion single core, 1 GB of RAM) with Opera Mobile is in the 25-35 range. When using Opera Mini or the stock Honeycomb browser, it's even lower, somewhere in the 10-20 range.
I ran this test because I was confused why my desktop browser is so much faster than browsing on a tablet. From these numbers, you would conclude that it's on the order of 10x faster. Naively, I just assumed that the biggest factor behind loading a website was the connection speed. But since the same connection is used for both my tests, I guess this can't be the reason. Certainly, CPU matters when it comes to processing the webpage, but does it matter this much?
So perhaps someone else can explain: why is mobile browsing on the order of ten times slower than desktop browsing?
Err, because that's not a connection test, it's a JavaScript test; it's executing code. You can't expect a phone to compete with a full-fledged, top-of-the-line computer, can you?

Pretty obvious is it not? Wow...
Sent from my Nexus S using xda premium

I just think it's quite obvious, but that mostly comes from the fact that it has always been this way for me. I think it's simply that a Mac is a Mac and a tablet is a... tablet.

CdTDroiD said:
Pretty obvious is it not? Wow...
Sent from my Nexus S using xda premium
I'm sorry, call me stupid, but I don't think it's obvious at all.
Let us assume that there is an upper limit to the computing speed and resources required to render a typical internet page. I would have assumed that the kind of processors found in tablets is already near this upper limit. So, as a made-up figure, a 486 machine running Windows 3.1 might render a given page in 10 seconds, my modern desktop might render it in 3, a tablet in 5, and a supercomputer in 2.5. This assumes all these machines are on the same connection.
The point is that there is a point of diminishing returns, beyond which the principal bottleneck in rendering a page is the connection speed.
However, because I noticed a substantial difference in loading times between a tablet and a desktop, I was forced to conclude that the principal bottleneck in speed wasn't the connection speed, but rather the CPU's capability in running various scripts (or whatnot). I'd simply assumed that the technology of 2011 (regardless of whether it took tablet or desktop form) was already close to the plateau of diminishing returns. This I found surprising.
...but then again, I guess everybody here thinks it's obvious that the principal bottleneck is not connection speed, but something else.

It is definitely obvious. You can make a 1000-page post explaining why you THINK it isn't obvious, but it will still be obvious.
I'm completely sorry.

Mr. Holmes said:
It is definitely obvious. You can make a 1000-page post explaining why you THINK it isn't obvious, but it will still be obvious.
I'm completely sorry.
I understand. Thanks.
Can you explain the relation between browsing speed and CPU? Is it a linear relation? I would have expected it to level off toward a plateau. This is the point I am most curious about. I don't think the functional dependence is at all obvious, but I am happy to hear your explanation.
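The plateau intuition can be sketched with a toy model (all numbers are invented for illustration, not measurements): treat page-load time as a fixed network transfer time plus CPU-bound work that shrinks as the processor gets faster. Load time then falls steeply at first and flattens toward the network floor, which is the diminishing-returns curve described above.

```python
# Toy model: total page-load time = fixed network time + CPU work / CPU speed.
# All figures are made up; "cpu_work" and "cpu_speed" are in arbitrary units.

def load_time(network_s, cpu_work, cpu_speed):
    """Seconds to load a page: fixed network time plus CPU time (work/speed)."""
    return network_s + cpu_work / cpu_speed

# Same 1-second connection for every machine; only CPU speed varies.
for name, speed in [("486-class", 0.1), ("tablet", 1.0),
                    ("desktop", 5.0), ("supercomputer", 50.0)]:
    print(f"{name:>14}: {load_time(1.0, 10.0, speed):6.2f} s")
```

On this made-up curve, going from the tablet to the desktop saves far more time than going from the desktop to the supercomputer; an observed 10x gap would suggest mobile CPUs still sit well to the left of the plateau.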

Related

[Q] Are both cores used all the time?

Just as the question states. I know the second core will sleep when not needed, but say you launch an app: does the second core help load the app? The reason I ask is that I'm curious about the raw speed difference between the Atrix and the Inspire. Now compare the Inspire running at 1.8 GHz and the Atrix seemingly stuck at 1 GHz per core (I'm not saying the Atrix won't ever be OCed, I'm just talking about what's currently available). I'm just curious if the second core will help the first with tasks. If it doesn't, would that make the Inspire technically way faster (obviously battery life may be an issue, but this isn't a battery comparison)?
Thanks for any insight
I think you should start by knowing that overclocking ARM processors gives little yield.
A XOOM at 1.5 GHz scores only about 500 points better than a non-overclocked XOOM in Quadrant.
I'm going to try and simplify the answer for you.
Will BOTH cores be used? Maybe. First off, is the app itself optimized for dual core, or does it even need dual-core/multithreaded capability?
Secondly, and I think more importantly, what is the rest of the phone doing? So, let's say you fire up your favorite app; the phone is still doing stuff in the background. Maybe it's checking email. Maybe Google Latitude is checking your location and updating. The point is, the other core will still be around to offload this work.
Now, WILL it go to the other core? Maybe. Maybe not. I work on some big Sun machines and have seen them use only one or two out of 64 cores; even under massive load, with those cores at 100%, the system refused to balance the load among CPUs.
Hope this helps.
mister_al said:
I'm going to try and simplify the answer for you.
Will BOTH cores be used? Maybe. First off, is the app itself optimized for dual core, or does it even need dual-core/multithreaded capability?
Secondly, and I think more importantly, what is the rest of the phone doing? So, let's say you fire up your favorite app; the phone is still doing stuff in the background. Maybe it's checking email. Maybe Google Latitude is checking your location and updating. The point is, the other core will still be around to offload this work.
Now, WILL it go to the other core? Maybe. Maybe not. I work on some big Sun machines and have seen them use only one or two out of 64 cores; even under massive load, with those cores at 100%, the system refused to balance the load among CPUs.
Hope this helps.
Yeah, that's exactly what I figured; I was kind of going off the Windows/Intel multi-core setup. Even after dual(+) cores have been out for quite some time, 95% of programs still don't use more than one core (most of the remaining 5% being very CPU-intensive programs: Photoshop, AutoCAD, etc.). But I get what you mean: one core will be dedicated to what you're doing and not sharing cycles with anything else, because core 2 is working on whatever pops up. So basically the Atrix might be a little slower at doing things, BUT it will always stay the same speed with less/no bog.
Techcruncher said:
I think you should start by knowing that overclocking ARM processors gives little yield.
A XOOM at 1.5 GHz scores only about 500 points better than a non-overclocked XOOM in Quadrant.
So you're saying Quadrant sucks (as it does with most phones), or that OCing the XOOM (and Atrix) won't really do much?
I already built an APK for testing CPU usage on both processors... When I get some free time, I'm going to turn it into a widget. Here's what I noticed:
Because of the current OS and limited dual-core support in apps, the phone kind of kicks certain tasks onto the 2nd processor. The APK I built reads the '/proc/stat' file, and I've noticed that when the 2nd processor is being used, it actually shows up in the file as 'cpu1'. However, when it's not being used, the 'cpu1' line does not exist, so you can default the 2nd processor's usage to 0%. It seems like performing core OS tasks (like installing apps) kicks the 2nd processor into use, which is what you can expect since Froyo supports dual cores.
Like everyone says, I'd expect to see more dual-core usage on 2.3/2.4 (whichever Motorola gives) and when more apps are designed to kick certain threads onto the 2nd processor.
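A minimal sketch of the parsing approach this post describes, assuming the standard /proc/stat field layout (user, nice, system, idle, iowait, ...); the helper name and the sample line are made up for illustration:

```python
# Parse /proc/stat-style text and report busy-time fraction per "cpuN" line.
# On hot-plugging kernels like the one described above, an offline core's
# line disappears entirely, so a missing "cpu1" can be defaulted to 0%.

def core_usage(stat_text):
    """Map each 'cpuN' line to its busy-time fraction since boot."""
    usage = {}
    for line in stat_text.splitlines():
        fields = line.split()
        # Skip the aggregate "cpu" line; keep per-core "cpu0", "cpu1", ...
        if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
            times = [int(x) for x in fields[1:]]
            idle = times[3] + (times[4] if len(times) > 4 else 0)  # idle + iowait
            total = sum(times)
            usage[fields[0]] = (total - idle) / total if total else 0.0
    return usage

sample = "cpu  100 0 100 800 0 0 0\ncpu0 100 0 100 800 0 0 0\n"
stats = core_usage(sample)
print(stats)                   # {'cpu0': 0.2}
print(stats.get("cpu1", 0.0))  # 'cpu1' line absent -> default to 0% as described
```

On a real device you would feed it `open("/proc/stat").read()` and sample twice to get usage over an interval rather than since boot.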

So what gives with these lousy benchmarks?

I finally found a comparable Tegra 2 benchmark posted online in a Droid X2 review; both devices have a qHD screen. It's looking like the hardware we have here isn't particularly impressive, and let's not even go there with the Galaxy S2 *shudder*, it's a massacre.
I was given to understand that the Qualcomm/Adreno setup was going to be at least competitive, and was supposed to be all-out superior to Tegra 2. Can anyone shed some light on this?
Levito said:
I finally found a comparable Tegra 2 benchmark posted online in a Droid X2 review; both devices have a qHD screen. It's looking like the hardware we have here isn't particularly impressive, and let's not even go there with the Galaxy S2 *shudder*, it's a massacre.
I was given to understand that the Qualcomm/Adreno setup was going to be at least competitive, and was supposed to be all-out superior to Tegra 2. Can anyone shed some light on this?
I don't look at benchmarks too much... but it can download and upload like a god; that's its power tool.
My overclocked 1.5 GHz Tegra 2 lags behind my EVO 3D, but it scores 900 more points in Quadrant, so my epeen feels alright. Seriously, most of these benchmarks are not coded well.
I think the 3VO uses only one core with Quadrant. You have to use a dual-core benchmark like CF-Bench for better results. Then again, benchmarks really don't mean much.
Sent from my PG86100 using Tapatalk
Benchmarks are nearly useless measures.
Using benchmarks to determine real world performance is like licking your finger and sticking it up in the air to determine how fast the wind is moving.
Yeah, it'll put you roughly in the ballpark. But that "ballpark" is big enough to drive a couple of dump trucks through...
Both the Droid X2 and the Galaxy S2 aren't running Sense, which usually drags down benchmarks even though the phone is silky smooth. Benchmarks may be useful for testing modifications on the same phone, but not for comparing different phones. Just ask yourself... does it seem to suffer to you?
Sent from my PG86100 using XDA App
Who gives a #$% about benchmarks; all I know is that this thing is fast, way faster than the EVO. I have a gTablet (Tegra 2, Honeycomb) that runs games very well, and this 3VO runs the same games, only smoother and faster, no hiccups at all. Totally happy here, and I have like 200 apps on this thing with like 280 megs left.
Oh, and my gTablet is clocked to 1.5 GHz!
G_Dmaxx said:
Who gives a #$% about benchmarks; all I know is that this thing is fast, way faster than the EVO. I have a gTablet (Tegra 2, Honeycomb) that runs games very well, and this 3VO runs the same games, only smoother and faster, no hiccups at all. Totally happy here, and I have like 200 apps on this thing with like 280 megs left.
Oh, and my gTablet is clocked to 1.5 GHz!
Seriously, my Tegra 2 Transformer has nothing on my EVO 3D. Why people look only at benchmarks and not at what is in front of them, I have no clue.
danaff37 said:
Both the droid x2 and the galaxy s2 aren't running sense, which usually drags down bench marks even though the phone is silky smooth. Benchmarks may be useful for testing modifications on the same phone, but not for comparing different phones. Just ask yourself... Does it seem to suffer to you?
Sent from my PG86100 using XDA App
I've actually never had an AOSP ROM run all that much faster than a Sense ROM. The variance is small enough to say that there isn't really a difference at all.
Like many others have pointed out, Quadrant is a terrible benchmark for dual-core phones until it's updated. When it reads off a bunch of question marks for the EVO 3D's CPU, CPU speed, etc., you know it's not going to be a reliable test.
Sent from my PG86100 using Tapatalk
Go to AnandTech for the Adreno 220 benches... It crushed the competition, so maybe that'll make you feel better.
One possible reason why the EVO 3D isn't scoring as high as you expect is that I think the benchmark tests don't correctly utilize CPUs with asynchronous dual cores.
Someone correct me if I'm wrong, but I think the Galaxy uses synchronous cores, which means they can only work on the same thing at the same time; they can't work on separate operations at the same time.
The EVO 3D has asynchronous cores, which allow for true multitasking, meaning each core will work on separate tasks. As I understand it, support for this type of CPU is going to be added in Android 2.4 and later, but don't quote me on that.
LOL @ benchmarks
DDiaz007 said:
Go to AnandTech for the Adreno 220 benches... It crushed the competition, so maybe that'll make you feel better.
Any similar comparisons to the Exynos/Mali(?) that the SGS2 is packing?
Some of the above statements about asynchronous processing do make me feel better, if true.
Levito said:
Any similar comparisons to the Exynos/Mali(?) that the SGS2 is packing?
Some of the above statements about asynchronous processing do make me feel better, if true.
Why not feel good in the first place?
This phone screams. You're comparing it to a Moto phone with Tegra 2, which will likely be one of the last new phones with Tegra 2. Enjoy the 3D. By the time something comes around to crush it, we'll be into 4-core territory, or Android will be updated to better support multiple cores (if I remember right, this only really started with 3.0).
I'll agree the SGS2 seems like a killer, but I'll take HTC build quality over Samsung any day of the week. Plus, let's see Exynos push qHD.
No, I hear you. Truth is that there probably won't be any software written for quite some time that is going to really push our current hardware. Besides, I upgrade every year or so anyway, making future-proofing less of an issue for me.
It's the principle of the thing.
Levito said:
No, I hear you. Truth is that there probably won't be any software written for quite some time that is going to really push our current hardware. Besides, I upgrade every year or so anyway, making future-proofing less of an issue for me.
It's the principle of the thing.
I hear ya too, but you gotta try not to get caught up in numbers. Numbers can be manipulated. Manufacturers can tune their phones to perform better in Quadrant (this can also be done with custom ROMs; when it is, performance in other categories suffers). AMD and Intel still participate in this ePeen warfare.
I won't be surprised if we see that Evo 3D outperforms the Tegra Moto overall.
The good thing is, we will eventually see this thing rooted completely (hopefully not after it's lost most of its luster). THEN we will see what we can push out of this phone. Look how fast it's running sense. Imagine a vanilla Android experience on it, or an overclock to say, 1.8 GHz (which will probably happen). I dunno about you but I'm salivating.
Ok, the only benchmark I need to know is that my phone boots up from "off" in 10-12 seconds. Base your satisfaction on a constant, not on relativism.
megatron-g1 said:
One possible reason why the EVO 3D isn't scoring as high as you expect is that I think the benchmark tests don't correctly utilize CPUs with asynchronous dual cores.
Someone correct me if I'm wrong, but I think the Galaxy uses synchronous cores, which means they can only work on the same thing at the same time; they can't work on separate operations at the same time.
The EVO 3D has asynchronous cores, which allow for true multitasking, meaning each core will work on separate tasks. As I understand it, support for this type of CPU is going to be added in Android 2.4 and later, but don't quote me on that.
There should be no difference coding for asynchronous or synchronous; the cores will run at full speed if they're pushed. Quadrant scores are based more on database read and write speeds than anything.
I've owned many, many phones, and this one is by far the most fluid (although I have not had hands-on time with the Galaxy S II, but I hate Samsung's software).
I haven't run into a case where the phone stutters, have you?
I believe in the AnandTech benchmarks they used a developer phone with the same Qualcomm chipset running at the stock 1.5 GHz, while our phones were downclocked to 1.2 GHz.
They might have done this for various reasons; it would be interesting to see how our phones overclock and whether there are any changes in battery life.
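A toy illustration of what a database-heavy benchmark component measures (this is not Quadrant's actual code, just a sketch in the same spirit): time a batch of writes and a read against an in-memory SQLite database. Absolute numbers are meaningless across devices, which is part of the complaint above.

```python
import sqlite3
import time

def db_benchmark(n=1000):
    """Time n inserts and one aggregate read against in-memory SQLite."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v TEXT)")

    t0 = time.perf_counter()
    con.executemany("INSERT INTO t VALUES (?, ?)",
                    ((i, f"row{i}") for i in range(n)))
    con.commit()
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    read_s = time.perf_counter() - t0

    con.close()
    return rows, write_s, read_s

rows, w, r = db_benchmark()
print(f"{rows} rows: write {w*1000:.1f} ms, read {r*1000:.1f} ms")
```

Note that a score built mostly from this kind of I/O timing says little about UI fluidity, GPU throughput, or how well two cores are used.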

Running at full speed: an interesting observation

OK, I've got mine on normal mode, and this kind of confirms my original thought that the 500 MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently-run tab! If my understanding of the way it works is right, these programs are still running in the background? Then it starts kicking in the other 4 and not just running on the 5th at 500 MHz! I really think we'd see a speed boost if we can get that 5th core over 500. Yes, it's supposed to save battery life, but I really don't think 500 is fast enough to run on its own. Your thoughts and observations?
markimar said:
OK, I've got mine on normal mode, and this kind of confirms my original thought that the 500 MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently-run tab! If my understanding of the way it works is right, these programs are still running in the background? Then it starts kicking in the other 4 and not just running on the 5th at 500 MHz! I really think we'd see a speed boost if we can get that 5th core over 500. Yes, it's supposed to save battery life, but I really don't think 500 is fast enough to run on its own. Your thoughts and observations?
I'll check on this when I get home. This issue, I'm assuming, is with Honeycomb itself. We would assume that ICS would properly use those cores.
Sent from my Samsung Galaxy S II t989
I don't have it yet (mine gets delivered on Wednesday), but what you observed makes perfect sense. Can they change it to run at, say, a constant 800 MHz, stepping "down" to 500 MHz when doing the simplest tasks? Obviously I too do not believe that 500 MHz will be sufficient at all times to handle screen scrolling and such on its own.
I'm really hoping that the few performance issues people are seeing are resolved in firmware updates and a Tegra 3-optimized version of ICS. Maybe Asus/Nvidia needs to do more tweaking to HC before the ICS build is pushed, if it will take a while for ICS to arrive on the Prime (past January).
The cores are optimized just fine. They kick in when rendering a web page or a game, but go idle and use the 5th core when done. Games always render.
ryan562 said:
I'll check on this when I get home. This issue, I'm assuming, is with Honeycomb itself. We would assume that ICS would properly use those cores.
Sent from my Samsung Galaxy S II t989
Nothing's changed over HC in the way ICS uses h/w acceleration. And I'd assume apps using h/w acceleration do so via calls to the OS, not to the chip directly. So it appears what you've got is what you're going to get.
---------- Post added at 06:59 PM ---------- Previous post was at 06:55 PM ----------
markimar said:
OK, I've got mine on normal mode, and this kind of confirms my original thought that the 500 MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently-run tab! If my understanding of the way it works is right, these programs are still running in the background? Then it starts kicking in the other 4 and not just running on the 5th at 500 MHz! I really think we'd see a speed boost if we can get that 5th core over 500. Yes, it's supposed to save battery life, but I really don't think 500 is fast enough to run on its own. Your thoughts and observations?
Do you have Pulse installed? A bunch of people using it were reporting stuttering where lower-powered devices aren't. If you run it at full speed, does it stutter? One of the hypotheses is that it's the cores stepping up and down that's causing the stuttering.
BarryH_GEG said:
Nothing's changed over HC in the way ICS uses h/w acceleration. And I'd assume apps using h/w acceleration do so via calls to the OS, not to the chip directly. So it appears what you've got is what you're going to get.
Also, correct me if I'm wrong, but I don't think that the OS knows about the fifth core? I believe the chip's own scheduler manages the transition between the quad-core and the companion core, not the Android scheduler.
Mithent said:
Also, correct me if I'm wrong, but I don't think that the OS knows about the fifth core? I believe the chip's own scheduler manages the transition between the quad-core and the companion core, not the Android scheduler.
That's the way I'd guess it would work. I don't think Android addresses different chips differently. I'd assume it's up to the SoC to manage the incoming instructions and react accordingly. If Android was modified for dual-core, I don't think it differentiates between the different implementations of dual-core chips. Someone with more h/w experience correct me if I'm wrong. Also, does anyone know if the chip manufacturer can add additional APIs that developers can write to directly, either instead of or in parallel with the OS? I ask because how can a game be optimized for Tegra if, to the OS, all chips are treated the same?
I tried out the power-savings mode for a while. It seemed to perform just fine. The immediate difference is that it lowers the contrast ratio on the display. This happens as soon as you press the power-savings tab. The screen will look like the brightness dropped a bit, but if you look closely, you'll see it lowered the contrast ratio. The screen still looks good, just not as sharp as in the other two modes. The UI still seems to perform just fine. Plus, I think the modes don't affect gaming or video playback performance; I read that somewhere, either AnandTech or Engadget. When watching vids or playing games, it goes into normal mode, so those things won't be affected no matter what power mode you're in, I think. lol
I was thinking of starting a performance-mode thread, to see different people's results and thoughts on the different power modes. I've read posts from people who just use it in power/battery-savings mode; some keep it in normal all the time; others use balanced mode. It would be good to see how these different modes perform in real-life usage, from a user perspective. I've noticed, so far, that in balanced mode the battery drains about 10% an hour. This is with nonstop use, including gaming, watching vids, web surfing, etc. In battery-savings mode, it drains even less per hour. I haven't run normal mode long enough to see how it drains compared to the others. One thing, though: web surfing drains the battery just as fast as gaming.
BarryH_GEG said:
I ask because how can a game be optimized for Tegra if to the OS all chips are treated the same?
I hate quoting myself, but I found the answer on Nvidia's website. Any optimizations are handled through OpenGL. So games written to handle the additional calls that Teg2 can support are making those calls through OpenGL, with the OS (I'm guessing) used as a pass-through. It would also explain why Tegra-optimized games fail on non-Teg devices: they wouldn't be able to process the additional requests. So it would appear that Teg optimization isn't being done through the OS. Again, correct me if I'm wrong.
BarryH_GEG said:
That's the way I'd guess it would work. I don't think Android addresses different chips differently. I'd assume it's up to the SoC to manage the incoming instructions and react accordingly. If Android was modified for dual-core, I don't think it diffentiates between the different implementations of dual-core chips.
I did some research on it; here's what Nvidia say:
The Android 3.x (Honeycomb) operating system has built-in support for multi-processing and is capable of leveraging the performance of multiple CPU cores. However, the operating system assumes that all available CPU cores are of equal performance capability and schedules tasks to available cores based on this assumption. Therefore, in order to make the management of the Companion core and main cores totally transparent to the operating system, Kal-El implements both hardware-based and low level software-based management of the Companion core and the main quad CPU cores.
Patented hardware and software CPU management logic continuously monitors CPU workload to automatically and dynamically enable and disable the Companion core and the main CPU cores. The decision to turn on and off the Companion and main cores is purely based on current CPU workload levels and the resulting CPU operating frequency recommendations made by the CPU frequency control subsystem embedded in the operating system kernel. The technology does not require any application or OS modifications.
http://www.nvidia.com/content/PDF/t...e-for-Low-Power-and-High-Performance-v1.1.pdf
So it uses the existing architecture for CPU power states, but intercepts that at a low level and uses it to control the companion core/quad-core switch?
Edit: I wonder if that means that tinkering with the scheduler/frequency control would allow the point at which the companion core/quad-core switch happens to be altered? If the OP is correct, this might allow the companion core to be utilised less if an increase in "smoothness" was desired, at the cost of some battery life?
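The mechanism Nvidia describes can be caricatured in a few lines (the 500 MHz threshold and the core-count mapping here are assumptions for illustration; the real hardware/low-level software logic is opaque): the kernel's cpufreq subsystem recommends an operating frequency, and the chip maps that recommendation onto either the companion core or some number of main cores.

```python
# Toy model of the Kal-El companion/main-core switch described above.
# The threshold and core-count policy are assumed, not Nvidia's actual values.

COMPANION_MAX_MHZ = 500  # assumed ceiling for the low-power companion core

def active_cores(recommended_mhz, runnable_tasks=1):
    """Map a cpufreq-style frequency recommendation to (core type, core count)."""
    if recommended_mhz <= COMPANION_MAX_MHZ:
        return ("companion", 1)  # light load: main quad stays power-gated off
    # Heavier load: main cores come online with demand, up to the quad's four.
    return ("main", min(max(runnable_tasks, 1), 4))

print(active_cores(300))      # light load -> companion core only
print(active_cores(1200, 6))  # heavy multithreaded load -> all four main cores
```

If tinkering with the governor really can move the switch point, it would amount to changing the threshold constant in a model like this, trading battery life for smoothness as the edit above speculates.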
Mithent said:
I wonder if that means that tinkering with the scheduler/frequency control would allow the point at which the companion core/quad-core switch happens to be altered? If the OP is correct, this might allow the companion core to be utilised less if an increase in "smoothness" was desired, at the cost of some battery life?
So what we guessed was right. The OS treats all multi-core chips the same, and it's up to the chip maker to optimize requests and return them. To your point, what happens between the three processors (1+1x2+1x2) is a black box controlled by Nvidia. To any SetCPU-type program it's just going to show up as a single chip. People have tried in vain to figure out how to make the Qualcomm dual-cores act independently, so I'd guess Teg3 will end up the same way. And Nvidia won't even publish their drivers, so I highly doubt they'll provide any outside hooks to control something as sensitive as the performance of each individual core in what they're marketing as a single chip.
BarryH_GEG said:
Do you have Pulse installed? A bunch of people using it were reporting stuttering where lower-powered devices aren't. If you run it at full speed, does it stutter? One of the hypotheses is that it's the cores stepping up and down that's causing the stuttering.
I have been running mine in balanced mode and have had Pulse installed since day one; no lag or stuttering in anything. Games and other apps work fine.
Well, my phone's slow when clocked at 500, so I wouldn't be surprised.
Sent from my VS910 4G using xda premium

Developing

Here's my question. I want to start with an app or two, but ultimately want to make it to ROMs, themes, etc. My computer, however, is a bit on the low end: 1 GHz dual core, 1 GB RAM. Decent storage though, 250 GB. Is this not enough, sufficient, or great? Thanks in advance.
Posted from my 1.34ghz, Infected, Themed Out, Lightningbolt.
When your phone has a faster clock speed than your desktop, it is time for an upgrade.
Bigandrewgold said:
When your phone has a faster clock speed than your desktop, it is time for an upgrade.
Except clock speed is not the almighty determinant of CPU power that years of marketing would lead you to believe. That CPU could still run circles around the phone's CPU for various architectural reasons I won't get into, because it's too much to explain and no one will likely be interested. However, 1 GHz is slow, especially if you're going to be using a Java-based (that is, built on Java) IDE like Eclipse or IntelliJ IDEA. I've used IntelliJ IDEA on my 2-core Atom netbook when I'm not around my desktop, and it's just painful (and IntelliJ is faster than Eclipse). Java anything eats RAM like a fat kid eats Skittles and drags your CPU like you're running a race with him on your shoulders. IntelliJ IDEA eats up around 600 MB of RAM just being open, and Eclipse is around the same.
That amount of RAM is also low. Your system is already using at least 50% of it, plus whatever more for your GPU if you do not have a separate GPU.
Can you use that computer to do some basic application tutorials, theme, and do small mods? Yeah, sure. Will it be annoying to do so? A little, as things lag; you probably don't realize it because you're used to that system.
If you're going to compile Android from source, then that computer will never work out. Building the Android source needs around 8-16 GB of RAM for 2.3.x and 16-24 GB for ICS. A 4-core CPU such as an i5 or i7 is also recommended.
You could build a decent computer from parts for compiling for probably 700-900, excluding a monitor. One for just apps and everything else for probably 500-600.
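As a sketch of how you might sanity-check a machine against the RAM figures quoted above (the 8-16 GB and 16-24 GB numbers are this post's claims, not official requirements), here is a parser for /proc/meminfo-style text; on a Linux box you could feed `mem_total_gb` the contents of the real file.

```python
# Check a machine's RAM against the build-requirement figures claimed above.
# The thresholds mirror the post's numbers, not any official documentation.

def mem_total_gb(meminfo_text):
    """Extract MemTotal (in GB) from /proc/meminfo-formatted text."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1]) / (1024 * 1024)  # value is in kB
    raise ValueError("MemTotal not found")

def build_verdict(gb):
    """Classify a machine against the RAM figures from the post above."""
    if gb >= 16:
        return "ICS-from-source territory"
    if gb >= 8:
        return "enough for a 2.3.x source build"
    return "app development and small mods only"

sample = "MemTotal:        1048576 kB\n"  # the asker's 1 GB machine
print(build_verdict(mem_total_gb(sample)))  # app development and small mods only
```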
Yeah definitely time for an upgrade.
I dev on my laptop. It's a Toshiba Satellite L675D-7104:
AMD Turion II Dual Core 2.5GHz cpu, 4GB RAM, 500GB HD, ATI Mobility Radeon HD 4200, dual booting Windows 7 Home Premium and Fedora 16.
It's a decent mid-range computer, nothing too special. It does the job as far as building ROMs goes; it can build from source, but it takes a pretty long time.
Thanks, folks. I will be dispatching my computer promptly, Office Space style.
Posted from my 1.34ghz, Infected, Themed Out, Lightningbolt.
haliwa04 said:
Thanks, folks. I will be dispatching my computer promptly, Office Space style.
Posted from my 1.34ghz, Infected, Themed Out, Lightningbolt.
No need for that; I use old computers as Linux test boxes quite a bit, since you don't need much to run it with just the command line. I'm sure someone will take it off your hands if you put it on Craigslist, or give it to the thrift store.

[INFO] Nexus 10 vs Nexus 7 and emulators

Last summer, I decided to buy a Nexus 7 to use mainly as an ebook reader. It's perfect for that with its very sharp 1280x800 screen. It was my first Android device and I love this little tablet.
I'm a fan of retro gaming and I install emulators on every device I have: Pocket PC, Xbox, PSP Go, iPhone, iPad 3, PS3. So I discovered that the Android platform has one of the most active communities for emulation fans like me, and I bought many emulators, including all those made by Robert Broglia (the .EMU series). They ran great on the N7, but I found that 16 GB was too small, as was the screen.
I waited and waited until the 32 GB Nexus 10 became available here in Canada and bought it soon after (10 days ago). With its A15 cores, I was expecting the N10 to be a great device for emulation, but I am now a little disappointed. When buying the N10, I expected everything to run faster than on the N7 by a noticeable margin.
Many emulators run slower on the N10 than on the N7. MAME4Droid and MAME4Droid Reloaded are no longer completely smooth with more demanding ROMs; Omega 500, Colleen, UAE4droid and SToid are slower, and some others needed much more tweaking than on the N7. I'm a little extreme about accuracy of emulation and I like everything to be as close to the real thing as possible. A solid 60 fps for me is a must (or 50 fps for PAL machines).
On the other side, there are other emus that run very well: the .EMU series and RetroArch, for example. These emulators are much more polished than the average quick port and they run without a flaw. They're great on the 10-inch screen and I enjoy them very much. The CPU-intensive emulators (Mupen64Plus AE and FPSE) gained some speed, but less than I anticipated.
So is this because of the Nexus 10's monster 2560x1600 resolution? Or is it because of limited memory bandwidth? Maybe some emulators are not tweaked for the N10 yet. I wish some emulators had the option to render at a lower resolution and then upscale the output. I think that many Android apps just try to push frames at the native resolution without checking first if there is a faster way.
The N7 has a lower-clocked 4-core CPU but only 1/4 the resolution. I think it's a more balanced device than the N10, which may have a faster dual-core CPU but too many pixels to push. It's much like the iPad 3, which was twice as fast as the iPad 2 but had a 4x increase in resolution.
I am now considering going for a custom ROM on the N10 but I wonder if I will see an increase in emulation speed. Maybe those of you who did the jump can tell me. I'm thinking about AOKP maybe.
Any suggestion on that would be appreciated, thanks!
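The "too many pixels to push" comparison above can be checked with quick arithmetic; this is just an illustrative sketch, not anything from the thread:

```python
# The Nexus 10 renders 4x the pixels of the Nexus 7 -- the same
# jump the iPad 3 made over the iPad 2.
def pixels(width, height):
    """Total pixel count for a given screen resolution."""
    return width * height

n7, n10 = pixels(1280, 800), pixels(2560, 1600)
print(n10 / n7)  # 4.0

ipad2, ipad3 = pixels(1024, 768), pixels(2048, 1536)
print(ipad3 / ipad2)  # 4.0
```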
The emulators just need to be tweaked a bit to perform better on the completely different processor architecture. Our processor really is far more powerful than the Nexus 7's, so the emulators should run faster. I too am a fan of the old games, and I play Super Nintendo and Game Boy Advance (and some Game Boy Color) games quite often. I find performance to be perfect with no issues at all, but then again those aren't exactly "demanding" emulators.
We don't have any sort of memory bandwidth limitation on the Nexus 10. The tablet was designed to provide the full 12.8 GB/s of memory bandwidth required for its 2560x1600 resolution.
EniGmA1987 said:
The emulators just need to be tweaked a bit to perform better on the completely different processor architecture. Our processor really is far more powerful than the Nexus 7's, so the emulators should run faster. I too am a fan of the old games, and I play Super Nintendo and Game Boy Advance (and some Game Boy Color) games quite often. I find performance to be perfect with no issues at all, but then again those aren't exactly "demanding" emulators.
We don't have any sort of memory bandwidth limitation on the Nexus 10. The tablet was designed to provide the full 12.8 GB/s of memory bandwidth required for its 2560x1600 resolution.
Hmm, if no memory bandwidth limitation exists on the N10, shouldn't I be able to run GTA 3 at 100% screen resolution without significantly lower FPS compared to 50% resolution?
Even Beat Hazard Ultra seems a bit laggy on the N10. When I asked the developer about it, he said:
Having to render to that size of screen [2560x1600] will slow the game down. It’s called being ‘fill rate bound’. Even for a good processor it's a lot of work as the game uses quite a lot of overdraw.
The solution is to draw everything to a smaller screen (say half at 1280x800) and then stretch the final image to fill the screen.
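A rough sketch of the developer's "fill rate bound" point: per frame, the GPU must shade every covered pixel, so halving each screen dimension cuts the shading work to a quarter. The overdraw factor of 3 below is a made-up illustrative number, not one the developer gave:

```python
# Pixels the GPU must shade per second: resolution x fps x overdraw.
def fill_mpix_per_s(width, height, fps, overdraw=1):
    return width * height * fps * overdraw / 1e6

full = fill_mpix_per_s(2560, 1600, 60, overdraw=3)  # render at native res
half = fill_mpix_per_s(1280, 800, 60, overdraw=3)   # render small, upscale
print(full, half)  # 737.28 vs 184.32 Mpix/s -- a 4x saving from half-res
```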
Sadly true. My Nexus 10 gets damn hot, and I have to play games at 1.4 or 1.2 GHz. That sucks.
Sent from my XT925 using xda app-developers app
espionage724 said:
Hmm, if no memory bandwidth limitation exists on the N10, shouldn't I be able to run GTA 3 at 100% screen resolution without significantly lower FPS compared to 50% resolution?
Even Beat Hazard Ultra seems a bit laggy on the N10. When I asked the developer about it, he said:
But fillrate isn't memory bandwidth. We need both more MHz and more raster operations to get a higher fill rate in pixels per second. We can overclock the GPU to get the MHz, and that will help, but we also have to find a way to deal with the higher heat output. More ROPs are impossible, since the number we have is fixed in the hardware design. If we ever get to overclock up to around 750 MHz, we should see a 30-40% improvement in fill rate. At that point we may hit memory bandwidth problems, but we won't know for sure until we get there. The 12.8 GB/s of bandwidth we currently have is enough to support 2560x1600 resolution at our current GPU power. Our Nexus 10 also has the highest fillrate of any Android phone or tablet to date, about 1.4 Mtexel/s. And if we had a memory bandwidth limitation, we would see no improvement at all from the current overclock to 612-620 MHz, because the speed increase wouldn't be where the bottleneck is. Yet we can clearly see in benchmarks and real gaming that we get FPS increases at higher MHz, so our current limit is fillrate, not memory bandwidth.
Also, rendering the game at half resolution is not the solution; that's a band-aid on the real problem. If the developer of a game coded it properly, we wouldn't have this problem; or if they don't feel like doing that, they should at least stop trying to put more into the game than their unoptimized, lazy project is capable of running nicely.
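The 30-40% figure above is consistent with fill rate scaling linearly with GPU clock, assuming a stock clock around 533 MHz (my assumption; the post doesn't state the stock clock):

```python
# If fill rate scales linearly with clock, going from an assumed
# 533 MHz stock clock to 750 MHz gives roughly a 40% improvement.
stock_mhz, target_mhz = 533, 750
gain_pct = (target_mhz / stock_mhz - 1) * 100
print(round(gain_pct, 1))  # 40.7
```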
espionage724 said:
Hmm, if no memory bandwidth limitation exists on the N10, shouldn't I be able to run GTA 3 at 100% screen resolution without significantly lower FPS compared to 50% resolution?
Even Beat Hazard Ultra seems a bit laggy on the N10. When I asked the developer about it, he said:
By that logic, you could buy any video card for a PC and it would run any game at any resolution the card supports. That isn't the case, because rendering involves more than just fill rate: there are textures, polygons, multiple rendering passes, filtering, and so on. As EniGmA1987 mentioned, nothing has been optimized to take advantage of this hardware yet; developers were literally crossing their fingers hoping their games would run as-is. Thankfully, the A15 CPU cores in the Exynos will be used in the Tegra 4 as well, so we can look forward to CPU optimizations soon, which will definitely help.
Emulators are more CPU-intensive than anything else; give it a little time and you won't have any problems with your old-school games. Run the new 3DMark benchmark to see what this tablet can do: it runs at native resolution and it's not even fully optimized for this architecture yet.
2560*1600*4*60/1024/1024 = 937,3 MB/s for a 60 fps game at 32-bit depth. Most emulators don't use 3D functions, so fillrate, rendering and overdraw won't be a factor. Most emulators are single-threaded (correct me if I'm wrong) and the A15 should shine in this particular situation, and even more so in multi-threaded scenarios. With its out-of-order pipeline and greatly enhanced efficiency, it should be perfectly suited for the job.
We have the fillrate, we have enough CPU power, and I'm still wondering why simple apps like emulators aren't much faster than that. Is it Android? Is it the Dalvik VM? Or is it because some emulators need to be written in native code instead of running in the Java VM? I'm not a developer and I have only minimal knowledge in this department. I can only speculate, but I'm curious enough that I started googling around to find out why.
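The back-of-the-envelope figure in the post works out as follows; note it counts only writing each finished frame to memory once, nothing else:

```python
# Bandwidth to write one full 32-bit frame, 60 times a second.
def framebuffer_mb_per_s(width, height, bytes_per_pixel, fps):
    return width * height * bytes_per_pixel * fps / 1024 / 1024

mb = framebuffer_mb_per_s(2560, 1600, 4, 60)
print(mb)         # 937.5
print(mb / 1024)  # ~0.92 GB/s, far below the 12.8 GB/s quoted for the N10
```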
Lodovik said:
2560*1600*4*60/1024/1024 = 937,3 MB/s for a 60 fps game at 32-bit depth
Just curious, but what is that calculation supposed to be? Total bandwidth needed? Because I don't see your bit depth in there, unless the 4 is supposed to be that? If that's true, then you are calculating with 4-bit color depth?
And then the result would just be the bandwidth required for pixel data to memory, wouldn't it? It wouldn't include texture data in and out of memory, or other special functions like post-processing.
2560*1600 = number of pixels on the screen
4 = bytes per pixel at 32-bit depth
60 = frames per second
/1024/1024 = divide twice to get the result in MB
Actually, I made a typo: the result is 937.5 MB/s, or 0.92 GB/s. This is just a rough estimate of what is needed at this resolution just to push all the pixels to the screen in flat 2D at 60 fps, assuming that emulators don't use accelerated functions.
My point was that with 12.8 GB/s of memory bandwidth, we should have more than enough, even if this estimate isn't very accurate.
Thanks for the explanation
If there really were a memory bandwidth limitation, the newer Trinity kernels and the newest KTManta should help. In addition to the higher GPU speeds they both allow (KTManta up to 720 MHz), both ROMs have increased memory speeds, which raise memory bandwidth to 13.8 GB/s, up from 12.8 GB/s on stock.
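For what it's worth, the 12.8 and 13.8 GB/s figures are consistent with a 64-bit-wide memory interface running at 1600 and roughly 1725 MT/s respectively; the bus width and transfer rates here are my assumptions, not stated in the post:

```python
# Peak DRAM bandwidth: transfer rate (MT/s) x bus width (bits) / 8.
def bandwidth_gb_per_s(transfer_rate_mts, bus_width_bits):
    return transfer_rate_mts * 1e6 * bus_width_bits / 8 / 1e9

print(bandwidth_gb_per_s(1600, 64))  # 12.8  (stock)
print(bandwidth_gb_per_s(1725, 64))  # 13.8  (overclocked figure)
```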
Thanks for the info. There are so many configuration options available for the Nexus 10. I really enjoy having all those possibilities.
EniGmA1987 said:
If there really were a memory bandwidth limitation, the newer Trinity kernels and the newest KTManta should help. In addition to the higher GPU speeds they both allow (KTManta up to 720 MHz), both ROMs have increased memory speeds, which raise memory bandwidth to 13.8 GB/s, up from 12.8 GB/s on stock.
Lodovik said:
2560*1600*4*60/1024/1024 = 937,3 MB/s for a 60 fps game at 32-bit depth. Most emulators don't use 3D functions so fillrate, rendering, overdraw won't be a factor. Most emulators are single-threaded (correct me if I'm wrong) and the A15 should shine in this particular situation and even more so in multi-threaded scenarios. With its out-of-order pipeline and greatly enhanced efficiency it should be perfectly suited for the job.
We have the fillrate, we have enough CPU power and I'm still wondering why simple app like emulators aren't much faster than that. Is it Android? Is it the Dalvik VM? Or is it because some emulators need to be written in native code instead of using Java VM? I'm not a developer and I have only minimal knowledge in this department. I can only speculate but I'm curious enough about it that I started googling around to find why.
You're taking what I said out of context; I was responding to someone else, hence the quote above my post.
Since you posted, I loaded up some Super Nintendo, N64, and PlayStation games on my N10 without any issues. It may just be your setup. There are a lot of tweaks out there that could easily increase performance; one great and very simple one is enabling 2D GPU rendering, which is in developer options. Just do some searching. GPU overclocking won't help much since, as you said above, your games are only 2D. I am sure you can get them running just fine.
