Official Froyo devkit available for download on TI website - Nook Color Android Development

This was posted in Nook General.
Thought it would be interesting for devs to see here, so I'm linking it.
http://forum.xda-developers.com/showthread.php?t=938747

Nice to see, though I only see mention of the OMAP 35XX, not the 36XX (3621) that we use. Though, as a counterpoint, what we really have been lacking is updated video drivers, and both the 35xx and 36xx use the same GPU, so perhaps there is some hope it will be useful.

Fifcic said:
The thing to understand is that all the OMAP 3 series share the same software register interface. The part numbering is based on generation and intended market:
OMAP34xx: High volume ODM, 65nm
OMAP35xx: Embedded low volume customers (same features as OMAP34xx)
OMAP36xx: High volume ODM, 45nm (higher clock speed, SGX double clock speed)
AM37xx and DM37xx: Embedded low volume customers (same features as OMAP36xx)
If you look at the release, this is intended to give non-ODM customers and enthusiasts access to the SDK they provide to their high volume customers. This is why the OMAP 34 and 36 are not mentioned; TI provides them a different SDK. The important part is that this provides a stable hardware accelerated kernel with drivers to the community.
The OMAP 3621 is an OMAP 3630 neutered: it has the same core, but the pins to support PoP memory and the camera interface are not connected. It is still very powerful as it still has the DSP and SGX core inside.
I hope this helps.
Updated drivers/DSP, updated Linux kernel and Bluetooth stack! This should really make for some interesting progress with Nookie Froyo, BT support, CM7 and possibly even the Honeycomb builds.

Believe it or not, the Devs do actually look in the general forums too


ARM v7 not ARM v8?

It may be a typo in the build.prop, but it says our processor is armv7 rev2. Does anyone know if this is actually what's under the hood? I'd be pretty bummed.
Pretty sure the processor is based on Arm Cortex A8 so it's last generation but dual core. The SGSII is based on Arm Cortex A9. Not sure how this relates to Armv7 though.
lokhor said:
Pretty sure the processor is based on Arm Cortex A8 so it's last generation but dual core. The SGSII is based on Arm Cortex A9. Not sure how this relates to Armv7 though.
I was under the same impression but I'm still curious about the build.prop... idk, probably just a screw-up.
From here:
CPU:
Architecture: ARM v7
ARM core: ARM Cortex-A9
Looks like there are 2 different ones. One for the chipset, one for the cores themselves. Chipset is ARM v7, Cores are A9
Also, I have no clue WTF any of this means. Google + 30 seconds = Some possibly useful info.
Edit: Okies, after doing some looking: There's no v8, only v7. The Cortex A9 is a subcategory of that, like different versions of it. Like we have gingerbread, 2.3. You can have subcategories of 2.3.3, 2.3.4, etc, which are all patches with improvements. So the CPU runs on ARM v7-A9, if that makes more sense...
Of course, this is how the processor is built, so it's not like it can be "Patched" to the newer versions when they come out... So that's just an example, to make it easier to understand.
The Scorpion CPU is a modified Cortex A8. ALL newer Cortex Ax CPUs are based on the ARMv7 instruction set architecture (ISA.)
Summary:
CPU is based on ARM's ARMv7 instruction set architecture intellectual property, which is branded Cortex A8. (Newer TI OMAP, and the Exynos are Cortex A9, basically unmodified, but are *still* using the ARMv7 ISA.)
Ergo, ARMv7 --> instruction set architecture, Cortex A8 --> configuration/branding.
APOLAUF said:
The Scorpion CPU is a modified Cortex A8. ALL newer Cortex Ax CPUs are based on the ARMv7 instruction set architecture (ISA.)
Summary:
CPU is based on ARM's ARMv7 instruction set architecture intellectual property, which is branded Cortex A8. (Newer TI OMAP, and the Exynos are Cortex A9, basically unmodified, but are *still* using the ARMv7 ISA.)
Ergo, ARMv7 --> instruction set architecture, Cortex A8 --> configuration/branding.
Ah hah. Thank you. Someone who knows what they're talking about, and isn't just pulling stuff out of their search engine.
BlaydeX15 said:
Ah hah. Thank you. Someone who knows what they're talking about, and isn't just pulling stuff out of their search engine.
Glad to help. I will be starting as junior faculty at the University of Louisville, and I'm teaching microprocessor design, so I hope I can remember all this! ARM definitely has made quite a salad of their branding. For instance, the classic ARM9 CPU is based on ARMv5, while the ARM7 is based on ARMv4 (if I'm not mistaken - I have a few of the dev boards lying around somewhere... there were actually variants of the 7 and the 9 that were both under v4 and v5 ISAs). The ARM11 (which was found in the newer 400MHz+ pocket PCs and smartphones of old) used the ARMv6 architecture, and all Cortex use ARMv7. What a mess! I guess that's what happens when you just create CPU core intellectual property, without manufacturing a single chip.
APOLAUF said:
Glad to help. I will be starting as junior faculty at the University of Louisville, and I'm teaching microprocessor design, so I hope I can remember all this! ARM definitely has made quite a salad of their branding. For instance, the classic ARM9 CPU is based on ARMv5, while the ARM7 is based on ARMv4 (if I'm not mistaken - I have a few of the dev boards lying around somewhere... there were actually variants of the 7 and the 9 that were both under v4 and v5 ISAs). The ARM11 (which was found in the newer 400MHz+ pocket PCs and smartphones of old) used the ARMv6 architecture, and all Cortex use ARMv7. What a mess! I guess that's what happens when you just create CPU core intellectual property, without manufacturing a single chip.
I've got an iPaq sitting here...I didn't even realize it was sitting here until you said ARM11 and then I looked down in back of my keyboard, saw that and a giant whooshing sound flew through my head and reminded me that after reading all of your posts and thinking to myself "this guy really knows his stuff...wow, I doubt I could ever know all of that stuff" that, in fact, I already did in a previous life....lol.
But as you already stated (in different words) "Knowing" is the easy part, remembering is the hard part and to that end you have one upped me.
...wow, bizarre feeling, lol, thanks...
The Scorpion core is not a modified A8; it is Qualcomm's own design that uses the ARMv7 instruction set.
Sent from my PG86100 using XDA Premium App
stimpyshaun said:
The Scorpion core is not a modified A8; it is Qualcomm's own design that uses the ARMv7 instruction set.
Sent from my PG86100 using XDA Premium App
It is a modified A8 CPU that uses the ARMv7 instruction set; Cortex is a branding.
...that is correct, and if it isn't swap a couple acronyms and numbers around and it will be.
If you're curious, here are some links talking about how the Scorpion core is similar to and different from both the A8 and A9:
http://www.qualcomm.com/documents/files/linley-report-dual-core-snapdragon.pdf
http://www.anandtech.com/show/3632/anands-google-nexus-one-review/8
http://www.anandtech.com/show/4144/...gra-2-review-the-first-dual-core-smartphone/4
http://www.anandtech.com/show/3632/anands-google-nexus-one-review/9
stimpyshaun said:
The Scorpion core is not a modified A8; it is Qualcomm's own design that uses the ARMv7 instruction set.
Sent from my PG86100 using XDA Premium App
You are correct in that the Scorpion cannot be technically branded as an A8. Qualcomm licenses the ARMv7 ISA and basic core design (which ARM called Cortex A8) (when we implement these in FPGA, we call them softcores - kinda kinky. ). Qualcomm, when designing their initial Snapdragon, essentially gave a checklist to ARM for the reference design that they wanted their IP library to use.
For instance, Intel marketed the PXA 255 and PXA 270-series CPUs. (HTC PPC 6700 and Dell Axims, anyone? ). Despite being a CPU innovator in the desktop realm, the cores were still based on ARM reference designs - Intel's mobile division selected the reference they wanted, added MMX, etc., and then went to fab with it. By the same token, the Scorpion was based on ARMv7 ISA, which in its initial incarnation, as used by Qualcomm, was the Cortex A8. What came out of that is, logically, different, but related enough, the same way the PXAs were ARM11 reference designs (ARMv6.) Qualcomm added the NEON instruction set, as well as out-of-order execution, for example, something the other Cortex CPUs didn't have (this may have changed with the A9), in order to increase data and instruction-level parallelism. They also added the ability to perform fine-grain CPU clock frequency and voltage throttling, much more so than in the stock A8 reference.
I guess in the long run, if they don't update their references to an A9 IP library variant, or perhaps something newer down the road, the Scorpion will start lagging behind the competition rather significantly. Not that I'm complaining at the moment, I love my 3vo's performance as it is.
http://www.qualcomm.com/documents/files/linley-report-dual-core-snapdragon.pdf
Here it says Qualcomm does not use ARM's Cortex reference designs but in fact designed its own.
If it is wrong then blame Qualcomm... if I am misunderstanding it, please explain.
stimpyshaun said:
http://www.qualcomm.com/documents/files/linley-report-dual-core-snapdragon.pdf
Here it says Qualcomm does not use ARM's Cortex reference designs but in fact designed its own.
If it is wrong then blame Qualcomm... if I am misunderstanding it, please explain.
Boy, they certainly make it sound that way! My guess is that the truth lies somewhere in the middle. That they don't use the reference design is clear - the end-product isn't the same product that is the core design in the Cortex/v7 spec. However, I don't think that Qualcomm has the capital and the resources to do a full-scale, ground-up build of a CPU. Note that even Intel didn't do this (!) when they entered (and then promptly exited) the mobile CPU-building business.
The Qualcomm processor design team most-likely chooses attributes and then custom-designs additional features and configurations, and Qc does have the fabs to produce the silicon to state-of-the-art, or (state-of-the-art - 1) process technologies, as is clear from their stated transistor design (with a certain leakage factor) and feature size (45nm - keeping in mind that Samsung has this down to 32nm (you might want to check me on that, this may be a lie )).
So, let me restate my original opinion - I value your inputs, and this document certainly does its best to make the CPU look unique in the market. My understanding is that a certain core of the IP library is still present in the A8 format from ARM. It's entirely possible that none of the original architecture was kept in its native forms - design, routing, and layout might all be different, but I think ultimately, the CPU finds its roots in part of the original A8 design. To simply use the ISA without *any* reference to the original A8, I think, is beyond Qualcomm's capabilities, at least at present.
I suppose a reasonable example may be seen in the car world. Take the Lexus ES series vs. the Toyota Camry (just to name a plain-jane, basic example that most people would know.) The ES looks, performs, and runs differently. It has different features, a different pricetag, and many different interior and even engine-bay features (and probably a larger engine.) But ultimately, it is just an altered Camry. The extent of the alterations here is the question (and bringing an analog to an ISA into the automotive domain is tricky - maybe the use of an engine and 4 wheels? ). Anyway, perhaps only Qualcomm knows the answer, but well done sir, in bringing an in-depth discussion to the table.
To put this into layman's terms:
You can mostly think of the term "architecture" as being a language that the CPU speaks, and the software must therefore be written in that language (or as programmers refer to it, being compiled into that language.) Android apps speak Java bytecode to the Dalvik engine, which then translates and speaks ARMv7 to the CPU. Different phones can run on different architectures and still have the apps be compatible because the Dalvik engine can be compiled for each different architecture.
Now, the "core" in this sense is the specific implementation of that architecture. The easiest analogy to that I can think of is that Intel and AMD CPUs both use the x86 architecture, but their implementations are way different. They are designed far differently from one another, but in the end they speak the same language more or less.
There are variations to the ARM "language" which is indicated by the revision number (ARMv7) just as there are variations to the x86 "language." For example, you have x86 32-bit and you have x86 64-bit, and then there are extensions to x86 such as SSE, 3dnow, etc. It's a little more complicated than that, but that's the general idea.
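If you want to see those two layers on your own phone, here's a rough sketch (plain Java, assuming the usual ARM /proc/cpuinfo field names; run it on the device from a terminal app or adb shell):
Code:
// Sketch only: separates the ISA revision ("CPU architecture") from the actual
// core ("CPU part"), which is the distinction being discussed above.
import java.io.BufferedReader;
import java.io.FileReader;

public class CpuInfo {
    public static void main(String[] args) throws Exception {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/cpuinfo"))) {
            String line;
            while ((line = r.readLine()) != null) {
                // "CPU architecture: 7"  -> the ARMv7 ISA that code is compiled for
                // "CPU part: 0xc09"      -> Cortex-A9 (0xc08 would be a Cortex-A8)
                // "Features: ... neon"   -> ISA extensions, not a different ISA
                if (line.startsWith("CPU architecture")
                        || line.startsWith("CPU part")
                        || line.startsWith("Features")) {
                    System.out.println(line);
                }
            }
        }
    }
}
On a Tegra 2 Atrix you'd expect architecture 7 with part 0xc09 (Cortex-A9); that architecture line is essentially what build.prop is echoing.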

RLY?! Xperia X10 gets ICS port but not Atrix?

X10 is garbage! this is outrageous!
Yes really, they got it working, you want it so bad try porting it yourself
Sent from my MB860 using XDA App
cry about it?
if you want it so bad for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
dLo GSR said:
cry about it?
if you want it so bad for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
Oh snap. That was awesome.
Sent from my MB860 using XDA App
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
firefox3 said:
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
Good news man
Sent from my MB860 using XDA App
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, Desire, HD, X10, N1 have ports of a sort at the moment, in fact there shouldn't be too many problems getting them working aside from the graphics drivers but they're just for fun with the framebuffer driver given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say moreso than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind the fact that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and can overclock past its stock speeds.
Javi97100 said:
Good news man
Sent from my MB860 using XDA App
It's turning out to be harder than I thought... I think no one will get it until official updates come out for other phones.
Azurael said:
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, Desire, HD, X10, N1 have ports of a sort at the moment, in fact there shouldn't be too many problems getting them working aside from the graphics drivers but they're just for fun with the framebuffer driver given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say moreso than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind the fact that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and can overclock past its stock speeds.
So EGL = GPU driver? If that's the only setback, would it be possible to get an ICS ROM with software rendering as a proof of concept, or are there other pieces missing?
GB/CM7 is pretty good on the Atrix; if we don't see ICS for a few months it doesn't hurt us in any way. I'd like to think most of us can be patient if we lack the skills to help.
I noticed the Captivate got a port of it too since i9000 ROMs and Cap ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it's not already it will soon be on iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it I love my Atrix in its current state.
According to anandtech, Tegra 2 support is essentially ready, so I think as long as nvidia releases the source for ics (libs?), someone will try to port it. Hell, I have a good 5 weeks during break, I might as well try then.
Sent from my MB860 using XDA App
Azurael said:
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, Desire, HD, X10, N1 have ports of a sort at the moment, in fact there shouldn't be too many problems getting them working aside from the graphics drivers but they're just for fun with the framebuffer driver given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say moreso than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind the fact that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower resolution screen, and can overclock past its stock speeds.
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Buddy, check out any of the kernels available in the dev thread and you'll see that the GPUs are overclocked.
WiredPirate said:
I noticed the Captivate got a port of it too since i9000 ROMs and Cap ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it's not already it will soon be on iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it I love my Atrix in its current state.
Doubt the iPhone will see ICS, the newest model that can run android as far as I know is the iPhone 3G, which was incredibly slow under Gingerbread.
mac208x said:
X10 is garbage! this is outrageous!
222 posts and zero thanks? Is this what you do, go around XDA and post useless threads, like the guy complaining about returning home early despite nobody asking him "to get MIUI ported on his grandma's phone"?
Are you guys related by any chance?
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual channel memory controller and high clock (and that's probably the directly relevant part to UI rendering to be honest, though as I said - lower resolution screen ) but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and games on mobiles are starting to move that way now, following the desktop landscape over the past 5 years or so.) Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz which ought to even things out in the GLMark 720p tests somewhat even if they are biased to one architecture or the other. While the GPU in OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus to maybe skip the next generation of mobile hardware and tide it over until Cortex A15-based SoCs on 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU performance bound with a 1.4GHz dual-core A9 - and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: Disappointing early Tegra 3 benchmarks - although it does seem GLMark stacks the odds against NVidia GPUs more than other benchmarks?)
Azurael said:
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual channel memory controller and high clock (and that's probably the directly relevant part to UI rendering to be honest, though as I said - lower resolution screen ) but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and games on mobiles are starting to move that way now, following the desktop landscape over the past 5 years or so.) Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz which ought to even things out in the GLMark 720p tests somewhat even if they are biased to one architecture or the other. While the GPU in OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus to maybe skip the next generation of mobile hardware and tide it over until Cortex A15-based SoCs on 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU performance bound with a 1.4GHz dual-core A9 - and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: Disappointing early Tegra 3 benchmarks - although it does seem GLMark stacks the odds against NVidia GPUs more than other benchmarks?)
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clockspeed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well at all on mobile platforms. But yea, all Nvidia did with Tegra 3 was slap in 2 extra cores, clocked them higher, threw in the sorely missed NEON instruction set, increased the SIMDs on the GPU by 50% (8 to 12), and then tacked on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm with some bringing in a brand new architecture as well.
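For what it's worth, those GFLOPS figures are just peak math: lanes x ops per clock x clock. A quick sketch (the per-lane numbers below are the commonly cited assumptions, one MAD = 2 FLOPs per lane per clock, not anything measured here):
Code:
// Back-of-the-envelope peak GFLOPS arithmetic behind the numbers quoted above.
public class PeakFlops {
    static double gflops(int lanes, int flopsPerLanePerClock, double clockGHz) {
        return lanes * flopsPerLanePerClock * clockGHz;
    }

    public static void main(String[] args) {
        // Tegra 3: 12 lanes, 1 MAD (2 FLOPs) per lane per clock, 300 MHz -> 7.2 GFLOPS
        System.out.println("Tegra 3 GPU @300MHz: " + gflops(12, 2, 0.3) + " GFLOPS");
        // A 32-lane MAD design at the same clock lands at 19.2 GFLOPS, which is the
        // figure quoted for the iPad GPU / Adreno 225 class of parts.
        System.out.println("32-lane GPU @300MHz: " + gflops(32, 2, 0.3) + " GFLOPS");
    }
}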
edgeicator said:
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clockspeed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well at all on mobile platforms. But yea, all Nvidia did with Tegra 3 was slap in 2 extra cores, clocked them higher, threw in the sorely missed NEON instruction set, increased the SIMDs on the GPU by 50% (8 to 12), and then tacked on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm with some bringing in a brand new architecture as well.
Don't you get tired of writing those long rants? We understand you know something about CPU architecture, and that Tegra isn't the best one out there, but damn man, it's the same thing in every thread. Just chill out and try to stay on topic for once.
Sent from my MB860 using Tapatalk
edgeicator said:
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 GPU pulls 7.2 GFLOPS @ 300MHz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clockspeed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well at all on mobile platforms. But yea, all Nvidia did with Tegra 3 was slap in 2 extra cores, clocked them higher, threw in the sorely missed NEON instruction set, increased the SIMDs on the GPU by 50% (8 to 12), and then tacked on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm with some bringing in a brand new architecture as well.
I think you are not seeing the whole picture...
The Tegra 3 (et al.) is not just about its quad-core implementation; remember that the GPU will offer 12 cores, which will translate into performance not seen as of yet on any other platform.
Benchmarks don't tell the whole story! Especially those benchmarking tools which are not Tegra 3 optimized yet.
Cheers!
Sent from my Atrix using Tapatalk
WiredPirate said:
I noticed the Captivate got a port of it too since i9000 ROMs and Cap ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it's not already it will soon be on iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it I love my Atrix in its current state.
LOL, I ran all the iDroid ports on my iPhone. Not one was even in alpha stage; I would not even count iDroid as a port since you can't use anything on it.

Running full speed - interesting observation

OK I've got mine on normal mode, and this kind of confirms my original thought that the 500MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently run tab! If my understanding of the way it works is right, these programs are still running in the background, right? Then it starts kicking in the other 4 and not just running on the 5th at 500MHz! I really think we'd see a speed boost if we can get that 5th core over 500. Yes, it's supposed to save battery life, but I really don't think 500 is fast enough to run on its own. Your thoughts and observations?
markimar said:
OK I've got mine on normal mode, and this kind of confirms my original thought that the 500MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently run tab! If my understanding of the way it works is right, these programs are still running in the background, right? Then it starts kicking in the other 4 and not just running on the 5th at 500MHz! I really think we'd see a speed boost if we can get that 5th core over 500. Yes, it's supposed to save battery life, but I really don't think 500 is fast enough to run on its own. Your thoughts and observations?
I'll check on this when I get home. This issue I'm assuming is with Honeycomb itself. We would assume that ICS would properly use those cores.
Sent from my Samsung Galaxy S II t989
I don't have it yet (mine gets delivered on Wednesday), but what you observed makes perfect sense. Can they change it to run at, say, a constant 800 MHz "down" to 500 MHz when doing the most simple tasks? Obviously I too do not believe that 500 MHz will be sufficient at all times to do screen scrolling and such on its own.
I'm really hoping that the few performance issues people are seeing are resolved in firmware updates and a Tegra 3 optimized version of ICS. Maybe Asus/Nvidia needs to do more tweaking to HC before the ICS build is pushed if it will take a while for ICS to arrive on the Prime (past January).
The cores are optimized just fine. They kick in when rendering a web page or a game, but go idle and use the 5th core when done. Games always render.
ryan562 said:
I'll check on this when I get home. This issue I'm assuming is with Honeycomb itself. We would assume that ICS would properly use those cores.
Sent from my Samsung Galaxy S II t989
Nothing's changed over HC in the way ICS uses h/w acceleration. And I'd assume apps using h/w acceleration do so via calls to the OS, not to the chip directly. So it appears what you've got is what you're going to get.
markimar said:
OK I've got mine on normal mode, and this kind of confirms my original thought that the 500MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently run tab! If my understanding of the way it works is right, these programs are still running in the background, right? Then it starts kicking in the other 4 and not just running on the 5th at 500MHz! I really think we'd see a speed boost if we can get that 5th core over 500. Yes, it's supposed to save battery life, but I really don't think 500 is fast enough to run on its own. Your thoughts and observations?
Do you have Pulse installed? A bunch of people using it were reporting stuttering where their lower powered devices aren't. If you run it at full speed, does it stutter? One of the hypotheses is that it's the cores stepping up and down that's causing the stuttering.
BarryH_GEG said:
Nothing's changed over HC in the way ICS uses h/w acceleration. And I'd assume apps using h/w acceleration do so via calls to the OS, not to the chip directly. So it appears what you've got is what you're going to get.
Also, correct me if I'm wrong, but I don't think that the OS knows about the fifth core? I believe the chip's own scheduler manages the transition between the quad-core and the companion core, not the Android scheduler.
Mithent said:
Also, correct me if I'm wrong, but I don't think that the OS knows about the fifth core? I believe the chip's own scheduler manages the transition between the quad-core and the companion core, not the Android scheduler.
That's the way I'd guess it would work. I don't think Android addresses different chips differently. I'd assume it's up to the SoC to manage the incoming instructions and react accordingly. If Android was modified for dual-core, I don't think it differentiates between the different implementations of dual-core chips. Someone with more h/w experience correct me if I'm wrong. Also, does anyone know if the chip manufacturer can add additional APIs that developers can write to directly either instead of or in parallel with the OS? I ask because how can a game be optimized for Tegra if to the OS all chips are treated the same?
I tried out the power savings mode for a while. It seemed to perform just fine. Immediate difference is that it lowers the contrast ratio on the display. This happens as soon as you press the power savings tab. The screen will look like brightness dropped a bit, but if you look closely you'll see it lowered the contrast ratio. The screen still looks good but not as sharp as in the other 2 modes. The UI still seems to perform just fine. Plus I think the modes don't affect gaming or video playback performance. I read that somewhere, either Anandtech or Engadget. When watching vids or playing games, it goes into normal mode. So those things won't be affected no matter what power mode you're in, I think.. lol
I was thinking of starting a performance mode thread. To see different people's results and thoughts on different power modes. I read some people post that they just use it in power/battery savings mode. Some keep it in normal all the time. Others in balanced mode. Would be good to see how these different modes perform in real life usage, from a user perspective. I've noticed, so far, that in balanced mode, battery drains about 10% an hour. This is with nonstop use including gaming, watching vids, web surfing, etc. Now in battery savings mode, it drains even less per hour. I haven't run normal mode long enough to see how it drains compared to the others. One thing though, web surfing drains battery just as fast as gaming.
BarryH_GEG said:
I ask because how can a game be optimized for Tegra if to the OS all chips are treated the same?
I hate quoting myself but I found the answer on Nvidia's website. Any optimizations are handled through OpenGL. So games written to handle additional calls that Teg2 can support are making those calls through OpenGL with the OS (I'm guessing) used as a pass-through. It would also explain why Tegra optimized games fail on non-Teg devices because they wouldn't be able to process the additional requests. So it would appear that Teg optimization isn't being done through the OS. Again, correct me if I'm wrong.
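To make the "it all goes through OpenGL" point concrete: a game can just look at the extension string and branch, roughly like this (Android Java sketch; it needs a current GL ES 2.0 context, and the GL_NV_ prefix check is only an example of spotting vendor-specific extensions):
Code:
// Sketch: detect vendor-specific GL extensions at runtime and pick a render path.
// Must be called from a thread with a current OpenGL ES 2.0 context.
import android.opengl.GLES20;

public final class GpuCaps {
    public static boolean hasNvidiaExtensions() {
        String ext = GLES20.glGetString(GLES20.GL_EXTENSIONS);
        // GL_NV_* extensions only show up on NVIDIA GPUs; other chips won't list
        // them, which is why Tegra-optimized content falls back (or fails) elsewhere.
        return ext != null && ext.contains("GL_NV_");
    }
}
The OS itself stays generic; it's the driver behind those GL calls that exposes whatever extra the chip can do.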
BarryH_GEG said:
That's the way I'd guess it would work. I don't think Android addresses different chips differently. I'd assume it's up to the SoC to manage the incoming instructions and react accordingly. If Android was modified for dual-core, I don't think it differentiates between the different implementations of dual-core chips.
I did some research on it; here's what Nvidia say:
The Android 3.x (Honeycomb) operating system has built-in support for multi-processing and is capable of leveraging the performance of multiple CPU cores. However, the operating system assumes that all available CPU cores are of equal performance capability and schedules tasks to available cores based on this assumption. Therefore, in order to make the management of the Companion core and main cores totally transparent to the operating system, Kal-El implements both hardware-based and low level software-based management of the Companion core and the main quad CPU cores.
Patented hardware and software CPU management logic continuously monitors CPU workload to automatically and dynamically enable and disable the Companion core and the main CPU cores. The decision to turn on and off the Companion and main cores is purely based on current CPU workload levels and the resulting CPU operating frequency recommendations made by the CPU frequency control subsystem embedded in the operating system kernel. The technology does not require any application or OS modifications.
http://www.nvidia.com/content/PDF/t...e-for-Low-Power-and-High-Performance-v1.1.pdf
So it uses the existing architecture for CPU power states, but intercepts that at a low level and uses it to control the companion core/quad-core switch?
Edit: I wonder if that means that tinkering with the scheduler/frequency control would allow the point at which the companion core/quad-core switch happens to be altered? If the OP is correct, this might allow the companion core to be utilised less if an increase in "smoothness" was desired, at the cost of some battery life?
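For anyone who wants to poke at that "CPU frequency control subsystem" from userspace, the knobs are just files under sysfs. A read-only sketch (plain Java, standard Linux cpufreq paths; changing the min/max values to experiment with the switch point would need root, and whether the companion-core logic actually reacts the way you'd hope is Nvidia's black box):
Code:
// Sketch: read the cpufreq values that the whitepaper says the companion-core
// logic watches. Paths follow the standard Linux cpufreq sysfs layout.
import java.io.BufferedReader;
import java.io.FileReader;

public class CpuFreqPeek {
    static String read(String path) {
        try (BufferedReader r = new BufferedReader(new FileReader(path))) {
            return r.readLine();
        } catch (Exception e) {
            return "n/a";
        }
    }

    public static void main(String[] args) {
        String base = "/sys/devices/system/cpu/cpu0/cpufreq/";
        System.out.println("governor : " + read(base + "scaling_governor"));
        System.out.println("cur freq : " + read(base + "scaling_cur_freq") + " kHz");
        System.out.println("min freq : " + read(base + "scaling_min_freq") + " kHz");
        System.out.println("max freq : " + read(base + "scaling_max_freq") + " kHz");
    }
}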
Mithent said:
I wonder if that means that tinkering with the scheduler/frequency control would allow the point at which the companion core/quad-core switch happens to be altered? If the OP is correct, this might allow the companion core to be utilised less if an increase in "smoothness" was desired, at the cost of some battery life?
So what we guessed was right. The OS treats all multi-cores the same and it's up to the chip maker to optimize requests and return them. To your point, what happens between the three processors (1+1x2+1x2) is black-box and controlled by Nvidia. To any SetCPU type program it's just going to show up as a single chip. People have tried in vain to figure how to make the Qualcomm dual-core's act independently so I'd guess Teg3 will end up the same way. And Nvidia won't even publish their drivers so I highly doubt they'll provide any outside hooks to control something as sensitive as the performance of each individual core in what they're marketing as a single chip.
BarryH_GEG said:
Do you have Pulse installed? A bunch of people using it were reporting stuttering where their lower powered devices aren't. If you run it at full speed, does it stutter? One of the hypotheses is that it's the cores stepping up and down that's causing the stuttering.
I have been running mine in balanced mode and have had Pulse installed since day one; no lag or stuttering in anything. Games and other apps work fine.
Well my phones when clocked at 500 so I wouldn't be surprised
Sent from my VS910 4G using xda premium

iPad 4 vs 5250 (Nexus 10 SoC) GLBenchmark full results. UPDATE now with Anandtech!!

XXXUPDATEXXX
Anandtech have now published the performance preview of the Nexus 10, let the comparison begin!
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Well, the first full result has appeared on GLBenchmark for the iPad 4, so I have created a comparison with the Samsung Arndale board, which uses exactly the same SoC as the Nexus 10, so it will be very close in performance to Google's newest tablet. GLBenchmark, as its name suggests, tests OpenGL graphics performance, which is an important criterion for gaming.
Which device wins, click the link to find out.
http://www.glbenchmark.com/compare....ly=1&D1=Apple iPad 4&D2=Samsung Arndale Board
If you're really impatient, the iPad 4 maintains its lead in tablet graphics; the Nexus 10 may perform slightly better in final spec, but the underlying low-level performance will not change much.
I've also made a comparison between the iPad 3 & 4.
Interestingly, the in-game test GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p), which is run independent of native screen resolution, shows the following:
iPad 4: 48.6 FPS
iPad 3: 25.9 FPS
5250 : 33.7 FPS
So the iPad 4 is twice as fast as its older brother; the Exynos will probably score nearer 40 FPS in final spec, with new drivers and running 4.2 (the board runs ICS, though Jelly Bean did not really boost GL performance over ICS). What is interesting is that the iPad 4, whose GPU is supposedly clocked at 500 MHz vs 250 MHz in the iPad 3, does not perform twice as fast in the low-level tests.
Fill rate, triangle throughput, vertex output etc. are not double the power of the iPad 3, so although the faster A6 CPU helps, I reckon a lot of the improvement in the Egypt HD test is caused by improved drivers for the SGX 543 MP4 in the iPad 4. The Galaxy S2 received a big jump in GL performance when it got updated Mali drivers, so I imagine we should see good improvements for the T604, which is still a new product and not as mature as the SGX 543.
http://www.glbenchmark.com/compare....tified_only=1&D1=Apple iPad 4&D2=Apple iPad 3
I'd imagine the new iPad will take the lead in benchmarks for now, as it'll take Sammy and Google some time to optimize the beast; in the end, however, actual app and user interface performance is what matters, and reports are overwhelmingly positive on the Nexus 10.
So the Mali-T604 didn't manage 5 times better than the Mali 400, or maybe Samsung underclocked it.
Still very good but not the best.
________________
Edit: I forgot that Exynos 4210 with Mali400MP4 GPU had very bad GLbenchmark initially (even worse than PowerVR SGX540), but after updating firmware it's way better than other SoCs on Android handsets.
hung2900 said:
So the Mali-T604 didn't manage 5 times better than the Mali 400, or maybe Samsung underclocked it.
Still very good but not the best.
Not sure about this, but don't benchmark tools need to be upgraded for new architectures too? A15 is quite a big step; SW updates may be necessary for a proper bench.
Damn..now I have to get an iPad.
I believe we have to take the Arndale board numbers with a pinch of salt. It's a dev board, and I doubt it has optimized drivers for the SoC like it's expected for N10. Samsung has this habit of optimizing the drivers with further updates.
SGS2 makes for a good case study. When it was launched at MWC2011, its numbers were really pathetic. It was even worse than Tegra2.
Anand ran benchmark on the pre-release version of SGS2 on MWC2011, check this:
http://www.anandtech.com/show/4177/samsungs-galaxy-s-ii-preliminary-performance-mali400-benchmarked
It was showing less than Tegra2 numbers! It was that bad initially.
Then look when Anand finally reviewed the device after few months:
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
Egypt (native resolution) numbers went up by 3.6x and Pro also got 20% higher. Now they could have been higher if not limited by vsync. GLbenchmark moved from 2.0 to 2.1 during that phase, but I am sure this would not make such a big difference in numbers.
If you again check the numbers now for SGS2, it's again another 50% improvement in performance from the time Anand did his review.
Check this SGS2 numbers now:
http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
http://www.anandtech.com/show/6022/samsung-galaxy-s-iii-review-att-and-tmobile-usa-variants/4
This is just to show how driver optimization can have a big effect on performance. My point is that we have to wait for proper testing on the final release of the N10 device.
Also, check the fill rate properly in the Arndale board test. It's much less than what is expected. ARM says that Mali-T604 clocked at 500MHz should get a fill rate of 2 GPixels/s. It's actually showing just about 60% of what it should be delivering.
http://blogs.arm.com/multimedia/353-of-philosophy-and-when-is-a-pixel-not-a-pixel/
Samsung has clocked the GPU @ 533MHz, so it shouldn't be getting so little.
According to Samsung, it's more like 2.1 GPixels/s: http://semiaccurate.com/assets/uploads/2012/03/Samsung_Exynos_5_Mali.jpg
Fill rate is a low-level test, and there shouldn't be such a big difference from the quoted value. Let's wait and see how the final device shapes up.
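For reference, the quoted fill-rate figures are just pixels per clock times clock. Assuming the commonly quoted 4 pixels per clock for the T604 (one per shader core), the math looks like this:
Code:
// Sketch of the fill-rate arithmetic behind the 2.0 / 2.1 GPixel/s figures.
// The 4 pixels-per-clock value is the usual quoted number, not measured here.
public class FillRate {
    static double gpixelsPerSec(int pixelsPerClock, double clockMHz) {
        return pixelsPerClock * clockMHz / 1000.0; // MPixels/s -> GPixels/s
    }

    public static void main(String[] args) {
        System.out.println("T604 @500MHz: " + gpixelsPerSec(4, 500) + " GPixels/s"); // ARM's figure
        System.out.println("T604 @533MHz: " + gpixelsPerSec(4, 533) + " GPixels/s"); // Samsung's clock
    }
}
Which is why the roughly 60% showing on the dev board (about 1.2-1.3 GPixels/s) reads more like drivers than a hardware limit.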
hung2900 said:
So the Mali-T604 didn't manage 5 times better than the Mali 400, or maybe Samsung underclocked it.
Still very good but not the best.
________________
Edit: I forgot that Exynos 4210 with Mali400MP4 GPU had very bad GLbenchmark initially (even worse than PowerVR SGX540), but after updating firmware it's way better than other SoCs on Android handsets.
In areas where the Mali 400 lacked performance, like fragment and vertex lit triangle output, the T604 is comfortably 5x the performance, whereas in these low-level tests the iPad 4 is not a concrete 2x the power of the iPad 3, yet achieves twice the FPS of its older brother in Egypt HD. I suspect drivers are a big factor here, and Exynos 5250 will get better as the drivers mature.
hot_spare said:
I believe we have to take the Arndale board numbers with a pinch of salt. It's a dev board, and I doubt it has optimized drivers for the SoC like it's expected for N10. Samsung has this habit of optimizing the drivers with further updates.
SGS2 makes for a good case study. When it was launched at MWC2011, its numbers were really pathetic. It was even worse than Tegra2.
Anand ran benchmark on the pre-release version of SGS2 on MWC2011, check this:
http://www.anandtech.com/show/4177/samsungs-galaxy-s-ii-preliminary-performance-mali400-benchmarked
It was showing less than Tegra2 numbers! It was that bad initially.
Then look when Anand finally reviewed the device after few months:
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
Egypt (native resolution) numbers went up by 3.6x and Pro also got 20% higher. Now they could have been higher if not limited by vsync. GLbenchmark moved from 2.0 to 2.1 during that phase, but I am sure this would not make such a big difference in numbers.
If you again check the numbers now for SGS2, it's again another 50% improvement in performance from the time Anand did his review.
Check this SGS2 numbers now:
http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
http://www.anandtech.com/show/6022/samsung-galaxy-s-iii-review-att-and-tmobile-usa-variants/4
This is just to show how driver optimization can have a big effect on performance. My point is that we have to wait for proper testing on the final release of the N10 device.
Also, check the fill rate properly in the Arndale board test. It's much less than what is expected. ARM says that Mali-T604 clocked at 500MHz should get a fill rate of 2 GPixels/s. It's actually showing just about 60% of what it should be delivering.
http://blogs.arm.com/multimedia/353-of-philosophy-and-when-is-a-pixel-not-a-pixel/
Samsung has clocked the GPU @ 533MHz, so it shouldn't be getting so little.
According to Samsung, it's more like 2.1 GPixels/s: http://semiaccurate.com/assets/uploads/2012/03/Samsung_Exynos_5_Mali.jpg
Fill rate is a low-level test, and there shouldn't be such a big difference from the quoted value. Let's wait and see how the final device shapes up.
I agree with most of what you have said. On the GPixel figure, this is like ATI GPU teraflops figures always being much higher than Nvidia's; in theory, with code written to hit the device perfectly, you might see those high figures, but in reality the Nvidia cards with lower on-paper numbers equaled or beat ATI in actual game FPS. It all depends on whether the underlying architecture is as efficient in real-world tests, vs maximum technical numbers that can't be replicated in actual game environments.
I think the current resolution of the iPad / Nexus 10 is actually crazy, and we would see prettier games with lower resolutions; the amount of resources needed to drive those high-MP displays means lots of compromises will be made in terms of effects / polygon complexity etc. to ensure decent FPS, especially when you think that driving Battlefield 3 at 2560 x 1600 with AA and high textures requires a PC that burns 400+ watts of power, not a 10 watt SoC.
Overall, when we consider that the Nexus 10 has twice the RAM for game developers to use and faster CPU cores, games should look equally nice on both; the biggest effect will be the level of support game developers provide for each device, and the iPad will probably be stronger in that regard. Nvidia was able to coax prettier games out of Tegra 3 through developer support; hopefully Google won't forget the importance of this.
What's the point of speculation? Just wait for the device to be released and run all the tests you want to get confirmation on performance. Doesn't hurt to wait.
BoneXDA said:
Not sure about this, but don't benchmark tools need to be upgraded for new architectures too? A15 is quite a big step; SW updates may be necessary for a proper bench.
Both A9 & A15 use the same instruction set architecture (ISA) so no they won't. Benchmarks may need to be modified, if the new SoC are too powerful and max out the old benches, but for GL Benchmark, that has not happened yet and there are already new updates in the pipeline.
I can't wait to see this Exynos 5250 in a 2.0GHz quad-core variant in the semi-near future... Ohhhh the possibilities. Samsung has one hell of a piece of silicon on their hands.
Chrome
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Google, if you want to use Chrome as the stock browser, then develop it to be fast and smooth and not an insult; the stock AOSP browser would be so much faster.
Turbotab said:
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Google, if you want to use Chrome as the stock browser, then develop it to be fast and smooth and not an insult; the stock AOSP browser would be so much faster.
True.. Chrome on mobile is still not up to desktop level yet. I believe it's v18 or something, right? The stock browser would have much better results in SunSpider/Browsermark. The N4 numbers look even worse. Somewhere the optimizations aren't working.
The GLBenchmark tests are weird. The Optimus G posts much better results than the N4 when both are the same hardware. It in fact scores lower than the Adreno 225 in some cases. This is totally whacked.
For N10, I am still wondering about fill rate. Need to check what guys say about this.
Is it running some debugging code in the devices at this time?
Turbotab said:
Both A9 & A15 use the same instruction set architecture (ISA) so no they won't. Benchmarks may need to be modified, if the new SoC are too powerful and max out the old benches, but for GL Benchmark, that has not happened yet and there are already new updates in the pipeline.
Actually not. A8 and A9 are the same ISA (Armv7), while A5 A7 and A15 are in another group (Armv7a)
Once we get rid of the underclock no tablet will be able to match. I'm sure the Mali t604 at 750 MHz would destroy everything.
hung2900 said:
Actually not. A8 and A9 are the same ISA (Armv7), while A5 A7 and A15 are in another group (Armv7a)
I have to disagree, this is from ARM's info site.
The ARM Cortex-A15 MPCore processor has an out-of-order superscalar pipeline with a tightly-coupled low-latency level-2 cache that can be up to 4MB in size. The Cortex-A15 processor implements the ARMv7-A architecture.
The ARM Cortex-A9 processor is a very high-performance, low-power, ARM macrocell with an L1 cache subsystem that provides full virtual memory capabilities. The Cortex-A9 processor implements the ARMv7-A architecture and runs 32-bit ARM instructions, 16-bit and 32-bit Thumb instructions, and 8-bit Java bytecodes in Jazelle state.
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.set.cortexa/index.html
Keion said:
Once we get rid of the underclock no tablet will be able to match. I'm sure the Mali t604 at 750 MHz would destroy everything.
Except the iPad 4, which has a GPU that is currently 57% faster than the T604.
Sent from my iPad Mini using Tapatalk
Do remember that Awesome resolution does tax the GPU a lot. Heck most lower end desktop GPUs would struggle
Harry GT-S5830 said:
Do remember that Awesome resolution does tax the GPU a lot. Heck most lower end desktop GPUs would struggle
Indeed it does, but not in offscreen testing, where Anand made his proclamation.
Sent from my iPad Mini using Tapatalk
Hemlocke said:
Except the iPad 4, which has a GPU that is currently 57% faster than the T604.
Sent from my iPad Mini using Tapatalk
Nah, I think we can beat that too.
Drivers + OC.

Qualcomm hardware cryptographic engine doesn't work on Z5C?

Hello,
I just got my Z5C yesterday and so far I'm more than happy. But there is one issue:
I use the AOSP full disk encryption on the phone, but it seems like the native Qualcomm hardware cryptographic engine doesn't work well - I benchmarked the internal storage before and after, here are the results:
Before: read ~ 200 MB/s write: ~ 120 MB/s
After: read ~ 120 MB/s write: ~ 120 MB/s
(Benchmarked by A1 SD Bench)
I'm using FDE on my Windows 10 notebook with an eDrive, resulting in like 5% performance loss. The decrease in read speed on the Z5C is noticeable. What do you think, is there something wrong or is this normal behaviour?
Cheers
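If you want to sanity-check the A1 SD Bench numbers without another app, a crude sequential test is easy to knock together (sketch only: the path is just an example, there is no fsync, and a read right after a write can be inflated by the page cache):
Code:
// Crude sequential write/read throughput check: writes one large file, reads it back.
import java.io.*;

public class StorageBench {
    public static void main(String[] args) throws Exception {
        File f = new File("/sdcard/bench.tmp"); // example path, pick one you can write to
        byte[] buf = new byte[1024 * 1024];     // 1 MiB chunks
        int chunks = 256;                       // ~256 MiB total
        long t0 = System.nanoTime();
        try (OutputStream out = new FileOutputStream(f)) {
            for (int i = 0; i < chunks; i++) out.write(buf);
        }
        long t1 = System.nanoTime();
        try (InputStream in = new FileInputStream(f)) {
            while (in.read(buf) != -1) { /* just drain the file */ }
        }
        long t2 = System.nanoTime();
        f.delete();
        double mb = chunks * (buf.length / 1e6);
        System.out.printf("write: %.1f MB/s%n", mb / ((t1 - t0) / 1e9));
        System.out.printf("read : %.1f MB/s%n", mb / ((t2 - t1) / 1e9));
    }
}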
I don't know if this helps, but it seems that the Nexus 5X and 6P won't use hardware encryption according to this:
DB> Encryption is software accelerated. Specifically the ARMv8 as part of 64-bit support has a number of instructions that provides better performance than the AES hardware options on the SoC.
Source: The Nexus 5X And 6P Have Software-Accelerated Encryption, But The Nexus Team Says It's Better Than Hardware Encryption
So maybe, Sony is following the same path...
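One quick sanity check for the "ARMv8 instructions instead of the SoC crypto block" approach is whether the kernel even advertises the crypto extensions; on ARMv8 devices they show up as feature flags in /proc/cpuinfo (sketch below, assuming the usual flag names aes/pmull/sha1/sha2):
Code:
// Sketch: check whether the ARMv8 Cryptography Extensions are advertised.
// If the aes flag is missing, dm-crypt ends up on a slower software AES path.
import java.io.BufferedReader;
import java.io.FileReader;

public class CryptoFlags {
    public static void main(String[] args) throws Exception {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/cpuinfo"))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("Features")) {
                    System.out.println(line);
                    System.out.println("aes flag present: " + line.contains(" aes"));
                    break; // the flags are normally identical for every listed core
                }
            }
        }
    }
}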
Sadly they don't, it seems like the write-speed decrease is just on the same level as the N6 back then. Let's hope that they include the libs in the kernel by the Marshmallow update.
Why would they use Qualcomm's own crappy crypto engine, if the standard Cortex-A57 is really fast with AES thanks to NEON and possibly additional, newer optimizations/instructions? AFAIK the latter are supported in newer Linux kernels per default, so there's no need to use additional libraries to enable support or the Qualcomm crypto stuff.
But it would be nice, if someone with actual insight and detailed knowledge about this could say a few words for clarification.
I have neither insight nor big knowledge, but I benchmarked the system, and something like a 60% loss in reading speed doesn't feel like an optimized kernel either :/
Qualcomm is a no-go. On the Android platform, only the Exynos 7420 (not sure about the 5xxx series) really gets to use the h/w encryption and decryption engine with no slowdown.
TheEndHK said:
Qualcomm is a no-go. On the Android platform, only the Exynos 7420 (not sure about the 5xxx series) really gets to use the h/w encryption and decryption engine with no slowdown.
That's not only off topic, it's also wrong. The Exynos SoCs don't have a substantially different crypto engine or "better"/"faster" crypto/hashing acceleration via the ARM cores. If anything, the Samsung guys are smart enough to optimize their software so it makes use of the good hardware. This seems to be missing here, but for no obvious reason.
xaps said:
That's not only off topic, it's also wrong. The Exynos SoCs don't have a substantially different crypto engine or "better"/"faster" crypto/hashing acceleration via the ARM cores. If anything, the Samsung guys are smart enough to optimize their software so it makes use of the good hardware. This seems to be missing here, but for no obvious reason.
I agree all ARMv8-A CPUs support hardware AES and SHA. Both the Exynos 7420 and the S810 should have that ability, but it turns out it doesn't work on the Z5C now, which is a fact. I'm sure the S6 got it working, but I'm not sure about other S810 phones, or it might be Qualcomm missing driver support.
TheEndHK said:
Both the Exynos 7420 and the S810 should have that ability, but it turns out it doesn't work on the Z5C now, which is a fact.
Please show us the kernel source code proving that fact.
What you call "fact" is the result of a simple before and after comparison done with a flash memory benchmark app run by one person on one device. To draw the conclusion that the only reason for the shown result is that the Z5(c) can't do HW acceleration of AES or SHA is a bit far-fetched, don't you think?
xaps said:
Please show us the kernel source code proving that fact.
What you call "fact" is the result of a simple before and after comparison done with a flash memory benchmark app run by one person on one device. To draw the conclusion that the only reason for the shown result is that the Z5(c) can't do HW acceleration of AES or SHA is a bit far-fetched, don't you think?
I've got an S6 and it's no slower after encryption/decryption, and we had a thread discussing it on the S6 board.
I don't own a Z5C now because my home, HK, has not yet started to sell it (I came here because I'm considering selling my S6 and Z1C and swapping to a Z5C later), so I can't test it, but according to the OP there is a substantial slowdown.
All ARMv8-A should support hardware AES/SHA, it is not just a cached benchmark result on S6. That's real.
A few things to ponder...
This is confusing. I was always under the impression that decryption (reads) is usually a tad bit faster than encryption (writes). This at least seems true for TrueCrypt benchmarks. But that may be comparing apples and oranges.
A few thoughts...
In some other thread it was mentioned that the Z5C optimizes RAM usage by doing internal on-the-fly compression/decompression to make very efficient use of the RAM. As ciphertext usually is incompressible, could this be a source of the slowdown on flash R/W (either an actual slowdown, or something confusing the measurement of the benchmarking tool)?
These days SSD flash controllers also do transparent compression of data before writing to reduce wear on the flash. If you send a huge ASCII plaintext file into the write queue, the write speed will be ridiculously high; if you send incompressible data like video, the write rate goes way down. This happens at the hardware level, not taking any crypto/decrypto operations on the OS level into account.
Is there maybe a similar function in today's smartphone flash controllers?
Can I ask the OP: in what situations do you notice the slower read rate on the encrypted device? Not so long ago, when spinning rust disks were still the norm in desktop and laptop computers, read rates of 120 MB/s were totally out of reach. What kind of usage do you have on your smartphone where you actually notice the lag? Is it when loading huge games or PDF files or something similar?
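A quick way to test the compression idea from a few posts up is to write the same amount of easily compressible data (zeros) and of random data (which looks like ciphertext) and compare the rates. Rough sketch, example paths, same caveats as any ad-hoc benchmark:
Code:
// Sketch: if anything in the write path compresses data transparently, zero-filled
// writes should come out clearly faster than random (incompressible) writes.
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.util.Random;

public class CompressProbe {
    static double writeMBps(File f, byte[] chunk, int chunks) throws Exception {
        long t0 = System.nanoTime();
        try (OutputStream out = new FileOutputStream(f)) {
            for (int i = 0; i < chunks; i++) out.write(chunk);
        }
        double secs = (System.nanoTime() - t0) / 1e9;
        f.delete();
        return chunks * (chunk.length / 1e6) / secs;
    }

    public static void main(String[] args) throws Exception {
        byte[] zeros = new byte[1024 * 1024];   // highly compressible
        byte[] random = new byte[1024 * 1024];
        new Random().nextBytes(random);         // incompressible, much like ciphertext
        System.out.printf("zeros : %.1f MB/s%n", writeMBps(new File("/sdcard/z.tmp"), zeros, 128));
        System.out.printf("random: %.1f MB/s%n", writeMBps(new File("/sdcard/r.tmp"), random, 128));
    }
}
If the two come out about the same, the RAM-compression theory probably isn't what's eating the read speed.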
