I would like to use the processing power of my Android devices to supplement my PC's processing throughput. It may be a silly idea, since no USB 3.0 to micro-USB adapters are available for full bandwidth utilisation, but can it be done just to get that little bit of extra power and parallel processing?
I am aware of ARM's lack of x86 compatibility and the resulting inability to handle resource-intensive workloads effectively.
Being able to virtualise Windows applications on the Android OS, running them as remote programs for Windows with RAM and everything else provided solely by the Android device, would be great for USB bandwidth: the app data would already have been processed locally, so RAM dumping (or some other method) would be the only consumer of the slow micro-USB link.
Inspired by HTC's corporate social responsibility app, Power to Give.
(This is really blue-sky thinking, and I'm, relatively speaking, a noob at what I'm asking.)
Related
Does anyone know if the computer spoken about here http://www.kickstarter.com/projects...a-supercomputer-for-everyone?ref=home_popular would be able to compile Android if it were running Linux?
You would need to get all the tools for the build system running on ARM. I'm pretty sure most of it has been done (gcc, python, bash), because there is an Ubuntu build for ARM CPUs. The specs on that thing even say it will come with Ubuntu on it. I'm not sure if the JDK has been ported to ARM yet.
I think you're going to hit a wall with 1 GB of RAM easily. The operating system you're using will probably take up a quarter to a third of it. Go look at the requirements to build projects like Firefox and OpenOffice. Last time I checked, Firefox needed around 3 GB of RAM for the linker. You can get a huge SD card and use it as swap space, but that's going to slow down all those 64 cores. Next up is the disk interface. It has USB 2.0, which is capped at 480 Mbit/s (about 60 MB/s theoretical). It doesn't benefit you at all that your CPU can build a bunch of source files at once if it gets bottlenecked reading those source files from, and writing the object files to, the hard drive.
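To put that cap in numbers, a quick sanity check (480 Mbit/s is the theoretical signalling rate; real-world USB 2.0 bulk throughput is considerably lower due to protocol overhead):

```python
# USB 2.0 "Hi-Speed" signals at 480 megabits per second.
# Divide by 8 bits per byte for the theoretical ceiling in MB/s.
usb2_mbit_per_s = 480
theoretical_mb_per_s = usb2_mbit_per_s / 8
print(theoretical_mb_per_s)  # 60.0
```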
I'd say you probably will be able to get it to build Android, but it won't be lightning fast, or even remarkably fast. By the time you buy that thing for $99, plus a keyboard, mouse, USB HDD, SD card, HDMI monitor, and whatever else you need to actually use it, you could have bought a "traditional" computer with SATA and more than 1 GB of RAM.
noneabove said:
Does anyone know if the computer spoken about here http://www.kickstarter.com/projects...a-supercomputer-for-everyone?ref=home_popular would be able to compile Android if it were running Linux?
No, it will not.
Compiling isn't a task well suited to such a parallel computer. Compiling is mostly I/O-intensive, not CPU-intensive, so you would not gain anything here even if you could distribute the compile across multiple cores, which is itself not a trivial task once we are talking about more than a handful of cores.
Also, you don't need a project like this to run a parallel supercomputer. You can run parallel workloads on modern graphics cards today. For example, get an NVIDIA GPU and start using CUDA, and you'll get the idea of what it's all about.
Parallel supercomputing is better suited to specific CPU-intensive tasks such as FFTs, flow analysis, brute-forcing crypto, neural nets and the like, where you have a relatively limited amount of data compared to the amount of CPU time needed.
As has been said, there is much more return (financial and performance) and much less work in implementing this with CUDA.
An example of the outrageous performance of a CUDA system:
with password-cracking software, a Core i5 managed about 125,000 operations/s; after enabling the software's CUDA support, it did more than 8 million/s.
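The figures quoted above work out to roughly a 64x speedup; a quick sanity check:

```python
# Rates quoted in the post above (operations per second).
cpu_rate = 125_000    # Core i5, CPU only
gpu_rate = 8_000_000  # same software with CUDA support enabled

speedup = gpu_rate / cpu_rate
print(f"{speedup:.0f}x")  # 64x
```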
Hi all,
instead of buying a new laptop, I'd rather remote-control my Win7 desktop with a new ~10" Android tablet.
I found a number of RDP apps for Android (X2, Splashtop), but what I'm wondering is: what are the hardware requirements for, let's say, lag-free web surfing?
If I understand RDP correctly, it transmits graphical information over Wi-Fi to the client rather than to the server's graphics card.
Hence, my tablet doesn't need much of a CPU, but should have a good graphics chip.
Or is this negligible? Does any cheap tablet (let's say an ARM Cortex-A8 1.00 GHz CPU, 512 MB RAM, no-name IGP) do the job for lag-free surfing and maybe video? Has anyone tried this?
Thanks so much
Lennart
With the Snapdragon 805 and (possibly) the Apple A8X packing 25.6 GB/s of memory bandwidth to avoid starving their powerful GPUs, I was wondering: does the 64-bit K1 have 17 GB/s of bandwidth like the 32-bit one, or have they changed anything here?
Now, Nvidia’s saying that accelerated path rendering gets rid of that, simultaneously conferring certain power-oriented benefits since the CPU isn’t touching the scene.
Perhaps sensitive to Qualcomm’s disclosure that Snapdragon 805 sports a 128-bit memory interface supporting LPDDR3-1600 memory (128 bits divided by eight, multiplied by 1600 MT/s, equals 25.6 GB/s), Nvidia is eager to assure that the 17 GB/s enabled by its 64-bit bus populated with 2133 MT/s memory is still ample. Of course, raw bandwidth is an important specification. However, Nvidia carries over architectural features from Kepler that benefit Tegra beyond its spec sheet. A 128 KB L2 cache is one example, naturally alleviating demands on the DRAM in situations where references to already-used data result in a high hit rate. And because the cache is unified, whatever on-chip unit is experiencing locality can use it. A number of rejection and compression technologies also minimize memory traffic, including on-chip Z-cull, early Z and Z compression, texture compression (including DXT, ETC, and ASTC), and color compression.
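The bandwidth arithmetic in the paragraph above generalizes to a one-line formula; a small sketch (the helper name is mine):

```python
def bandwidth_gb_per_s(bus_width_bits, transfer_rate_mt_per_s):
    """Peak bandwidth: (bus width / 8 bits per byte) * transfers/s, in GB/s."""
    return bus_width_bits / 8 * transfer_rate_mt_per_s / 1000

print(bandwidth_gb_per_s(128, 1600))  # Snapdragon 805, LPDDR3-1600: 25.6
print(bandwidth_gb_per_s(64, 2133))   # Tegra K1, 2133 MT/s: 17.064 ("17 GB/s")
```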
Some of those capabilities even extend beyond 3D workloads into layered user interfaces, where bandwidth savings pave the way for higher-res outputs (and perhaps explain why most of the Tegra 4-based devices we’ve seen today employ lower resolutions). New to Tegra K1 is delta-encoded compression, which uses comparisons between blocks of pixels to reduce the footprint of color data. Nvidia is also able to save bandwidth on UI layers with a lot of transparency: the GPU recognizes clear areas and skips that work completely. We’ll naturally get a better sense of how Tegra’s memory subsystem affects performance when we have hardware to test. For now, Nvidia insists elegant technology is just as effective as brute force.
Tegra K1 additionally inherits the Kepler architecture’s support for heterogeneous computing. Up until now, the latest PowerVR, Mali, and Adreno graphics designs all facilitated some combination of OpenCL and/or Renderscript, isolating Nvidia’s aging mobile architecture as the least flexible. That changes as Nvidia enables CUDA, OpenCL, Renderscript, Rootbeer (for Java), and a number of other compute-oriented languages on its newest SoC.
Hello,
I just got my Z5C yesterday, and so far I'm more than happy. But there is one issue:
I use the AOSP full-disk encryption on the phone, but it seems like the native Qualcomm hardware cryptographic engine doesn't work well. I benchmarked the internal storage before and after; here are the results:
Before: read ~200 MB/s, write ~120 MB/s
After: read ~120 MB/s, write ~120 MB/s
(Benchmarked by A1 SD Bench)
I'm using FDE on my Windows 10 notebook with an eDrive, resulting in about a 5% performance loss. The decrease in read speed on the Z5C is noticeable. What do you think: is there something wrong, or is this normal behaviour?
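For comparison, the benchmark numbers above work out as follows (a quick sketch; the helper name is mine):

```python
def pct_loss(before, after):
    """Percentage drop between a before/after benchmark pair."""
    return (before - after) / before * 100

print(pct_loss(200, 120))  # read:  40.0 (% slower after encryption)
print(pct_loss(120, 120))  # write:  0.0 (unchanged)
```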
Cheers
I don't know if this helps, but it seems that the Nexus 5X and 6P won't use hardware encryption according to this:
DB> Encryption is software accelerated. Specifically the ARMv8 as part of 64-bit support has a number of instructions that provides better performance than the AES hardware options on the SoC.
Source: The Nexus 5X And 6P Have Software-Accelerated Encryption, But The Nexus Team Says It's Better Than Hardware Encryption
So maybe Sony is following the same path...
Sadly they don't; it seems the speed decrease is on the same level as the N6 back then. Let's hope they include the libs in the kernel with the Marshmallow update.
Why would they use Qualcomm's own crappy crypto engine if the standard Cortex-A57 is really fast with AES thanks to NEON and possibly additional, newer optimizations/instructions? AFAIK the latter are supported in newer Linux kernels by default, so there's no need for additional libraries to enable support, or for the Qualcomm crypto stuff.
But it would be nice if someone with actual insight and detailed knowledge about this could say a few words of clarification.
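One way to check from userspace whether the kernel reports the ARMv8 crypto extensions is to parse the Features line of /proc/cpuinfo; a minimal sketch (the helper function is my own, but the flag names "aes", "pmull", "sha1", "sha2" are what ARM Linux kernels report):

```python
# Sketch: look for the ARMv8 Crypto Extension flags that ARM Linux
# kernels expose in the "Features" line of /proc/cpuinfo.
CRYPTO_FLAGS = {"aes", "pmull", "sha1", "sha2"}

def crypto_features(cpuinfo_text):
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            found.update(line.split(":", 1)[1].split())
    return found & CRYPTO_FLAGS

# On a device: crypto_features(open("/proc/cpuinfo").read())
sample = "Features\t: fp asimd evtstrm aes pmull sha1 sha2 crc32\n"
print(sorted(crypto_features(sample)))  # ['aes', 'pmull', 'sha1', 'sha2']
```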
I have neither insight nor deep knowledge, but I benchmarked the system, and a 40% loss in read speed doesn't feel like an optimized kernel either :/
Qualcomm is a no-go. On the Android platform, only the Exynos 7420 (not sure about the 5xxx series) really uses the hardware encryption/decryption engine with no slowdown.
TheEndHK said:
Qualcomm is a no-go. On the Android platform, only the Exynos 7420 (not sure about the 5xxx series) really uses the hardware encryption/decryption engine with no slowdown.
That's not only off topic, it's also wrong. The Exynos SoCs don't have a substantially different crypto engine or "better"/"faster" crypto/hashing acceleration via the ARM cores. If anything, the Samsung guys are smart enough to optimize their software so it makes use of the good hardware. This seems to be missing here, but for no obvious reason.
xaps said:
That's not only off topic, it's also wrong. The Exynos SoCs don't have a substantially different crypto engine or "better"/"faster" crypto/hashing acceleration via the ARM cores. If anything, the Samsung guys are smart enough to optimize their software so it makes use of the good hardware. This seems to be missing here, but for no obvious reason.
I agree that all ARMv8-A CPUs support hardware AES and SHA. Both the Exynos 7420 and the S810 should have that ability, but it turns out it doesn't work on the Z5C right now, which is a fact. I'm sure the S6 has it working, but I'm not sure about other S810 phones; it might be Qualcomm missing driver support.
TheEndHK said:
Both the Exynos 7420 and the S810 should have that ability, but it turns out it doesn't work on the Z5C right now, which is a fact.
Please show us the kernel source code proving that fact.
What you call "fact" is the result of a simple before and after comparison done with a flash memory benchmark app run by one person on one device. To draw the conclusion that the only reason for the shown result is that the Z5(c) can't do HW acceleration of AES or SHA is a bit far-fetched, don't you think?
xaps said:
Please show us the kernel source code proving that fact.
What you call "fact" is the result of a simple before and after comparison done with a flash memory benchmark app run by one person on one device. To draw the conclusion that the only reason for the shown result is that the Z5(c) can't do HW acceleration of AES or SHA is a bit far-fetched, don't you think?
I've got an S6, and it's no slower after encryption/decryption; we had a thread discussing this on the S6 board.
I don't own a Z5C right now because it isn't on sale yet where I live in HK (I came here because I'm considering selling my S6 and Z1C and swapping to a Z5C later), so I can't test it, but according to the OP there is a substantial slowdown.
All ARMv8-A chips should support hardware AES/SHA; it is not just a cached benchmark result on the S6. That's real.
A few things to ponder...
This is confusing. I was always under the impression that decryption (reads) is usually a tad faster than encryption (writes). That at least seems true for TrueCrypt benchmarks, but that may be comparing apples and oranges.
A few thoughts...
In some other thread it was mentioned that the Z5C optimizes RAM usage by doing internal on-the-fly compression/decompression to make very efficient use of the RAM. As ciphertext is usually incompressible, could this be a source of the slowdown on flash R/W (either through an actual slowdown, or by confusing the benchmarking tool's measurement)?
These days SSD flash controllers also do transparent compression of data before writing, to reduce wear on the flash. If you send a huge ASCII plaintext file into the write queue, the write speed will be ridiculously high; if you send incompressible data like video, the write rate goes way down. This happens at the hardware level, without taking any crypto/decrypto operations at the OS level into account.
Is there perhaps a similar function in today's smartphone flash controllers?
Can I ask the OP: in what situations do you notice the slower read rate on the encrypted device? Not so long ago, when spinning-rust disks were still the norm in desktop and laptop computers, read rates of 120 MB/s were totally out of reach. What kind of usage do you have on your smartphone where you actually notice the lag? Is it when loading huge games or PDF files or something similar?
I want to know which Android phones can run without a battery at full load.
My requirements are the following:
1. The phone should run without connecting a regulated power supply to simulate the battery
2. The phone should be an octa-core with more than 1.5 GB of RAM
3. If custom ROMs are used, video proof would be preferable
Basically, I am interested in a list of Android phones that could run BOINC / Folding (at full load) but without the battery, which would otherwise swell within about a month
cristipurdel said:
I want to know which Android phones can run without a battery at full load.
My requirements are the following:
1. The phone should run without connecting a regulated power supply to simulate the battery
2. The phone should be an octa-core with more than 1.5 GB of RAM
3. If custom ROMs are used, video proof would be preferable
Basically, I am interested in a list of Android phones that could run BOINC / Folding (at full load) but without the battery, which would otherwise swell within about a month
We have a dedicated thread for this here:
https://forum.xda-developers.com/showthread.php?t=1620179
Please ask your question there.
Thanks for understanding.
Sawdoctor