I attempted to find an answer to this question but was unsuccessful. I'm not sure if my understanding of the Dalvik max heap size is incorrect or if there is some other aspect of memory management I'm not grasping. Basically, I'm trying to understand how memory is used in Android.
I am looking at a fairly large application that is run in an emulator. I set the max VM application heap size to 24MB in the AVD, then run the application and run procrank on it. It shows the application using about 50MB. How does this correspond to the heap size? Why does the application show 50MB when the max heap size is only 24MB? After exercising the application for a while, I see memory jump to over 100MB. That seems terribly high to me, but I don't know enough to say definitively.
Also, if I run MAT on it, I get totally different numbers. I don't really understand what data MAT is reporting. It shows a total of 3.4MB, with most of it unused. What does this mean?
I've tried to read as much as I can about memory usage, but I think I'm just not quite understanding it. Any explanations or links would be very much appreciated.
Thank you in advance.
pflatlyne said:
Looking back at it, I'm not so sure anymore. I looked up some information on how virtual memory in Windows Mobile works. It seems that whether a program is kept in memory depends on what kind of memory it is stored on. Programs that can be reloaded from flash will be unloaded if needed. Data structures that programs keep will not be, because there is no way to restore them.
It might well be possible to swap some of these things out to the TransFlash card. In fact, Vista does something similar with ReadyBoost. I've actually used my phone's card as the ReadyBoost device without problems. What might be a problem is the speed of the card reader in the phone. Some people say the reader is very slow. I don't know if this is true or not, but as a general rule the card itself is faster than a hard disk, so it's not at all unreasonable. One problem, of course, would be wearing out the flash. ReadyBoost uses a special algorithm to spread the writes across the drive and reduce the number of writes. If you just put the swap on flash, it would wear out rather fast.
I've decided that this might not be as crazy as it sounds. This was originally discussed here: http://forum.xda-developers.com/showthread.php?t=350002&page=15
Consider this: http://blogs.msdn.com/ce_base/archive/2006/10/30/What-is-Virtual-Memory.aspx
Now direct your attention to this part from the above link
<QUOTE>
Windows CE will “demand commit” pages, meaning that it will usually delay committing them until the last possible moment. There are also some cases in which Windows CE will take committed pages out of memory again, returning them to “reserved” status:
* Since DLL and EXE code can easily be re-loaded from the file system, it is often decommitted.
* Memory-mapped file pages also have well defined files to read from, so those are decommitted as well.
* Thread stack can shrink; if the system is very low on memory we’ll scavenge the top pages from a stack if the thread is not currently using them.
* Heaps can shrink; if the system is very low on memory we’ll check whether there are heap pages without data in them that can be decommitted.
However that is where Windows CE stops. Other operating systems have a “page file” in which they will write other pages that don’t have corresponding files, notably:
* Stack
* Heap
* Allocations from VirtualAlloc()
* Memory-mapped files that don’t actually have files underneath them (CreateFileMapping with INVALID_HANDLE_VALUE)
* The writable data of executables (global variables)
Those operating systems have algorithms to decide that these pages have not been used in a while, and will write them to the page file and decommit the RAM. Windows CE does not have a page file. We’ll demand-commit to delay committing these pages as long as possible, but once they are committed, the operating system won’t page them out again.
So, as you see, virtual memory in its most minimal definition is just having a mapping between virtual addresses and physical addresses, to lay out allocations in the address space in an efficient manner and avoid wasting physical memory on unallocated address space. In more practical terms, we also use the virtual address space to implement paging, to avoid wasting physical memory on allocated addresses that are not actively being used.
</QUOTE>
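As an aside, the file-backed case the quote describes is easy to poke at from user code: a mapping created over a real file has a well-defined place to reload its pages from, which is exactly why the OS can decommit them cheaply. A minimal sketch using the standard mmap API (the temp file is just a stand-in for any real file):

```python
import mmap
import os
import tempfile

# Create a real file so the mapping is file-backed: the OS is free to
# drop these pages under memory pressure and re-read them from the file
# later, which is the "decommit" case described in the quote above.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"x" * mmap.PAGESIZE)
    with mmap.mmap(fd, mmap.PAGESIZE) as m:
        m[:5] = b"hello"   # touching the page commits it
        data = m[:5]       # reads fault the page back in as needed
finally:
    os.close(fd)
    os.remove(path)

print(data)
```

Anonymous (RAM-backed) allocations have no such file behind them, which is why a pageless OS like Windows CE can never evict them once committed.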
Here is another interesting article.
http://blogs.msdn.com/ce_base/archive/2008/01/19/Paging-and-the-Windows-CE-Paging-Pool.aspx
Consider:
<QUOTE>
I’d like to explain a little more about memory management in Windows CE. I already explained a bit about paging in Windows CE when I discussed virtual memory. In short, the OS will delay committing memory as long as possible by only allocating pages on first access (known as demand paging). And when memory is running low, the OS will page data out of memory if it corresponds to a file – a DLL or EXE, or a file-backed memory-mapped file – because the OS can always page the data from the file back into memory later. (Win32 allows you to create “memory-mapped files” which do or do not correspond to files on disk – I call these file-backed and RAM-backed memory-mapped files, respectively.) Windows CE does not use a page file, which means that non file-backed data such as heap and thread stacks is never paged out to disk. So for the discussion of paging in this blog post I’m really talking only about memory that is used by executables and by file-backed memory-mapped files.
</QUOTE>
In short, people are comparing TransFlash speed to RAM speed, realizing there is a vast difference, and deciding it won't work. The fact is, the OS is already using virtual memory. It is just not using the TransFlash, and it is not paging certain structures. It still may not be possible, but I think it's going to be more a matter of getting the OS to handle it than a speed issue. The quick, easy, and vastly oversimplified requirements would be some software on the phone that would create a paging file, manage that file so as not to wear out the flash too fast, and do all the back-end work necessary to swap pages in and out. What is not clear is how much we can use machinery the OS already has.
I'm now going to go into the realm of unsupported speculation. Could we perhaps have a program that manages the paging file (call it the page file manager)? Its job would be to create a virtual file system inside the page file. Then we could mark the pages that are ordinarily not pageable (through some magic algorithm that knows what we REALLY don't want paged, and what we can get away with paging to the file) as pageable. I imagine we set them as "file backed" and allocate a virtual file inside the paging file (our page file manager would manage this for us). The machinery of the OS might then handle all the paging for us. Of course, the fact that these structures are writable would be an issue. Writes to a page would cause an exception, and the modified page would be recorded in the virtual file system. Like I said, however, this is very early speculation, not even at the level of a plan. In short, the idea is to take data structures, make a file for them somewhere, and then make the OS treat them like any other file-backed memory page. It sounds almost plausible, doesn't it?
Still more speculative rambling. I imagine the page file manager would manage the list of pages in the page file that are allocated to files and the free ones that can be allocated. It would also manage the order in which they are used, spreading the usage around as much as possible to extend the life of the flash card.
It would go something like this: a data structure is to be paged. A virtual file is created in the paging file, sized to the data structure being paged. The structure would be copied to the file and the pages decommitted. A read of the virtual memory pages would generate an exception and ultimately cause the structure to be reloaded. A write would generate an exception and cause the file to be updated. The page file manager would handle this: it would allocate the next available block to the page, then update the file allocation table to point the file at the new block and release the original. Of course, this all really depends on the internal structure of Windows Mobile.
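The block-recycling policy described above can be sketched as a simple rotating free list: every write goes to the least-recently-freed block, so writes naturally spread across the whole flash region. Everything below (class name, block count, page ids) is hypothetical, purely to illustrate the allocation policy:

```python
from collections import deque

class PageFileManager:
    """Toy allocator that spreads writes across a fixed pool of blocks.

    Free blocks live in a FIFO queue, so a freshly released block goes
    to the back and is reused last -- a crude form of wear leveling.
    """

    def __init__(self, num_blocks):
        self.free = deque(range(num_blocks))  # block indices, oldest first
        self.table = {}  # virtual page id -> physical block index

    def write_page(self, page_id):
        block = self.free.popleft()   # least-recently-freed block
        old = self.table.get(page_id)
        self.table[page_id] = block   # update the "allocation table"
        if old is not None:
            self.free.append(old)     # recycle the old block last
        return block

pfm = PageFileManager(4)
# Rewriting the same page lands on a different block each time:
blocks = [pfm.write_page("heap-page-0") for _ in range(4)]
print(blocks)  # each of the 4 blocks used once before any is reused
```

A real implementation would also have to persist the table and handle the read/reload path, but the rotation alone is what keeps any single flash block from being hammered.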
I am having trouble understanding whether and how best to use CompCache and/or swap.
I see that the default config of UD now has both CompCache and swap on, but I see no benefit. The memory never seems to be used (and it also feels slower).
Even if I use 64MB of CompCache with swappiness at 80, the stats never show more than about 32MB in use. Meanwhile, Android still goes ahead and closes programs: things don't keep running if I go to another program, and have to be restarted when I switch back.
So why does the system not use all the 'RAM' available to it? Should swappiness be increased even more?
I wonder if your minfree settings may be keeping your compcache swap from being fully utilized. What does the output of 'cat /proc/swaps' in a terminal look like? What are your minfree settings?
I don't know if I understand the compcache thing fully, and I'm no Android expert....
From what I've read on the net, compcache and swap only swap kernel memory, so it doesn't mean every bit of compcache will be used, since the kernel only uses a limited amount of memory. If a certain program uses a lot of memory outside the kernel, other programs will still be closed to make room for it. In short, compcache is just an alternative mitigation; it's never a real solution for a lack of RAM.
For me, I actually experimented with setting 192MB of compcache and a swappiness of 100. It seems to work well for me, and the most I've seen used is only about 30MB.
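For checking how much of the compcache device is actually in use, /proc/swaps is enough, and the parsing is trivial. A minimal sketch (the sample text below is made up for illustration; on a phone you would feed in the contents of /proc/swaps itself):

```python
def parse_swaps(text):
    """Parse /proc/swaps-style output into (device, size_kb, used_kb)."""
    rows = []
    for line in text.strip().splitlines()[1:]:  # skip the header line
        fields = line.split()
        # Columns: Filename Type Size Used Priority (sizes in KB)
        rows.append((fields[0], int(fields[2]), int(fields[3])))
    return rows

# Hypothetical sample; real input would be open("/proc/swaps").read()
sample = """Filename   Type       Size   Used   Priority
/dev/block/ramzswap0   partition  65528  31772  0
"""
for dev, size, used in parse_swaps(sample):
    print(f"{dev}: {used} of {size} KB used")
```

If Used plateaus well below Size while apps are still being killed, that points at the lowmemorykiller thresholds rather than a lack of swap space.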
Thanks guys, I'll look at both things - minfree and setting swappiness to 100 - and report back!
As to minfree, I saw other threads that seemed to point in this direction as well, but maybe this is something different.
The other threads were saying that the system/kernel does not get notified of the "additional free RAM" offered by CompCache, so it would still kill programs even while there is CompCache (but not much real free RAM) available...?
[EDIT] This may get us to some answers, or part of them; reading it now myself...
Hi everyone. I've been looking around for a while now and I can't seem to find the exact explanation I am looking for.
I would like to know what the following are used for, ideally in simple terms:
a higher value means this,
a lower value means that.
What I understand of them so far:
Min Free KBytes (obvious, I suppose: the amount kept free before killing starts?)
Dirty Ratio (controls how long the kernel delays writing dirty data; why is this important? so I can control how long it takes to flush memory?)
Dirty Background Ratio
VFS Cache Pressure (a smaller value means a bigger cache, which in the end could bite me in the ass?)
Oom Allocating Task (instead of scanning for what to kill, it kills whatever task made the allocation that exceeded memory?)
All I could find on Google is people and their preferred settings, which could be amazing, but as long as I don't SEE a difference (my phone runs smooth to begin with),
it's rather hard to find out what each one does.
Thanks in advance.
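For what it's worth, all of the knobs asked about live under /proc/sys/vm, so you can at least read their current values before experimenting. A minimal sketch (the knob names are the standard Linux sysctl file names; values come back as strings exactly as the kernel reports them):

```python
import os

# Sysctl names for the knobs asked about above; all under /proc/sys/vm.
KNOBS = [
    "min_free_kbytes",
    "dirty_ratio",
    "dirty_background_ratio",
    "vfs_cache_pressure",
    "oom_kill_allocating_task",
]

def read_vm_knobs(root="/proc/sys/vm"):
    """Return {knob: current value, or None if unreadable}."""
    values = {}
    for name in KNOBS:
        path = os.path.join(root, name)
        try:
            with open(path) as f:
                values[name] = f.read().strip()
        except OSError:  # not on Linux, or the kernel lacks this knob
            values[name] = None
    return values

for knob, value in read_vm_knobs().items():
    print(f"{knob} = {value}")
```

Writing a new value is just echoing a number into the same file as root, which is what the various tweak apps do under the hood.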
In case it matters, I am using an X10i with the feras ng rom and doomskernel V6 UB.
bump.
As I'm a new member, I cannot ask geowolf1000 a question in the malata_t8_smba_9701_cm9_3g thread, so I hope I can get a reply here.
First, I would like to thank geowolf1000 very much for his great work.
After I flashed my T8 with malata_t8_smba_9701_cm9_3g, the performance is amazing! Much faster than before!
But I have some questions, as below:
1. Max CPU speed is 750MHz, but it should be 1GHz. How can we set it to 1GHz?
2. When I connect it to a PC, how can I set it up as a mass storage device? When I copy a large file to it, it takes a very long time unless it can be mounted as mass storage.