[Q] How to debug CPU starvation

Hi everyone
I'm trying to find a way to debug CPU starvation.
Many things can cause CPU starvation, and I know the finalize() function is one of them. We can see the CPU usage at that time, but I think I need to find out how to record the owner class of finalize(). How can I do that?
Thank you.
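If it helps, here is a rough sketch of one way to catch the owning class from a host machine (assuming the device is reachable over adb): when a finalizer runs too long, FinalizerWatchdogDaemon raises a TimeoutException whose stack trace names the class whose finalize() was running, so you can correlate that with a CPU snapshot.
Code:
# snapshot per-process CPU usage at the moment of starvation
adb shell dumpsys cpuinfo
# dump the log and show context around finalizer timeouts;
# the stack trace names the class whose finalize() was running
adb logcat -d | grep -B 2 -A 20 FinalizerWatchdogDaemon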

Related

[Q] Questions on CPU governors

I am just curious about some new settings in the CPU governors, Scary and Savagedzen, because I could not find much information about them. The SetCPU website's governor documentation only goes up to Smartass: http://www.setcpu.com/#7.
http://forum.xda-developers.com/showthread.php?t=843406 <- also only explains up to Smartass.
Use the free CPU Spy app from the Android Market to see how they work.
I downloaded CPU Spy from the Market and am studying how the app works.
I also found a thread that is quite detailed and gives recommendations:
http://forum.xda-developers.com/showthread.php?t=1242323
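For reference, CPU Spy just graphs the kernel's cpufreq statistics; you can read the same data, and see which governors your kernel actually offers, straight from sysfs (standard paths, assuming your kernel exposes them):
Code:
# time spent at each frequency step (units of 10 ms) - what CPU Spy shows
cat /sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state
# governors compiled into the running kernel (e.g. smartass, scary, savagedzen)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors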

What use are kernel logging, debugging, etc.?

Some Kernels have disabled "unnecessary" logging and tracing functions, e.g. Speedmod.
1) What exactly are these logging, debugging and other functions?
2) Why do stock kernels have these functions?
3) Do they really slow the system down?
4) Are these functions only for human analysis or does Android make use of the logged data itself?
1) As far as I know these are tools the kernel uses to put errors/crashes into log files. It's a great way for developers to fix certain issues because users can extract these logs from the device and send them over or upload them in the forums.
2) I don't know why stock kernels have them. Anyway, I imagine that the logs created are useful for service centres / support staff if you have a software issue.
3) I'm using DorimanX kernel and you can disable all loggers. But I don't feel a performance increase nor does battery last significantly longer. As long as the kernel is stable this may be called fine tuning :b
4) I guess the system doesn't touch them. Not completely sure though.
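To make point 1) concrete, these are the logs users typically pull and attach to bug reports (standard commands; /proc/last_kmsg only exists if the kernel supports it):
Code:
# Android's main log buffer
adb logcat -d > logcat.txt
# kernel ring buffer (driver errors, oops messages)
adb shell dmesg > dmesg.txt
# kernel log from just before the last reboot, if the kernel provides it
adb shell cat /proc/last_kmsg > last_kmsg.txt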
Thank you for your reply.
Since you say disabling logging doesn't noticeably save battery, on what basis do developers promote their kernels as more power-saving than the stock kernels? I'm not talking about underclocking or undervolting.
Let's take Speedmod as an example again. It is - of course alongside the brilliant work of other developers - known for its power-saving qualities, but without touching any of the conventional power loads (CPU, display, ...).
It's not all about overclocking and undervolting. Just to name a few examples: developers can alter how and when the CPU scales up - the governor is responsible for that. Some kernels also provide several schedulers, or options to save battery in deep sleep. Take DorimanX as an example: you can activate "Auto WiFi" and set it to 30 seconds, say. If the screen is off for 30 seconds, WiFi will turn off. If you've got a data plan you'll still receive WhatsApp/Facebook messages, but it saves battery because WiFi isn't draining anymore :b
So in general it's about a code-efficient kernel and how you tweak it
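As a concrete illustration of the governor point above (standard cpufreq sysfs paths; the governor you write must be one your kernel actually ships):
Code:
# switch cpu0 to a different governor from a root shell
echo conservative > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# verify it took effect
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor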

[Q] Framework doesn't gracefully handle foreground application that uses too much CPU

Hello,
There is a known behaviour when the device is low on memory (the OOM killer is activated, the least recently used activities are killed and their resources are freed).
But I didn't find any info about how Android handles a foreground application that overloads the system. I mean, a stress app can overload the system so much (CPU: 130%, loadavg 30, 25, 20) that there is no CPU time left for system activities (system_server threads and so on). After a couple of minutes, timeouts start appearing in FinalizerWatchdogDaemon and many other places.
Any ideas and info about this situation is appreciated.
Thanks,
B.
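For anyone wanting to reproduce and observe this, a quick sketch from a host machine (the flags below are for the old toolbox top; newer toybox builds differ):
Code:
# load averages and run-queue size during the overload
adb shell cat /proc/loadavg
# one snapshot of the busiest threads: show threads (-t), top 15 (-m), one iteration (-n)
adb shell top -t -m 15 -n 1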

[KERNEL][ROM][NOUGAT]Despair Kernel/UBERSTOCK

This is the new refined home for DarkRoom Development. If you submit bug reports without a log, you may be prosecuted...or executed.
Disclaimer:
If your device fails to comply with your standards of what you consider functioning, I am not liable. This is provided free of charge and does not come with a warranty. Although, if you provide a log, I can provide some sort of assurance that I will look into your issue.
Links:
Social:
Twitter - http://twitter.com/DespairDev
G+ Community - https://plus.google.com/u/0/communities/117685307734094084120
Downloads:
Google Drive – https://drive.google.com/drive/folders/0Bwcofov-xyI0ZVhQUWJhMm9PMkU
Source:
Github – https://github.com/matthewdalex/
Github – https://github.com/UBERROMS/
Credits:
faux123
franco
Google
flar2
imoseyon
Cl3Kener
neobuddy89
Star Wars
XDA:DevDB Information
[KERNEL][ROM][NOUGAT]Despair Kernel/UBERSTOCK, Kernel for the Nexus 6
Contributors
DespairFactor
Source Code: https://github.com/UBERROMS
Kernel Special Features:
Version Information
Status: Testing
Created 2015-07-07
Last Updated 2017-06-17
Packet Schedulers/Congestion Avoidance Algorithms:
CDG vs. Cubic vs. Westwood:
CDG
CAIA-Delay Gradient (CDG) is a hybrid congestion control algorithm which reacts to both packet loss and inferred queuing delay. It attempts to operate as a delay-based algorithm where possible, but utilises heuristics to detect loss-based TCP cross traffic and will compete effectively as required. CDG is therefore incrementally deployable and suitable for use on shared networks. During delay-based operation, CDG uses a delay-gradient based probabilistic backoff mechanism, and will also try to infer non congestion related packet losses and avoid backing off when they occur. During loss-based operation, CDG essentially reverts to reno-like behaviour. CDG switches to loss-based operation when it detects that a configurable number of consecutive delay-based backoffs have had no measurable effect. It periodically attempts to return to delay-based operation, but will keep switching back to loss-based operation as required.
Cubic
CUBIC is an enhanced version of BIC: it simplifies the BIC window control and improves its TCP-friendliness and RTT-fairness. The window growth function of CUBIC is governed by a cubic function of the elapsed time since the last loss event. Our experience indicates that the cubic function provides good stability and scalability. Furthermore, the real-time nature of the protocol keeps the window growth rate independent of RTT, which keeps the protocol TCP-friendly under both short and long RTT paths.
Westwood
TCP Westwood estimates the available bandwidth by counting and filtering the flow of returning ACKs, and adaptively sets the cwnd and the ssthresh after congestion by taking the estimated bandwidth into account. TCP Westwood is a sender-side-only modification to TCP New Reno that is intended to better handle large bandwidth-delay product paths (large pipes), with potential packet loss due to transmission or other errors (leaky pipes), and with dynamic load (dynamic pipes). TCP Westwood+ is an evolution of TCP Westwood; it was soon discovered that the Westwood bandwidth estimation algorithm did not work well in the presence of reverse traffic due to ACK compression. Westwood+ is friendly towards TCP Reno and fairer than Reno in bandwidth allocation.
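To see which of these algorithms your kernel build actually offers, and to switch between them at runtime, the standard proc interface works from a root shell (the algorithm you pick must be compiled in):
Code:
# algorithms available in this kernel build
cat /proc/sys/net/ipv4/tcp_available_congestion_control
# current default, then switch to westwood
cat /proc/sys/net/ipv4/tcp_congestion_control
echo westwood > /proc/sys/net/ipv4/tcp_congestion_control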
Packet Schedulers:
Why use a non default packet scheduler?
Packet schedulers are the portion of the kernel that queues network data on a specific interface and governs how it is transmitted and received, including buffering. Below I will break down a couple of the packet schedulers included in this kernel.
fq_codel
FQ_CoDel (Fair Queuing Controlled Delay) is a queuing discipline that combines Fair Queuing with the CoDel AQM scheme. FQ_CoDel uses a stochastic model to classify incoming packets into different flows and is used to provide a fair share of the bandwidth to all the flows using the queue. Each such flow is managed by the CoDel queuing discipline. Reordering within a flow is avoided since CoDel internally uses a FIFO queue.
pfifo_fast
The FIFO algorithm forms the basis for the default qdisc on all Linux network interfaces (pfifo_fast). It performs no shaping or rearranging of packets. It simply transmits packets as soon as it can after receiving and queuing them. This is also the qdisc used inside all newly created classes until another qdisc or a class replaces the FIFO.
A real FIFO qdisc must, however, have a size limit (a buffer size) to prevent it from overflowing in case it is unable to dequeue packets as quickly as it receives them. Linux implements two basic FIFO qdiscs, one based on bytes, and one on packets. Regardless of the type of FIFO used, the size of the queue is defined by the parameter limit. For a pfifo the unit is understood to be packets and for a bfifo the unit is understood to be bytes.
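For example, the limit parameter is passed when the qdisc is attached (the values below are arbitrary; packets for pfifo, bytes for bfifo):
Code:
# cap the WiFi queue at 200 packets
tc qdisc add dev wlan0 root pfifo limit 200
# or byte-based instead: cap it at roughly 300 kB
tc qdisc replace dev wlan0 root bfifo limit 300000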
pie
PIE is designed to control delay effectively. First, an average dequeue rate is estimated based on the standing queue. The rate is used to calculate the current delay. Then, on a periodic basis, the delay is used to calculate the drop probability. Finally, on arrival, a packet is dropped (or marked) based on this probability. PIE adjusts the probability based on the trend of the delay, i.e. whether it is going up or down. The delay converges quickly to the specified target value. alpha and beta are statically chosen parameters that control the growth of the drop probability and are determined through control-theoretic approaches: alpha determines how the deviation between the current and target latency changes the probability, while beta exerts additional adjustment depending on the latency trend. The drop probability is used to mark packets in ECN mode; however, as in RED, beyond 10% packets are dropped based on this probability. Byte mode is used to drop packets in proportion to packet size.
fq
A packet scheduler is charged with organizing the flow of packets through the network stack to meet a set of policy objectives. The kernel has quite a few of them, including CBQ for fancy class-based routing, CHOKe for routers, and a couple of variants on the CoDel queue management algorithm. FQ joins this list as a relatively simple scheduler designed to implement fair access across large numbers of flows with local endpoints while keeping buffer sizes down; it also happens to implement TCP pacing.
FQ keeps track of every flow it sees passing through the system. To do so, it calculates an eight-bit hash based on the socket associated with the flow, then uses the result as an index into an array of red-black trees. The data structure is designed, according to Eric, to scale well up to millions of concurrent flows. A number of parameters are associated with each flow, including its current transmission quota and, optionally, the time at which the next packet can be transmitted.
That transmission time is used to implement the TCP pacing support. If a given socket has a pace specified for it, FQ will calculate how far the packets should be spaced in time to conform to that pace. If a flow's next transmission time is in the future, that flow is added to another red-black tree with the transmission time used as the key; that tree, thus, allows the kernel to track delayed flows and quickly find the one whose next packet is due to go out the soonest. A single timer is then used, if needed, to ensure that said packet is transmitted at the right time.
The scheduler maintains two linked lists of active flows, the "new" and "old" lists. When a flow is first encountered, it is placed on the new list. The packet dispatcher services flows on the new list first; once a flow uses up its quota, that flow is moved to the old list. The idea here appears to be to give preferential treatment to new, short-lived connections — a DNS lookup or HTTP "GET" command, for example — and not let those connections be buried underneath larger, longer-lasting flows. Eventually the scheduler works its way through all active flows, sending a quota of data from each; then the process starts over.
There are a number of additional details, of course. There are limits on the amount of data queued for each flow, as well as a limit on the amount of data buffered within the scheduler as a whole; any packet that would exceed one of those limits is dropped. A special "internal" queue exists for high-priority traffic, allowing it to reach the wire more quickly. And so on.
One other detail is garbage collection. One problem with this kind of flow tracking is that nothing tells the scheduler when a particular flow is shut down; indeed, nothing can tell the scheduler for flows without local endpoints or for non-connection-oriented protocols. So the scheduler must figure out on its own when it can stop tracking any given flow. One way to do that would be to drop the flow as soon as there are no packets associated with it, but that would cause some thrashing as the queues empty and refill; it is better to keep flow data around for a little while in anticipation of more traffic. FQ handles this by putting idle flows into a special "detached" state, off the lists of active flows. Whenever a new flow is added, a pass is made over the associated red-black tree to clean out flows that have been detached for a sufficiently long time — three seconds in the current patch.
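A minimal fq setup looks like the examples further down; fq also accepts a maxrate parameter in mainline tc to cap any single flow's pacing rate (whether this kernel's port accepts it is an assumption):
Code:
tc qdisc add dev wlan0 root fq
# optionally cap per-flow pacing at 20 Mbit/s
tc qdisc change dev wlan0 root fq maxrate 20mbit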
cake
The CAKE Principle:
(or, how to have your cake and eat it too)
This is a combination of several shaping, AQM and FQ techniques in one easy-to-use package:
- An overall bandwidth shaper, to move the bottleneck away from dumb CPE equipment and bloated MACs. This operates in deficit mode (as in sch_fq), eliminating the need for any sort of burst parameter (e.g. token bucket depth). Burst support is limited to that necessary to overcome scheduling latency.
- A Diffserv-aware priority queue, giving more priority to certain classes, up to a specified fraction of bandwidth. Above that bandwidth threshold, the priority is reduced to avoid starving other classes.
- Each priority class has a separate Flow Queue system, to isolate traffic flows from each other. This prevents a burst on one flow from increasing the delay to another. Flows are distributed to queues using a set-associative hash function.
- Each queue is actively managed by CoDel. This serves flows fairly, and signals congestion early via ECN (if available) and/or packet drops, to keep latency low. The CoDel parameters are auto-tuned based on the bandwidth setting, as is necessary at low bandwidths.
The configuration parameters are kept deliberately simple for ease of use. Everything has sane defaults; complete generality of configuration is not a goal.
The priority queue operates according to a weighted DRR scheme, combined with a bandwidth tracker that reuses the shaper logic to detect which side of the bandwidth-sharing threshold the class is operating on. This determines whether a priority-based weight (high) or a bandwidth-based weight (low) is used for that class in the current pass.
This qdisc incorporates much of Eric Dumazet's fq_codel code, customised for use as an integrated subordinate.
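A minimal cake invocation matching the description above; the bandwidth and besteffort keywords come from the upstream cake qdisc, and I am assuming this kernel's build accepts them:
Code:
# shape to just under the link rate so the bottleneck queue lives in cake
tc qdisc add dev rmnet_data0 root cake bandwidth 8mbit
# or disable the Diffserv tiers and treat all traffic equally
tc qdisc replace dev rmnet_data0 root cake bandwidth 8mbit besteffort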
How to apply a packet scheduler:
1. Open terminal on your device
2. Use the "su" command to become root
3. Use tc to change the packet scheduler (qdisc) on your device. I have included an example below; the first line is for WiFi and the second for mobile data. In the example, we are setting the qdisc to fq_pie, which combines PIE with per-flow rate shaping from fq.
Code:
tc qdisc add dev wlan0 root fq_pie
tc qdisc add dev rmnet_data0 root fq_pie
4. Confirm your packet scheduler has been applied by using the tc tool again. I have included an example below.
Code:
tc qdisc
To use another packet scheduler after applying a previous one, you will need to either reboot or remove the added qdisc from each interface using the commands I have included below.
Code:
tc qdisc del root dev wlan0
tc qdisc del root dev rmnet_data0
zzmoove Breakdown:
To set a zzmoove profile using a kernel tweaker such as Kernel Adiutor, put the corresponding profile number into the tunable labeled "profile_number" (or write it via sysfs, as shown in the example after this list). I have included a description of the profiles below from @ZaneZam over here: http://forum.xda-developers.com/showpost.php?p=42637787&postcount=3
1 - for Default (set governor defaults)
2 - for Yank Battery -> old untouched setting (a very good battery/performance balanced setting DEV-NOTE: highly recommended!)
3 - for Yank Battery Extreme -> old untouched setting (like yank battery but focus on battery saving)
4 - for ZaneZam Battery -> old untouched setting (a more 'harsh' setting strictly focused on battery saving DEV-NOTE: might give some lags!)
5 - for ZaneZam Battery Plus -> NEW! reworked 'faster' battery setting (DEV-NOTE: recommended too! )
6 - for ZaneZam Optimized -> old untouched setting (balanced setting with no focus in any direction DEV-NOTE: relict from back in the days, even though some people still like it!)
7 - for ZaneZam Moderate -> NEW! setting based on 'zzopt' which has mainly (but not strictly only!) 2 cores online
8 - for ZaneZam Performance -> old untouched setting (all you can get from zzmoove in terms of performance, but still has the fast down-scaling/hotplugging behaviour)
9 - for ZaneZam InZane -> NEW! based on performance with new auto fast scaling active. a new experience!
10 - for ZaneZam Gaming -> NEW! based on performance with the new scaling block enabled to avoid CPU overheating during gameplay
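If you prefer a root shell over a tweaker app, the same tunable can be written directly (the sysfs path below is where zzmoove's tunables normally live; adjust it if your kernel differs):
Code:
# select profile 5 (ZaneZam Battery Plus)
echo 5 > /sys/devices/system/cpu/cpufreq/zzmoove/profile_number
# confirm
cat /sys/devices/system/cpu/cpufreq/zzmoove/profile_number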
Will this work with CM12 nightly builds?
Sent from my Nexus 6 using Tapatalk
violentbydezign said:
Will this work with CM12 nightly builds?
Sent from my Nexus 6 using Tapatalk
Yes sir!
Righteous
Sent from my Nexus 6 using Tapatalk
Well, that was a fast leave of absence lol... I was just doing my daily routine of reading through my XDA threads and found this new revision of your kernel. I think I pissed off my sleeping girlfriend by shouting with excitement, oops (totally worth it though). Glad you came back ripng, er, DespairFactor. Keep up the great work!!
Downloaded, now time to play.
Thanks OP
Sweet man. Thanks for continuing on....
@DespairFactor, would you please add the kernel version to the title too?
Thanks
dany20mh said:
@DespairFactor, would you please add the kernel version to the title too?
Thanks
I will when I am on R2
In for execution......
You're fast my friend hahaha... Thanks for staying with us.
R2 is up, I added a changelog to the OP
Does your kernel still support all cores on all the time with mako hotplug?
Sent from my Nexus 6
rignfool said:
Does your kernel still support all cores on all the time with mako hotplug?
Sent from my Nexus 6
Of course, it's part of mako hotplug
Woot, you're the man, glad to see you back. Time to feed my addiction and get rid of the shakes...
Is there a difference between this r2 and the one from a few days ago?
joeyddr said:
Is there a difference between this r2 and the one from a few days ago?
Compare the feature lists; it's basically a cleaner code base with only the useful stuff added back.
Is force encryption on? I booted the kernel up on a clean flash of Euphoria and it gave me a decryption error? Just juandering.
bmg1001 said:
Is force encryption on? I booted the kernel up on a clean flash of Euphoria and it gave me a decryption error? Just juandering.
Never, I would never force anyone to encrypt their device. I don't encrypt mine, and obviously I would run my own work.

[KERNEL(Nougat)][ROM]Phasma Kernel/UBERSTOCK

This is the new refined home for DarkRoom Development. If you submit bug reports without a log, you may be prosecuted...or executed.
Disclaimer:
If your device fails to comply with your standards of what you consider functioning, I am not liable. This is provided free of charge and does not come with a warranty. Although, if you provide a log, I can provide some sort of assurance that I will look into your issue.
Links:
Social:
Twitter - http://twitter.com/DespairDev
G+ Community - https://plus.google.com/u/0/communities/117685307734094084120
Telegram - https://t.me/darkroomdev
Discord - https://discord.gg/BGTFutW
Downloads:
https://go.hunternott.com/darkroom
Source:
Github – https://github.com/matthewdalex/
Github – https://github.com/UBERROMS/
Credits:
faux123
franco
Google
flar2
imoseyon
Cl3Kener
neobuddy89
Star Wars
XDA:DevDB Information
[KERNEL(Nougat)][ROM]Phasma Kernel/UBERSTOCK, ROM for the LG Nexus 5X
Contributors
DespairFactor, Cl3Kener
Source Code: https://github.com/UBERROMS
ROM OS Version: 6.0.x Marshmallow
ROM Kernel: Linux 3.10.x
Based On: AOSP
Version Information
Status: Testing
Created 2015-11-18
Last Updated 2017-12-28
Ubermallow is coming for the 5X as well; it is compiling now.
Support fauxsound?
added to index
[INDEX] LG NEXUS 5X Resources Compilation Roll-Up
Awesome! Thanks Despair!
Dwayne01 said:
Support fauxsound?
Just added it in R1.3
dbrohrer said:
Awesome! Thanks Despair!
You are welcome!
Thanks, any details on your ROM? AOSP-based?
NisseGurra said:
Thanks, any details on your ROM? AOSP-based?
It's AOSP-based with tons of optimizations
Sent from my Nexus 6P using Tapatalk
DespairFactor said:
It's AOSP-based with tons of optimizations
Sent from my Nexus 6P using Tapatalk
Nice, I'll try it. Any recommendations on gapps?
NisseGurra said:
Nice, I'll try it. Any recommendations on gapps?
Use the purenexus arm64 gapps
So far it feels snappy; notification LEDs are functional, the charging LED is not.
No bugs so far.
@DespairFactor I took a gander at some of your other kernels in your signature. They seem pretty well optimized. The BFS scheduler for the Nexus 6 intrigued me as well. Are some of those features and optimizations built in (or planned to be built into) this kernel or is this simply a loosened up stock kernel that allows users to tweak more settings?
Alcolawl said:
@DespairFactor I took a gander at some of your other kernels in your signature. They seem pretty well optimized. The BFS scheduler for the Nexus 6 intrigued me as well. Are some of those features and optimizations built in (or planned to be built into) this kernel or is this simply a loosened up stock kernel that allows users to tweak more settings?
Check my github
Sent from my Nexus 6P using Tapatalk
Where do I find feature list etc for this ROM?
stackz07 said:
Where do I find feature list etc for this ROM?
On github or on your phone when you flash it...
https://github.com/ubermallow
NisseGurra said:
So far it feels snappy; notification LEDs are functional, the charging LED is not.
No bugs so far.
Can you help? How do I flash it on MDA98E? I'm stuck on boot.
georgiem9 said:
Can you help? How do I flash it on MDA98E? I'm stuck on boot.
Wipe system/data/cache/dalvik and then flash the ROM, GApps and kernel.
Sent from my Nexus 6P using Tapatalk
georgiem9 said:
Can you help? How do I flash it on MDA98E? I'm stuck on boot.
I think it's not working on MDA89E. I flashed MDB08I, then rooted and installed recovery, then flashed UBER + GApps + kernel. It's booting now, thanks.
georgiem9 said:
I think it's not working on MDA89E. I flashed MDB08I, then rooted and installed recovery, then flashed UBER + GApps + kernel. It's booting now, thanks.
I suppose there are different ramdisk offsets for the older build.
