Need help in UV vdd

Can anyone help me out with undervolting the VDD? Here's my code for the VDD.
Whenever I try to lower the voltage by subtracting a small integer, e.g. 4, the VDD values don't apply and dmesg shows "VDD set failed". Raising the voltage works, though.
I think there is some minimum voltage limit, which is why it won't go any lower. Is there any way I can lower that limit, or something I can do so users can reduce the voltage? Any workarounds for that?
EDIT: Mission successful.
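Since the OP's script isn't shown, here is an illustrative sketch of the kind of vdd adjustment being discussed, assuming a kernel that exposes a vdd_levels interface; the path, the units and the accepted format are all kernel-specific, and this is not the OP's code:
Code:
# Illustrative only -- assumes the kernel provides a vdd_levels node
VDD=/sys/devices/system/cpu/cpu0/cpufreq/vdd_levels

# Show the current frequency/voltage table
cat "$VDD"

# Lower one step: "<freq_in_kHz> <voltage>" (the driver rejects values
# below its hard-coded minimum, which is what "VDD set failed" suggests)
echo "1401600 950" > "$VDD"

# Some kernels instead accept a global offset applied to every step
echo "-25" > "$VDD"

# Check whether the driver accepted the write
dmesg | grep -i vdd | tail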

Related

[Q] disable cpu thermal throttling

I have a simple question:
How do I disable the thermal throttling policy?
I have some kind of hardware (hw sf) failure on my phone, and even with the CPU temperature below 30°C my phone cuts off 50% of the CPU frequency.
No matter what, it runs at 501 MHz, even if I set the performance governor and even with cpufreq max and min set to 1000 MHz.
It can still scale down below 501 MHz, though; it can run all the frequencies I give it.
So this policy must somehow be permanently active, even though in the kernel it is set to activate only when the CPU temperature is above 90°C...
How can I turn it off?
Would a module be useful? Do I need a custom kernel? Can I just disable it with some commands, maybe in init?
Does none of the devs in this section know what I'm talking about, or does no one read this section?
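For anyone landing here later: a hedged sketch of what is commonly tried on Qualcomm devices, assuming the throttling comes from the msm_thermal module and/or the thermal-engine userspace daemon. Node names vary by kernel, some builds ignore these writes, and removing thermal protection can damage hardware:
Code:
# Stop the userspace throttling daemon, if the device has one
stop thermal-engine

# Disable the in-kernel msm_thermal mitigation, where the kernel exposes it
echo N > /sys/module/msm_thermal/parameters/enabled
echo 0 > /sys/module/msm_thermal/core_control/enabled

# Verify whether the frequency cap is actually gone
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq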

[Q] setCPU Voltage Table

So after installing kholk's OC kernel, setCPU doesn't show me the correct voltages. For example, 1000 MHz is shown as 7 mV and 1100 MHz as 101 mV. I know this isn't right; it doesn't apply these voltages until you click Apply, and then it freezes. It will only let me subtract voltages, not add anything.
It also gives me the option of 0 MHz as a clock speed, although setCPU converts it to 216 MHz.
Do I need to change a default file somewhere to fix this?
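One hedged way to narrow this down is to compare what the kernel itself reports with what setCPU displays. The vdd_levels path below is a common convention for OC kernels of that era, but it is kernel-specific:
Code:
# Frequencies the kernel actually offers
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies

# Voltage table as exposed by the kernel (path and format depend on the kernel)
cat /sys/devices/system/cpu/cpu0/cpufreq/vdd_levels
If the kernel-side table looks sane while the app shows 7 mV / 101 mV, the problem is most likely in how setCPU parses this kernel's interface rather than in any default file.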

Tuning the Linux (Android) Virtual Machine - Part 1, CPU tuning

I believe there is a great deal of confusion, or a lack of technical explanation, in the community when we discuss the hows, whys and whats behind the things we choose to modify in the Android OS in an attempt to squeeze better performance from a very complex operating system. Many of the things I see presented to users focus on very ineffective and ancient mentalities, pertinent to older versions of the operating system. Much of this is attempted through modifying build properties, and that’s usually about where it stops. My objective here is to describe some of the ins and outs of tuning a mobile operating system such as Android, and to look at it in a different light - not the skin you lay on top of it, but as advanced hardware and software, with many adjustable knobs you can turn for a desired result.
The key players here are, almost without fail, just a couple of things:
Debloating – which, I suppose, is an effective way to reduce the operating system’s memory footprint. But I would then ask, why not also improve the operating system’s memory management functions? (arguably more important than merely removing unwanted apps)
“Build prop tweaks” – the famous build.prop is a property file through which you can apply very effective changes like the ones presented in my post_boot file (the only difference being when they are executed and how they are written out), but most of the “tuning” done here focuses on principles that were only once true and are therefore mostly irrelevant in today’s versions of Android. There are many things within the build.prop that can (and sometimes should) be altered to directly impact the performance of the Dalvik/ART runtime. However, this is almost always left untouched.
Every now and then somebody will throw a kernel together with some added schedulers or merged sound drivers, etc., but there is really little to nothing in there that would affect the real-time performance observed by the user.
So, what about the virtual machine? What about the core operating system? – what Android actually is – Linux.
You’d be surprised how effective some simple modifications to just 1 shell file on your system can be at improving your experience as a user.
So, how do we make our devices feel like they have been reborn with just 1 file and not an entire ROM? That stock ROM you are on will suddenly feel not so stock.
My aim here is to talk about, at a medium to in-depth level, what exactly went into the file I added to the development section that turned a performance corner for your device. For now, let’s just talk about the CPU.
Let’s look at a snippet of code from the portion of the file where most of the CPU tuning is achieved; we’ll use the big cluster (cpu2) as the example. Bear in mind, the same methodology was used for cluster 1 as well – your smaller cores were treated the same way, in principle:
Code:
# configure governor settings for big cluster
echo 1 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/use_sched_load
echo 1 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/use_migration_notif
echo "10000 1401600:30000 2073600:60000" > /sys/devices/system/cpu/cpu2/cpufreq/interactive/above_hispeed_delay
echo 20 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/go_hispeed_load
echo 10000 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/timer_rate
echo 20000 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/timer_slack
echo 806400 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/hispeed_freq
echo 1 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/io_is_busy
echo "40 1190400:60 1478400:80 1824000:95" > /sys/devices/system/cpu/cpu2/cpufreq/interactive/target_loads
echo 30000 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/min_sample_time
echo 0 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/max_freq_hysteresis
echo 307200 > /sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq
echo 0 > /sys/devices/system/cpu/cpu2/cpufreq/interactive/ignore_hispeed_on_notif
So what did I do here? Well, let’s start by explaining the governor, and then its modules.
Interactive: in short, the interactive governor works on timers and load (or tasks). When its timers tick and the CPU is polled, the governor decides how to respond to the observed load, taking its tunables into consideration. Because of this, interactive can be extremely precise when handling CPU load. If these tunables are dialed in properly, according to usage and hardware capability, what you achieve is maximum throughput for an operation, at a nominal frequency for that specific task, with an optimal delay. Most of the activity seen in an Android ecosystem is short, bursty usage, with occasional sustained, load-intensive operations (gaming, web browsing, HD video playback and recording, etc.). Because of this unique user interaction with the device, the default settings for interactive are usually a little too aggressive for a nominal experience – nominal meaning not “over-performing” (or the opposite) for the task at hand, wasting CPU capability or overusing it. The interactive tunables:
use_sched_load: when this value is set to 1, the timer windows (polling intervals) for all cores are synchronized. The default is 0. I set this to 1 because it allows evaluation of current system-wide load, rather than core specific. A small, but very important change for the GTS (global task scheduler).
above_hispeed_delay: when the CPU is at or above hispeed_freq, wait this long before increasing frequency further. The values called out here always take priority, no matter how busy the system is. Notice how I tuned this setting to allow an unbiased ramp up until 1.40 GHz, which then requires a 30 ms (30000 µs) delay before allowing a further increase, and 60 ms above 2.07 GHz. I did this to handle the short bursts quickly and efficiently as needed, without impacting target_loads (the tunable, in this way, gives the governor free rein to roam according to load, but forces it to wait if it wants to use the faster but power-costly speeds up top). However, sustained load (like gaming or loading web pages) will likely tax the CPU for intervals longer than that delay. The default setting here was 20000. You can express this as a single value, or as a list of CPU speed:delay pairs, which is what I did from the 1.40 GHz range upward. I usually design this around differences in voltage usage per frequency when my objective is more to save power while sacrificing a slight amount of performance.
go_hispeed_load: when the CPU is polled (at the timer_rate interval) and the overall load is determined to be above this value (which represents a percentage), immediately increase the CPU speed to the one set in hispeed_freq. The default value here was 99. I changed it to 20. You’ll understand why in a second.
timer_rate: the interval at which CPU load is sampled across the system (keep in mind use_sched_load). The default was 20000. I changed it to 10000 to check more often and to reduce the stacked-up delay the timer rate causes with other tunables such as above_hispeed_delay, as the timer rate is added on top of that value (meaning if you set the timer rate to 10000 and above_hispeed_delay to 50000, your total delay above hispeed_freq is 60000).
hispeed_freq: the counterpart to go_hispeed_load; immediately jump to this frequency when that load is reached. The default here, in Linux, is whatever the max frequency is for the core, so the CPU would be tapped out whenever load hit 99%. I usually set this to a lower speed, complement it with a smaller go_hispeed_load value, and let it adjust dynamically all the way to max frequency. The reason I do this is to respond appropriately to the tiny bits of usage here and there, which minimizes the probability that the CPU will start overstepping. There are a lot of small tasks constantly running, which can and should be handled by lower frequencies. The trick with this approach is to stay slightly ahead of the activity, which increases efficiency while removing observed latency as much as possible. There is little to no power cost in doing this. This principle of approach (on a broad scale) is how to use interactive to your advantage. I remove its subjective behavior by telling it exactly where to be for a set amount of time based on activity alone. There are no other variables: “when CPU load is xxxx, operate at these parameters,” or “when CPU speed is xxxx, operate at these parameters.”
io_is_busy: when this value is set to 1, the interactive governor evaluates IO activity, and attempts to calculate it as expected CPU load. The default value is 0. I always set this to 1, to allow the system to get a more accurate representation of anticipated CPU usage. Again, that “staying ahead of the curve” idea is stressed here in this simple but effective change.
target_loads: a general, objective tunable. The default is 90, which tells the governor to try to keep CPU load below that value by increasing frequency until load drops under 90%. It can also be written as a dynamic expression, which is what I did. In short, mine says “do not increase CPU speed above 1.19 GHz unless CPU load is over 60%,” and so on.
min_sample_time: an interval that tells the CPU to “wait this long” before scaling back down when you are not at idle. This is to make sure the CPU doesn’t scale down too quickly, only to have to spin right back up again for the same task. The default here was 80000 (80 ms), which is too aggressive IMO: your processor, stock, would hang for 80 ms at each step on its way down. 30 ms is plenty of time for consistent high load, and just right for short, bursty bits of activity. The trick here is balancing response, effectiveness and acceptable power drain, with consideration for nominal throughput.
max_freq_hysteresis: a ramp-down delay enforced only when the core’s maximum scaling frequency is hit, specified in µs (microseconds, e.g. 20000). It tells the CPU that when the core hits its max frequency, it should stay there for this long before ramping back down. The parameter is really used as an assumption, in the sense that “because the core was pushed to max frequency, there is probably more heavy lifting coming, so we’ll stay here to make sure there isn’t more to do.” In the snippet above it is set to 0, i.e. disabled.
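One thing worth doing after running a tuning script like the one above is to read the nodes back, since some kernels silently clamp or reject values. A quick check from a root shell:
Code:
# Print every interactive tunable for the big cluster in "file:value" form
grep . /sys/devices/system/cpu/cpu2/cpufreq/interactive/*

# Or spot-check a single tunable
cat /sys/devices/system/cpu/cpu2/cpufreq/interactive/above_hispeed_delay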
So, you can see how we are starting to address the “activity vs. response” computing conundrum a little more precisely. Rather than throw arbitrary numbers out there, I specifically pair frequency windows with percentages of system-wide usage or activity. This is ideal, but it takes careful dialing in, as hardware is always different: some processors are a little more efficient, so lower speeds are fine for a given load compared to another processor. Understanding your hardware’s capability to handle your usage patterns is absolutely critical to getting this part right – the objective is not to overwork or underwork the CPU, but to do just the right amount of work. Turn small knobs here and there, watch how much time your CPU spends at a given speed (see the snippet below), and compare that with the real-time performance characteristics you observe. Maybe there is a little more stuttering in that game you play after the last adjustment? OK, make it slightly more aggressive, or let the processor hang out a bit longer at those high/moderately high speeds.
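To watch how much time the CPU spends at each speed, the cpufreq stats interface is handy, assuming it is compiled into your kernel:
Code:
# Per-frequency residency (frequency in kHz, time in 10 ms units) since boot
cat /sys/devices/system/cpu/cpu2/cpufreq/stats/time_in_state

# Snapshot before and after a test run, then compare the two files
cat /sys/devices/system/cpu/cpu2/cpufreq/stats/time_in_state > /sdcard/before.txt
# ...play the game / run the benchmark...
cat /sys/devices/system/cpu/cpu2/cpufreq/stats/time_in_state > /sdcard/after.txt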
One way I like to measure the effects of these adjustments is to use graphical benchmarks that don’t really push the limits of the hardware, but bring it right to the edge. You simply watch framerates, stuttering, and turn knobs as needed.
That’s about it for this, hope this provided a little bit of clarity for some of you! I’ll do another write up on the vm (virtual machine) adjustments another time.
Thank you for this topic. Any details on the "build.prop tweaks"?
And what's the best app for CPU config?
Please, can you make a GPU tweak tutorial?
H-banGG said:
Thank you for this topic. Any details on the "build.prop tweaks"?
And what's the best app for CPU config?
Kernel Adiutor
And which shell file do I have to edit?
I was tired of badly designed SIP clients eating 100% CPU and keeping the device awake while trying to re-register on a SIP server, so I changed some settings in Synapse on my Note 4 N910F.
When the device is asleep (screen locked), the CPU stays at its minimum frequency (268 MHz).
When the screen is unlocked, the maximum frequency is 2000 MHz (vs 2800 by default).
These settings got me a solid 1 day of uptime (with quite a lot of browsing), sometimes 2 days, with no problems with calls or waking the device (Note 4 Snapdragon).
Even when the SIP client keeps the device awake, it is still manageable (thanks to the minimum CPU frequency).
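For reference, the same kind of cap the poster set in Synapse can be applied by hand from a root shell (values are in kHz and the available steps are device-specific); Synapse's screen-off profile additionally switches the limits automatically, which a one-off command does not:
Code:
# Cap every core's maximum frequency at 2.0 GHz (kHz units)
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  [ -w "$cpu/cpufreq/scaling_max_freq" ] && echo 2000000 > "$cpu/cpufreq/scaling_max_freq"
done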
This seems so cool!!! Thank you for this write-up!! Has anyone run any battery life and performance benchmarks to see if these mods make any difference, good or bad?
Also, which shell file do we modify? Which folder is it in?
Neo3D said:
This seems so cool!!! Thank you for this write-up!! Has anyone run any battery life and performance benchmarks to see if these mods make any difference, good or bad?
Also, which shell file do we modify? Which folder is it in?
@warBeard_actual shared his modified file here: https://forum.xda-developers.com/ax...017-axon-7-msm-8996-cpu-vm-ram-t3557392/page6

VDD Restriction

Hello,
I have a problem playing games (PUBG and others).
My Moto Z automatically underclocks when I start a game; after about 3 minutes the VDD restriction kicks in.
How do I disable it?
I went to /sys/module/msm_thermal/vdd_restriction and wanted to change the value to disable it, but it doesn't work.
I've narrowed down the error now.
When I play a game for a minute, the CPU clock falls to 1244 MHz (big CPU), even though temperature protection is off.
I eventually found that the problem is in the governor (interactive):
I changed hispeed_freq to 2 GHz, but the CPU still falls to 1244 MHz.
Where is the problem?
I also have this issue. Maybe a custom ROM or Oreo doesn't lag?
What is the value set to? If it is set to 1, change it to 0 and save. This may or may not work; it requires a root explorer, and the value will be reset on reboot.
If you are rooted, I would suggest installing a custom kernel and using a kernel tuning app from the store to modify the value safely.
This restriction is a thermal restriction to avoid damage to your battery/screen/etc. If your phone stays above 65°C for a long time it will definitely damage many internal components in the long run (the CPU/GPU/RAM usually handle higher temps, but surrounding components, especially the battery, will deteriorate).
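If you want to try the manual route from a root shell, a rough sketch using the node the OP mentioned (its exact name, and whether it exists at all, depends on the kernel; as noted above, the change does not survive a reboot):
Code:
NODE=/sys/module/msm_thermal/vdd_restriction   # path as described earlier in the thread
[ -e "$NODE" ] && cat "$NODE"                  # 1 usually means the restriction is active
[ -w "$NODE" ] && echo 0 > "$NODE"             # resets on reboot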
You can delete thermal-engine.conf in /system/etc and disable core control, VDD and SoC temp in Kernel Adiutor's thermal panel, then reboot the phone and try. BTW, I recommend you make a backup of these files; in my case there are 3 (griffin, sheridan and sheridan-retcn). And don't forget to enable "Apply on boot" in Kernel Adiutor.
The Moto Z has poor thermal protection; it kicks in at 40°C. Here's how I fixed it:
Delete these files in /system/etc:
thermal-engine.conf
thermal-engine-griffin.conf
thermal-engine-sheridan.conf
thermal-engine-sheridan-retcn.conf
and the problem is solved.
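A sketch of the backup-then-remove step from a root shell, assuming an older device layout where /system can be remounted read-write (file names as listed above; keep the backups so you can restore them later):
Code:
mount -o remount,rw /system
cd /system/etc
for f in thermal-engine.conf thermal-engine-griffin.conf \
         thermal-engine-sheridan.conf thermal-engine-sheridan-retcn.conf; do
  [ -f "$f" ] && cp "$f" "/sdcard/$f.bak" && rm "$f"
done
mount -o remount,ro /system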
What ROM are you on?
RR 5.8.5,
but most ROMs have the same structure.
Besides deleting the thermal files, would it be better to just bump the temperature limit, like here:
https://www.google.co.id/amp/s/foru...w-to-disable-thermal-throttling-t3636574/amp/
I hope someone can modify the files and share them.

[HELP] [KERNEL] Why are there 4 separate sections for schedtune.prefer_idle?

From my understanding, kernels with the schedutil governor have a CPU frequency boosting feature that gives the CPU a push to finish a job faster than it otherwise would.
It does make the phone notably faster and smoother, but if it isn't configured properly the battery takes a hit.
Enabling schedtune.prefer_idle turns off that frequency boost for a running core and forces the workload onto an idle core instead. But there seem to be 4 separate options for enabling schedtune.prefer_idle, and I have no idea what they are or how they will affect the kernel after enabling/disabling them.
Those 4 options are:
• Foreground
• Background
• Real-Time
• Top App
Here are the pictures if you want to take a look.
https://imgur.com/a/sxuBvWR
https://imgur.com/a/8e4nuxj
Can someone tell me what each function does? Thanks in advance
@TrenchFullOfSlime can you help me out one more time if possible?
schedtune.prefer_idle appears to influence only one thing: whether a process will share time on an already-active core or wake up an idle core. The latter uses more power but gives the process more resources. CPU frequencies are determined by the _boost toggles, with schedtune.boost determining how aggressively the frequency is changed (accepted values are 0-100, so I guess it's an abstract scale hiding the actual size of the frequency steps). This apparently works by lying to the governor about how heavy a process is: https://lwn.net/Articles/706374/
https://forum.xda-developers.com/t/...-pixel-xl-and-eas-even-more-smoother.3528807/ suggests not going over 10.
https://forum.xda-developers.com/t/screen-on-time.3923963/page-9 suggests "turning off prefer.idle and enabling stune.boost reduced power use by 15% without affecting responsiveness" (paraphrased). Turning both on could increase responsiveness at the cost of greater power use, turning both off would do the opposite.
As for the different categories, they determine which class of process triggers this behavior. You might prioritize performance for foreground apps, but battery efficiency for background apps. It can also be used to improve boot times: https://source.android.com/devices/tech/perf/boot-times
More information: https://www.fatalerrors.org/a/schedtune-learning-notes.html
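For reference, and not something spelled out in the post above: on most EAS devices these per-category toggles are exposed as schedtune cgroup attributes, commonly mounted under /dev/stune, which is what kernel apps write to behind the scenes. A hedged sketch from a root shell:
Code:
# The four categories typically map to cgroups like these
ls /dev/stune/                                   # background  foreground  rt  top-app ...

# Read the current settings for the top-app group
cat /dev/stune/top-app/schedtune.prefer_idle
cat /dev/stune/top-app/schedtune.boost

# Example: modest boost for the visible app, no idle-core preference for background work
echo 10 > /dev/stune/top-app/schedtune.boost             # the linked thread suggests <= 10
echo 0  > /dev/stune/background/schedtune.prefer_idle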
TrenchFullOfSlime said:
schedtune.prefer_idle appears to influence only one thing: whether a process will share time on an already-active core or wake up an idle core. [...]
I don't know who you are and I don't know what you do, but I feel like you are an angel! You went through all that trouble to find those links just to answer me, and it made me so happy.
If I'm being honest, the way you describe things is incredible; I didn't have to read a sentence twice. You could be a good writer or a teacher if you tried.
Thank you so much.
