Wrap 'struct binder_proc' inside 'struct binder_proc_wrap' to add the
alloc->lock equivalent without breaking the KMI. Also, add convenient
APIs to access/modify this new spinlock.
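A minimal sketch of the wrapper approach (reconstructed here; the exact
helper names are illustrative):

  struct binder_proc_wrap {
          struct binder_proc proc;
          spinlock_t lock;
  };

  static inline struct binder_proc_wrap *
  binder_alloc_to_proc_wrap(struct binder_alloc *alloc)
  {
          /* alloc is embedded in binder_proc, which heads the wrapper */
          return container_of(container_of(alloc, struct binder_proc, alloc),
                              struct binder_proc_wrap, proc);
  }

  static inline void binder_alloc_lock(struct binder_alloc *alloc)
  {
          spin_lock(&binder_alloc_to_proc_wrap(alloc)->lock);
  }

  static inline void binder_alloc_unlock(struct binder_alloc *alloc)
  {
          spin_unlock(&binder_alloc_to_proc_wrap(alloc)->lock);
  }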
Without this patch, the following KMI issues show up:
  type 'struct binder_proc' changed
    byte size changed from 616 to 576

  type 'struct binder_alloc' changed
    byte size changed from 152 to 112
    member 'spinlock_t lock' was added
    member 'struct mutex mutex' was removed
Bug: 254650075
Bug: 319778300
Change-Id: Ic31dc39fb82800a3e47be10a7873cd210f7b60be
Signed-off-by: Carlos Llamas <cmllamas@google.com>
[cmllamas: fixed trivial conflicts]
The correct printk format specifier when calculating buffer offsets
should be "%tx" as it is a pointer difference (a.k.a. ptrdiff_t). This
fixes some W=1 build warnings reported by the kernel test robot.
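For instance, an illustrative (not literal) call site:

  /* user_data and buffer are both void __user *, so the subtraction
   * yields a ptrdiff_t and is printed with "%tx":
   */
  pr_debug("buffer offset: %tx\n", buffer->user_data - alloc->buffer);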
Bug: 329799092
Fixes: 63f7ddea2e48 ("ANDROID: binder: fix KMI-break due to address type change")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202401100511.A4BKMwoq-lkp@intel.com/
Change-Id: Iaa87433897b507c47fe8601464445cb6de4b61db
Signed-off-by: Carlos Llamas <cmllamas@google.com>
In commit ("binder: keep vma addresses type as unsigned long") the vma
address type in 'struct binder_alloc' and 'struct binder_buffer' is
changed from 'void __user *' to 'unsigned long'.
This triggers the following KMI issues:
  type 'struct binder_buffer' changed
    member changed from 'void* user_data' to 'unsigned long user_data'
      type changed from 'void*' to 'unsigned long'

  type 'struct binder_alloc' changed
    member changed from 'void* buffer' to 'unsigned long buffer'
      type changed from 'void*' to 'unsigned long'
The offending commit is being backported as part of a larger patchset
from upstream in [1]. Let's fix these issues by doing a partial revert
that restores the original types and casts to an integer type where
necessary.
Note this approach is preferred over dropping the single KMI-breaking
patch from the backport, as this would have created non-trivial merge
conflicts in the subsequent cherry-picks.
Bug: 254650075
Bug: 319778300
Link: https://lore.kernel.org/all/20231201172212.1813387-1-cmllamas@google.com/ [1]
Change-Id: Ief9de717d0f34642f5954ffa2e306075a5b4e02e
Signed-off-by: Carlos Llamas <cmllamas@google.com>
[cmllamas: fixed trivial conflicts]
This reverts commit 637c8e0d372f1dfff53337a5db89f772577828d7.
Also squash commit db91c5d31a ("ANDROID: vendor_hook: rename the the
name of hooks"), which fixes the length of the vendor hook's name.
Rework the error return for a goto, as this code has been refactored
too. Finally, also fix spaces vs tabs.
Change-Id: I22c495eb81237c51c0f9f4d4f9f4f1cf0c8438a8
Signed-off-by: Carlos Llamas <cmllamas@google.com>
This reverts commit eeb899e9f54bef5286fd5044db481ecc01e417b4.
Change-Id: I810727a6872c16ccb484023bfbc587daca8a2515
Signed-off-by: Carlos Llamas <cmllamas@google.com>
The alloc->mutex is a highly contended lock that causes performance
issues on Android devices. When a low-priority task is given this lock
and it sleeps, it becomes difficult for the task to wake up and complete
its work. This delays other tasks that are also waiting on the mutex.
The problem gets worse when there is memory pressure in the system,
because this increases the contention on the alloc->mutex while the
shrinker reclaims binder pages.
Switching to a spinlock helps to keep the waiters running and avoids the
overhead of waking up tasks. This significantly improves the transaction
latency when the problematic scenario occurs.
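The conversion itself is mechanical; in diff form it is essentially:

  -       struct mutex mutex;
  +       spinlock_t lock;

  -       mutex_lock(&alloc->mutex);
  +       spin_lock(&alloc->lock);

  -       mutex_unlock(&alloc->mutex);
  +       spin_unlock(&alloc->lock);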
The performance impact of this patchset was measured by stress-testing
the binder alloc contention. In this test, several clients of different
priorities send thousands of transactions of different sizes to a single
server. In parallel, pages get reclaimed using the shrinker's debugfs.
The test was run on a Pixel 8, a Pixel 6 and a QEMU machine. The
results were similar on all three devices:
after:
| sched | prio | average | max | min |
|--------+------+---------+-----------+---------|
| fifo | 99 | 0.135ms | 1.197ms | 0.022ms |
| fifo | 01 | 0.136ms | 5.232ms | 0.018ms |
| other | -20 | 0.180ms | 7.403ms | 0.019ms |
| other | 19 | 0.241ms | 58.094ms | 0.018ms |
before:
| sched | prio | average | max | min |
|--------+------+---------+-----------+---------|
| fifo | 99 | 0.350ms | 248.730ms | 0.020ms |
| fifo | 01 | 0.357ms | 248.817ms | 0.024ms |
| other | -20 | 0.399ms | 249.906ms | 0.020ms |
| other | 19 | 0.477ms | 297.756ms | 0.022ms |
The key metrics above are the average and max latencies (wall time).
These improvements should roughly translate to p95-p99 latencies on real
workloads. The response time is up to 200x faster in these scenarios and
there is no penalty in the regular path.
Note that it is only possible to convert this lock after a series of
changes made by previous patches. These mainly include refactoring the
sections that might_sleep() and changing the locking order with the
mmap_lock, amongst others.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-29-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 7710e2cca32e7f3958480e8bd44f50e29d0c2509)
Change-Id: I67121be071d5f072ac0e5eb719c95c0f1dee5eb5
Signed-off-by: Carlos Llamas <cmllamas@google.com>
The locking order currently requires the alloc->mutex to be acquired
first followed by the mmap lock. However, the alloc->mutex is converted
into a spinlock in subsequent commits so the order needs to be reversed
to avoid nesting the sleeping mmap lock under the spinlock.
The shrinker's callback binder_alloc_free_page() is the only place that
needs to be reordered since other functions have been refactored and no
longer nest these locks.
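A sketch of the reordered callback, with details elided:

  /* take the sleeping mmap lock first, so it is never nested
   * inside what will later become a spinlock:
   */
  mmap_read_lock(mm);
  mutex_lock(&alloc->mutex);
  /* ... look up the page, zap the vma mapping, free the page ... */
  mutex_unlock(&alloc->mutex);
  mmap_read_unlock(mm);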
Some minor cosmetic changes are also included in this patch.
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-28-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit e50f4e6cc9bfaca655d3b6a3506d27cf2caa1d40)
Change-Id: I7f7501945a477ac5571082a5dd2a7934f484b8ab
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Prefer logging vma offsets instead of addresses or simply drop the debug
log altogether if not useful. Note this covers the instances affected by
the switch to store addresses as unsigned long. However, there are other
sections in the driver that could do the same.
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-27-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 162c79731448a5a052e93af7753df579dfe0bf7a)
Change-Id: I92b7f409e45d9006492d56302e911ccdd8efc950
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Skip the freelist call right away when it is not needed, instead of
continuing the pointless checks. Also, drop the debug logs that we
don't really need.
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-26-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit f07b83a48e944c8a1cc1e9f6703fae5e34df2ba4)
Change-Id: I035bd6cd5c06ec984cd6eb3c3b53e0958c64df4f
Signed-off-by: Carlos Llamas <cmllamas@google.com>
The code in print_binder_buffer() is quite small so it can be collapsed
into its single caller binder_alloc_print_allocated().
No functional change in this patch.
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-25-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 8e905217c4543af9cf1754809846157a7dbbb261)
Change-Id: Ic3e2522b4702e60e09be3d5940f88ec8252ac793
Signed-off-by: Carlos Llamas <cmllamas@google.com>
The code to determine the page range for binder_lru_freelist_del() is
quite obscure. It leverages the buffer_size calculated before doing an
oversized buffer split. This is used to figure out if the last page is
being shared with another active buffer. If so, the page gets trimmed
out of the range as it has been previously removed from the freelist.
This would be equivalent to getting the start page of the next in-use
buffer explicitly. However, the code for this is much larger as we can
see in the binder_free_buf_locked() routine. Instead, let's settle on
documenting the tricky step and using better names for now.
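The documented calculation ends up looking roughly like this (a sketch
following the upstream commit; user_data is an unsigned long upstream):

  /*
   * A clever calculation with buffer_size determines if the last
   * page is shared with an adjacent in-use buffer. In that case,
   * the page has already been removed from the freelist, so we
   * trim our range short.
   */
  next_used_page = (buffer->user_data + buffer_size) & PAGE_MASK;
  curr_last_page = PAGE_ALIGN(buffer->user_data + size);
  binder_lru_freelist_del(alloc, PAGE_ALIGN(buffer->user_data),
                          min(next_used_page, curr_last_page));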
I believe an ideal solution would be to count the binder_page->users to
determine when a page should be added or removed from the freelist.
However, this is a much bigger change than what I'm willing to risk at
this time.
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-24-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 67dcc880780569ec40391cae4d8299adc1e7a44e)
Change-Id: Iec2466605fe7f8aa338c8313f586cdb7519a36e7
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Now that the page allocation step is done separately we should rename
the binder_free_page_range() and binder_allocate_page_range() functions
to provide a more accurate description of what they do. Let's borrow the
freelist concept used in other parts of the kernel for this.
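Assuming this matches the upstream rename, the mapping is:

  binder_allocate_page_range() -> binder_lru_freelist_del()
  binder_free_page_range()     -> binder_lru_freelist_add()

i.e. allocating a range removes its pages from the freelist, and
freeing a range adds them back.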
No functional change here.
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-23-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit ea9cdbf0c7273b55e251b2ed8f85794cfadab5d5)
Change-Id: I0d0dfcc6f72d54209da310be2ad5e30f3d722652
[cmllamas: fixed trivial conflicts due to missing commits e33c267ab7
95a542da5322e]
Signed-off-by: Carlos Llamas <cmllamas@google.com>
The sections in binder_alloc_new_buf_locked() dealing with oversized
buffers are scattered which makes them difficult to read. Instead,
consolidate this code into a single block to improve readability.
No functional change here.
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-22-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit de0e6573125f8ea7a01a9b05a45b0c73116c73b2)
Change-Id: I62c2cec7341e13d9174b4f0839a1345df7cfd808
Signed-off-by: Carlos Llamas <cmllamas@google.com>
The debug information in this statement is already logged earlier in the
same function. We can get rid of this duplicate log.
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-21-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 258ce20ede33c551002705fa1488864fb287752c)
Change-Id: Ie533a55ea10b2af927004f1d0e244b386ba25360
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Split out the insertion of pages to be outside of the alloc->mutex in a
separate binder_install_buffer_pages() routine. Since this is no longer
serialized, we must look at the full range of pages used by the buffers.
The installation is protected with mmap_sem in write mode since multiple
tasks might race to install the same page.
Besides avoiding unnecessary nested locking this helps in preparation of
switching the alloc->mutex into a spinlock_t in subsequent patches.
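A condensed sketch of the new routine (error handling elided):

  static int binder_install_buffer_pages(struct binder_alloc *alloc,
                                         struct binder_buffer *buffer,
                                         size_t size)
  {
          struct binder_lru_page *page;
          unsigned long start_page, final_page, addr;

          /* look at the full page range used by the buffer */
          start_page = buffer->user_data & PAGE_MASK;
          final_page = PAGE_ALIGN(buffer->user_data + size);

          for (addr = start_page; addr < final_page; addr += PAGE_SIZE) {
                  page = &alloc->pages[(addr - alloc->buffer) / PAGE_SIZE];
                  if (page->page_ptr)
                          continue;       /* page already installed */

                  if (binder_install_single_page(alloc, page, addr))
                          return -ENOMEM;
          }

          return 0;
  }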
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-20-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 37ebbb4f73a0d299fa0c7dd043932a2f5fbbb779)
Change-Id: I7b0684310b8824194d7e4a51a1fd67944f8ec06a
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Rather than repeatedly initializing some of the binder_lru_page members
during binder_alloc_new_buf(), perform this initialization just once in
binder_alloc_mmap_handler(), after the pages have been created.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-19-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 68aef12d094e4c96d972790f1620415460a4f3cf)
Change-Id: I3197038683f76a5cb98a79d017d1515429df2d73
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Preallocate new_buffer before acquiring the alloc->mutex and hand it
down to binder_alloc_new_buf_locked(). The new buffer will be used in
the vast majority of requests (measured at 98.2% in field data). The
buffer is discarded otherwise. This change is required in preparation
for transitioning alloc->mutex into a spinlock in subsequent commits.
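A sketch of the preallocation pattern (assuming the surrounding
function shape):

  /* kzalloc() may sleep, so preallocate before taking the lock */
  new_buffer = kzalloc(sizeof(*new_buffer), GFP_KERNEL);
  if (!new_buffer)
          return ERR_PTR(-ENOMEM);

  mutex_lock(&alloc->mutex);
  /* the locked path consumes new_buffer, or it is freed afterwards */
  buffer = binder_alloc_new_buf_locked(alloc, new_buffer, size, is_async);
  mutex_unlock(&alloc->mutex);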
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-18-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit c7ac30fad18231a1637d38aa8a97d6b4788ed8ad)
Change-Id: Ib7c8eb3c53e8383694a118fabc776a6a22783c75
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Instead of looping through the page range twice to first determine if
the mmap lock is required, simply do it per-page as needed. Split out
all this logic into a separate binder_install_single_page() function.
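A condensed sketch of such a helper (assuming the alloc->mm field used
on this branch):

  static int binder_install_single_page(struct binder_alloc *alloc,
                                        struct binder_lru_page *lru_page,
                                        unsigned long addr)
  {
          struct page *page;
          int ret = 0;

          if (!mmget_not_zero(alloc->mm))
                  return -ESRCH;

          /* the write lock serializes racing installers of one page */
          mmap_write_lock(alloc->mm);
          if (!alloc->vma) {
                  ret = -ESRCH;
                  goto out;
          }

          page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
          if (!page) {
                  ret = -ENOMEM;
                  goto out;
          }

          ret = vm_insert_page(alloc->vma, addr, page);
          if (ret) {
                  __free_page(page);
                  goto out;
          }

          lru_page->page_ptr = page;
  out:
          mmap_write_unlock(alloc->mm);
          mmput_async(alloc->mm);
          return ret;
  }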
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-17-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit ea2735ce19c1c6ce0f6011f813a1eea0272c231d)
Change-Id: Ic057e9cfaeb22754f99bdec2a51076cf58c86855
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Move this function up along with binder_alloc_get_page() so that their
prototypes aren't necessary.
No functional change in this patch.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-16-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit cbc174a64b8d0ab542752c167dc1334b52b88624)
Change-Id: I0d3c69c9a26c7415308202c4b7868a36b83d089c
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Move the low async space calculation to debug_low_async_space_locked().
This logic not only fits better here but also offloads some of the many
tasks currently done in binder_alloc_new_buf_locked().
No functional change in this patch.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-15-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit c13500eaabd2343aa4cbb76b54ec624cb0c0ef8d)
Change-Id: I1b396f59f2a5b6640d8664767f2d45a675af7197
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Move the no-space debugging logic into a separate function. Let's also
mark this branch as unlikely in binder_alloc_new_buf_locked() as most
requests will fit without issue.
Also add a few cosmetic changes and suggestions from checkpatch.
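Roughly, the branch becomes (assuming the helper is named
debug_no_space_locked()):

  if (unlikely(!best_fit)) {
          /* most requests fit, keep the debug path out of line */
          debug_no_space_locked(alloc);
          return ERR_PTR(-ENOSPC);
  }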
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-14-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 9409af24e4503d14093b27db9425f7c99e64fef4)
Change-Id: I4ff8ced5728a63815f7d47df9eb9ac85aa0a362d
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Binder attributes the buffer allocation to the current->tgid every time.
There is no need to pass this as a parameter so drop it.
Also add a few touchups to follow the coding guidelines. No functional
changes are introduced in this patch.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-13-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 89f71743bf42217dd4092fda703a8e4f6f4e55ac)
Change-Id: Ib21fdc5afd7eeb4723b08913ba40ded762421b0b
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Extract non-critical sections from binder_alloc_new_buf_locked() that
don't require holding the alloc->mutex. While we are here, consolidate
the checks for size overflow and zero-sized padding into a separate
sanitized_size() helper function.
Also add a few touchups to follow the coding guidelines.
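A sketch of the helper, reconstructed from the description above
(overflow checks plus zero-size padding):

  static inline size_t sanitized_size(size_t data_size,
                                      size_t offsets_size,
                                      size_t extra_buffers_size)
  {
          size_t total, size;

          /* Align to pointer size and check for overflows */
          size = ALIGN(data_size, sizeof(void *)) +
                  ALIGN(offsets_size, sizeof(void *));
          if (size < data_size || size < offsets_size)
                  return 0;
          total = size + ALIGN(extra_buffers_size, sizeof(void *));
          if (total < size || total < extra_buffers_size)
                  return 0;

          /* Pad 0-sized buffers so they get a unique address */
          total = max(total, sizeof(void *));

          return total;
  }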
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-12-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 377e1684db7a1e23261f3c3ebf76523c0554d512)
Change-Id: I8fc18c06563ad2c26536633034fb3e94b0aaf510
Signed-off-by: Carlos Llamas <cmllamas@google.com>
The binder_update_page_range() function performs both allocation and
freeing of binder pages. However, these two operations are unrelated and
have no common logic. In fact, when a free operation is requested, the
allocation logic is skipped entirely. This behavior makes the error path
unnecessarily complex. To improve readability of the code, this patch
splits the allocation and freeing operations into separate functions.
No functional changes are introduced by this patch.
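After the split, each operation gets its own entry point, roughly:

  /* was: binder_update_page_range(alloc, 1, start, end) */
  static int binder_allocate_page_range(struct binder_alloc *alloc,
                                        unsigned long start,
                                        unsigned long end);

  /* was: binder_update_page_range(alloc, 0, start, end) */
  static void binder_free_page_range(struct binder_alloc *alloc,
                                     unsigned long start,
                                     unsigned long end);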
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-11-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit 0d35bf3bf2da8d43fd12fea7699dc936999bf96e)
Change-Id: Iaf64f94564d2017c4633f2421c15b0bdee914738
Signed-off-by: Carlos Llamas <cmllamas@google.com>
The vma addresses in binder are currently stored as void __user *. This
requires casting back and forth between the mm/ API, which uses unsigned
long. Since we also do internal arithmetic on these addresses, we end up
having to cast them _again_ to an integer type.
Let's stop all the unnecessary casting, which kills code readability, and
store the virtual addresses as the native unsigned long from mm/. Note
that this approach is preferred over uintptr_t as Linus explains in [1].
Opportunistically add a few cosmetic touchups.
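In diff form, the essence of the change (illustrative hunks, not
literal):

  -       void __user *user_data;
  +       unsigned long user_data;

  -       index = (page_addr - (uintptr_t)alloc->buffer) / PAGE_SIZE;
  +       index = (page_addr - alloc->buffer) / PAGE_SIZE;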
Link: https://lore.kernel.org/all/CAHk-=wj2OHy-5e+srG1fy+ZU00TmZ1NFp6kFLbVLMXHe7A1d-g@mail.gmail.com/ [1]
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-10-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit df9aabead791d7a3d59938abe288720f5c1367f7)
Change-Id: Ib2fbaf0ad881973eb77957863f079f986fe0d926
Signed-off-by: Carlos Llamas <cmllamas@google.com>
The kernel coding style does not require 'extern' in function prototypes
in .h files, so remove them from drivers/android/binder_alloc.h as they
are not needed.
No functional changes in this patch.
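For example, one representative declaration:

  -extern void binder_alloc_init(struct binder_alloc *alloc);
  +void binder_alloc_init(struct binder_alloc *alloc);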
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-9-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 254650075
(cherry picked from commit da483f8b390546fbe36abd72f58d612a8032e2a8)
Change-Id: I75e4ee9cf08fada7378f448bc5992d125174132f
Signed-off-by: Carlos Llamas <cmllamas@google.com>
This reverts commit 17fff41db8.
The alloc->mutex to spinlock_t patches from [1] are being backported
into this branch. The vendor hooks will be reapplied on top of these
backports in a way that matches the new structure of the code.
Link: https://lore.kernel.org/all/20231201172212.1813387-1-cmllamas@google.com/ [1]
Change-Id: Ic1acdd3401f985614d2d7383bdaabd6d71bb0c44
Signed-off-by: Carlos Llamas <cmllamas@google.com>
This reverts commit 7ce117301e.
The alloc->mutex to spinlock_t patches from [1] are being backported
into this branch. The vendor hooks will be reapplied on top of these
backports in a way that matches the new structure of the code.
Link: https://lore.kernel.org/all/20231201172212.1813387-1-cmllamas@google.com/ [1]
Change-Id: I7f4aaab31b4462a40881c596abdcbef835a32e4a
Signed-off-by: Carlos Llamas <cmllamas@google.com>
This reverts commit db91c5d31a.
The alloc->mutex to spinlock_t patches from [1] are being backported
into this branch. The vendor hooks will be reapplied on top of these
backports in a way that matches the new structure of the code.
Link: https://lore.kernel.org/all/20231201172212.1813387-1-cmllamas@google.com/ [1]
Change-Id: I39dd50bb58a08f39942322ee014dd08ebbd83168
Signed-off-by: Carlos Llamas <cmllamas@google.com>
This patch adds a restricted vendor hook in do_read_fault() for tracking
which file and offsets are faulted.
Bug: 336736235
Change-Id: I425690e58550c4ac44912daa10b5eac0728bfb4e
Signed-off-by: liangjlee <liangjlee@google.com>
Add vendor hooks to reclaim MADV_PAGEOUT memory for an asynchronous
device. It also exports reclaim_pages to reclaim memory.
Bug: 326662423
Change-Id: Ic2516c64a9dbd53173a3bfb19b6cd21636916c27
Signed-off-by: Minchan Kim <minchan@google.com>
With this vendor hook, OEMs can dynamically adjust fault_around_bytes
to balance memory usage and performance.
Bug: 340749845
Change-Id: I429f4302caf44a769696ccec84e9cc13ea8892ea
Signed-off-by: Dezhi Huang <huangdezhi@hihonor.com>
Since upstream commit 617f3ef951 ("locking/rwsem: Remove
reader optimistic spinning"), vendors have seen increased
contention and blocking on rwsems.
There are attempts to actively fix this upstream:
https://lore.kernel.org/lkml/20240406081126.8030-1-bongkyu7.kim@samsung.com/
But in the meantime, provide vendor hooks so that vendors can
implement their own optimistic spin routine. In doing so,
vendors see improvements in cold launch times on important apps.
Bug: 331742151
Change-Id: I7466413de9ee1293e86f73880931235d7a9142ac
Signed-off-by: xieliujie <xieliujie@oppo.com>
[jstultz: Rewrote commit message]
Signed-off-by: John Stultz <jstultz@google.com>
In case of hibernation with compression enabled, 'n' number of pages
will be compressed to 'x' number of pages before being written to the
disk. Keep a note of these compressed block counts so that the
bootloader can directly read 'x' pages and pass them on to the
decompressor. An array will be maintained which holds the counts of
these compressed blocks and is later written to the disk as part of the
hibernation image save process.
The vendor hook '__tracepoint_android_vh_hibernated_do_mem_alloc' does
the required memory allocations; for example, the array that holds the
compressed block counts is dynamically allocated based on the snapshot
image size. This memory is later freed as part of the
PM_POST_HIBERNATION notifier call.
The vendor hook '__tracepoint_android_vh_hibernate_save_cmp_len' saves
the compressed block counts to the array which is later written to the
disk.
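A hypothetical sketch of the bookkeeping these hooks enable (all names
here are illustrative, not the actual vendor implementation):

  /* one entry per compressed block written to disk */
  static u32 *cmp_len_array;
  static unsigned int cmp_blocks;

  static void vendor_hibernate_save_cmp_len(void *unused, size_t cmp_len)
  {
          /* the bootloader reads this table to size its disk reads */
          cmp_len_array[cmp_blocks++] = cmp_len;
  }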
Bug: 335581841
Change-Id: I574b641e2d9f4cd503c7768a66a7be3142c2686b
Signed-off-by: Nikhil V <quic_nprakash@quicinc.com>
This adds `struct dwc3` and `struct kernel_all_info` to the KMI via fake
GKI symbols as we know some partners are using these in their
out-of-tree drivers. This ensures that future changes to these structs
will not break partner builds.
Bug: 332277393
Bug: 236036821
Change-Id: Ifa1ac6b71d58415339a63f16a79c1f713dda789f
Signed-off-by: Will McVicker <willmcvicker@google.com>
To report vendor-specific memory statistics, add a restricted
vendor hook, since normal vendor hooks work only in atomic
context.
Bug: 333482947
Change-Id: I5c32961b30f082a8a4aa78906d2fce1cdf4b0d2b
Signed-off-by: Minchan Kim <minchan@google.com>
This reverts commit 6bad1052c2; it is the
LTS merge that previously had to be reverted due to being merged too
early.
Cc: Todd Kjos <tkjos@google.com>
Change-Id: I31b7d660bd833cf022ac4870f6d01e723fda5182
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Commit 6d98eb95b4 ("binder: avoid potential data leakage when copying
txn") introduced changes to how binder objects are copied. In doing so,
it unintentionally removed an offset alignment check done through calls
to binder_alloc_copy_from_buffer() -> check_buffer().
These calls were replaced in binder_get_object() with copy_from_user(),
so now an explicit offset alignment check is needed here. This avoids
later complications when unwinding the objects gets harder.
It is worth noting this check existed prior to commit 7a67a39320
("binder: add function to copy binder object from buffer"), likely
removed due to redundancy at the time.
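The restored check amounts to something like this in binder_get_object()
(a sketch; the surrounding conditions are approximate):

  if (offset > buffer->data_size || read_size < sizeof(*hdr) ||
      !IS_ALIGNED(offset, sizeof(u32)))   /* restored alignment check */
          return 0;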
Fixes: 6d98eb95b4 ("binder: avoid potential data leakage when copying txn")
Cc: <stable@vger.kernel.org>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Bug: 320661088
Link: https://lore.kernel.org/all/20240330190115.1877819-1-cmllamas@google.com/
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Change-Id: Iaddabaa28de7ba7b7d35dbb639d38ca79dbc5077
Currently, most of the thermal_zones are IRQ capable and they do not
need to be updated while resuming. To improve the system performance
and reduce the resume time, add a vendor function to check if the
thermal_zone is not IRQ capable and needs to be updated.
Bug: 170905417
Bug: 332221925
Test: boot and vendor function worked properly.
Change-Id: I9389985bba29b551a7a20b55e1ed26b6c4da9b3d
Signed-off-by: David Chao <davidchao@google.com>
Signed-off-by: Dylan Chang <dylan.chang@nothing.tech>
Make sure trace_android_vh_binder_has_special_work_ilocked is called in
every case in the Android native logic. The affected call chain is:

  binder_thread_read (non-blocking case) ->
    binder_wait_for_work ->
      if (binder_has_work_ilocked(...)) ->
        false: schedule, true: break

If binder_has_work_ilocked() does not handle
trace_android_vh_binder_has_special_work_ilocked, a VIP thread may get
true back because the proc->todo list is not empty, even though it holds
no VIP work (special work with a special binder_transaction flag).
Fix this by moving trace_android_vh_binder_has_special_work_ilocked from
binder_has_work() to binder_has_work_ilocked().
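A sketch of the fixed placement (the hook's signature here is an
assumption):

  static bool binder_has_work_ilocked(struct binder_thread *thread,
                                      bool do_proc_work)
  {
          bool has_work = thread->process_todo ||
                  thread->looper_need_return ||
                  (do_proc_work &&
                   !binder_worklist_empty_ilocked(&thread->proc->todo));

          /* let the vendor hook veto work that is not special/VIP */
          trace_android_vh_binder_has_special_work_ilocked(thread,
                                          do_proc_work, &has_work);
          return has_work;
  }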
Fixes: 24bb8fc82e60 ("ANDROID: vendor_hooks: add hooks in driver/android/binder.c")
Link: https://android-review.googlesource.com/c/kernel/common/+/2897624
Bug: 318782978
Change-Id: I8ced722c71c82942e626f04dce950e8df580ae95
Signed-off-by: songfeng <songfeng@oppo.com>
Add vendor hook android_vh_sound_check_support_cpu_suspend
to allow ACPU to suspend during USB playback/capture,
if this is supported.
Bug: 329345852
Bug: 192206510
Change-Id: Ia8d4c335db27de5fcefab13cab653fd1ae34f691
Signed-off-by: JJ Lee <leejj@google.com>
(cherry picked from commit e8516fd3af3f81a65eedb751716e886733d8fcf3)
(cherry picked from commit 4cbf19a6f8635cd0227ab790ce570d8da7b1564a)
Add a hook so that the vendor can decide to bypass suspend/resume.
When the bypass is set, skip the original suspend/resume methods.
In mobile devices, a co-processor can be used with USB audio, and the
ACPU may be able to sleep in such a condition to improve power
consumption. We will need a vendor hook to support this.
Bug: 329345852
Bug: 302982919
Signed-off-by: Puma Hsu <pumahsu@google.com>
Change-Id: Ic62a8a1e662bbe3fb0aa17af7491daace0b9f18a
(cherry picked from commit 98085b5dd8afd1ec65500eb091d061a3fee21b84)
(cherry picked from commit 358b59f1bce213fe1d83e09ae2e1fba718682e4a)
Renamed trace_android_vh_binder_detect_low_async_space_locked to
trace_android_vh_binder_detect_low_async_space, because the original
name is too long, which results in a compile error for any .ko that
uses the symbol:

  ERROR: modpost:
  too long symbol "__tracepoint_android_vh_binder_detect_low_async_space_locked"

There are no users of the original hook, so it is safe to rename it.
Bug: 322915513
Change-Id: If45046efe8b2dba2c2ce0b9b41d2a794272f1887
Signed-off-by: chenweitao <chenweitao@oppo.com>
This reverts commit 1dbafe61e3.
Reason for revert: Too early. Needs to wait until 2024-03-27
Change-Id: I769b944bd089aa2278659ec87f7ba4ac4e74ee4a
Signed-off-by: Todd Kjos <tkjos@google.com>
CONFIG_TASK_DELAY_ACCT cannot be enabled since `struct task_struct`
is KMI frozen. Instead, use vendor hooks to allow delay accounting
to be implemented in a vendor module.
Bug: 327566572
Bug: 310129610
Bug: 314931189
Change-Id: If814d7834889fe162aba3dd97e935289127ca3ae
Signed-off-by: Dongyun Liu <dongyun.liu@transsion.com>
(cherry picked from commit bb57557246d39dba8a66df7f43983fe1ec71bff6)
(cherry picked from commit 896cff873452d9a3853c489bb2a173a1e290ca95)
When CPU loading is high, the task may be preempted after restoring the
sched priority in trace_android_vh_binder_free_buf(). This means that
node->has_async_transaction can't be cleared immediately and the work
isn't added to the proc->todo queue as soon as possible.
To fix this we add a new hook trace_android_vh_binder_buffer_release()
to restore the priority after node->has_async_transaction has been
updated and the node->work has been added to the proc->todo queue.
Note: the old trace_android_vh_binder_free_buf() hook is kept to avoid
breaking the KMI but is not strictly needed.
Bug: 327307900
Fixes: 0eb66ec39ca8 ("ANDROID: vendor_hooks: Add hooks for binder")
Change-Id: I8126c79c9c68faa3ce0cd87ce83e2591bd61d5dd
Signed-off-by: Fuchun Liao <lfc@oppo.com>
[cmllamas: fix-up commit log and variable naming]
Signed-off-by: Carlos Llamas <cmllamas@google.com>
path_lookupat() is capable of safely reading unmapped VAs. If an
unmapped VA is read whilst the function is being called, the resulting
page fault will get re-directed to __do_page_fault(), which will call
fixup_exception() to handle the aforementioned unmapped VA read.
Now, for an OS running in a VM, let's say that memory was still mapped
at S1 but lent to another VM (i.e. unmapped at S2 for the given VM).
The reading of an unmapped VA in path_lookupat() still needs to be
handled. For hypervisors that inject an abort leading to a do_sea()
call, call fixup_exception() from do_sea() if
trace_android_vh_try_fixup_sea() indicates that we can do so.
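A sketch of the resulting flow in do_sea() (the hook signature is an
assumption):

  static int do_sea(unsigned long far, unsigned long esr,
                    struct pt_regs *regs)
  {
          bool can_fixup = false;

          trace_android_vh_try_fixup_sea(far, esr, regs, &can_fixup);
          if (can_fixup && fixup_exception(regs))
                  return 0;

          /* ... original SEA handling ... */
  }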
Bug: 320358381
Change-Id: I0aedcd954f08e3011b27524f9a7b038debbb246d
Signed-off-by: Chris Goldsworthy <quic_cgoldswo@quicinc.com>
Currently the core UFS driver does not have a vops to notify when the
device is operational. This commit introduces a hook, which serves to
notify that the device has completed initialization and is ready to
accept I/O.
This is required by the FIPS140-2 [1] self integrity test of inline
encryption engine, which must run whenever the host controller is reset.
The code requires sleeping while waiting for I/O to complete and allocating
some memory dynamically, which requires the vendor hook to be restricted.
[1] https://csrc.nist.gov/publications/detail/fips/140/2/final
Bug: 185809932
Signed-off-by: Konstantin Vyshetsky <vkon@google.com>
(cherry picked from commit e774e4eca69ce8ab60df04b27f524b586ab74f17)
(cherry picked from https://android-review.googlesource.com/q/commit:c0f24579002c3fb0e404f223f8574c7f4fdac200)
Merged-In: I6f476f9c2e2b50574d2898c3f1ef6b648d92df24
Change-Id: I6f476f9c2e2b50574d2898c3f1ef6b648d92df24