Merge branch 'android14-6.1' into branch 'android14-6.1-lts'
This catches the android14-6.1-lts branch up with the latest changes in
the android14-6.1 branch, including a number of important symbols being
added for tracking. This includes the following commits:

* 2d8a5ddebb ANDROID: Update the ABI symbol list
* ddf142e5a8 ANDROID: netlink: add netlink poll and hooks
* c9b5c232e7 ANDROID: Update the ABI symbol list
* 3c9cb9c06f ANDROID: GKI: Update symbol list for mtk
* 5723833390 ANDROID: mm: lru_cache_disable skips lru cache drainnig
* 0de2f42977 ANDROID: mm: cma: introduce __cma_alloc API
* db9d7ba706 ANDROID: Update the ABI representation
* 6b972d6047 BACKPORT: fscrypt: support crypto data unit size less than filesystem block size
* 72bdb74622 UPSTREAM: netfilter: nf_tables: remove catchall element in GC sync path
* 924116f1b8 ANDROID: GKI: Update oplus symbol list
* 0ad2a3cd4d ANDROID: vendor_hooks: export tracepoint symbol trace_mm_vmscan_kswapd_wake
* 6465e29536 BACKPORT: HID: input: map battery system charging
* cfdfc17a46 ANDROID: fuse-bpf: Ignore readaheads unless they go to the daemon
* 354b1b716c FROMGIT: f2fs: skip adding a discard command if exists
* ccbea4f458 UPSTREAM: f2fs: clean up zones when not successfully unmounted
* 88cccede6d UPSTREAM: f2fs: use finish zone command when closing a zone
* b2d3a555d3 UPSTREAM: f2fs: check zone write pointer points to the end of zone
* c9e29a0073 UPSTREAM: f2fs: close unused open zones while mounting
* e92b866e22 UPSTREAM: f2fs: maintain six open zones for zoned devices
* 088f228370 ANDROID: update symbol for unisoc whitelist
* aa71a02cf3 ANDROID: vendor_hooks: mm: add hook to count the number pages allocated for each slab
* 4326c78f84 ANDROID: Update the ABI symbol list
* eb67f58322 ANDROID: sched: Add trace_android_rvh_set_user_nice_locked
* 855511173d UPSTREAM: ASoC: soc-compress: Fix deadlock in soc_compr_open_fe
* 6cb2109589 BACKPORT: ASoC: add snd_soc_card_mutex_lock/unlock()
* edfef8fdc9 BACKPORT: ASoC: expand snd_soc_dpcm_mutex_lock/unlock()
* 52771d9792 BACKPORT: ASoC: expand snd_soc_dapm_mutex_lock/unlock()

Change-Id: I81dd834d6a7b6a32fae56cdc3ebd6a29f0decb80
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 0c2e40b9a3
44 changed files with 2142 additions and 318 deletions
@@ -261,9 +261,9 @@ DIRECT_KEY policies

 The Adiantum encryption mode (see `Encryption modes and usage`_) is
 suitable for both contents and filenames encryption, and it accepts
-long IVs --- long enough to hold both an 8-byte logical block number
-and a 16-byte per-file nonce. Also, the overhead of each Adiantum key
-is greater than that of an AES-256-XTS key.
+long IVs --- long enough to hold both an 8-byte data unit index and a
+16-byte per-file nonce. Also, the overhead of each Adiantum key is
+greater than that of an AES-256-XTS key.

 Therefore, to improve performance and save memory, for Adiantum a
 "direct key" configuration is supported. When the user has enabled
@@ -300,8 +300,8 @@ IV_INO_LBLK_32 policies

 IV_INO_LBLK_32 policies work like IV_INO_LBLK_64, except that for
 IV_INO_LBLK_32, the inode number is hashed with SipHash-2-4 (where the
-SipHash key is derived from the master key) and added to the file
-logical block number mod 2^32 to produce a 32-bit IV.
+SipHash key is derived from the master key) and added to the file data
+unit index mod 2^32 to produce a 32-bit IV.

 This format is optimized for use with inline encryption hardware
 compliant with the eMMC v5.2 standard, which supports only 32 IV bits
@@ -384,31 +384,62 @@ with ciphertext expansion.
 Contents encryption
 -------------------

-For file contents, each filesystem block is encrypted independently.
-Starting from Linux kernel 5.5, encryption of filesystems with block
-size less than system's page size is supported.
-
-Each block's IV is set to the logical block number within the file as
-a little endian number, except that:
-
-- With CBC mode encryption, ESSIV is also used. Specifically, each IV
-  is encrypted with AES-256 where the AES-256 key is the SHA-256 hash
-  of the file's data encryption key.
-
-- With `DIRECT_KEY policies`_, the file's nonce is appended to the IV.
-  Currently this is only allowed with the Adiantum encryption mode.
-
-- With `IV_INO_LBLK_64 policies`_, the logical block number is limited
-  to 32 bits and is placed in bits 0-31 of the IV. The inode number
-  (which is also limited to 32 bits) is placed in bits 32-63.
-
-- With `IV_INO_LBLK_32 policies`_, the logical block number is limited
-  to 32 bits and is placed in bits 0-31 of the IV. The inode number
-  is then hashed and added mod 2^32.
-
-Note that because file logical block numbers are included in the IVs,
-filesystems must enforce that blocks are never shifted around within
-encrypted files, e.g. via "collapse range" or "insert range".
+For contents encryption, each file's contents is divided into "data
+units". Each data unit is encrypted independently. The IV for each
+data unit incorporates the zero-based index of the data unit within
+the file. This ensures that each data unit within a file is encrypted
+differently, which is essential to prevent leaking information.
+
+Note: the encryption depending on the offset into the file means that
+operations like "collapse range" and "insert range" that rearrange the
+extent mapping of files are not supported on encrypted files.
+
+There are two cases for the sizes of the data units:
+
+* Fixed-size data units. This is how all filesystems other than UBIFS
+  work. A file's data units are all the same size; the last data unit
+  is zero-padded if needed. By default, the data unit size is equal
+  to the filesystem block size. On some filesystems, users can select
+  a sub-block data unit size via the ``log2_data_unit_size`` field of
+  the encryption policy; see `FS_IOC_SET_ENCRYPTION_POLICY`_.
+
+* Variable-size data units. This is what UBIFS does. Each "UBIFS
+  data node" is treated as a crypto data unit. Each contains variable
+  length, possibly compressed data, zero-padded to the next 16-byte
+  boundary. Users cannot select a sub-block data unit size on UBIFS.
+
+In the case of compression + encryption, the compressed data is
+encrypted. UBIFS compression works as described above. f2fs
+compression works a bit differently; it compresses a number of
+filesystem blocks into a smaller number of filesystem blocks.
+Therefore a f2fs-compressed file still uses fixed-size data units, and
+it is encrypted in a similar way to a file containing holes.
+
+As mentioned in `Key hierarchy`_, the default encryption setting uses
+per-file keys. In this case, the IV for each data unit is simply the
+index of the data unit in the file. However, users can select an
+encryption setting that does not use per-file keys. For these, some
+kind of file identifier is incorporated into the IVs as follows:
+
+- With `DIRECT_KEY policies`_, the data unit index is placed in bits
+  0-63 of the IV, and the file's nonce is placed in bits 64-191.
+
+- With `IV_INO_LBLK_64 policies`_, the data unit index is placed in
+  bits 0-31 of the IV, and the file's inode number is placed in bits
+  32-63. This setting is only allowed when data unit indices and
+  inode numbers fit in 32 bits.
+
+- With `IV_INO_LBLK_32 policies`_, the file's inode number is hashed
+  and added to the data unit index. The resulting value is truncated
+  to 32 bits and placed in bits 0-31 of the IV. This setting is only
+  allowed when data unit indices and inode numbers fit in 32 bits.
+
+The byte order of the IV is always little endian.
+
+If the user selects FSCRYPT_MODE_AES_128_CBC for the contents mode, an
+ESSIV layer is automatically included. In this case, before the IV is
+passed to AES-128-CBC, it is encrypted with AES-256 where the AES-256
+key is the SHA-256 hash of the file's contents encryption key.

 Filenames encryption
 --------------------
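The default per-file-key case described in the documentation hunk above can be summarized in a short sketch. This is illustrative userspace-style C, not kernel code; the names `fill_iv`, `IV_SIZE`, and `du_size` are hypothetical, and the key-derivation and non-default policy cases are omitted:

```c
#include <stdint.h>
#include <string.h>

#define IV_SIZE 32  /* assumed maximum IV size for this sketch */

/* Default (per-file key) case: the IV is the zero-based data unit
 * index, stored little endian and zero-padded to the IV size. */
static void fill_iv(uint8_t iv[IV_SIZE], uint64_t file_pos, uint32_t du_size)
{
        uint64_t index = file_pos / du_size;  /* zero-based data unit index */
        int i;

        memset(iv, 0, IV_SIZE);
        for (i = 0; i < 8; i++)               /* little-endian encoding */
                iv[i] = (uint8_t)(index >> (8 * i));
}
```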
@@ -477,7 +508,8 @@ follows::
         __u8 contents_encryption_mode;
         __u8 filenames_encryption_mode;
         __u8 flags;
-        __u8 __reserved[4];
+        __u8 log2_data_unit_size;
+        __u8 __reserved[3];
         __u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
     };

@@ -512,6 +544,29 @@ This structure must be initialized as follows:
   The DIRECT_KEY, IV_INO_LBLK_64, and IV_INO_LBLK_32 flags are
   mutually exclusive.

+- ``log2_data_unit_size`` is the log2 of the data unit size in bytes,
+  or 0 to select the default data unit size. The data unit size is
+  the granularity of file contents encryption. For example, setting
+  ``log2_data_unit_size`` to 12 causes file contents be passed to the
+  underlying encryption algorithm (such as AES-256-XTS) in 4096-byte
+  data units, each with its own IV.
+
+  Not all filesystems support setting ``log2_data_unit_size``. ext4
+  and f2fs support it since Linux v6.7. On filesystems that support
+  it, the supported nonzero values are 9 through the log2 of the
+  filesystem block size, inclusively. The default value of 0 selects
+  the filesystem block size.
+
+  The main use case for ``log2_data_unit_size`` is for selecting a
+  data unit size smaller than the filesystem block size for
+  compatibility with inline encryption hardware that only supports
+  smaller data unit sizes. ``/sys/block/$disk/queue/crypto/`` may be
+  useful for checking which data unit sizes are supported by a
+  particular system's inline encryption hardware.
+
+  Leave this field zeroed unless you are certain you need it. Using
+  an unnecessarily small data unit size reduces performance.
+
 - For v2 encryption policies, ``__reserved`` must be zeroed.

 - For v1 encryption policies, ``master_key_descriptor`` specifies how
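For context on the new ``log2_data_unit_size`` field documented above, here is a hedged userspace sketch of setting a v2 policy with 512-byte data units. It assumes `<linux/fscrypt.h>` from a uapi new enough to define the field (v6.7+), and that the master key was already added with FS_IOC_ADD_ENCRYPTION_KEY; the helper name is hypothetical:

```c
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

/* Hypothetical helper: apply a v2 policy with 2^9 = 512-byte data units. */
int set_policy_512(int dirfd, const __u8 key_id[FSCRYPT_KEY_IDENTIFIER_SIZE])
{
        struct fscrypt_policy_v2 policy = { 0 };

        policy.version = FSCRYPT_POLICY_V2;
        policy.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS;
        policy.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
        policy.flags = FSCRYPT_POLICY_FLAGS_PAD_32;
        policy.log2_data_unit_size = 9;        /* sub-block data units */
        memcpy(policy.master_key_identifier, key_id,
               FSCRYPT_KEY_IDENTIFIER_SIZE);

        return ioctl(dirfd, FS_IOC_SET_ENCRYPTION_POLICY, &policy);
}
```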
File diff suppressed because it is too large.
@@ -454,6 +454,7 @@
 devlink_unregister
 dev_load
 devm_add_action
+devm_alloc_etherdev_mqs
 __devm_alloc_percpu
 devm_backlight_device_register
 devm_bitmap_zalloc
@@ -503,6 +504,7 @@
 devm_led_classdev_register_ext
 devm_led_classdev_unregister
 devm_mbox_controller_register
+devm_mdiobus_alloc_size
 devm_memremap
 devm_mfd_add_devices
 devm_nvmem_cell_get
@@ -877,6 +879,7 @@
 dst_cache_set_ip6
 dst_release
 dump_stack
+efi
 em_cpu_get
 em_dev_register_perf_domain
 enable_irq
@@ -888,6 +891,7 @@
 eth_header_parse
 eth_mac_addr
 eth_platform_get_mac_address
+eth_prepare_mac_addr_change
 ethtool_convert_legacy_u32_to_link_mode
 ethtool_convert_link_mode_to_legacy_u32
 __ethtool_get_link_ksettings
@@ -1689,6 +1693,7 @@
 __of_get_address
 of_get_child_by_name
 of_get_cpu_node
+of_get_mac_address
 of_get_named_gpio_flags
 of_get_next_available_child
 of_get_next_child
@@ -1717,6 +1722,7 @@
 of_pci_get_max_link_speed
 of_phandle_iterator_init
 of_phandle_iterator_next
+of_phy_get_and_connect
 of_phy_simple_xlate
 of_platform_depopulate
 of_platform_device_create
@@ -1784,9 +1790,14 @@
 pci_dev_put
 pci_disable_ats
 pci_disable_device
+pci_disable_msi
+pci_disable_msix
 pcie_capability_clear_and_set_word
 pcie_capability_read_word
 pci_enable_ats
+pci_enable_device_mem
+pci_enable_msi
+pci_enable_msix_range
 pci_find_ext_capability
 pci_free_irq
 pci_free_irq_vectors
@@ -1806,14 +1817,19 @@
 pci_msi_mask_irq
 pci_msi_unmask_irq
 pci_pio_to_address
+pci_prepare_to_sleep
 pci_read_config_dword
 pci_read_config_word
 __pci_register_driver
+pci_release_selected_regions
 pci_remove_root_bus
 pci_request_irq
+pci_request_selected_regions
 pci_restore_state
 pci_save_state
+pci_select_bars
 pci_set_master
+pci_set_power_state
 pci_stop_root_bus
 pci_store_saved_state
 pci_unlock_rescan_remove
@@ -1839,14 +1855,21 @@
 pfn_is_map_memory
 phy_attached_info
 phy_connect
+phy_connect_direct
 phy_disconnect
 phy_do_ioctl_running
+phy_ethtool_get_eee
 phy_ethtool_get_link_ksettings
+phy_ethtool_get_wol
 phy_ethtool_nway_reset
+phy_ethtool_set_eee
 phy_ethtool_set_link_ksettings
+phy_ethtool_set_wol
 phy_exit
+phy_find_first
 phy_get
 phy_init
+phy_init_eee
 phylink_connect_phy
 phylink_create
 phylink_destroy
@@ -1858,13 +1881,17 @@
 phylink_start
 phylink_stop
 phylink_suspend
+phy_mii_ioctl
 phy_power_off
 phy_power_on
 phy_print_status
 phy_put
+phy_remove_link_mode
 phy_set_mode_ext
 phy_start
+phy_start_aneg
 phy_stop
+phy_support_asym_pause
 phy_suspend
 pick_migrate_task
 pid_task
@@ -1984,6 +2011,12 @@
 pstore_register
 pstore_type_to_name
 pstore_unregister
+ptp_clock_event
+ptp_clock_index
+ptp_clock_register
+ptp_clock_unregister
+ptp_find_pin
+ptp_schedule_worker
 put_cmsg
 __put_cred
 put_device
@@ -2694,10 +2727,10 @@
 __traceiter_android_vh_iommu_iovad_alloc_iova
 __traceiter_android_vh_iommu_iovad_free_iova
 __traceiter_android_vh_is_fpsimd_save
-__traceiter_android_vh_mmc_update_mmc_queue
 __traceiter_android_vh_mm_alloc_pages_direct_reclaim_enter
 __traceiter_android_vh_mm_alloc_pages_direct_reclaim_exit
 __traceiter_android_vh_mm_alloc_pages_may_oom_exit
+__traceiter_android_vh_mmc_update_mmc_queue
 __traceiter_android_vh_rwsem_init
 __traceiter_android_vh_rwsem_wake
 __traceiter_android_vh_rwsem_write_finished
@@ -2802,10 +2835,10 @@
 __tracepoint_android_vh_iommu_iovad_alloc_iova
 __tracepoint_android_vh_iommu_iovad_free_iova
 __tracepoint_android_vh_is_fpsimd_save
-__tracepoint_android_vh_mmc_update_mmc_queue
 __tracepoint_android_vh_mm_alloc_pages_direct_reclaim_enter
 __tracepoint_android_vh_mm_alloc_pages_direct_reclaim_exit
 __tracepoint_android_vh_mm_alloc_pages_may_oom_exit
+__tracepoint_android_vh_mmc_update_mmc_queue
 __tracepoint_android_vh_rwsem_init
 __tracepoint_android_vh_rwsem_wake
 __tracepoint_android_vh_rwsem_write_finished
@@ -175,6 +175,7 @@
 __traceiter_block_rq_issue
 __traceiter_block_rq_merge
 __traceiter_block_rq_requeue
+__traceiter_mm_vmscan_kswapd_wake
 __traceiter_net_dev_queue
 __traceiter_net_dev_xmit
 __traceiter_netif_receive_skb
@@ -284,6 +285,7 @@
 __tracepoint_block_rq_issue
 __tracepoint_block_rq_merge
 __tracepoint_block_rq_requeue
+__tracepoint_mm_vmscan_kswapd_wake
 __tracepoint_net_dev_queue
 __tracepoint_net_dev_xmit
 __tracepoint_netif_receive_skb
@@ -188,6 +188,7 @@
 clockevents_config_and_register
 clocks_calc_mult_shift
 __clocksource_register_scale
+__cma_alloc
 cma_alloc
 cma_for_each_area
 cma_get_name
@@ -410,6 +411,7 @@
 devm_device_add_groups
 devm_device_remove_group
 __devm_drm_dev_alloc
+devm_drm_of_get_bridge
 devm_drm_panel_bridge_add_typed
 devm_extcon_dev_allocate
 devm_extcon_dev_register
@@ -727,6 +729,7 @@
 drm_helper_mode_fill_fb_struct
 drm_helper_probe_single_connector_modes
 drm_ioctl
+drm_kms_helper_connector_hotplug_event
 drm_kms_helper_hotplug_event
 drm_kms_helper_poll_fini
 drm_kms_helper_poll_init
@@ -1407,6 +1410,7 @@
 of_find_i2c_adapter_by_node
 of_find_i2c_device_by_node
 of_find_matching_node_and_match
+of_find_mipi_dsi_host_by_node
 of_find_node_by_name
 of_find_node_by_phandle
 of_find_node_by_type
@@ -1421,11 +1425,13 @@
 of_get_next_available_child
 of_get_next_child
 of_get_next_parent
+of_get_parent
 of_get_property
 of_get_regulator_init_data
 of_graph_get_next_endpoint
 of_graph_get_port_parent
 of_graph_get_remote_endpoint
+of_graph_get_remote_node
 of_graph_is_present
 of_graph_parse_endpoint
 of_iomap
@@ -2184,6 +2190,7 @@
 __traceiter_android_rvh_setscheduler
 __traceiter_android_rvh_set_task_cpu
 __traceiter_android_rvh_set_user_nice
+__traceiter_android_rvh_set_user_nice_locked
 __traceiter_android_rvh_typec_tcpci_get_vbus
 __traceiter_android_rvh_uclamp_eff_get
 __traceiter_android_rvh_update_blocked_fair
@@ -2290,6 +2297,7 @@
 __tracepoint_android_rvh_setscheduler
 __tracepoint_android_rvh_set_task_cpu
 __tracepoint_android_rvh_set_user_nice
+__tracepoint_android_rvh_set_user_nice_locked
 __tracepoint_android_rvh_typec_tcpci_get_vbus
 __tracepoint_android_rvh_uclamp_eff_get
 __tracepoint_android_rvh_update_blocked_fair
@@ -715,6 +715,7 @@
 __traceiter_android_vh_cpu_idle_exit
 __traceiter_android_vh_enable_thermal_power_throttle
 __traceiter_android_vh_get_thermal_zone_device
+__traceiter_android_vh_kmalloc_large_alloced
 __traceiter_android_vh_modify_thermal_request_freq
 __traceiter_android_vh_modify_thermal_target_freq
 __traceiter_android_vh_regmap_update
@@ -795,6 +796,7 @@
 __tracepoint_android_vh_cpu_idle_exit
 __tracepoint_android_vh_enable_thermal_power_throttle
 __tracepoint_android_vh_get_thermal_zone_device
+__tracepoint_android_vh_kmalloc_large_alloced
 __tracepoint_android_vh_modify_thermal_request_freq
 __tracepoint_android_vh_modify_thermal_target_freq
 __tracepoint_android_vh_regmap_update
@@ -363,3 +363,5 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_filemap_get_folio);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mmc_blk_mq_rw_recovery);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_sd_update_bus_speed_mode);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_slab_folio_alloced);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_kmalloc_large_alloced);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_netlink_poll);
@@ -636,6 +636,7 @@ static bool hidinput_set_battery_charge_status(struct hid_device *dev,
                 dev->battery_charge_status = value ?
                         POWER_SUPPLY_STATUS_CHARGING :
                         POWER_SUPPLY_STATUS_DISCHARGING;
+                power_supply_changed(dev->battery);
                 return true;
         }

@@ -111,10 +111,14 @@ out:
 int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
                           sector_t pblk, unsigned int len)
 {
-        const unsigned int blockbits = inode->i_blkbits;
-        const unsigned int blocksize = 1 << blockbits;
-        const unsigned int blocks_per_page_bits = PAGE_SHIFT - blockbits;
-        const unsigned int blocks_per_page = 1 << blocks_per_page_bits;
+        const struct fscrypt_info *ci = inode->i_crypt_info;
+        const unsigned int du_bits = ci->ci_data_unit_bits;
+        const unsigned int du_size = 1U << du_bits;
+        const unsigned int du_per_page_bits = PAGE_SHIFT - du_bits;
+        const unsigned int du_per_page = 1U << du_per_page_bits;
+        u64 du_index = (u64)lblk << (inode->i_blkbits - du_bits);
+        u64 du_remaining = (u64)len << (inode->i_blkbits - du_bits);
+        sector_t sector = pblk << (inode->i_blkbits - SECTOR_SHIFT);
         struct page *pages[16]; /* write up to 16 pages at a time */
         unsigned int nr_pages;
         unsigned int i;
@@ -130,8 +134,8 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
                                                           len);

         BUILD_BUG_ON(ARRAY_SIZE(pages) > BIO_MAX_VECS);
-        nr_pages = min_t(unsigned int, ARRAY_SIZE(pages),
-                         (len + blocks_per_page - 1) >> blocks_per_page_bits);
+        nr_pages = min_t(u64, ARRAY_SIZE(pages),
+                         (du_remaining + du_per_page - 1) >> du_per_page_bits);

         /*
          * We need at least one page for ciphertext. Allocate the first one
@@ -154,21 +158,22 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
         bio = bio_alloc(inode->i_sb->s_bdev, nr_pages, REQ_OP_WRITE, GFP_NOFS);

         do {
-                bio->bi_iter.bi_sector = pblk << (blockbits - 9);
+                bio->bi_iter.bi_sector = sector;

                 i = 0;
                 offset = 0;
                 do {
-                        err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk,
+                        err = fscrypt_crypt_data_unit(ci, FS_ENCRYPT, du_index,
                                                   ZERO_PAGE(0), pages[i],
-                                                  blocksize, offset, GFP_NOFS);
+                                                  du_size, offset,
+                                                  GFP_NOFS);
                         if (err)
                                 goto out;
-                        lblk++;
-                        pblk++;
-                        len--;
-                        offset += blocksize;
-                        if (offset == PAGE_SIZE || len == 0) {
+                        du_index++;
+                        sector += 1U << (du_bits - SECTOR_SHIFT);
+                        du_remaining--;
+                        offset += du_size;
+                        if (offset == PAGE_SIZE || du_remaining == 0) {
                                 ret = bio_add_page(bio, pages[i++], offset, 0);
                                 if (WARN_ON_ONCE(ret != offset)) {
                                         err = -EIO;
@@ -176,13 +181,13 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
                                 }
                                 offset = 0;
                         }
-                } while (i != nr_pages && len != 0);
+                } while (i != nr_pages && du_remaining != 0);

                 err = submit_bio_wait(bio);
                 if (err)
                         goto out;
                 bio_reset(bio, inode->i_sb->s_bdev, REQ_OP_WRITE);
-        } while (len != 0);
+        } while (du_remaining != 0);
         err = 0;
 out:
         bio_put(bio);
@@ -70,14 +70,14 @@ void fscrypt_free_bounce_page(struct page *bounce_page)
 EXPORT_SYMBOL(fscrypt_free_bounce_page);

 /*
- * Generate the IV for the given logical block number within the given file.
- * For filenames encryption, lblk_num == 0.
+ * Generate the IV for the given data unit index within the given file.
+ * For filenames encryption, index == 0.
  *
  * Keep this in sync with fscrypt_limit_io_blocks(). fscrypt_limit_io_blocks()
  * needs to know about any IV generation methods where the low bits of IV don't
- * simply contain the lblk_num (e.g., IV_INO_LBLK_32).
+ * simply contain the data unit index (e.g., IV_INO_LBLK_32).
  */
-void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
+void fscrypt_generate_iv(union fscrypt_iv *iv, u64 index,
                          const struct fscrypt_info *ci)
 {
         u8 flags = fscrypt_policy_flags(&ci->ci_policy);
@@ -85,29 +85,29 @@ void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
         memset(iv, 0, ci->ci_mode->ivsize);

         if (flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) {
-                WARN_ON_ONCE(lblk_num > U32_MAX);
+                WARN_ON_ONCE(index > U32_MAX);
                 WARN_ON_ONCE(ci->ci_inode->i_ino > U32_MAX);
-                lblk_num |= (u64)ci->ci_inode->i_ino << 32;
+                index |= (u64)ci->ci_inode->i_ino << 32;
         } else if (flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32) {
-                WARN_ON_ONCE(lblk_num > U32_MAX);
-                lblk_num = (u32)(ci->ci_hashed_ino + lblk_num);
+                WARN_ON_ONCE(index > U32_MAX);
+                index = (u32)(ci->ci_hashed_ino + index);
         } else if (flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
                 memcpy(iv->nonce, ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE);
         }
-        iv->lblk_num = cpu_to_le64(lblk_num);
+        iv->index = cpu_to_le64(index);
 }

-/* Encrypt or decrypt a single filesystem block of file contents */
-int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
-                        u64 lblk_num, struct page *src_page,
-                        struct page *dest_page, unsigned int len,
-                        unsigned int offs, gfp_t gfp_flags)
+/* Encrypt or decrypt a single "data unit" of file contents. */
+int fscrypt_crypt_data_unit(const struct fscrypt_info *ci,
+                            fscrypt_direction_t rw, u64 index,
+                            struct page *src_page, struct page *dest_page,
+                            unsigned int len, unsigned int offs,
+                            gfp_t gfp_flags)
 {
         union fscrypt_iv iv;
         struct skcipher_request *req = NULL;
         DECLARE_CRYPTO_WAIT(wait);
         struct scatterlist dst, src;
-        struct fscrypt_info *ci = inode->i_crypt_info;
         struct crypto_skcipher *tfm = ci->ci_enc_key.tfm;
         int res = 0;

@@ -116,7 +116,7 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
         if (WARN_ON_ONCE(len % FSCRYPT_CONTENTS_ALIGNMENT != 0))
                 return -EINVAL;

-        fscrypt_generate_iv(&iv, lblk_num, ci);
+        fscrypt_generate_iv(&iv, index, ci);

         req = skcipher_request_alloc(tfm, gfp_flags);
         if (!req)
@@ -137,28 +137,29 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
         res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
         skcipher_request_free(req);
         if (res) {
-                fscrypt_err(inode, "%scryption failed for block %llu: %d",
-                            (rw == FS_DECRYPT ? "De" : "En"), lblk_num, res);
+                fscrypt_err(ci->ci_inode,
+                            "%scryption failed for data unit %llu: %d",
+                            (rw == FS_DECRYPT ? "De" : "En"), index, res);
                 return res;
         }
         return 0;
 }

 /**
- * fscrypt_encrypt_pagecache_blocks() - Encrypt filesystem blocks from a
- *                                      pagecache page
- * @page: The locked pagecache page containing the block(s) to encrypt
- * @len: Total size of the block(s) to encrypt. Must be a nonzero
- *       multiple of the filesystem's block size.
- * @offs: Byte offset within @page of the first block to encrypt. Must be
- *        a multiple of the filesystem's block size.
- * @gfp_flags: Memory allocation flags. See details below.
+ * fscrypt_encrypt_pagecache_blocks() - Encrypt data from a pagecache page
+ * @page: the locked pagecache page containing the data to encrypt
+ * @len: size of the data to encrypt, in bytes
+ * @offs: offset within @page of the data to encrypt, in bytes
+ * @gfp_flags: memory allocation flags; see details below
  *
- * A new bounce page is allocated, and the specified block(s) are encrypted into
- * it. In the bounce page, the ciphertext block(s) will be located at the same
- * offsets at which the plaintext block(s) were located in the source page; any
- * other parts of the bounce page will be left uninitialized. However, normally
- * blocksize == PAGE_SIZE and the whole page is encrypted at once.
+ * This allocates a new bounce page and encrypts the given data into it. The
+ * length and offset of the data must be aligned to the file's crypto data unit
+ * size. Alignment to the filesystem block size fulfills this requirement, as
+ * the filesystem block size is always a multiple of the data unit size.
+ *
+ * In the bounce page, the ciphertext data will be located at the same offset at
+ * which the plaintext data was located in the source page. Any other parts of
+ * the bounce page will be left uninitialized.
  *
  * This is for use by the filesystem's ->writepages() method.
  *
@@ -176,28 +177,29 @@ struct page *fscrypt_encrypt_pagecache_blocks(struct page *page,

 {
         const struct inode *inode = page->mapping->host;
-        const unsigned int blockbits = inode->i_blkbits;
-        const unsigned int blocksize = 1 << blockbits;
+        const struct fscrypt_info *ci = inode->i_crypt_info;
+        const unsigned int du_bits = ci->ci_data_unit_bits;
+        const unsigned int du_size = 1U << du_bits;
         struct page *ciphertext_page;
-        u64 lblk_num = ((u64)page->index << (PAGE_SHIFT - blockbits)) +
-                       (offs >> blockbits);
+        u64 index = ((u64)page->index << (PAGE_SHIFT - du_bits)) +
+                    (offs >> du_bits);
         unsigned int i;
         int err;

         if (WARN_ON_ONCE(!PageLocked(page)))
                 return ERR_PTR(-EINVAL);

-        if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offs, blocksize)))
+        if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offs, du_size)))
                 return ERR_PTR(-EINVAL);

         ciphertext_page = fscrypt_alloc_bounce_page(gfp_flags);
         if (!ciphertext_page)
                 return ERR_PTR(-ENOMEM);

-        for (i = offs; i < offs + len; i += blocksize, lblk_num++) {
-                err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk_num,
+        for (i = offs; i < offs + len; i += du_size, index++) {
+                err = fscrypt_crypt_data_unit(ci, FS_ENCRYPT, index,
                                           page, ciphertext_page,
-                                          blocksize, i, gfp_flags);
+                                          du_size, i, gfp_flags);
                 if (err) {
                         fscrypt_free_bounce_page(ciphertext_page);
                         return ERR_PTR(err);
@@ -224,30 +226,34 @@ EXPORT_SYMBOL(fscrypt_encrypt_pagecache_blocks);
  * arbitrary page, not necessarily in the original pagecache page. The @inode
  * and @lblk_num must be specified, as they can't be determined from @page.
  *
+ * This is not compatible with FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS.
+ *
  * Return: 0 on success; -errno on failure
  */
 int fscrypt_encrypt_block_inplace(const struct inode *inode, struct page *page,
                                   unsigned int len, unsigned int offs,
                                   u64 lblk_num, gfp_t gfp_flags)
 {
-        return fscrypt_crypt_block(inode, FS_ENCRYPT, lblk_num, page, page,
-                                   len, offs, gfp_flags);
+        if (WARN_ON_ONCE(inode->i_sb->s_cop->flags &
+                         FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS))
+                return -EOPNOTSUPP;
+        return fscrypt_crypt_data_unit(inode->i_crypt_info, FS_ENCRYPT,
+                                       lblk_num, page, page, len, offs,
+                                       gfp_flags);
 }
 EXPORT_SYMBOL(fscrypt_encrypt_block_inplace);

 /**
- * fscrypt_decrypt_pagecache_blocks() - Decrypt filesystem blocks in a
- *                                      pagecache folio
- * @folio: The locked pagecache folio containing the block(s) to decrypt
- * @len: Total size of the block(s) to decrypt. Must be a nonzero
- *       multiple of the filesystem's block size.
- * @offs: Byte offset within @folio of the first block to decrypt. Must be
- *        a multiple of the filesystem's block size.
+ * fscrypt_decrypt_pagecache_blocks() - Decrypt data from a pagecache folio
+ * @folio: the pagecache folio containing the data to decrypt
+ * @len: size of the data to decrypt, in bytes
+ * @offs: offset within @folio of the data to decrypt, in bytes
  *
- * The specified block(s) are decrypted in-place within the pagecache folio,
- * which must still be locked and not uptodate.
- *
- * This is for use by the filesystem's ->readahead() method.
+ * Decrypt data that has just been read from an encrypted file. The data must
+ * be located in a pagecache folio that is still locked and not yet uptodate.
+ * The length and offset of the data must be aligned to the file's crypto data
+ * unit size. Alignment to the filesystem block size fulfills this requirement,
+ * as the filesystem block size is always a multiple of the data unit size.
  *
  * Return: 0 on success; -errno on failure
  */
@@ -255,25 +261,26 @@ int fscrypt_decrypt_pagecache_blocks(struct folio *folio, size_t len,
                                      size_t offs)
 {
         const struct inode *inode = folio->mapping->host;
-        const unsigned int blockbits = inode->i_blkbits;
-        const unsigned int blocksize = 1 << blockbits;
-        u64 lblk_num = ((u64)folio->index << (PAGE_SHIFT - blockbits)) +
-                       (offs >> blockbits);
+        const struct fscrypt_info *ci = inode->i_crypt_info;
+        const unsigned int du_bits = ci->ci_data_unit_bits;
+        const unsigned int du_size = 1U << du_bits;
+        u64 index = ((u64)folio->index << (PAGE_SHIFT - du_bits)) +
+                    (offs >> du_bits);
         size_t i;
         int err;

         if (WARN_ON_ONCE(!folio_test_locked(folio)))
                 return -EINVAL;

-        if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offs, blocksize)))
+        if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offs, du_size)))
                 return -EINVAL;

-        for (i = offs; i < offs + len; i += blocksize, lblk_num++) {
+        for (i = offs; i < offs + len; i += du_size, index++) {
                 struct page *page = folio_page(folio, i >> PAGE_SHIFT);

-                err = fscrypt_crypt_block(inode, FS_DECRYPT, lblk_num, page,
-                                          page, blocksize, i & ~PAGE_MASK,
+                err = fscrypt_crypt_data_unit(ci, FS_DECRYPT, index, page,
+                                              page, du_size, i & ~PAGE_MASK,
                                               GFP_NOFS);
                 if (err)
                         return err;
         }
@@ -295,14 +302,20 @@ EXPORT_SYMBOL(fscrypt_decrypt_pagecache_blocks);
  * arbitrary page, not necessarily in the original pagecache page. The @inode
  * and @lblk_num must be specified, as they can't be determined from @page.
  *
+ * This is not compatible with FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS.
+ *
  * Return: 0 on success; -errno on failure
  */
 int fscrypt_decrypt_block_inplace(const struct inode *inode, struct page *page,
                                   unsigned int len, unsigned int offs,
                                   u64 lblk_num)
 {
-        return fscrypt_crypt_block(inode, FS_DECRYPT, lblk_num, page, page,
-                                   len, offs, GFP_NOFS);
+        if (WARN_ON_ONCE(inode->i_sb->s_cop->flags &
+                         FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS))
+                return -EOPNOTSUPP;
+        return fscrypt_crypt_data_unit(inode->i_crypt_info, FS_DECRYPT,
+                                       lblk_num, page, page, len, offs,
+                                       GFP_NOFS);
 }
 EXPORT_SYMBOL(fscrypt_decrypt_block_inplace);

@@ -68,7 +68,8 @@ struct fscrypt_context_v2 {
         u8 contents_encryption_mode;
         u8 filenames_encryption_mode;
         u8 flags;
-        u8 __reserved[4];
+        u8 log2_data_unit_size;
+        u8 __reserved[3];
         u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
         u8 nonce[FSCRYPT_FILE_NONCE_SIZE];
 };
@@ -186,6 +187,26 @@ fscrypt_policy_flags(const union fscrypt_policy *policy)
         BUG();
 }

+static inline int
+fscrypt_policy_v2_du_bits(const struct fscrypt_policy_v2 *policy,
+                          const struct inode *inode)
+{
+        return policy->log2_data_unit_size ?: inode->i_blkbits;
+}
+
+static inline int
+fscrypt_policy_du_bits(const union fscrypt_policy *policy,
+                       const struct inode *inode)
+{
+        switch (policy->version) {
+        case FSCRYPT_POLICY_V1:
+                return inode->i_blkbits;
+        case FSCRYPT_POLICY_V2:
+                return fscrypt_policy_v2_du_bits(&policy->v2, inode);
+        }
+        BUG();
+}
+
 /*
  * For encrypted symlinks, the ciphertext length is stored at the beginning
  * of the string in little-endian format.
@@ -232,6 +253,16 @@ struct fscrypt_info {
         bool ci_inlinecrypt;
 #endif

+        /*
+         * log2 of the data unit size (granularity of contents encryption) of
+         * this file. This is computable from ci_policy and ci_inode but is
+         * cached here for efficiency. Only used for regular files.
+         */
+        u8 ci_data_unit_bits;
+
+        /* Cached value: log2 of number of data units per FS block */
+        u8 ci_data_units_per_block_bits;
+
         /*
          * Encryption mode used for this inode. It corresponds to either the
          * contents or filenames encryption mode, depending on the inode type.
@@ -286,10 +317,11 @@ typedef enum {
 /* crypto.c */
 extern struct kmem_cache *fscrypt_info_cachep;
 int fscrypt_initialize(struct super_block *sb);
-int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
-                        u64 lblk_num, struct page *src_page,
-                        struct page *dest_page, unsigned int len,
-                        unsigned int offs, gfp_t gfp_flags);
+int fscrypt_crypt_data_unit(const struct fscrypt_info *ci,
+                            fscrypt_direction_t rw, u64 index,
+                            struct page *src_page, struct page *dest_page,
+                            unsigned int len, unsigned int offs,
+                            gfp_t gfp_flags);
 struct page *fscrypt_alloc_bounce_page(gfp_t gfp_flags);

 void __printf(3, 4) __cold
@@ -304,8 +336,8 @@ fscrypt_msg(const struct inode *inode, const char *level, const char *fmt, ...);

 union fscrypt_iv {
         struct {
-                /* logical block number within the file */
-                __le64 lblk_num;
+                /* zero-based index of data unit within the file */
+                __le64 index;

                 /* per-file nonce; only set in DIRECT_KEY mode */
                 u8 nonce[FSCRYPT_FILE_NONCE_SIZE];
@@ -314,9 +346,19 @@ union fscrypt_iv {
         __le64 dun[FSCRYPT_MAX_IV_SIZE / sizeof(__le64)];
 };

-void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
+void fscrypt_generate_iv(union fscrypt_iv *iv, u64 index,
                          const struct fscrypt_info *ci);

+/*
+ * Return the number of bits used by the maximum file data unit index that is
+ * possible on the given filesystem, using the given log2 data unit size.
+ */
+static inline int
+fscrypt_max_file_dun_bits(const struct super_block *sb, int du_bits)
+{
+        return fls64(sb->s_maxbytes - 1) - du_bits;
+}
+
 /* fname.c */
 bool __fscrypt_fname_encrypted_size(const union fscrypt_policy *policy,
                                     u32 orig_len, u32 max_len,
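The two cached fields added above (ci_data_unit_bits and ci_data_units_per_block_bits) drive simple shift arithmetic elsewhere in the series, for example when a logical block number is converted to a data unit index. A small, self-contained illustration with assumed numbers, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

int main(void)
{
        /* Assumed example: 4096-byte filesystem blocks (i_blkbits = 12)
         * and 512-byte crypto data units (data_unit_bits = 9). */
        unsigned int i_blkbits = 12, data_unit_bits = 9;
        unsigned int data_units_per_block_bits = i_blkbits - data_unit_bits; /* 3 */
        uint64_t lblk_num = 10;                 /* filesystem block 10 */
        uint64_t index = lblk_num << data_units_per_block_bits;

        assert(index == 80);  /* block 10 starts at data unit index 80 */
        return 0;
}
```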
@@ -43,7 +43,7 @@ static unsigned int fscrypt_get_dun_bytes(const struct fscrypt_info *ci)
 {
         struct super_block *sb = ci->ci_inode->i_sb;
         unsigned int flags = fscrypt_policy_flags(&ci->ci_policy);
-        int ino_bits = 64, lblk_bits = 64;
+        int dun_bits;

         if (flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY)
                 return offsetofend(union fscrypt_iv, nonce);
@@ -54,10 +54,9 @@ static unsigned int fscrypt_get_dun_bytes(const struct fscrypt_info *ci)
         if (flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32)
                 return sizeof(__le32);

-        /* Default case: IVs are just the file logical block number */
-        if (sb->s_cop->get_ino_and_lblk_bits)
-                sb->s_cop->get_ino_and_lblk_bits(sb, &ino_bits, &lblk_bits);
-        return DIV_ROUND_UP(lblk_bits, 8);
+        /* Default case: IVs are just the file data unit index */
+        dun_bits = fscrypt_max_file_dun_bits(sb, ci->ci_data_unit_bits);
+        return DIV_ROUND_UP(dun_bits, 8);
 }

 /*
@@ -130,7 +129,7 @@ int fscrypt_select_encryption_impl(struct fscrypt_info *ci,
          * crypto configuration that the file would use.
          */
         crypto_cfg.crypto_mode = ci->ci_mode->blk_crypto_mode;
-        crypto_cfg.data_unit_size = sb->s_blocksize;
+        crypto_cfg.data_unit_size = 1U << ci->ci_data_unit_bits;
         crypto_cfg.dun_bytes = fscrypt_get_dun_bytes(ci);
         crypto_cfg.key_type =
                 is_hw_wrapped_key ? BLK_CRYPTO_KEY_TYPE_HW_WRAPPED :
@@ -176,7 +175,7 @@ int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,

         err = blk_crypto_init_key(blk_key, raw_key, raw_key_size, key_type,
                                   crypto_mode, fscrypt_get_dun_bytes(ci),
-                                  sb->s_blocksize);
+                                  1U << ci->ci_data_unit_bits);
         if (err) {
                 fscrypt_err(inode, "error %d initializing blk-crypto key", err);
                 goto fail;
@@ -271,10 +270,11 @@ EXPORT_SYMBOL_GPL(__fscrypt_inode_uses_inline_crypto);
 static void fscrypt_generate_dun(const struct fscrypt_info *ci, u64 lblk_num,
                                  u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
 {
+        u64 index = lblk_num << ci->ci_data_units_per_block_bits;
         union fscrypt_iv iv;
         int i;

-        fscrypt_generate_iv(&iv, lblk_num, ci);
+        fscrypt_generate_iv(&iv, index, ci);

         BUILD_BUG_ON(FSCRYPT_MAX_IV_SIZE > BLK_CRYPTO_MAX_IV_SIZE);
         memset(dun, 0, BLK_CRYPTO_MAX_IV_SIZE);
@@ -627,6 +627,11 @@ fscrypt_setup_encryption_info(struct inode *inode,
         WARN_ON_ONCE(mode->ivsize > FSCRYPT_MAX_IV_SIZE);
         crypt_info->ci_mode = mode;

+        crypt_info->ci_data_unit_bits =
+                fscrypt_policy_du_bits(&crypt_info->ci_policy, inode);
+        crypt_info->ci_data_units_per_block_bits =
+                inode->i_blkbits - crypt_info->ci_data_unit_bits;
+
         res = setup_file_encryption_key(crypt_info, need_dirhash_key, &mk);
         if (res)
                 goto out;
@@ -158,9 +158,15 @@ static bool supported_iv_ino_lblk_policy(const struct fscrypt_policy_v2 *policy,
                              type, sb->s_id);
                 return false;
         }
-        if (lblk_bits > max_lblk_bits) {
+
+        /*
+         * IV_INO_LBLK_64 and IV_INO_LBLK_32 both require that file data unit
+         * indices fit in 32 bits.
+         */
+        if (fscrypt_max_file_dun_bits(sb,
+                        fscrypt_policy_v2_du_bits(policy, inode)) > 32) {
                 fscrypt_warn(inode,
-                             "Can't use %s policy on filesystem '%s' because its block numbers are too long",
+                             "Can't use %s policy on filesystem '%s' because its maximum file size is too large",
                              type, sb->s_id);
                 return false;
         }
@@ -233,6 +239,32 @@ static bool fscrypt_supported_v2_policy(const struct fscrypt_policy_v2 *policy,
                 return false;
         }

+        if (policy->log2_data_unit_size) {
+                if (!(inode->i_sb->s_cop->flags &
+                      FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS)) {
+                        fscrypt_warn(inode,
+                                     "Filesystem does not support configuring crypto data unit size");
+                        return false;
+                }
+                if (policy->log2_data_unit_size > inode->i_blkbits ||
+                    policy->log2_data_unit_size < SECTOR_SHIFT /* 9 */) {
+                        fscrypt_warn(inode,
+                                     "Unsupported log2_data_unit_size in encryption policy: %d",
+                                     policy->log2_data_unit_size);
+                        return false;
+                }
+                if (policy->log2_data_unit_size != inode->i_blkbits &&
+                    (policy->flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32)) {
+                        /*
+                         * Not safe to enable yet, as we need to ensure that DUN
+                         * wraparound can only occur on a FS block boundary.
+                         */
+                        fscrypt_warn(inode,
+                                     "Sub-block data units not yet supported with IV_INO_LBLK_32");
+                        return false;
+                }
+        }
+
         if ((policy->flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) &&
             !supported_direct_key_modes(inode, policy->contents_encryption_mode,
                                         policy->filenames_encryption_mode))
@@ -330,6 +362,7 @@ static int fscrypt_new_context(union fscrypt_context *ctx_u,
                 ctx->filenames_encryption_mode =
                         policy->filenames_encryption_mode;
                 ctx->flags = policy->flags;
+                ctx->log2_data_unit_size = policy->log2_data_unit_size;
                 memcpy(ctx->master_key_identifier,
                        policy->master_key_identifier,
                        sizeof(ctx->master_key_identifier));
@@ -390,6 +423,7 @@ int fscrypt_policy_from_context(union fscrypt_policy *policy_u,
                 policy->filenames_encryption_mode =
                         ctx->filenames_encryption_mode;
                 policy->flags = ctx->flags;
+                policy->log2_data_unit_size = ctx->log2_data_unit_size;
                 memcpy(policy->__reserved, ctx->__reserved,
                        sizeof(policy->__reserved));
                 memcpy(policy->master_key_identifier,
@@ -240,6 +240,7 @@ static void ext4_get_ino_and_lblk_bits(struct super_block *sb,
 }

 const struct fscrypt_operations ext4_cryptops = {
+	.flags			= FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS,
 	.key_prefix		= "ext4:",
 	.get_context		= ext4_get_context,
 	.set_context		= ext4_set_context,
@@ -384,6 +384,17 @@ static void f2fs_write_end_io(struct bio *bio)
 	bio_put(bio);
 }

+#ifdef CONFIG_BLK_DEV_ZONED
+static void f2fs_zone_write_end_io(struct bio *bio)
+{
+	struct f2fs_bio_info *io = (struct f2fs_bio_info *)bio->bi_private;
+
+	bio->bi_private = io->bi_private;
+	complete(&io->zone_wait);
+	f2fs_write_end_io(bio);
+}
+#endif
+
 struct block_device *f2fs_target_device(struct f2fs_sb_info *sbi,
 				block_t blk_addr, sector_t *sector)
 {

@@ -644,6 +655,11 @@ int f2fs_init_write_merge_io(struct f2fs_sb_info *sbi)
 			INIT_LIST_HEAD(&sbi->write_io[i][j].io_list);
 			INIT_LIST_HEAD(&sbi->write_io[i][j].bio_list);
 			init_f2fs_rwsem(&sbi->write_io[i][j].bio_list_lock);
+#ifdef CONFIG_BLK_DEV_ZONED
+			init_completion(&sbi->write_io[i][j].zone_wait);
+			sbi->write_io[i][j].zone_pending_bio = NULL;
+			sbi->write_io[i][j].bi_private = NULL;
+#endif
 		}
 	}

@@ -970,6 +986,26 @@ alloc_new:
 	return 0;
 }

+#ifdef CONFIG_BLK_DEV_ZONED
+static bool is_end_zone_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr)
+{
+	int devi = 0;
+
+	if (f2fs_is_multi_device(sbi)) {
+		devi = f2fs_target_device_index(sbi, blkaddr);
+		if (blkaddr < FDEV(devi).start_blk ||
+		    blkaddr > FDEV(devi).end_blk) {
+			f2fs_err(sbi, "Invalid block %x", blkaddr);
+			return false;
+		}
+		blkaddr -= FDEV(devi).start_blk;
+	}
+	return bdev_zoned_model(FDEV(devi).bdev) == BLK_ZONED_HM &&
+		f2fs_blkz_is_seq(sbi, devi, blkaddr) &&
+		(blkaddr % sbi->blocks_per_blkz == sbi->blocks_per_blkz - 1);
+}
+#endif
+
 void f2fs_submit_page_write(struct f2fs_io_info *fio)
 {
 	struct f2fs_sb_info *sbi = fio->sbi;

@@ -980,6 +1016,16 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
 	f2fs_bug_on(sbi, is_read_io(fio->op));

 	f2fs_down_write(&io->io_rwsem);
+
+#ifdef CONFIG_BLK_DEV_ZONED
+	if (f2fs_sb_has_blkzoned(sbi) && btype < META && io->zone_pending_bio) {
+		wait_for_completion_io(&io->zone_wait);
+		bio_put(io->zone_pending_bio);
+		io->zone_pending_bio = NULL;
+		io->bi_private = NULL;
+	}
+#endif
+
 next:
 	if (fio->in_list) {
 		spin_lock(&io->io_lock);

@@ -1043,6 +1089,18 @@ skip:
 	if (fio->in_list)
 		goto next;
 out:
+#ifdef CONFIG_BLK_DEV_ZONED
+	if (f2fs_sb_has_blkzoned(sbi) && btype < META &&
+			is_end_zone_blkaddr(sbi, fio->new_blkaddr)) {
+		bio_get(io->bio);
+		reinit_completion(&io->zone_wait);
+		io->bi_private = io->bio->bi_private;
+		io->bio->bi_private = io;
+		io->bio->bi_end_io = f2fs_zone_write_end_io;
+		io->zone_pending_bio = io->bio;
+		__submit_merged_bio(io);
+	}
+#endif
 	if (is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN) ||
 				!f2fs_is_checkpoint_ready(sbi))
 		__submit_merged_bio(io);
@@ -1217,6 +1217,11 @@ struct f2fs_bio_info {
 	struct bio *bio;		/* bios to merge */
 	sector_t last_block_in_bio;	/* last block number */
 	struct f2fs_io_info fio;	/* store buffered io info. */
+#ifdef CONFIG_BLK_DEV_ZONED
+	struct completion zone_wait;	/* condition value for the previous open zone to close */
+	struct bio *zone_pending_bio;	/* pending bio for the previous zone */
+	void *bi_private;		/* previous bi_private for pending bio */
+#endif
 	struct f2fs_rwsem io_rwsem;	/* blocking op for bio */
 	spinlock_t io_lock;		/* serialize DATA/NODE IOs */
 	struct list_head io_list;	/* track fios */
@@ -1325,7 +1325,8 @@ static void __insert_discard_cmd(struct f2fs_sb_info *sbi,
 			p = &(*p)->rb_right;
 			leftmost = false;
 		} else {
-			f2fs_bug_on(sbi, 1);
+			/* Let's skip to add, if exists */
+			return;
 		}
 	}

@@ -4813,39 +4814,70 @@ static int check_zone_write_pointer(struct f2fs_sb_info *sbi,
 	}

 	/*
-	 * If last valid block is beyond the write pointer, report the
-	 * inconsistency. This inconsistency does not cause write error
-	 * because the zone will not be selected for write operation until
-	 * it get discarded. Just report it.
+	 * When safely unmounted in the previous mount, we can trust write
+	 * pointers. Otherwise, finish zones.
 	 */
-	if (last_valid_block >= wp_block) {
-		f2fs_notice(sbi, "Valid block beyond write pointer: "
-				"valid block[0x%x,0x%x] wp[0x%x,0x%x]",
+	if (is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
+		/*
+		 * The write pointer matches with the valid blocks or
+		 * already points to the end of the zone.
+		 */
+		if ((last_valid_block + 1 == wp_block) ||
+				(zone->wp == zone->start + zone->len))
+			return 0;
+	}
+
+	if (last_valid_block + 1 == zone_block) {
+		if (is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
+			/*
+			 * If there is no valid block in the zone and if write
+			 * pointer is not at zone start, reset the write
+			 * pointer.
+			 */
+			f2fs_notice(sbi,
+				    "Zone without valid block has non-zero write "
+				    "pointer. Reset the write pointer: wp[0x%x,0x%x]",
+				    wp_segno, wp_blkoff);
+		}
+		ret = __f2fs_issue_discard_zone(sbi, fdev->bdev, zone_block,
+					zone->len >> log_sectors_per_block);
+		if (ret)
+			f2fs_err(sbi, "Discard zone failed: %s (errno=%d)",
+				 fdev->path, ret);
+
+		return ret;
+	}
+
+	if (is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
+		/*
+		 * If there are valid blocks and the write pointer doesn't match
+		 * with them, we need to report the inconsistency and fill
+		 * the zone till the end to close the zone. This inconsistency
+		 * does not cause write error because the zone will not be
+		 * selected for write operation until it get discarded.
+		 */
+		f2fs_notice(sbi, "Valid blocks are not aligned with write "
+			    "pointer: valid block[0x%x,0x%x] wp[0x%x,0x%x]",
 			    GET_SEGNO(sbi, last_valid_block),
 			    GET_BLKOFF_FROM_SEG0(sbi, last_valid_block),
 			    wp_segno, wp_blkoff);
-		return 0;
 	}

-	/*
-	 * If there is no valid block in the zone and if write pointer is
-	 * not at zone start, reset the write pointer.
-	 */
-	if (last_valid_block + 1 == zone_block && zone->wp != zone->start) {
-		f2fs_notice(sbi,
-			    "Zone without valid block has non-zero write "
-			    "pointer. Reset the write pointer: wp[0x%x,0x%x]",
-			    wp_segno, wp_blkoff);
-		ret = __f2fs_issue_discard_zone(sbi, fdev->bdev, zone_block,
-					zone->len >> log_sectors_per_block);
-		if (ret) {
-			f2fs_err(sbi, "Discard zone failed: %s (errno=%d)",
-				 fdev->path, ret);
-			return ret;
-		}
-	}
+	ret = blkdev_zone_mgmt(fdev->bdev, REQ_OP_ZONE_FINISH,
+				zone->start, zone->len, GFP_NOFS);
+	if (ret == -EOPNOTSUPP) {
+		ret = blkdev_issue_zeroout(fdev->bdev, zone->wp,
+					zone->len - (zone->wp - zone->start),
+					GFP_NOFS, 0);
+		if (ret)
+			f2fs_err(sbi, "Fill up zone failed: %s (errno=%d)",
+				 fdev->path, ret);
+	} else if (ret) {
+		f2fs_err(sbi, "Finishing zone failed: %s (errno=%d)",
+			 fdev->path, ret);
+	}

-	return 0;
+	return ret;
 }

 static struct f2fs_dev_info *get_target_zoned_dev(struct f2fs_sb_info *sbi,

@@ -4903,18 +4935,27 @@ static int fix_curseg_write_pointer(struct f2fs_sb_info *sbi, int type)
 	if (zone.type != BLK_ZONE_TYPE_SEQWRITE_REQ)
 		return 0;

-	wp_block = zbd->start_blk + (zone.wp >> log_sectors_per_block);
-	wp_segno = GET_SEGNO(sbi, wp_block);
-	wp_blkoff = wp_block - START_BLOCK(sbi, wp_segno);
-	wp_sector_off = zone.wp & GENMASK(log_sectors_per_block - 1, 0);
+	/*
+	 * When safely unmounted in the previous mount, we could use current
+	 * segments. Otherwise, allocate new sections.
+	 */
+	if (is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
+		wp_block = zbd->start_blk + (zone.wp >> log_sectors_per_block);
+		wp_segno = GET_SEGNO(sbi, wp_block);
+		wp_blkoff = wp_block - START_BLOCK(sbi, wp_segno);
+		wp_sector_off = zone.wp & GENMASK(log_sectors_per_block - 1, 0);

 	if (cs->segno == wp_segno && cs->next_blkoff == wp_blkoff &&
 	    wp_sector_off == 0)
 		return 0;

 	f2fs_notice(sbi, "Unaligned curseg[%d] with write pointer: "
-		    "curseg[0x%x,0x%x] wp[0x%x,0x%x]",
-		    type, cs->segno, cs->next_blkoff, wp_segno, wp_blkoff);
+		    "curseg[0x%x,0x%x] wp[0x%x,0x%x]", type, cs->segno,
+		    cs->next_blkoff, wp_segno, wp_blkoff);
+	} else {
+		f2fs_notice(sbi, "Not successfully unmounted in the previous "
+			    "mount");
+	}

 	f2fs_notice(sbi, "Assign new section to curseg[%d]: "
 		    "curseg[0x%x,0x%x]", type, cs->segno, cs->next_blkoff);
@@ -3181,6 +3181,7 @@ static struct block_device **f2fs_get_devices(struct super_block *sb,
 }

 static const struct fscrypt_operations f2fs_cryptops = {
+	.flags			= FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS,
 	.key_prefix		= "f2fs:",
 	.get_context		= f2fs_get_context,
 	.set_context		= f2fs_set_context,
@@ -1026,6 +1026,16 @@ static void fuse_readahead(struct readahead_control *rac)
 	struct fuse_conn *fc = get_fuse_conn(inode);
 	unsigned int i, max_pages, nr_pages = 0;

+#ifdef CONFIG_FUSE_BPF
+	/*
+	 * Currently no meaningful readahead is possible with fuse-bpf within
+	 * the kernel, so unless the daemon is aware of this file, ignore this
+	 * call.
+	 */
+	if (!get_fuse_inode(inode)->nodeid)
+		return;
+#endif
+
 	if (fuse_is_bad(inode))
 		return;
@@ -48,6 +48,8 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 					unsigned int order_per_bit,
 					const char *name,
 					struct cma **res_cma);
+extern struct page *__cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
+				gfp_t gfp_mask);
 extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
 			      bool no_warn);
 extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count);
@@ -67,6 +67,18 @@ struct fscrypt_name {
  */
 #define FS_CFLG_OWN_PAGES (1U << 1)

+/*
+ * If set, then fs/crypto/ will allow users to select a crypto data unit size
+ * that is less than the filesystem block size. This is done via the
+ * log2_data_unit_size field of the fscrypt policy. This flag is not compatible
+ * with filesystems that encrypt variable-length blocks (i.e. blocks that aren't
+ * all equal to filesystem's block size), for example as a result of
+ * compression. It's also not compatible with the
+ * fscrypt_encrypt_block_inplace() and fscrypt_decrypt_block_inplace()
+ * functions.
+ */
+#define FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS (1U << 2)
+
 /* Crypto operations for filesystems */
 struct fscrypt_operations {
@@ -9,10 +9,25 @@
 #define __SOC_CARD_H

 enum snd_soc_card_subclass {
-	SND_SOC_CARD_CLASS_INIT		= 0,
+	SND_SOC_CARD_CLASS_ROOT		= 0,
 	SND_SOC_CARD_CLASS_RUNTIME	= 1,
 };

+static inline void snd_soc_card_mutex_lock_root(struct snd_soc_card *card)
+{
+	mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_ROOT);
+}
+
+static inline void snd_soc_card_mutex_lock(struct snd_soc_card *card)
+{
+	mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
+}
+
+static inline void snd_soc_card_mutex_unlock(struct snd_soc_card *card)
+{
+	mutex_unlock(&card->mutex);
+}
+
 struct snd_kcontrol *snd_soc_card_get_kcontrol(struct snd_soc_card *soc_card,
 					       const char *name);
 int snd_soc_card_jack_new(struct snd_soc_card *card, const char *id, int type,
@@ -560,11 +560,6 @@ enum snd_soc_dapm_type {
 	SND_SOC_DAPM_TYPE_COUNT
 };

-enum snd_soc_dapm_subclass {
-	SND_SOC_DAPM_CLASS_INIT		= 0,
-	SND_SOC_DAPM_CLASS_RUNTIME	= 1,
-};
-
 /*
  * DAPM audio route definition.
  *
@@ -1384,17 +1384,112 @@ extern struct dentry *snd_soc_debugfs_root;

 extern const struct dev_pm_ops snd_soc_pm_ops;

-/* Helper functions */
-static inline void snd_soc_dapm_mutex_lock(struct snd_soc_dapm_context *dapm)
+/*
+ * DAPM helper functions
+ */
+enum snd_soc_dapm_subclass {
+	SND_SOC_DAPM_CLASS_ROOT		= 0,
+	SND_SOC_DAPM_CLASS_RUNTIME	= 1,
+};
+
+static inline void _snd_soc_dapm_mutex_lock_root_c(struct snd_soc_card *card)
 {
-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_ROOT);
 }

-static inline void snd_soc_dapm_mutex_unlock(struct snd_soc_dapm_context *dapm)
+static inline void _snd_soc_dapm_mutex_lock_c(struct snd_soc_card *card)
 {
-	mutex_unlock(&dapm->card->dapm_mutex);
+	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
 }

+static inline void _snd_soc_dapm_mutex_unlock_c(struct snd_soc_card *card)
+{
+	mutex_unlock(&card->dapm_mutex);
+}
+
+static inline void _snd_soc_dapm_mutex_assert_held_c(struct snd_soc_card *card)
+{
+	lockdep_assert_held(&card->dapm_mutex);
+}
+
+static inline void _snd_soc_dapm_mutex_lock_root_d(struct snd_soc_dapm_context *dapm)
+{
+	_snd_soc_dapm_mutex_lock_root_c(dapm->card);
+}
+
+static inline void _snd_soc_dapm_mutex_lock_d(struct snd_soc_dapm_context *dapm)
+{
+	_snd_soc_dapm_mutex_lock_c(dapm->card);
+}
+
+static inline void _snd_soc_dapm_mutex_unlock_d(struct snd_soc_dapm_context *dapm)
+{
+	_snd_soc_dapm_mutex_unlock_c(dapm->card);
+}
+
+static inline void _snd_soc_dapm_mutex_assert_held_d(struct snd_soc_dapm_context *dapm)
+{
+	_snd_soc_dapm_mutex_assert_held_c(dapm->card);
+}
+
+#define snd_soc_dapm_mutex_lock_root(x) _Generic((x),				\
+	struct snd_soc_card * :		_snd_soc_dapm_mutex_lock_root_c,	\
+	struct snd_soc_dapm_context * :	_snd_soc_dapm_mutex_lock_root_d)(x)
+#define snd_soc_dapm_mutex_lock(x) _Generic((x),				\
+	struct snd_soc_card * :		_snd_soc_dapm_mutex_lock_c,		\
+	struct snd_soc_dapm_context * :	_snd_soc_dapm_mutex_lock_d)(x)
+#define snd_soc_dapm_mutex_unlock(x) _Generic((x),				\
+	struct snd_soc_card * :		_snd_soc_dapm_mutex_unlock_c,		\
+	struct snd_soc_dapm_context * :	_snd_soc_dapm_mutex_unlock_d)(x)
+#define snd_soc_dapm_mutex_assert_held(x) _Generic((x),			\
+	struct snd_soc_card * :		_snd_soc_dapm_mutex_assert_held_c,	\
+	struct snd_soc_dapm_context * :	_snd_soc_dapm_mutex_assert_held_d)(x)
+
+/*
+ * PCM helper functions
+ */
+static inline void _snd_soc_dpcm_mutex_lock_c(struct snd_soc_card *card)
+{
+	mutex_lock_nested(&card->pcm_mutex, card->pcm_subclass);
+}
+
+static inline void _snd_soc_dpcm_mutex_unlock_c(struct snd_soc_card *card)
+{
+	mutex_unlock(&card->pcm_mutex);
+}
+
+static inline void _snd_soc_dpcm_mutex_assert_held_c(struct snd_soc_card *card)
+{
+	lockdep_assert_held(&card->pcm_mutex);
+}
+
+static inline void _snd_soc_dpcm_mutex_lock_r(struct snd_soc_pcm_runtime *rtd)
+{
+	_snd_soc_dpcm_mutex_lock_c(rtd->card);
+}
+
+static inline void _snd_soc_dpcm_mutex_unlock_r(struct snd_soc_pcm_runtime *rtd)
+{
+	_snd_soc_dpcm_mutex_unlock_c(rtd->card);
+}
+
+static inline void _snd_soc_dpcm_mutex_assert_held_r(struct snd_soc_pcm_runtime *rtd)
+{
+	_snd_soc_dpcm_mutex_assert_held_c(rtd->card);
+}
+
+#define snd_soc_dpcm_mutex_lock(x) _Generic((x),				\
+	struct snd_soc_card * :		_snd_soc_dpcm_mutex_lock_c,		\
+	struct snd_soc_pcm_runtime * :	_snd_soc_dpcm_mutex_lock_r)(x)
+
+#define snd_soc_dpcm_mutex_unlock(x) _Generic((x),				\
+	struct snd_soc_card * :		_snd_soc_dpcm_mutex_unlock_c,		\
+	struct snd_soc_pcm_runtime * :	_snd_soc_dpcm_mutex_unlock_r)(x)
+
+#define snd_soc_dpcm_mutex_assert_held(x) _Generic((x),			\
+	struct snd_soc_card * :		_snd_soc_dpcm_mutex_assert_held_c,	\
+	struct snd_soc_pcm_runtime * :	_snd_soc_dpcm_mutex_assert_held_r)(x)
+
 #include <sound/soc-component.h>
 #include <sound/soc-card.h>
 #include <sound/soc-jack.h>
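For illustration, the _Generic() wrappers above pick the helper from the pointer type, so the same macro works on a card, a DAPM context, or a PCM runtime without choosing a variant by hand. A minimal sketch (the card and rtd variables are assumed to come from the surrounding driver code):

static void example_locking(struct snd_soc_card *card,
			    struct snd_soc_pcm_runtime *rtd)
{
	/* resolves to _snd_soc_dpcm_mutex_lock_c(card) */
	snd_soc_dpcm_mutex_lock(card);
	snd_soc_dpcm_mutex_unlock(card);

	/* resolves to _snd_soc_dpcm_mutex_lock_r(rtd), i.e. locks rtd->card */
	snd_soc_dpcm_mutex_lock(rtd);
	snd_soc_dpcm_mutex_assert_held(rtd);
	snd_soc_dpcm_mutex_unlock(rtd);
}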
@@ -199,6 +199,9 @@ DECLARE_HOOK(android_vh_adjust_kvmalloc_flags,
 DECLARE_HOOK(android_vh_slab_folio_alloced,
 	TP_PROTO(unsigned int order, gfp_t flags),
 	TP_ARGS(order, flags));
+DECLARE_HOOK(android_vh_kmalloc_large_alloced,
+	TP_PROTO(struct page *page, unsigned int order, gfp_t flags),
+	TP_ARGS(page, order, flags));
 #endif /* _TRACE_HOOK_MM_H */

 /* This part must be outside protection */
@@ -25,6 +25,13 @@ DECLARE_RESTRICTED_HOOK(android_rvh_sk_alloc,
 DECLARE_RESTRICTED_HOOK(android_rvh_sk_free,
 	TP_PROTO(struct sock *sock), TP_ARGS(sock), 1);

+struct poll_table_struct;
+typedef struct poll_table_struct poll_table;
+DECLARE_HOOK(android_vh_netlink_poll,
+	TP_PROTO(struct file *file, struct socket *sock, poll_table *wait,
+		 __poll_t *mask),
+	TP_ARGS(file, sock, wait, mask));
+
 /* macro versions of hooks are no longer required */

 #endif /* _TRACE_HOOK_NET_VH_H */
@@ -76,6 +76,10 @@ DECLARE_RESTRICTED_HOOK(android_rvh_set_user_nice,
 	TP_PROTO(struct task_struct *p, long *nice, bool *allowed),
 	TP_ARGS(p, nice, allowed), 1);

+DECLARE_RESTRICTED_HOOK(android_rvh_set_user_nice_locked,
+	TP_PROTO(struct task_struct *p, long *nice),
+	TP_ARGS(p, nice), 1);
+
 DECLARE_RESTRICTED_HOOK(android_rvh_setscheduler,
 	TP_PROTO(struct task_struct *p),
 	TP_ARGS(p), 1);
@@ -71,7 +71,8 @@ struct fscrypt_policy_v2 {
 	__u8 contents_encryption_mode;
 	__u8 filenames_encryption_mode;
 	__u8 flags;
-	__u8 __reserved[4];
+	__u8 log2_data_unit_size;
+	__u8 __reserved[3];
 	__u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
 };
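With the new field, userspace can request crypto data units smaller than the filesystem block by setting log2_data_unit_size in the v2 policy. A hedged sketch of the ioctl call follows; the directory path and the key identifier bytes are assumptions for illustration, and the key is assumed to have been added beforehand with FS_IOC_ADD_ENCRYPTION_KEY.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

int main(void)
{
	struct fscrypt_policy_v2 policy = {
		.version = FSCRYPT_POLICY_V2,
		.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS,
		.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS,
		.flags = FSCRYPT_POLICY_FLAGS_PAD_32,
		/* 512-byte data units (SECTOR_SHIFT); 0 keeps the FS block size */
		.log2_data_unit_size = 9,
	};
	int fd;

	/* hypothetical identifier from an earlier FS_IOC_ADD_ENCRYPTION_KEY */
	memset(policy.master_key_identifier, 0xab, FSCRYPT_KEY_IDENTIFIER_SIZE);

	fd = open("/mnt/encrypted/dir", O_RDONLY);	/* path is an assumption */
	if (fd < 0 || ioctl(fd, FS_IOC_SET_ENCRYPTION_POLICY, &policy) != 0)
		perror("FS_IOC_SET_ENCRYPTION_POLICY");
	return 0;
}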
@@ -7238,6 +7238,10 @@ void set_user_nice(struct task_struct *p, long nice)
 	rq = task_rq_lock(p, &rf);
 	update_rq_clock(rq);

+	trace_android_rvh_set_user_nice_locked(p, &nice);
+	if (task_nice(p) == nice)
+		goto out_unlock;
+
 	/*
 	 * The RT priorities are set via sched_setscheduler(), but we still
 	 * allow the 'normal' nice value to be set - but as expected
@@ -25,6 +25,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_finish_prio_fork);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_rtmutex_force_update);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_rtmutex_prepare_setprio);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_user_nice);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_user_nice_locked);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_setscheduler);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_find_busiest_group);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_dump_throttled_rt_tasks);

mm/cma.c
@@ -416,17 +416,18 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
 #endif

 /**
- * cma_alloc() - allocate pages from contiguous area
+ * __cma_alloc() - allocate pages from contiguous area
  * @cma:   Contiguous memory region for which the allocation is performed.
  * @count: Requested number of pages.
  * @align: Requested alignment of pages (in PAGE_SIZE order).
- * @no_warn: Avoid printing message about failed allocation
+ * @gfp_mask: GFP mask to use during the cma allocation.
  *
- * This function allocates part of contiguous memory on specific
- * contiguous memory area.
+ * This function is same with cma_alloc but supports gfp_mask.
+ * Currently, the gfp_mask supports only __GFP_NOWARN and __GFP_NORETRY.
+ * If user passes other flags, it fails the allocation.
  */
-struct page *cma_alloc(struct cma *cma, unsigned long count,
-		       unsigned int align, bool no_warn)
+struct page *__cma_alloc(struct cma *cma, unsigned long count,
+			 unsigned int align, gfp_t gfp_mask)
 {
 	unsigned long mask, offset;
 	unsigned long pfn = -1;

@@ -438,6 +439,10 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	int num_attempts = 0;
 	int max_retries = 5;

+	if (WARN_ON_ONCE((gfp_mask & GFP_KERNEL) == 0 ||
+			 (gfp_mask & ~(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY)) != 0))
+		goto out;
+
 	if (!cma || !cma->count || !cma->bitmap)
 		goto out;

@@ -466,7 +471,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 		if ((num_attempts < max_retries) && (ret == -EBUSY)) {
 			spin_unlock_irq(&cma->lock);

-			if (fatal_signal_pending(current))
+			if (fatal_signal_pending(current) ||
+			    (gfp_mask & __GFP_NORETRY))
 				break;

 			/*

@@ -496,8 +502,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,

 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
 		mutex_lock(&cma_mutex);
-		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
-				     GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
+		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, gfp_mask);
 		mutex_unlock(&cma_mutex);
 		if (ret == 0) {
 			page = pfn_to_page(pfn);

@@ -529,7 +534,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 			page_kasan_tag_reset(page + i);
 	}

-	if (ret && !no_warn) {
+	if (ret && !(gfp_mask & __GFP_NOWARN)) {
 		pr_err_ratelimited("%s: %s: alloc failed, req-size: %lu pages, ret: %d\n",
 				   __func__, cma->name, count, ret);
 		cma_debug_show_areas(cma);

@@ -548,6 +553,24 @@ out:

 	return page;
 }
+EXPORT_SYMBOL_GPL(__cma_alloc);
+
+/**
+ * cma_alloc() - allocate pages from contiguous area
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @count: Requested number of pages.
+ * @align: Requested alignment of pages (in PAGE_SIZE order).
+ * @no_warn: Avoid printing message about failed allocation
+ *
+ * This function allocates part of contiguous memory on specific
+ * contiguous memory area.
+ */
+struct page *cma_alloc(struct cma *cma, unsigned long count,
+		       unsigned int align, bool no_warn)
+{
+	return __cma_alloc(cma, count, align, GFP_KERNEL |
+				(no_warn ? __GFP_NOWARN : 0));
+}
 EXPORT_SYMBOL_GPL(cma_alloc);

 bool cma_pages_valid(struct cma *cma, const struct page *pages,
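A minimal kernel-side sketch of the new entry point, assuming a driver already holds a struct cma pointer (for example from an earlier cma_declare_contiguous() or dev_get_cma_area()); the gfp mask is restricted to GFP_KERNEL plus __GFP_NOWARN/__GFP_NORETRY, as enforced above. The helper name below is hypothetical.

#include <linux/cma.h>
#include <linux/gfp.h>

/* Hypothetical helper: attempt an opportunistic CMA allocation that bails
 * out early on contention and stays quiet on failure, leaving the caller
 * to fall back to its slow path.
 */
static struct page *try_fast_cma_alloc(struct cma *cma, unsigned long count)
{
	/* __GFP_NORETRY: give up instead of retrying busy pageblocks;
	 * __GFP_NOWARN: skip the ratelimited failure message.
	 */
	return __cma_alloc(cma, count, 0 /* order-0 alignment */,
			   GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
}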
@@ -9336,12 +9336,16 @@ int __alloc_contig_migrate_range(struct compact_control *cc,
 	unsigned int nr_reclaimed;
 	unsigned long pfn = start;
 	unsigned int tries = 0;
+	unsigned int max_tries = 5;
 	int ret = 0;
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};

+	if (cc->gfp_mask & __GFP_NORETRY)
+		max_tries = 1;
+
 	lru_cache_disable();

 	while (pfn < end || !list_empty(&cc->migratepages)) {

@@ -9357,7 +9361,7 @@ int __alloc_contig_migrate_range(struct compact_control *cc,
 				break;
 			pfn = cc->migrate_pfn;
 			tries = 0;
-		} else if (++tries == 5) {
+		} else if (++tries == max_tries) {
 			ret = -EBUSY;
 			break;
 		}

@@ -9428,7 +9432,11 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		.nr_migratepages = 0,
 		.order = -1,
 		.zone = page_zone(pfn_to_page(start)),
-		.mode = MIGRATE_SYNC,
+		/*
+		 * Use MIGRATE_ASYNC for __GFP_NORETRY requests as it never
+		 * blocks.
+		 */
+		.mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
 		.ignore_skip_hint = true,
 		.no_set_skip_hint = true,
 		.gfp_mask = current_gfp_context(gfp_mask),

@@ -9474,7 +9482,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	 * -EBUSY is not accidentally used or returned to caller.
 	 */
 	ret = __alloc_contig_migrate_range(&cc, start, end);
-	if (ret && ret != -EBUSY)
+	if (ret && (ret != -EBUSY || (gfp_mask & __GFP_NORETRY)))
 		goto done;
 	ret = 0;
@@ -1102,6 +1102,8 @@ static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 				      PAGE_SIZE << order);
 	}

+	trace_android_vh_kmalloc_large_alloced(page, order, flags);
+
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);

mm/swap.c
@@ -933,6 +933,7 @@ void lru_add_drain_all(void)
 #endif /* CONFIG_SMP */

 atomic_t lru_disable_count = ATOMIC_INIT(0);
+EXPORT_SYMBOL_GPL(lru_disable_count);

 /*
  * lru_cache_disable() needs to be called before we start compiling

@@ -944,7 +945,12 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
  */
 void lru_cache_disable(void)
 {
-	atomic_inc(&lru_disable_count);
+	/*
+	 * If someone is already disabled lru_cache, just return with
+	 * increasing the lru_disable_count.
+	 */
+	if (atomic_inc_not_zero(&lru_disable_count))
+		return;
 	/*
 	 * Readers of lru_disable_count are protected by either disabling
 	 * preemption or rcu_read_lock:

@@ -964,7 +970,9 @@ void lru_cache_disable(void)
 #else
 	lru_add_and_bh_lrus_drain();
 #endif
+	atomic_inc(&lru_disable_count);
 }
+EXPORT_SYMBOL_GPL(lru_cache_disable);

 /**
  * release_pages - batched put_page()
@@ -75,6 +75,7 @@

 EXPORT_TRACEPOINT_SYMBOL_GPL(mm_vmscan_direct_reclaim_begin);
 EXPORT_TRACEPOINT_SYMBOL_GPL(mm_vmscan_direct_reclaim_end);
+EXPORT_TRACEPOINT_SYMBOL_GPL(mm_vmscan_kswapd_wake);

 struct scan_control {
 	/* How many pages shrink_list() should reclaim */
@@ -6194,6 +6194,12 @@ static int nft_setelem_deactivate(const struct net *net,
 	return ret;
 }

+static void nft_setelem_catchall_destroy(struct nft_set_elem_catchall *catchall)
+{
+	list_del_rcu(&catchall->list);
+	kfree_rcu(catchall, rcu);
+}
+
 static void nft_setelem_catchall_remove(const struct net *net,
 					const struct nft_set *set,
 					const struct nft_set_elem *elem)

@@ -6202,8 +6208,7 @@ static void nft_setelem_catchall_remove(const struct net *net,

 	list_for_each_entry_safe(catchall, next, &set->catchall_list, list) {
 		if (catchall->elem == elem->priv) {
-			list_del_rcu(&catchall->list);
-			kfree_rcu(catchall, rcu);
+			nft_setelem_catchall_destroy(catchall);
 			break;
 		}
 	}

@@ -9270,11 +9275,12 @@ static struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
 						  unsigned int gc_seq,
 						  bool sync)
 {
-	struct nft_set_elem_catchall *catchall;
+	struct nft_set_elem_catchall *catchall, *next;
 	const struct nft_set *set = gc->set;
+	struct nft_elem_priv *elem_priv;
 	struct nft_set_ext *ext;

-	list_for_each_entry_rcu(catchall, &set->catchall_list, list) {
+	list_for_each_entry_safe(catchall, next, &set->catchall_list, list) {
 		ext = nft_set_elem_ext(set, catchall->elem);

 		if (!nft_set_elem_expired(ext))

@@ -9292,7 +9298,17 @@ dead_elem:
 		if (!gc)
 			return NULL;

-		nft_trans_gc_elem_add(gc, catchall->elem);
+		elem_priv = catchall->elem;
+		if (sync) {
+			struct nft_set_elem elem = {
+				.priv = elem_priv,
+			};
+
+			nft_setelem_data_deactivate(gc->net, gc->set, &elem);
+			nft_setelem_catchall_destroy(catchall);
+		}
+
+		nft_trans_gc_elem_add(gc, elem_priv);
 	}

 	return gc;
@@ -71,7 +71,8 @@
 #include <net/netlink.h>
 #define CREATE_TRACE_POINTS
 #include <trace/events/netlink.h>
+#undef CREATE_TRACE_POINTS
+#include <trace/hooks/net.h>
 #include "af_netlink.h"

 struct listeners {

@@ -1966,6 +1967,15 @@ out:
 	return err ? : copied;
 }

+static __poll_t netlink_poll(struct file *file, struct socket *sock,
+			     poll_table *wait)
+{
+	__poll_t mask = datagram_poll(file, sock, wait);
+
+	trace_android_vh_netlink_poll(file, sock, wait, &mask);
+	return mask;
+}
+
 static void netlink_data_ready(struct sock *sk)
 {
 	BUG();

@@ -2766,7 +2776,7 @@ static const struct proto_ops netlink_ops = {
 	.socketpair =	sock_no_socketpair,
 	.accept =	sock_no_accept,
 	.getname =	netlink_getname,
-	.poll =		datagram_poll,
+	.poll =		netlink_poll,
 	.ioctl =	netlink_ioctl,
 	.listen =	sock_no_listen,
 	.shutdown =	sock_no_shutdown,
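Vendor hooks declared with DECLARE_HOOK are tracepoints, so a vendor module would normally attach through the generated register function. A hedged sketch follows; the probe body and module are assumptions for illustration, not part of this change, and the register function name is the one conventionally generated for tracepoints.

#include <linux/module.h>
#include <trace/hooks/net.h>

/* Probe signature: the first argument is the per-registration cookie, the
 * remaining arguments mirror the TP_PROTO() of android_vh_netlink_poll.
 */
static void example_netlink_poll(void *data, struct file *file,
				 struct socket *sock, poll_table *wait,
				 __poll_t *mask)
{
	/* A vendor module could inspect or adjust *mask here. */
}

static int __init example_init(void)
{
	return register_trace_android_vh_netlink_poll(example_netlink_poll, NULL);
}
module_init(example_init);
MODULE_LICENSE("GPL");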
@@ -530,7 +530,7 @@ int snd_soc_component_compr_get_caps(struct snd_compr_stream *cstream,
 	struct snd_soc_component *component;
 	int i, ret = 0;

-	mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
+	snd_soc_dpcm_mutex_lock(rtd);

 	for_each_rtd_components(rtd, i, component) {
 		if (component->driver->compress_ops &&

@@ -541,7 +541,7 @@ int snd_soc_component_compr_get_caps(struct snd_compr_stream *cstream,
 		}
 	}

-	mutex_unlock(&rtd->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(rtd);

 	return soc_component_ret(component, ret);
 }

@@ -554,7 +554,7 @@ int snd_soc_component_compr_get_codec_caps(struct snd_compr_stream *cstream,
 	struct snd_soc_component *component;
 	int i, ret = 0;

-	mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
+	snd_soc_dpcm_mutex_lock(rtd);

 	for_each_rtd_components(rtd, i, component) {
 		if (component->driver->compress_ops &&

@@ -565,7 +565,7 @@ int snd_soc_component_compr_get_codec_caps(struct snd_compr_stream *cstream,
 		}
 	}

-	mutex_unlock(&rtd->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(rtd);

 	return soc_component_ret(component, ret);
 }

@@ -618,7 +618,7 @@ int snd_soc_component_compr_copy(struct snd_compr_stream *cstream,
 	struct snd_soc_component *component;
 	int i, ret = 0;

-	mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
+	snd_soc_dpcm_mutex_lock(rtd);

 	for_each_rtd_components(rtd, i, component) {
 		if (component->driver->compress_ops &&

@@ -629,7 +629,7 @@ int snd_soc_component_compr_copy(struct snd_compr_stream *cstream,
 		}
 	}

-	mutex_unlock(&rtd->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(rtd);

 	return soc_component_ret(component, ret);
 }
@ -62,7 +62,7 @@ static int soc_compr_clean(struct snd_compr_stream *cstream, int rollback)
|
||||||
struct snd_soc_dai *codec_dai = asoc_rtd_to_codec(rtd, 0);
|
struct snd_soc_dai *codec_dai = asoc_rtd_to_codec(rtd, 0);
|
||||||
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
||||||
|
|
||||||
mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
|
snd_soc_dpcm_mutex_lock(rtd);
|
||||||
|
|
||||||
if (!rollback)
|
if (!rollback)
|
||||||
snd_soc_runtime_deactivate(rtd, stream);
|
snd_soc_runtime_deactivate(rtd, stream);
|
||||||
|
|
@ -84,7 +84,7 @@ static int soc_compr_clean(struct snd_compr_stream *cstream, int rollback)
|
||||||
if (!rollback)
|
if (!rollback)
|
||||||
snd_soc_dapm_stream_stop(rtd, stream);
|
snd_soc_dapm_stream_stop(rtd, stream);
|
||||||
|
|
||||||
mutex_unlock(&rtd->card->pcm_mutex);
|
snd_soc_dpcm_mutex_unlock(rtd);
|
||||||
|
|
||||||
snd_soc_pcm_component_pm_runtime_put(rtd, cstream, rollback);
|
snd_soc_pcm_component_pm_runtime_put(rtd, cstream, rollback);
|
||||||
|
|
||||||
|
|
@ -107,7 +107,7 @@ static int soc_compr_open(struct snd_compr_stream *cstream)
|
||||||
if (ret < 0)
|
if (ret < 0)
|
||||||
goto err_no_lock;
|
goto err_no_lock;
|
||||||
|
|
||||||
mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
|
snd_soc_dpcm_mutex_lock(rtd);
|
||||||
|
|
||||||
ret = snd_soc_dai_compr_startup(cpu_dai, cstream);
|
ret = snd_soc_dai_compr_startup(cpu_dai, cstream);
|
||||||
if (ret < 0)
|
if (ret < 0)
|
||||||
|
|
@ -123,7 +123,7 @@ static int soc_compr_open(struct snd_compr_stream *cstream)
|
||||||
|
|
||||||
snd_soc_runtime_activate(rtd, stream);
|
snd_soc_runtime_activate(rtd, stream);
|
||||||
err:
|
err:
|
||||||
mutex_unlock(&rtd->card->pcm_mutex);
|
snd_soc_dpcm_mutex_unlock(rtd);
|
||||||
err_no_lock:
|
err_no_lock:
|
||||||
if (ret < 0)
|
if (ret < 0)
|
||||||
soc_compr_clean(cstream, 1);
|
soc_compr_clean(cstream, 1);
|
||||||
|
|
@ -142,14 +142,14 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
|
||||||
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
|
snd_soc_card_mutex_lock(fe->card);
|
||||||
fe->dpcm[stream].runtime = fe_substream->runtime;
|
fe->dpcm[stream].runtime = fe_substream->runtime;
|
||||||
|
|
||||||
ret = dpcm_path_get(fe, stream, &list);
|
ret = dpcm_path_get(fe, stream, &list);
|
||||||
if (ret < 0)
|
if (ret < 0)
|
||||||
goto be_err;
|
goto be_err;
|
||||||
|
|
||||||
mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
|
snd_soc_dpcm_mutex_lock(fe);
|
||||||
|
|
||||||
/* calculate valid and active FE <-> BE dpcms */
|
/* calculate valid and active FE <-> BE dpcms */
|
||||||
dpcm_process_paths(fe, stream, &list, 1);
|
dpcm_process_paths(fe, stream, &list, 1);
|
||||||
|
|
@ -187,9 +187,9 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
|
||||||
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
|
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
|
||||||
|
|
||||||
snd_soc_runtime_activate(fe, stream);
|
snd_soc_runtime_activate(fe, stream);
|
||||||
mutex_unlock(&fe->card->pcm_mutex);
|
snd_soc_dpcm_mutex_unlock(fe);
|
||||||
|
|
||||||
mutex_unlock(&fe->card->mutex);
|
snd_soc_card_mutex_unlock(fe->card);
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
|
|
@ -199,9 +199,10 @@ open_err:
|
||||||
snd_soc_dai_compr_shutdown(cpu_dai, cstream, 1);
|
snd_soc_dai_compr_shutdown(cpu_dai, cstream, 1);
|
||||||
out:
|
out:
|
||||||
dpcm_path_put(&list);
|
dpcm_path_put(&list);
|
||||||
|
snd_soc_dpcm_mutex_unlock(fe);
|
||||||
be_err:
|
be_err:
|
||||||
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
|
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
|
||||||
mutex_unlock(&fe->card->mutex);
|
snd_soc_card_mutex_unlock(fe->card);
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -212,9 +213,9 @@ static int soc_compr_free_fe(struct snd_compr_stream *cstream)
|
||||||
struct snd_soc_dpcm *dpcm;
|
struct snd_soc_dpcm *dpcm;
|
||||||
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
||||||
|
|
||||||
mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
|
snd_soc_card_mutex_lock(fe->card);
|
||||||
|
|
||||||
mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
|
snd_soc_dpcm_mutex_lock(fe);
|
||||||
snd_soc_runtime_deactivate(fe, stream);
|
snd_soc_runtime_deactivate(fe, stream);
|
||||||
|
|
||||||
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
|
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
|
||||||
|
|
@ -234,7 +235,7 @@ static int soc_compr_free_fe(struct snd_compr_stream *cstream)
|
||||||
|
|
||||||
dpcm_be_disconnect(fe, stream);
|
dpcm_be_disconnect(fe, stream);
|
||||||
|
|
||||||
mutex_unlock(&fe->card->pcm_mutex);
|
snd_soc_dpcm_mutex_unlock(fe);
|
||||||
|
|
||||||
fe->dpcm[stream].runtime = NULL;
|
fe->dpcm[stream].runtime = NULL;
|
||||||
|
|
||||||
|
|
@ -244,7 +245,7 @@ static int soc_compr_free_fe(struct snd_compr_stream *cstream)
|
||||||
|
|
||||||
snd_soc_dai_compr_shutdown(cpu_dai, cstream, 0);
|
snd_soc_dai_compr_shutdown(cpu_dai, cstream, 0);
|
||||||
|
|
||||||
mutex_unlock(&fe->card->mutex);
|
snd_soc_card_mutex_unlock(fe->card);
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -256,7 +257,7 @@ static int soc_compr_trigger(struct snd_compr_stream *cstream, int cmd)
|
||||||
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
|
snd_soc_dpcm_mutex_lock(rtd);
|
||||||
|
|
||||||
ret = snd_soc_component_compr_trigger(cstream, cmd);
|
ret = snd_soc_component_compr_trigger(cstream, cmd);
|
||||||
if (ret < 0)
|
if (ret < 0)
|
||||||
|
|
@ -276,7 +277,7 @@ static int soc_compr_trigger(struct snd_compr_stream *cstream, int cmd)
|
||||||
}
|
}
|
||||||
|
|
||||||
out:
|
out:
|
||||||
mutex_unlock(&rtd->card->pcm_mutex);
|
snd_soc_dpcm_mutex_unlock(rtd);
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -291,7 +292,7 @@ static int soc_compr_trigger_fe(struct snd_compr_stream *cstream, int cmd)
|
||||||
cmd == SND_COMPR_TRIGGER_DRAIN)
|
cmd == SND_COMPR_TRIGGER_DRAIN)
|
||||||
return snd_soc_component_compr_trigger(cstream, cmd);
|
return snd_soc_component_compr_trigger(cstream, cmd);
|
||||||
|
|
||||||
mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
|
snd_soc_card_mutex_lock(fe->card);
|
||||||
|
|
||||||
ret = snd_soc_dai_compr_trigger(cpu_dai, cstream, cmd);
|
ret = snd_soc_dai_compr_trigger(cpu_dai, cstream, cmd);
|
||||||
if (ret < 0)
|
if (ret < 0)
|
||||||
|
|
@ -322,7 +323,7 @@ static int soc_compr_trigger_fe(struct snd_compr_stream *cstream, int cmd)
|
||||||
|
|
||||||
out:
|
out:
|
||||||
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
|
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
|
||||||
mutex_unlock(&fe->card->mutex);
|
snd_soc_card_mutex_unlock(fe->card);
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -334,7 +335,7 @@ static int soc_compr_set_params(struct snd_compr_stream *cstream,
|
||||||
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
|
||||||
int ret;
|
int ret;
|
||||||
|
|
||||||
mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
|
snd_soc_dpcm_mutex_lock(rtd);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* First we call set_params for the CPU DAI, then the component
|
 	 * First we call set_params for the CPU DAI, then the component
@@ -359,14 +360,14 @@ static int soc_compr_set_params(struct snd_compr_stream *cstream,

 	/* cancel any delayed stream shutdown that is pending */
 	rtd->pop_wait = 0;
-	mutex_unlock(&rtd->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(rtd);

 	cancel_delayed_work_sync(&rtd->delayed_work);

 	return 0;

 err:
-	mutex_unlock(&rtd->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(rtd);
 	return ret;
 }

@@ -380,7 +381,7 @@ static int soc_compr_set_params_fe(struct snd_compr_stream *cstream,
 	int stream = cstream->direction; /* SND_COMPRESS_xxx is same as SNDRV_PCM_STREAM_xxx */
 	int ret;

-	mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
+	snd_soc_card_mutex_lock(fe->card);

 	/*
 	 * Create an empty hw_params for the BE as the machine driver must
@@ -411,14 +412,14 @@ static int soc_compr_set_params_fe(struct snd_compr_stream *cstream,
 	ret = snd_soc_link_compr_set_params(cstream);
 	if (ret < 0)
 		goto out;
-	mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
+	snd_soc_dpcm_mutex_lock(fe);
 	dpcm_dapm_stream_event(fe, stream, SND_SOC_DAPM_STREAM_START);
-	mutex_unlock(&fe->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(fe);
 	fe->dpcm[stream].state = SND_SOC_DPCM_STATE_PREPARE;

 out:
 	fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
-	mutex_unlock(&fe->card->mutex);
+	snd_soc_card_mutex_unlock(fe->card);
 	return ret;
 }

@@ -429,7 +430,7 @@ static int soc_compr_get_params(struct snd_compr_stream *cstream,
 	struct snd_soc_dai *cpu_dai = asoc_rtd_to_cpu(rtd, 0);
 	int ret = 0;

-	mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
+	snd_soc_dpcm_mutex_lock(rtd);

 	ret = snd_soc_dai_compr_get_params(cpu_dai, cstream, params);
 	if (ret < 0)
@@ -437,7 +438,7 @@ static int soc_compr_get_params(struct snd_compr_stream *cstream,

 	ret = snd_soc_component_compr_get_params(cstream, params);
 err:
-	mutex_unlock(&rtd->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(rtd);
 	return ret;
 }

@@ -447,7 +448,7 @@ static int soc_compr_ack(struct snd_compr_stream *cstream, size_t bytes)
 	struct snd_soc_dai *cpu_dai = asoc_rtd_to_cpu(rtd, 0);
 	int ret;

-	mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
+	snd_soc_dpcm_mutex_lock(rtd);

 	ret = snd_soc_dai_compr_ack(cpu_dai, cstream, bytes);
 	if (ret < 0)
@@ -455,7 +456,7 @@ static int soc_compr_ack(struct snd_compr_stream *cstream, size_t bytes)

 	ret = snd_soc_component_compr_ack(cstream, bytes);
 err:
-	mutex_unlock(&rtd->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(rtd);
 	return ret;
 }

@@ -466,7 +467,7 @@ static int soc_compr_pointer(struct snd_compr_stream *cstream,
 	int ret;
 	struct snd_soc_dai *cpu_dai = asoc_rtd_to_cpu(rtd, 0);

-	mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
+	snd_soc_dpcm_mutex_lock(rtd);

 	ret = snd_soc_dai_compr_pointer(cpu_dai, cstream, tstamp);
 	if (ret < 0)
@@ -474,7 +475,7 @@ static int soc_compr_pointer(struct snd_compr_stream *cstream,

 	ret = snd_soc_component_compr_pointer(cstream, tstamp);
 out:
-	mutex_unlock(&rtd->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(rtd);
 	return ret;
 }

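Note: the compress hunks above swap open-coded pcm_mutex locking for snd_soc_dpcm_mutex_lock()/snd_soc_dpcm_mutex_unlock(). The shared header that now provides these wrappers is not part of this diff; as a rough sketch, they are expected to mirror the file-local helpers that are removed from the PCM code further down:

static inline void snd_soc_dpcm_mutex_lock(struct snd_soc_pcm_runtime *rtd)
{
	/* takes the card-wide pcm_mutex using the card's lockdep subclass */
	mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
}

static inline void snd_soc_dpcm_mutex_unlock(struct snd_soc_pcm_runtime *rtd)
{
	mutex_unlock(&rtd->card->pcm_mutex);
}

Callers later in this diff also pass a struct snd_soc_card, so the real definitions are presumably more general than this runtime-only sketch.
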
@@ -348,7 +348,7 @@ void snd_soc_close_delayed_work(struct snd_soc_pcm_runtime *rtd)
 	struct snd_soc_dai *codec_dai = asoc_rtd_to_codec(rtd, 0);
 	int playback = SNDRV_PCM_STREAM_PLAYBACK;

-	mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
+	snd_soc_dpcm_mutex_lock(rtd);

 	dev_dbg(rtd->dev,
 		"ASoC: pop wq checking: %s status: %s waiting: %s\n",
@@ -364,7 +364,7 @@ void snd_soc_close_delayed_work(struct snd_soc_pcm_runtime *rtd)
 					  SND_SOC_DAPM_STREAM_STOP);
 	}

-	mutex_unlock(&rtd->card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(rtd);
 }
 EXPORT_SYMBOL_GPL(snd_soc_close_delayed_work);

@@ -1936,7 +1936,7 @@ static int snd_soc_bind_card(struct snd_soc_card *card)
 	int ret, i;

 	mutex_lock(&client_mutex);
-	mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_INIT);
+	snd_soc_card_mutex_lock_root(card);

 	snd_soc_dapm_init(&card->dapm, card, NULL);

@@ -2093,7 +2093,7 @@ probe_end:
 	if (ret < 0)
 		soc_cleanup_card_resources(card);

-	mutex_unlock(&card->mutex);
+	snd_soc_card_mutex_unlock(card);
 	mutex_unlock(&client_mutex);

 	return ret;

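Note: snd_soc_bind_card() and its error path now go through snd_soc_card_mutex_lock_root()/snd_soc_card_mutex_unlock() instead of touching card->mutex directly. The wrappers themselves come from a header change that is not shown here; judging only by the calls they replace in this diff, a minimal sketch would be:

static inline void snd_soc_card_mutex_lock_root(struct snd_soc_card *card)
{
	/* init-time ("root") lock class, used while the card is being bound */
	mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_INIT);
}

static inline void snd_soc_card_mutex_lock(struct snd_soc_card *card)
{
	/* runtime lock class, used by e.g. the compress FE path above */
	mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
}

static inline void snd_soc_card_mutex_unlock(struct snd_soc_card *card)
{
	mutex_unlock(&card->mutex);
}

The in-tree helpers may differ in detail (lock-class naming, inline vs. macro); this is only meant to show which lock class each wrapper stands for.
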
@@ -150,7 +150,7 @@ static int dapm_down_seq[] = {
 static void dapm_assert_locked(struct snd_soc_dapm_context *dapm)
 {
 	if (dapm->card && dapm->card->instantiated)
-		lockdep_assert_held(&dapm->card->dapm_mutex);
+		snd_soc_dapm_mutex_assert_held(dapm);
 }

 static void pop_wait(u32 pop_time)
@@ -302,7 +302,7 @@ void dapm_mark_endpoints_dirty(struct snd_soc_card *card)
 {
 	struct snd_soc_dapm_widget *w;

-	mutex_lock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_lock_root(card);

 	for_each_card_widgets(card, w) {
 		if (w->is_ep) {
@@ -314,7 +314,7 @@ void dapm_mark_endpoints_dirty(struct snd_soc_card *card)
 		}
 	}

-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);
 }
 EXPORT_SYMBOL_GPL(dapm_mark_endpoints_dirty);

@@ -604,7 +604,7 @@ static void dapm_reset(struct snd_soc_card *card)
 {
 	struct snd_soc_dapm_widget *w;

-	lockdep_assert_held(&card->dapm_mutex);
+	snd_soc_dapm_mutex_assert_held(card);

 	memset(&card->dapm_stats, 0, sizeof(card->dapm_stats));

@@ -1310,7 +1310,7 @@ int snd_soc_dapm_dai_get_connected_widgets(struct snd_soc_dai *dai, int stream,
 	int paths;
 	int ret;

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(card);

 	if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
 		w = dai->playback_widget;
@@ -1332,7 +1332,7 @@ int snd_soc_dapm_dai_get_connected_widgets(struct snd_soc_dai *dai, int stream,
 	paths = ret;

 	trace_snd_soc_dapm_connected(paths, stream);
-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);

 	return paths;
 }
@@ -1968,7 +1968,7 @@ static int dapm_power_widgets(struct snd_soc_card *card, int event)
 	enum snd_soc_bias_level bias;
 	int ret;

-	lockdep_assert_held(&card->dapm_mutex);
+	snd_soc_dapm_mutex_assert_held(card);

 	trace_snd_soc_dapm_start(card);

@@ -2106,7 +2106,6 @@ static ssize_t dapm_widget_power_read_file(struct file *file,
 					   size_t count, loff_t *ppos)
 {
 	struct snd_soc_dapm_widget *w = file->private_data;
-	struct snd_soc_card *card = w->dapm->card;
 	enum snd_soc_dapm_direction dir, rdir;
 	char *buf;
 	int in, out;
@@ -2117,7 +2116,7 @@ static ssize_t dapm_widget_power_read_file(struct file *file,
 	if (!buf)
 		return -ENOMEM;

-	mutex_lock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_lock_root(w->dapm);

 	/* Supply widgets are not handled by is_connected_{input,output}_ep() */
 	if (w->is_supply) {
@@ -2161,7 +2160,7 @@ static ssize_t dapm_widget_power_read_file(struct file *file,
 		}
 	}

-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(w->dapm);

 	ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret);

@@ -2282,7 +2281,7 @@ static int soc_dapm_mux_update_power(struct snd_soc_card *card,
 	int found = 0;
 	bool connect;

-	lockdep_assert_held(&card->dapm_mutex);
+	snd_soc_dapm_mutex_assert_held(card);

 	/* find dapm widget path assoc with kcontrol */
 	dapm_kcontrol_for_each_path(path, kcontrol) {
@@ -2309,11 +2308,11 @@ int snd_soc_dapm_mux_update_power(struct snd_soc_dapm_context *dapm,
 	struct snd_soc_card *card = dapm->card;
 	int ret;

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(card);
 	card->update = update;
 	ret = soc_dapm_mux_update_power(card, kcontrol, mux, e);
 	card->update = NULL;
-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);
 	if (ret > 0)
 		snd_soc_dpcm_runtime_update(card);
 	return ret;
@@ -2328,7 +2327,7 @@ static int soc_dapm_mixer_update_power(struct snd_soc_card *card,
 	struct snd_soc_dapm_path *path;
 	int found = 0;

-	lockdep_assert_held(&card->dapm_mutex);
+	snd_soc_dapm_mutex_assert_held(card);

 	/* find dapm widget path assoc with kcontrol */
 	dapm_kcontrol_for_each_path(path, kcontrol) {
@@ -2374,11 +2373,11 @@ int snd_soc_dapm_mixer_update_power(struct snd_soc_dapm_context *dapm,
 	struct snd_soc_card *card = dapm->card;
 	int ret;

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(card);
 	card->update = update;
 	ret = soc_dapm_mixer_update_power(card, kcontrol, connect, -1);
 	card->update = NULL;
-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);
 	if (ret > 0)
 		snd_soc_dpcm_runtime_update(card);
 	return ret;
@@ -2457,7 +2456,7 @@ static ssize_t dapm_widget_show(struct device *dev,
 	struct snd_soc_dai *codec_dai;
 	int i, count = 0;

-	mutex_lock(&rtd->card->dapm_mutex);
+	snd_soc_dapm_mutex_lock_root(rtd->card);

 	for_each_rtd_codec_dais(rtd, i, codec_dai) {
 		struct snd_soc_component *cmpnt = codec_dai->component;
@@ -2465,7 +2464,7 @@ static ssize_t dapm_widget_show(struct device *dev,
 		count = dapm_widget_show_component(cmpnt, buf, count);
 	}

-	mutex_unlock(&rtd->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(rtd->card);

 	return count;
 }
@@ -2649,9 +2648,9 @@ int snd_soc_dapm_sync(struct snd_soc_dapm_context *dapm)
 {
 	int ret;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);
 	ret = snd_soc_dapm_sync_unlocked(dapm);
-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(snd_soc_dapm_sync);
@@ -2720,9 +2719,9 @@ int snd_soc_dapm_update_dai(struct snd_pcm_substream *substream,
 	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
 	int ret;

-	mutex_lock_nested(&rtd->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(rtd->card);
 	ret = dapm_update_dai_unlocked(substream, params, dai);
-	mutex_unlock(&rtd->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(rtd->card);

 	return ret;
 }
@@ -3112,7 +3111,7 @@ int snd_soc_dapm_add_routes(struct snd_soc_dapm_context *dapm,
 {
 	int i, ret = 0;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);
 	for (i = 0; i < num; i++) {
 		int r = snd_soc_dapm_add_route(dapm, route);
 		if (r < 0) {
@@ -3124,7 +3123,7 @@ int snd_soc_dapm_add_routes(struct snd_soc_dapm_context *dapm,
 		}
 		route++;
 	}
-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	return ret;
 }
@@ -3143,12 +3142,12 @@ int snd_soc_dapm_del_routes(struct snd_soc_dapm_context *dapm,
 {
 	int i;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);
 	for (i = 0; i < num; i++) {
 		snd_soc_dapm_del_route(dapm, route);
 		route++;
 	}
-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	return 0;
 }
@@ -3221,14 +3220,14 @@ int snd_soc_dapm_weak_routes(struct snd_soc_dapm_context *dapm,
 	int i;
 	int ret = 0;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_INIT);
+	snd_soc_dapm_mutex_lock_root(dapm);
 	for (i = 0; i < num; i++) {
 		int err = snd_soc_dapm_weak_route(dapm, route);
 		if (err)
 			ret = err;
 		route++;
 	}
-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	return ret;
 }
@@ -3247,7 +3246,7 @@ int snd_soc_dapm_new_widgets(struct snd_soc_card *card)
 	struct snd_soc_dapm_widget *w;
 	unsigned int val;

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_INIT);
+	snd_soc_dapm_mutex_lock_root(card);

 	for_each_card_widgets(card, w)
 	{
@@ -3259,7 +3258,7 @@ int snd_soc_dapm_new_widgets(struct snd_soc_card *card)
 						  sizeof(struct snd_kcontrol *),
 						  GFP_KERNEL);
 			if (!w->kcontrols) {
-				mutex_unlock(&card->dapm_mutex);
+				snd_soc_dapm_mutex_unlock(card);
 				return -ENOMEM;
 			}
 		}
@@ -3302,7 +3301,7 @@ int snd_soc_dapm_new_widgets(struct snd_soc_card *card)
 	}

 	dapm_power_widgets(card, SND_SOC_DAPM_STREAM_NOP);
-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(snd_soc_dapm_new_widgets);
@@ -3320,7 +3319,6 @@ int snd_soc_dapm_get_volsw(struct snd_kcontrol *kcontrol,
 			   struct snd_ctl_elem_value *ucontrol)
 {
 	struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol);
-	struct snd_soc_card *card = dapm->card;
 	struct soc_mixer_control *mc =
 		(struct soc_mixer_control *)kcontrol->private_value;
 	int reg = mc->reg;
@@ -3331,7 +3329,7 @@ int snd_soc_dapm_get_volsw(struct snd_kcontrol *kcontrol,
 	unsigned int invert = mc->invert;
 	unsigned int reg_val, val, rval = 0;

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);
 	if (dapm_kcontrol_is_powered(kcontrol) && reg != SND_SOC_NOPM) {
 		reg_val = soc_dapm_read(dapm, reg);
 		val = (reg_val >> shift) & mask;
@@ -3348,7 +3346,7 @@ int snd_soc_dapm_get_volsw(struct snd_kcontrol *kcontrol,
 		if (snd_soc_volsw_is_stereo(mc))
 			rval = (reg_val >> width) & mask;
 	}
-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	if (invert)
 		ucontrol->value.integer.value[0] = max - val;
@@ -3406,7 +3404,7 @@ int snd_soc_dapm_put_volsw(struct snd_kcontrol *kcontrol,
 		rval = max - rval;
 	}

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(card);

 	/* This assumes field width < (bits in unsigned int / 2) */
 	if (width > sizeof(unsigned int) * 8 / 2)
@@ -3448,7 +3446,7 @@ int snd_soc_dapm_put_volsw(struct snd_kcontrol *kcontrol,
 		card->update = NULL;
 	}

-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);

 	if (ret > 0)
 		snd_soc_dpcm_runtime_update(card);
@@ -3470,17 +3468,16 @@ int snd_soc_dapm_get_enum_double(struct snd_kcontrol *kcontrol,
 				 struct snd_ctl_elem_value *ucontrol)
 {
 	struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol);
-	struct snd_soc_card *card = dapm->card;
 	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
 	unsigned int reg_val, val;

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);
 	if (e->reg != SND_SOC_NOPM && dapm_kcontrol_is_powered(kcontrol)) {
 		reg_val = soc_dapm_read(dapm, e->reg);
 	} else {
 		reg_val = dapm_kcontrol_get_value(kcontrol);
 	}
-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	val = (reg_val >> e->shift_l) & e->mask;
 	ucontrol->value.enumerated.item[0] = snd_soc_enum_val_to_item(e, val);
@@ -3527,7 +3524,7 @@ int snd_soc_dapm_put_enum_double(struct snd_kcontrol *kcontrol,
 		mask |= e->mask << e->shift_r;
 	}

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(card);

 	change = dapm_kcontrol_set_value(kcontrol, val);

@@ -3548,7 +3545,7 @@ int snd_soc_dapm_put_enum_double(struct snd_kcontrol *kcontrol,
 		card->update = NULL;
 	}

-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);

 	if (ret > 0)
 		snd_soc_dpcm_runtime_update(card);
@@ -3589,12 +3586,12 @@ int snd_soc_dapm_get_pin_switch(struct snd_kcontrol *kcontrol,
 	struct snd_soc_card *card = snd_kcontrol_chip(kcontrol);
 	const char *pin = (const char *)kcontrol->private_value;

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(card);

 	ucontrol->value.integer.value[0] =
 		snd_soc_dapm_get_pin_status(&card->dapm, pin);

-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);

 	return 0;
 }
@@ -3613,10 +3610,10 @@ int snd_soc_dapm_put_pin_switch(struct snd_kcontrol *kcontrol,
 	const char *pin = (const char *)kcontrol->private_value;
 	int ret;

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(card);
 	ret = __snd_soc_dapm_set_pin(&card->dapm, pin,
 				     !!ucontrol->value.integer.value[0]);
-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);

 	snd_soc_dapm_sync(&card->dapm);
 	return ret;
@@ -3789,9 +3786,9 @@ snd_soc_dapm_new_control(struct snd_soc_dapm_context *dapm,
 {
 	struct snd_soc_dapm_widget *w;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);
 	w = snd_soc_dapm_new_control_unlocked(dapm, widget);
-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	return w;
 }
@@ -3814,7 +3811,7 @@ int snd_soc_dapm_new_controls(struct snd_soc_dapm_context *dapm,
 	int i;
 	int ret = 0;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_INIT);
+	snd_soc_dapm_mutex_lock_root(dapm);
 	for (i = 0; i < num; i++) {
 		struct snd_soc_dapm_widget *w = snd_soc_dapm_new_control_unlocked(dapm, widget);
 		if (IS_ERR(w)) {
@@ -3823,7 +3820,7 @@ int snd_soc_dapm_new_controls(struct snd_soc_dapm_context *dapm,
 		}
 		widget++;
 	}
-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(snd_soc_dapm_new_controls);
@@ -4505,9 +4502,9 @@ void snd_soc_dapm_stream_event(struct snd_soc_pcm_runtime *rtd, int stream,
 {
 	struct snd_soc_card *card = rtd->card;

-	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(card);
 	soc_dapm_stream_event(rtd, stream, event);
-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);
 }

 void snd_soc_dapm_stream_stop(struct snd_soc_pcm_runtime *rtd, int stream)
@@ -4568,11 +4565,11 @@ int snd_soc_dapm_enable_pin(struct snd_soc_dapm_context *dapm, const char *pin)
 {
 	int ret;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);

 	ret = snd_soc_dapm_set_pin(dapm, pin, 1);

-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	return ret;
 }
@@ -4636,11 +4633,11 @@ int snd_soc_dapm_force_enable_pin(struct snd_soc_dapm_context *dapm,
 {
 	int ret;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);

 	ret = snd_soc_dapm_force_enable_pin_unlocked(dapm, pin);

-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	return ret;
 }
@@ -4680,11 +4677,11 @@ int snd_soc_dapm_disable_pin(struct snd_soc_dapm_context *dapm,
 {
 	int ret;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);

 	ret = snd_soc_dapm_set_pin(dapm, pin, 0);

-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	return ret;
 }
@@ -4731,11 +4728,11 @@ int snd_soc_dapm_nc_pin(struct snd_soc_dapm_context *dapm, const char *pin)
 {
 	int ret;

-	mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+	snd_soc_dapm_mutex_lock(dapm);

 	ret = snd_soc_dapm_set_pin(dapm, pin, 0);

-	mutex_unlock(&dapm->card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(dapm);

 	return ret;
 }
@@ -4832,7 +4829,7 @@ static void soc_dapm_shutdown_dapm(struct snd_soc_dapm_context *dapm)
 	LIST_HEAD(down_list);
 	int powerdown = 0;

-	mutex_lock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_lock_root(card);

 	for_each_card_widgets(dapm->card, w) {
 		if (w->dapm != dapm)
@@ -4857,7 +4854,7 @@ static void soc_dapm_shutdown_dapm(struct snd_soc_dapm_context *dapm)
 						      SND_SOC_BIAS_STANDBY);
 	}

-	mutex_unlock(&card->dapm_mutex);
+	snd_soc_dapm_mutex_unlock(card);
 }

 /*

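Note: every DAPM hunk above follows the same pattern: raw dapm_mutex operations become snd_soc_dapm_mutex_lock(), snd_soc_dapm_mutex_lock_root(), snd_soc_dapm_mutex_unlock() and snd_soc_dapm_mutex_assert_held(). The converted callers pass either a struct snd_soc_card or a struct snd_soc_dapm_context, so the real wrappers (added by a header change that is not part of this diff) presumably resolve the card from either argument. A hypothetical card-only flavour, inferred purely from the calls being replaced:

/* Hypothetical helper names, used only to illustrate what the wrappers
 * expand to for a snd_soc_card argument; the in-tree versions also accept
 * a snd_soc_dapm_context and go through dapm->card.
 */
static inline void example_dapm_mutex_lock_root(struct snd_soc_card *card)
{
	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_INIT);
}

static inline void example_dapm_mutex_lock(struct snd_soc_card *card)
{
	mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
}

static inline void example_dapm_mutex_unlock(struct snd_soc_card *card)
{
	mutex_unlock(&card->dapm_mutex);
}

#define example_dapm_mutex_assert_held(card) \
	lockdep_assert_held(&(card)->dapm_mutex)
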
@@ -49,19 +49,6 @@ static inline int _soc_pcm_ret(struct snd_soc_pcm_runtime *rtd,
 	return ret;
 }

-static inline void snd_soc_dpcm_mutex_lock(struct snd_soc_pcm_runtime *rtd)
-{
-	mutex_lock_nested(&rtd->card->pcm_mutex, rtd->card->pcm_subclass);
-}
-
-static inline void snd_soc_dpcm_mutex_unlock(struct snd_soc_pcm_runtime *rtd)
-{
-	mutex_unlock(&rtd->card->pcm_mutex);
-}
-
-#define snd_soc_dpcm_mutex_assert_held(rtd) \
-	lockdep_assert_held(&(rtd)->card->pcm_mutex)
-
 static inline void snd_soc_dpcm_stream_lock_irq(struct snd_soc_pcm_runtime *rtd,
 						int stream)
 {
@@ -2652,7 +2639,7 @@ int snd_soc_dpcm_runtime_update(struct snd_soc_card *card)
 	struct snd_soc_pcm_runtime *fe;
 	int ret = 0;

-	mutex_lock_nested(&card->pcm_mutex, card->pcm_subclass);
+	snd_soc_dpcm_mutex_lock(card);
 	/* shutdown all old paths first */
 	for_each_card_rtds(card, fe) {
 		ret = soc_dpcm_fe_runtime_update(fe, 0);
@@ -2668,7 +2655,7 @@ int snd_soc_dpcm_runtime_update(struct snd_soc_card *card)
 	}

 out:
-	mutex_unlock(&card->pcm_mutex);
+	snd_soc_dpcm_mutex_unlock(card);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(snd_soc_dpcm_runtime_update);

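Note: with the hunk above, the PCM code stops carrying its own file-local pcm_mutex helpers; after this series they are expected to come from a shared ASoC header and, as snd_soc_dpcm_runtime_update() shows, to accept a card as well as a runtime. A hypothetical caller-side sketch, where my_rtd and my_card stand in for objects the caller already holds:

	/* per-runtime path, e.g. the compress ops converted earlier */
	snd_soc_dpcm_mutex_lock(my_rtd);
	/* ... update my_rtd->dpcm[stream] state ... */
	snd_soc_dpcm_mutex_unlock(my_rtd);

	/* card-wide path, as in snd_soc_dpcm_runtime_update() */
	snd_soc_dpcm_mutex_lock(my_card);
	/* ... walk for_each_card_rtds(my_card, fe) ... */
	snd_soc_dpcm_mutex_unlock(my_card);
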
@@ -2079,6 +2079,41 @@ out:
 	return result;
 }

+static int bpf_test_readahead(const char *mount_dir)
+{
+	const char *file_name = "file";
+
+	int result = TEST_FAILURE;
+	int file_fd = -1;
+	int src_fd = -1;
+	int fuse_dev = -1;
+
+	TEST(file_fd = s_creat(s_path(s(ft_src), s(file_name)), 0777),
+	     file_fd != -1);
+	TESTSYSCALL(fallocate(file_fd, 0, 0, 4096));
+	TESTSYSCALL(close(file_fd));
+	file_fd = -1;
+
+	TEST(src_fd = open(ft_src, O_DIRECTORY | O_RDONLY | O_CLOEXEC),
+	     src_fd != -1);
+	TEST(fuse_dev = open("/dev/fuse", O_RDWR | O_CLOEXEC), fuse_dev != -1);
+	TESTEQUAL(mount_fuse(mount_dir, -1, src_fd, &fuse_dev), 0);
+
+	TEST(file_fd = s_open(s_path(s(mount_dir), s(file_name)), O_RDONLY),
+	     file_fd != -1);
+	TESTSYSCALL(posix_fadvise(file_fd, 0, 4096, POSIX_FADV_WILLNEED));
+	usleep(1000);
+	TESTSYSCALL(close(file_fd));
+	file_fd = -1;
+	result = TEST_SUCCESS;
+out:
+	umount(mount_dir);
+	close(fuse_dev);
+	close(src_fd);
+	close(file_fd);
+	return result;
+}
+
 static void parse_range(const char *ranges, bool *run_test, size_t tests)
 {
 	size_t i;
@@ -2208,6 +2243,7 @@ int main(int argc, char *argv[])
 		MAKE_TEST(flock_test),
 		MAKE_TEST(bpf_test_create_and_remove_bpf),
 		MAKE_TEST(bpf_test_mkdir_and_remove_bpf),
+		MAKE_TEST(bpf_test_readahead),
 	};
 #undef MAKE_TEST

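Note: the new bpf_test_readahead selftest only exercises the readahead path: it creates a 4096-byte backing file, mounts a FUSE filesystem over it with the suite's mount_fuse() helper, opens the file and issues posix_fadvise(POSIX_FADV_WILLNEED) so the kernel kicks off readahead against the mount, then tears everything down. Outside the TEST() harness the core trigger is just the following sketch (fd is assumed to be an open read-only descriptor on the FUSE mount):

	/* posix_fadvise() returns the error number directly, not -1/errno */
	int err = posix_fadvise(fd, 0, 4096, POSIX_FADV_WILLNEED);
	if (err)
		fprintf(stderr, "fadvise failed: %s\n", strerror(err));
	usleep(1000);	/* give the asynchronous readahead a moment to run */
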