Merge 6.1.18 into android14-6.1
Changes in 6.1.18
	net/sched: Retire tcindex classifier
	auxdisplay: hd44780: Fix potential memory leak in hd44780_remove()
	fs/jfs: fix shift exponent db_agl2size negative
	driver: soc: xilinx: fix memory leak in xlnx_add_cb_for_notify_event()
	f2fs: don't rely on F2FS_MAP_* in f2fs_iomap_begin
	f2fs: fix to avoid potential deadlock
	objtool: Fix memory leak in create_static_call_sections()
	soc: mediatek: mtk-pm-domains: Allow mt8186 ADSP default power on
	memory: renesas-rpc-if: Split-off private data from struct rpcif
	memory: renesas-rpc-if: Move resource acquisition to .probe()
	soc: mediatek: mtk-svs: Enable the IRQ later
	pwm: sifive: Always let the first pwm_apply_state succeed
	pwm: stm32-lp: fix the check on arr and cmp registers update
	f2fs: introduce trace_f2fs_replace_atomic_write_block
	f2fs: correct i_size change for atomic writes
	f2fs: clear atomic_write_task in f2fs_abort_atomic_write()
	soc: mediatek: mtk-svs: restore default voltages when svs_init02() fail
	soc: mediatek: mtk-svs: reset svs when svs_resume() fail
	soc: mediatek: mtk-svs: Use pm_runtime_resume_and_get() in svs_init01()
	fs: f2fs: initialize fsdata in pagecache_write()
	f2fs: allow set compression option of files without blocks
	f2fs: fix to abort atomic write only during do_exit()
	um: vector: Fix memory leak in vector_config
	ubi: ensure that VID header offset + VID header size <= alloc_size
	ubifs: Fix build errors as symbol undefined
	ubifs: Fix memory leak in ubifs_sysfs_init()
	ubifs: Rectify space budget for ubifs_symlink() if symlink is encrypted
	ubifs: Rectify space budget for ubifs_xrename()
	ubifs: Fix wrong dirty space budget for dirty inode
	ubifs: do_rename: Fix wrong space budget when target inode's nlink > 1
	ubifs: Reserve one leb for each journal head while doing budget
	ubi: Fix use-after-free when volume resizing failed
	ubi: Fix unreferenced object reported by kmemleak in ubi_resize_volume()
	ubifs: Fix memory leak in alloc_wbufs()
	ubi: Fix possible null-ptr-deref in ubi_free_volume()
	ubifs: Re-statistic cleaned znode count if commit failed
	ubifs: dirty_cow_znode: Fix memleak in error handling path
	ubifs: ubifs_writepage: Mark page dirty after writing inode failed
	ubifs: ubifs_releasepage: Remove ubifs_assert(0) to valid this process
	ubi: fastmap: Fix missed fm_anchor PEB in wear-leveling after disabling fastmap
	ubi: Fix UAF wear-leveling entry in eraseblk_count_seq_show()
	ubi: ubi_wl_put_peb: Fix infinite loop when wear-leveling work failed
	f2fs: fix to avoid potential memory corruption in __update_iostat_latency()
	soc: qcom: stats: Populate all subsystem debugfs files
	ext4: use ext4_fc_tl_mem in fast-commit replay path
	ext4: don't show commit interval if it is zero
	netfilter: nf_tables: allow to fetch set elements when table has an owner
	x86: um: vdso: Add '%rcx' and '%r11' to the syscall clobber list
	um: virtio_uml: free command if adding to virtqueue failed
	um: virtio_uml: mark device as unregistered when breaking it
	um: virtio_uml: move device breaking into workqueue
	um: virt-pci: properly remove PCI device from bus
	f2fs: synchronize atomic write aborts
	watchdog: rzg2l_wdt: Issue a reset before we put the PM clocks
	watchdog: rzg2l_wdt: Handle TYPE-B reset for RZ/V2M
	watchdog: at91sam9_wdt: use devm_request_irq to avoid missing free_irq() in error path
	watchdog: Fix kmemleak in watchdog_cdev_register
	watchdog: pcwd_usb: Fix attempting to access uninitialized memory
	watchdog: sbsa_wdog: Make sure the timeout programming is within the limits
	netfilter: ctnetlink: fix possible refcount leak in ctnetlink_create_conntrack()
	netfilter: conntrack: fix rmmod double-free race
	netfilter: ip6t_rpfilter: Fix regression with VRF interfaces
	netfilter: ebtables: fix table blob use-after-free
	netfilter: xt_length: use skb len to match in length_mt6
	netfilter: ctnetlink: make event listener tracking global
	netfilter: x_tables: fix percpu counter block leak on error path when creating new netns
	ptp: vclock: use mutex to fix "sleep on atomic" bug
	drm/i915: move a Kconfig symbol to unbreak the menu presentation
	ipv6: Add lwtunnel encap size of all siblings in nexthop calculation
	octeontx2-pf: Recalculate UDP checksum for ptp 1-step sync packet
	net: sunhme: Fix region request
	sctp: add a refcnt in sctp_stream_priorities to avoid a nested loop
	octeontx2-pf: Use correct struct reference in test condition
	net: fix __dev_kfree_skb_any() vs drop monitor
	9p/xen: fix version parsing
	9p/xen: fix connection sequence
	9p/rdma: unmap receive dma buffer in rdma_request()/post_recv()
	spi: tegra210-quad: Fix validate combined sequence
	mlx5: fix skb leak while fifo resync and push
	mlx5: fix possible ptp queue fifo use-after-free
	net/mlx5: ECPF, wait for VF pages only after disabling host PFs
	net/mlx5e: Verify flow_source cap before using it
	net/mlx5: Geneve, Fix handling of Geneve object id as error code
	ext4: fix incorrect options show of original mount_opt and extend mount_opt2
	nfc: fix memory leak of se_io context in nfc_genl_se_io
	net/sched: transition act_pedit to rcu and percpu stats
	net/sched: act_pedit: fix action bind logic
	net/sched: act_mpls: fix action bind logic
	net/sched: act_sample: fix action bind logic
	net: dsa: seville: ignore mscc-miim read errors from Lynx PCS
	net: dsa: felix: fix internal MDIO controller resource length
	ARM: dts: spear320-hmi: correct STMPE GPIO compatible
	tcp: tcp_check_req() can be called from process context
	vc_screen: modify vcs_size() handling in vcs_read()
	spi: tegra210-quad: Fix iterator outside loop
	rtc: sun6i: Always export the internal oscillator
	genirq/ipi: Fix NULL pointer deref in irq_data_get_affinity_mask()
	scsi: ipr: Work around fortify-string warning
	scsi: mpi3mr: Fix an issue found by KASAN
	scsi: mpi3mr: Use number of bits to manage bitmap sizes
	rtc: allow rtc_read_alarm without read_alarm callback
	io_uring: fix size calculation when registering buf ring
	loop: loop_set_status_from_info() check before assignment
	ASoC: adau7118: don't disable regulators on device unbind
	ASoC: apple: mca: Fix final status read on SERDES reset
	ASoC: apple: mca: Fix SERDES reset sequence
	ASoC: apple: mca: Improve handling of unavailable DMA channels
	nvme: bring back auto-removal of deleted namespaces during sequential scan
	nvme-tcp: don't access released socket during error recovery
	nvme-fabrics: show well known discovery name
	ASoC: zl38060 add gpiolib dependency
	ASoC: mediatek: mt8195: add missing initialization
	thermal: intel: quark_dts: fix error pointer dereference
	thermal: intel: BXT_PMIC: select REGMAP instead of depending on it
	tracing: Add NULL checks for buffer in ring_buffer_free_read_page()
	kernel/printk/index.c: fix memory leak with using debugfs_lookup()
	firmware/efi sysfb_efi: Add quirk for Lenovo IdeaPad Duet 3
	bootconfig: Increase max nodes of bootconfig from 1024 to 8192 for DCC support
	mfd: arizona: Use pm_runtime_resume_and_get() to prevent refcnt leak
	IB/hfi1: Update RMT size calculation
	iommu/amd: Fix error handling for pdev_pri_ats_enable()
	PCI/ACPI: Account for _S0W of the target bridge in acpi_pci_bridge_d3()
	media: uvcvideo: Remove format descriptions
	media: uvcvideo: Handle cameras with invalid descriptors
	media: uvcvideo: Handle errors from calls to usb_string
	media: uvcvideo: Quirk for autosuspend in Logitech B910 and C910
	media: uvcvideo: Silence memcpy() run-time false positive warnings
	USB: fix memory leak with using debugfs_lookup()
	cacheinfo: Fix shared_cpu_map to handle shared caches at different levels
	staging: emxx_udc: Add checks for dma_alloc_coherent()
	tty: fix out-of-bounds access in tty_driver_lookup_tty()
	tty: serial: fsl_lpuart: disable the CTS when send break signal
	serial: sc16is7xx: setup GPIO controller later in probe
	mei: bus-fixup:upon error print return values of send and receive
	tools/iio/iio_utils:fix memory leak
	bus: mhi: ep: Fix the debug message for MHI_PKT_TYPE_RESET_CHAN_CMD cmd
	iio: accel: mma9551_core: Prevent uninitialized variable in mma9551_read_status_word()
	iio: accel: mma9551_core: Prevent uninitialized variable in mma9551_read_config_word()
	media: uvcvideo: Add GUID for BGRA/X 8:8:8:8
	soundwire: bus_type: Avoid lockdep assert in sdw_drv_probe()
	PCI: loongson: Prevent LS7A MRRS increases
	staging: pi433: fix memory leak with using debugfs_lookup()
	USB: dwc3: fix memory leak with using debugfs_lookup()
	USB: chipidea: fix memory leak with using debugfs_lookup()
	USB: ULPI: fix memory leak with using debugfs_lookup()
	USB: uhci: fix memory leak with using debugfs_lookup()
	USB: sl811: fix memory leak with using debugfs_lookup()
	USB: fotg210: fix memory leak with using debugfs_lookup()
	USB: isp116x: fix memory leak with using debugfs_lookup()
	USB: isp1362: fix memory leak with using debugfs_lookup()
	USB: gadget: gr_udc: fix memory leak with using debugfs_lookup()
	USB: gadget: bcm63xx_udc: fix memory leak with using debugfs_lookup()
	USB: gadget: lpc32xx_udc: fix memory leak with using debugfs_lookup()
	USB: gadget: pxa25x_udc: fix memory leak with using debugfs_lookup()
	USB: gadget: pxa27x_udc: fix memory leak with using debugfs_lookup()
	usb: host: xhci: mvebu: Iterate over array indexes instead of using pointer math
	USB: ene_usb6250: Allocate enough memory for full object
	usb: uvc: Enumerate valid values for color matching
	usb: gadget: uvc: Make bSourceID read/write
	PCI: Align extra resources for hotplug bridges properly
	PCI: Take other bus devices into account when distributing resources
	PCI: Distribute available resources for root buses, too
	tty: pcn_uart: fix memory leak with using debugfs_lookup()
	misc: vmw_balloon: fix memory leak with using debugfs_lookup()
	drivers: base: component: fix memory leak with using debugfs_lookup()
	drivers: base: dd: fix memory leak with using debugfs_lookup()
	kernel/fail_function: fix memory leak with using debugfs_lookup()
	PCI: loongson: Add more devices that need MRRS quirk
	PCI: Add ACS quirk for Wangxun NICs
	PCI: pciehp: Add Qualcomm quirk for Command Completed erratum
	phy: rockchip-typec: Fix unsigned comparison with less than zero
	RDMA/cma: Distinguish between sockaddr_in and sockaddr_in6 by size
	iommu: Attach device group to old domain in error path
	soundwire: cadence: Remove wasted space in response_buf
	soundwire: cadence: Drain the RX FIFO after an IO timeout
	net: tls: avoid hanging tasks on the tx_lock
	x86/resctl: fix scheduler confusion with 'current'
	vDPA/ifcvf: decouple hw features manipulators from the adapter
	vDPA/ifcvf: decouple config space ops from the adapter
	vDPA/ifcvf: alloc the mgmt_dev before the adapter
	vDPA/ifcvf: decouple vq IRQ releasers from the adapter
	vDPA/ifcvf: decouple config IRQ releaser from the adapter
	vDPA/ifcvf: decouple vq irq requester from the adapter
	vDPA/ifcvf: decouple config/dev IRQ requester and vectors allocator from the adapter
	vDPA/ifcvf: ifcvf_request_irq works on ifcvf_hw
	vDPA/ifcvf: manage ifcvf_hw in the mgmt_dev
	vDPA/ifcvf: allocate the adapter in dev_add()
	drm/display/dp_mst: Add drm_atomic_get_old_mst_topology_state()
	drm/display/dp_mst: Fix down/up message handling after sink disconnect
	drm/display/dp_mst: Fix down message handling after a packet reception error
	drm/display/dp_mst: Fix payload addition on a disconnected sink
	drm/i915/dp_mst: Add the MST topology state for modesetted CRTCs
	drm/i915: Fix system suspend without fbdev being initialized
	media: uvcvideo: Fix race condition with usb_kill_urb
	io_uring: fix two assignments in if conditions
	io_uring/poll: allow some retries for poll triggering spuriously
	arm64: efi: Make efi_rt_lock a raw_spinlock
	arm64: mte: Fix/clarify the PG_mte_tagged semantics
	arm64: Reset KASAN tag in copy_highpage with HW tags only
	usb: gadget: uvc: fix missing mutex_unlock() if kstrtou8() fails
	Linux 6.1.18

Change-Id: Icb8e56528d481a17780bdd517c69efa9e76b94c0
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
This commit is contained in: commit d956976040

207 changed files with 2141 additions and 2113 deletions
@@ -52,7 +52,7 @@ Date: Dec 2014
 KernelVersion:	4.0
 Description:	Default output terminal descriptors
 
-		All attributes read only:
+		All attributes read only except bSourceID:
 
 		==============	=============================================
 		iTerminal	index of string descriptor
 Makefile | 2 +-

--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 1
-SUBLEVEL = 17
+SUBLEVEL = 18
 EXTRAVERSION =
 NAME = Hurr durr I'ma ninja sloth
@@ -241,7 +241,7 @@
 			irq-trigger = <0x1>;
 
 			stmpegpio: stmpe-gpio {
-				compatible = "stmpe,gpio";
+				compatible = "st,stmpe-gpio";
 				reg = <0>;
 				gpio-controller;
 				#gpio-cells = <2>;
@@ -33,7 +33,7 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
 ({ \
	efi_virtmap_load(); \
	__efi_fpsimd_begin(); \
-	spin_lock(&efi_rt_lock); \
+	raw_spin_lock(&efi_rt_lock); \
 })
 
 #undef arch_efi_call_virt
@@ -42,12 +42,12 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
 
 #define arch_efi_call_virt_teardown() \
 ({ \
-	spin_unlock(&efi_rt_lock); \
+	raw_spin_unlock(&efi_rt_lock); \
	__efi_fpsimd_end(); \
	efi_virtmap_unload(); \
 })
 
-extern spinlock_t efi_rt_lock;
+extern raw_spinlock_t efi_rt_lock;
 extern u64 *efi_rt_stack_top;
 efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
@@ -37,6 +37,29 @@ void mte_free_tag_storage(char *storage);
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+	/*
+	 * Ensure that the tags written prior to this function are visible
+	 * before the page flags update.
+	 */
+	smp_wmb();
+	set_bit(PG_mte_tagged, &page->flags);
+}
+
+static inline bool page_mte_tagged(struct page *page)
+{
+	bool ret = test_bit(PG_mte_tagged, &page->flags);
+
+	/*
+	 * If the page is tagged, ensure ordering with a likely subsequent
+	 * read of the tags.
+	 */
+	if (ret)
+		smp_rmb();
+	return ret;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -56,6 +79,13 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size);
 /* unused if !CONFIG_ARM64_MTE, silence the compiler */
 #define PG_mte_tagged	0
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+}
+static inline bool page_mte_tagged(struct page *page)
+{
+	return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }
@@ -1050,7 +1050,7 @@ static inline void arch_swap_invalidate_area(int type)
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
 	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
-		set_bit(PG_mte_tagged, &folio->flags);
+		set_page_mte_tagged(&folio->page);
 }
 
 #endif /* CONFIG_ARM64_MTE */
@@ -2075,8 +2075,10 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
+	if (!page_mte_tagged(ZERO_PAGE(0))) {
 		mte_clear_page_tags(lm_alias(empty_zero_page));
+		set_page_mte_tagged(ZERO_PAGE(0));
+	}
 
 	kasan_init_hw_tags_cpu();
 }
@@ -146,7 +146,7 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
 	return s;
 }
 
-DEFINE_SPINLOCK(efi_rt_lock);
+DEFINE_RAW_SPINLOCK(efi_rt_lock);
 
 asmlinkage u64 *efi_rt_stack_top __ro_after_init;
@@ -46,7 +46,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm,
 		 * Pages mapped in user space as !pte_access_permitted() (e.g.
 		 * PROT_EXEC only) may not have the PG_mte_tagged flag set.
 		 */
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!page_mte_tagged(page)) {
			put_page(page);
			dump_skip(cprm, MTE_PAGE_TAG_STORAGE);
			continue;
@@ -271,7 +271,7 @@ static int swsusp_mte_save_tags(void)
		if (!page)
			continue;
 
-		if (!test_bit(PG_mte_tagged, &page->flags))
+		if (!page_mte_tagged(page))
			continue;
 
		ret = save_tags(page, pfn);
@@ -41,8 +41,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (check_swap && is_swap_pte(old_pte)) {
		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
+		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
+			set_page_mte_tagged(page);
			return;
+		}
 	}
 
 	if (!pte_is_tagged)
@@ -52,8 +54,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
	 * Test PG_mte_tagged again in case it was racing with another
	 * set_pte_at().
	 */
-	if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+	if (!page_mte_tagged(page)) {
		mte_clear_page_tags(page_address(page));
+		set_page_mte_tagged(page);
+	}
 }
 
 void mte_sync_tags(pte_t old_pte, pte_t pte)
@@ -69,9 +73,11 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
 
	/* if PG_mte_tagged is set, tags have already been initialised */
	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_bit(PG_mte_tagged, &page->flags))
+		if (!page_mte_tagged(page)) {
			mte_sync_page_tags(page, old_pte, check_swap,
					   pte_is_tagged);
+			set_page_mte_tagged(page);
+		}
	}
 
	/* ensure the tags are visible before the PTE is set */
@@ -96,8 +102,7 @@ int memcmp_pages(struct page *page1, struct page *page2)
	 * pages is tagged, set_pte_at() may zero or change the tags of the
	 * other page via mte_sync_tags().
	 */
-	if (test_bit(PG_mte_tagged, &page1->flags) ||
-	    test_bit(PG_mte_tagged, &page2->flags))
+	if (page_mte_tagged(page1) || page_mte_tagged(page2))
		return addr1 != addr2;
 
	return ret;
@@ -454,7 +459,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
			put_page(page);
			break;
		}
-		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
+		WARN_ON_ONCE(!page_mte_tagged(page));
 
		/* limit access to the end of the page */
		offset = offset_in_page(addr);
@@ -1061,7 +1061,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
		maddr = page_address(page);
 
		if (!write) {
-			if (test_bit(PG_mte_tagged, &page->flags))
+			if (page_mte_tagged(page))
				num_tags = mte_copy_tags_to_user(tags, maddr,
							MTE_GRANULES_PER_PAGE);
			else
@@ -1078,7 +1078,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
			 * completed fully
			 */
			if (num_tags == MTE_GRANULES_PER_PAGE)
-				set_bit(PG_mte_tagged, &page->flags);
+				set_page_mte_tagged(page);
 
			kvm_release_pfn_dirty(pfn);
		}
@@ -1271,9 +1271,9 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
		return -EFAULT;
 
	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!page_mte_tagged(page)) {
			mte_clear_page_tags(page_address(page));
-			set_bit(PG_mte_tagged, &page->flags);
+			set_page_mte_tagged(page);
		}
	}
@@ -21,9 +21,11 @@ void copy_highpage(struct page *to, struct page *from)
 
	copy_page(kto, kfrom);
 
-	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
-		set_bit(PG_mte_tagged, &to->flags);
+	if (system_supports_mte() && page_mte_tagged(from)) {
+		if (kasan_hw_tags_enabled())
+			page_kasan_tag_reset(to);
		mte_copy_page_tags(kto, kfrom);
+		set_page_mte_tagged(to);
	}
 }
 EXPORT_SYMBOL(copy_highpage);
@@ -972,5 +972,5 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 void tag_clear_highpage(struct page *page)
 {
	mte_zero_clear_page_tags(page_address(page));
-	set_bit(PG_mte_tagged, &page->flags);
+	set_page_mte_tagged(page);
 }
@@ -24,7 +24,7 @@ int mte_save_tags(struct page *page)
 {
	void *tag_storage, *ret;
 
-	if (!test_bit(PG_mte_tagged, &page->flags))
+	if (!page_mte_tagged(page))
		return 0;
 
	tag_storage = mte_allocate_tag_storage();
@@ -767,6 +767,7 @@ static int vector_config(char *str, char **error_out)
 
	if (parsed == NULL) {
		*error_out = "vector_config failed to parse parameters";
+		kfree(params);
		return -EINVAL;
	}
 
@@ -132,8 +132,11 @@ static int um_pci_send_cmd(struct um_pci_device *dev,
			       out ? 1 : 0,
			       posted ? cmd : HANDLE_NO_FREE(cmd),
			       GFP_ATOMIC);
-	if (ret)
+	if (ret) {
+		if (posted)
+			kfree(cmd);
		goto out;
+	}
 
	if (posted) {
		virtqueue_kick(dev->cmd_vq);
@@ -623,22 +626,33 @@ static void um_pci_virtio_remove(struct virtio_device *vdev)
	struct um_pci_device *dev = vdev->priv;
	int i;
 
-	/* Stop all virtqueues */
-	virtio_reset_device(vdev);
-	vdev->config->del_vqs(vdev);
-
	device_set_wakeup_enable(&vdev->dev, false);
 
	mutex_lock(&um_pci_mtx);
	for (i = 0; i < MAX_DEVICES; i++) {
		if (um_pci_devices[i].dev != dev)
			continue;
+
		um_pci_devices[i].dev = NULL;
		irq_free_desc(dev->irq);
+
+		break;
	}
	mutex_unlock(&um_pci_mtx);
 
-	um_pci_rescan();
+	if (i < MAX_DEVICES) {
+		struct pci_dev *pci_dev;
+
+		pci_dev = pci_get_slot(bridge->bus, i);
+		if (pci_dev)
+			pci_stop_and_remove_bus_device_locked(pci_dev);
+	}
+
+	/* Stop all virtqueues */
+	virtio_reset_device(vdev);
+	dev->cmd_vq = NULL;
+	dev->irq_vq = NULL;
+	vdev->config->del_vqs(vdev);
 
	kfree(dev);
 }
@@ -168,7 +168,8 @@ static void vhost_user_check_reset(struct virtio_uml_device *vu_dev,
	if (!vu_dev->registered)
		return;
 
-	virtio_break_device(&vu_dev->vdev);
+	vu_dev->registered = 0;
+
	schedule_work(&pdata->conn_broken_wk);
 }
 
@@ -1136,6 +1137,15 @@ void virtio_uml_set_no_vq_suspend(struct virtio_device *vdev,
 
 static void vu_of_conn_broken(struct work_struct *wk)
 {
	struct virtio_uml_platform_data *pdata;
+	struct virtio_uml_device *vu_dev;
 
	pdata = container_of(wk, struct virtio_uml_platform_data, conn_broken_wk);
 
+	vu_dev = platform_get_drvdata(pdata->pdev);
+
+	virtio_break_device(&vu_dev->vdev);
+
	/*
	 * We can't remove the device from the devicetree so the only thing we
	 * can do is warn.
@@ -1266,8 +1276,14 @@ static int vu_unregister_cmdline_device(struct device *dev, void *data)
 static void vu_conn_broken(struct work_struct *wk)
 {
	struct virtio_uml_platform_data *pdata;
+	struct virtio_uml_device *vu_dev;
 
	pdata = container_of(wk, struct virtio_uml_platform_data, conn_broken_wk);
 
+	vu_dev = platform_get_drvdata(pdata->pdev);
+
+	virtio_break_device(&vu_dev->vdev);
+
	vu_unregister_cmdline_device(&pdata->pdev->dev, NULL);
 }
@@ -51,7 +51,7 @@ DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
  * simple as possible.
  * Must be called with preemption disabled.
  */
-static void __resctrl_sched_in(void)
+static inline void __resctrl_sched_in(struct task_struct *tsk)
 {
	struct resctrl_pqr_state *state = this_cpu_ptr(&pqr_state);
	u32 closid = state->default_closid;
@@ -63,13 +63,13 @@
	 * Else use the closid/rmid assigned to this cpu.
	 */
	if (static_branch_likely(&rdt_alloc_enable_key)) {
-		tmp = READ_ONCE(current->closid);
+		tmp = READ_ONCE(tsk->closid);
		if (tmp)
			closid = tmp;
	}
 
	if (static_branch_likely(&rdt_mon_enable_key)) {
-		tmp = READ_ONCE(current->rmid);
+		tmp = READ_ONCE(tsk->rmid);
		if (tmp)
			rmid = tmp;
	}
@@ -90,17 +90,17 @@ static inline unsigned int resctrl_arch_round_mon_val(unsigned int val)
	return val * scale;
 }
 
-static inline void resctrl_sched_in(void)
+static inline void resctrl_sched_in(struct task_struct *tsk)
 {
	if (static_branch_likely(&rdt_enable_key))
-		__resctrl_sched_in();
+		__resctrl_sched_in(tsk);
 }
 
 void resctrl_cpu_detect(struct cpuinfo_x86 *c);
 
 #else
 
-static inline void resctrl_sched_in(void) {}
+static inline void resctrl_sched_in(struct task_struct *tsk) {}
 static inline void resctrl_cpu_detect(struct cpuinfo_x86 *c) {}
 
 #endif /* CONFIG_X86_CPU_RESCTRL */
@@ -314,7 +314,7 @@ static void update_cpu_closid_rmid(void *info)
	 * executing task might have its own closid selected. Just reuse
	 * the context switch code.
	 */
-	resctrl_sched_in();
+	resctrl_sched_in(current);
 }
 
 /*
@@ -535,7 +535,7 @@ static void _update_task_closid_rmid(void *task)
	 * Otherwise, the MSR is updated when the task is scheduled in.
	 */
	if (task == current)
-		resctrl_sched_in();
+		resctrl_sched_in(task);
 }
 
 static void update_task_closid_rmid(struct task_struct *t)
@@ -212,7 +212,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
	switch_fpu_finish();
 
	/* Load the Intel cache allocation PQR MSR. */
-	resctrl_sched_in();
+	resctrl_sched_in(next_p);
 
	return prev_p;
 }
@@ -656,7 +656,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
	}
 
	/* Load the Intel cache allocation PQR MSR. */
-	resctrl_sched_in();
+	resctrl_sched_in(next_p);
 
	return prev_p;
 }
@@ -17,8 +17,10 @@ int __vdso_clock_gettime(clockid_t clock, struct __kernel_old_timespec *ts)
 {
	long ret;
 
-	asm("syscall" : "=a" (ret) :
-	    "0" (__NR_clock_gettime), "D" (clock), "S" (ts) : "memory");
+	asm("syscall"
+	    : "=a" (ret)
+	    : "0" (__NR_clock_gettime), "D" (clock), "S" (ts)
+	    : "rcx", "r11", "memory");
 
	return ret;
 }
@@ -29,8 +31,10 @@ int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
 {
	long ret;
 
-	asm("syscall" : "=a" (ret) :
-	    "0" (__NR_gettimeofday), "D" (tv), "S" (tz) : "memory");
+	asm("syscall"
+	    : "=a" (ret)
+	    : "0" (__NR_gettimeofday), "D" (tv), "S" (tz)
+	    : "rcx", "r11", "memory");
 
	return ret;
 }
@@ -484,6 +484,25 @@ void acpi_dev_power_up_children_with_adr(struct acpi_device *adev)
	acpi_dev_for_each_child(adev, acpi_power_up_if_adr_present, NULL);
 }
 
+/**
+ * acpi_dev_power_state_for_wake - Deepest power state for wakeup signaling
+ * @adev: ACPI companion of the target device.
+ *
+ * Evaluate _S0W for @adev and return the value produced by it or return
+ * ACPI_STATE_UNKNOWN on errors (including _S0W not present).
+ */
+u8 acpi_dev_power_state_for_wake(struct acpi_device *adev)
+{
+	unsigned long long state;
+	acpi_status status;
+
+	status = acpi_evaluate_integer(adev->handle, "_S0W", NULL, &state);
+	if (ACPI_FAILURE(status))
+		return ACPI_STATE_UNKNOWN;
+
+	return state;
+}
+
 #ifdef CONFIG_PM
 static DEFINE_MUTEX(acpi_pm_notifier_lock);
 static DEFINE_MUTEX(acpi_pm_notifier_install_lock);
@@ -322,8 +322,10 @@ fail1:
 static int hd44780_remove(struct platform_device *pdev)
 {
	struct charlcd *lcd = platform_get_drvdata(pdev);
+	struct hd44780_common *hdc = lcd->drvdata;
 
	charlcd_unregister(lcd);
+	kfree(hdc->hd44780);
	kfree(lcd->drvdata);
 
	kfree(lcd);
@@ -251,7 +251,7 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
 {
	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
	struct cacheinfo *this_leaf, *sib_leaf;
-	unsigned int index;
+	unsigned int index, sib_index;
	int ret = 0;
 
	if (this_cpu_ci->cpu_map_populated)
@@ -279,11 +279,13 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
 
			if (i == cpu || !sib_cpu_ci->info_list)
				continue;/* skip if itself or no cacheinfo */
-
-			sib_leaf = per_cpu_cacheinfo_idx(i, index);
-			if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
-				cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map);
-				cpumask_set_cpu(i, &this_leaf->shared_cpu_map);
+			for (sib_index = 0; sib_index < cache_leaves(i); sib_index++) {
+				sib_leaf = per_cpu_cacheinfo_idx(i, sib_index);
+				if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
+					cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map);
+					cpumask_set_cpu(i, &this_leaf->shared_cpu_map);
+					break;
+				}
			}
		}
		/* record the maximum cache line size */
@@ -297,7 +299,7 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
 static void cache_shared_cpu_map_remove(unsigned int cpu)
 {
	struct cacheinfo *this_leaf, *sib_leaf;
-	unsigned int sibling, index;
+	unsigned int sibling, index, sib_index;
 
	for (index = 0; index < cache_leaves(cpu); index++) {
		this_leaf = per_cpu_cacheinfo_idx(cpu, index);
@@ -308,9 +310,14 @@ static void cache_shared_cpu_map_remove(unsigned int cpu)
			if (sibling == cpu || !sib_cpu_ci->info_list)
				continue;/* skip if itself or no cacheinfo */
 
-			sib_leaf = per_cpu_cacheinfo_idx(sibling, index);
-			cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
-			cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
+			for (sib_index = 0; sib_index < cache_leaves(sibling); sib_index++) {
+				sib_leaf = per_cpu_cacheinfo_idx(sibling, sib_index);
+				if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
+					cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
+					cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
+					break;
+				}
+			}
		}
		if (of_have_populated_dt())
			of_node_put(this_leaf->fw_token);
@@ -125,7 +125,7 @@ static void component_debugfs_add(struct aggregate_device *m)
 
 static void component_debugfs_del(struct aggregate_device *m)
 {
-	debugfs_remove(debugfs_lookup(dev_name(m->parent), component_debugfs_dir));
+	debugfs_lookup_and_remove(dev_name(m->parent), component_debugfs_dir);
 }
 
 #else
@@ -372,7 +372,7 @@ late_initcall(deferred_probe_initcall);
 
 static void __exit deferred_probe_exit(void)
 {
-	debugfs_remove_recursive(debugfs_lookup("devices_deferred", NULL));
+	debugfs_lookup_and_remove("devices_deferred", NULL);
 }
 __exitcall(deferred_probe_exit);
@@ -977,13 +977,13 @@ loop_set_status_from_info(struct loop_device *lo,
		return -EINVAL;
	}
 
+	/* Avoid assigning overflow values */
+	if (info->lo_offset > LLONG_MAX || info->lo_sizelimit > LLONG_MAX)
+		return -EOVERFLOW;
+
	lo->lo_offset = info->lo_offset;
	lo->lo_sizelimit = info->lo_sizelimit;
 
-	/* loff_t vars have been assigned __u64 */
-	if (lo->lo_offset < 0 || lo->lo_sizelimit < 0)
-		return -EOVERFLOW;
-
	memcpy(lo->lo_file_name, info->lo_file_name, LO_NAME_SIZE);
	lo->lo_file_name[LO_NAME_SIZE-1] = 0;
	lo->lo_flags = info->lo_flags;
@@ -219,7 +219,7 @@ static int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ring_ele
		mutex_unlock(&mhi_chan->lock);
		break;
	case MHI_PKT_TYPE_RESET_CHAN_CMD:
-		dev_dbg(dev, "Received STOP command for channel (%u)\n", ch_id);
+		dev_dbg(dev, "Received RESET command for channel (%u)\n", ch_id);
		if (!ch_ring->started) {
			dev_err(dev, "Channel (%u) not opened\n", ch_id);
			return -ENODEV;
@@ -264,6 +264,14 @@ static const struct dmi_system_id efifb_dmi_swap_width_height[] __initconst = {
					"Lenovo ideapad D330-10IGM"),
		},
	},
+	{
+		/* Lenovo IdeaPad Duet 3 10IGL5 with 1200x1920 portrait screen */
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION,
+					"IdeaPad Duet 3 10IGL5"),
+		},
+	},
	{},
 };
@@ -3309,8 +3309,13 @@ int drm_dp_add_payload_part1(struct drm_dp_mst_topology_mgr *mgr,
	int ret;
 
	port = drm_dp_mst_topology_get_port_validated(mgr, payload->port);
-	if (!port)
+	if (!port) {
+		drm_dbg_kms(mgr->dev,
+			    "VCPI %d for port %p not in topology, not creating a payload\n",
+			    payload->vcpi, payload->port);
+		payload->vc_start_slot = -1;
		return 0;
+	}
 
	if (mgr->payload_count == 0)
		mgr->next_start_slot = mst_state->start_slot;
@@ -3644,6 +3649,9 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
		drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0);
		ret = 0;
		mgr->payload_id_table_cleared = false;
+
+		memset(&mgr->down_rep_recv, 0, sizeof(mgr->down_rep_recv));
+		memset(&mgr->up_req_recv, 0, sizeof(mgr->up_req_recv));
	}
 
 out_unlock:
@@ -3856,7 +3864,7 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
	struct drm_dp_sideband_msg_rx *msg = &mgr->down_rep_recv;
 
	if (!drm_dp_get_one_sb_msg(mgr, false, &mstb))
-		goto out;
+		goto out_clear_reply;
 
	/* Multi-packet message transmission, don't clear the reply */
	if (!msg->have_eomt)
@@ -5355,27 +5363,52 @@ struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_a
 EXPORT_SYMBOL(drm_atomic_get_mst_topology_state);
 
 /**
- * drm_atomic_get_new_mst_topology_state: get new MST topology state in atomic state, if any
+ * drm_atomic_get_old_mst_topology_state: get old MST topology state in atomic state, if any
  * @state: global atomic state
  * @mgr: MST topology manager, also the private object in this case
  *
- * This function wraps drm_atomic_get_priv_obj_state() passing in the MST atomic
+ * This function wraps drm_atomic_get_old_private_obj_state() passing in the MST atomic
  * state vtable so that the private object state returned is that of a MST
  * topology object.
  *
  * Returns:
  *
- * The MST topology state, or NULL if there's no topology state for this MST mgr
+ * The old MST topology state, or NULL if there's no topology state for this MST mgr
  * in the global atomic state
  */
+struct drm_dp_mst_topology_state *
+drm_atomic_get_old_mst_topology_state(struct drm_atomic_state *state,
+				      struct drm_dp_mst_topology_mgr *mgr)
+{
+	struct drm_private_state *old_priv_state =
+		drm_atomic_get_old_private_obj_state(state, &mgr->base);
+
+	return old_priv_state ? to_dp_mst_topology_state(old_priv_state) : NULL;
+}
+EXPORT_SYMBOL(drm_atomic_get_old_mst_topology_state);
+
+/**
+ * drm_atomic_get_new_mst_topology_state: get new MST topology state in atomic state, if any
+ * @state: global atomic state
+ * @mgr: MST topology manager, also the private object in this case
+ *
+ * This function wraps drm_atomic_get_new_private_obj_state() passing in the MST atomic
+ * state vtable so that the private object state returned is that of a MST
+ * topology object.
+ *
+ * Returns:
+ *
+ * The new MST topology state, or NULL if there's no topology state for this MST mgr
+ * in the global atomic state
+ */
 struct drm_dp_mst_topology_state *
 drm_atomic_get_new_mst_topology_state(struct drm_atomic_state *state,
				      struct drm_dp_mst_topology_mgr *mgr)
 {
-	struct drm_private_state *priv_state =
+	struct drm_private_state *new_priv_state =
		drm_atomic_get_new_private_obj_state(state, &mgr->base);
 
-	return priv_state ? to_dp_mst_topology_state(priv_state) : NULL;
+	return new_priv_state ? to_dp_mst_topology_state(new_priv_state) : NULL;
 }
 EXPORT_SYMBOL(drm_atomic_get_new_mst_topology_state);
@@ -107,9 +107,6 @@ config DRM_I915_USERPTR
 
	  If in doubt, say "Y".
 
-config DRM_I915_GVT
-	bool
-
 config DRM_I915_GVT_KVMGT
	tristate "Enable KVM host support Intel GVT-g graphics virtualization"
	depends on DRM_I915
@@ -160,3 +157,6 @@ menu "drm/i915 Unstable Evolution"
	depends on DRM_I915
 source "drivers/gpu/drm/i915/Kconfig.unstable"
 endmenu
+
+config DRM_I915_GVT
+	bool
@@ -5969,6 +5969,10 @@ int intel_modeset_all_pipes(struct intel_atomic_state *state)
		if (ret)
			return ret;
 
+		ret = intel_dp_mst_add_topology_state_for_crtc(state, crtc);
+		if (ret)
+			return ret;
+
		ret = intel_atomic_add_affected_planes(state, crtc);
		if (ret)
			return ret;
@@ -1003,3 +1003,64 @@ bool intel_dp_mst_is_slave_trans(const struct intel_crtc_state *crtc_state)
	return crtc_state->mst_master_transcoder != INVALID_TRANSCODER &&
	       crtc_state->mst_master_transcoder != crtc_state->cpu_transcoder;
 }
+
+/**
+ * intel_dp_mst_add_topology_state_for_connector - add MST topology state for a connector
+ * @state: atomic state
+ * @connector: connector to add the state for
+ * @crtc: the CRTC @connector is attached to
+ *
+ * Add the MST topology state for @connector to @state.
+ *
+ * Returns 0 on success, negative error code on failure.
+ */
+static int
+intel_dp_mst_add_topology_state_for_connector(struct intel_atomic_state *state,
+					      struct intel_connector *connector,
+					      struct intel_crtc *crtc)
+{
+	struct drm_dp_mst_topology_state *mst_state;
+
+	if (!connector->mst_port)
+		return 0;
+
+	mst_state = drm_atomic_get_mst_topology_state(&state->base,
+						      &connector->mst_port->mst_mgr);
+	if (IS_ERR(mst_state))
+		return PTR_ERR(mst_state);
+
+	mst_state->pending_crtc_mask |= drm_crtc_mask(&crtc->base);
+
+	return 0;
+}
+
+/**
+ * intel_dp_mst_add_topology_state_for_crtc - add MST topology state for a CRTC
+ * @state: atomic state
+ * @crtc: CRTC to add the state for
+ *
+ * Add the MST topology state for @crtc to @state.
+ *
+ * Returns 0 on success, negative error code on failure.
+ */
+int intel_dp_mst_add_topology_state_for_crtc(struct intel_atomic_state *state,
+					     struct intel_crtc *crtc)
+{
+	struct drm_connector *_connector;
+	struct drm_connector_state *conn_state;
+	int i;
+
+	for_each_new_connector_in_state(&state->base, _connector, conn_state, i) {
+		struct intel_connector *connector = to_intel_connector(_connector);
+		int ret;
+
+		if (conn_state->crtc != &crtc->base)
+			continue;
+
+		ret = intel_dp_mst_add_topology_state_for_connector(state, connector, crtc);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
@@ -8,6 +8,8 @@
 
 #include <linux/types.h>
 
+struct intel_atomic_state;
+struct intel_crtc;
 struct intel_crtc_state;
 struct intel_digital_port;
 struct intel_dp;
@@ -18,5 +20,7 @@ int intel_dp_mst_encoder_active_links(struct intel_digital_port *dig_port);
 bool intel_dp_mst_is_master_trans(const struct intel_crtc_state *crtc_state);
 bool intel_dp_mst_is_slave_trans(const struct intel_crtc_state *crtc_state);
 bool intel_dp_mst_source_support(struct intel_dp *intel_dp);
+int intel_dp_mst_add_topology_state_for_crtc(struct intel_atomic_state *state,
+					     struct intel_crtc *crtc);
 
 #endif /* __INTEL_DP_MST_H__ */
@@ -624,7 +624,13 @@ void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous
	struct intel_fbdev *ifbdev = dev_priv->display.fbdev.fbdev;
	struct fb_info *info;
 
-	if (!ifbdev || !ifbdev->vma)
+	if (!ifbdev)
+		return;
+
+	if (drm_WARN_ON(&dev_priv->drm, !HAS_DISPLAY(dev_priv)))
		return;
 
+	if (!ifbdev->vma)
+		goto set_suspend;
+
	info = ifbdev->helper.fbdev;
@@ -296,9 +296,12 @@ int mma9551_read_config_word(struct i2c_client *client, u8 app_id,
 
	ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_CONFIG,
			       reg, NULL, 0, (u8 *)&v, 2);
+	if (ret < 0)
+		return ret;
+
	*val = be16_to_cpu(v);
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_NS(mma9551_read_config_word, IIO_MMA9551);
@@ -354,9 +357,12 @@ int mma9551_read_status_word(struct i2c_client *client, u8 app_id,
 
	ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_STATUS,
			       reg, NULL, 0, (u8 *)&v, 2);
+	if (ret < 0)
+		return ret;
+
	*val = be16_to_cpu(v);
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_NS(mma9551_read_status_word, IIO_MMA9551);
@@ -479,13 +479,20 @@ static int compare_netdev_and_ip(int ifindex_a, struct sockaddr *sa,
	if (sa->sa_family != sb->sa_family)
		return sa->sa_family - sb->sa_family;
 
-	if (sa->sa_family == AF_INET)
-		return memcmp((char *)&((struct sockaddr_in *)sa)->sin_addr,
-			      (char *)&((struct sockaddr_in *)sb)->sin_addr,
+	if (sa->sa_family == AF_INET &&
+	    __builtin_object_size(sa, 0) >= sizeof(struct sockaddr_in)) {
+		return memcmp(&((struct sockaddr_in *)sa)->sin_addr,
+			      &((struct sockaddr_in *)sb)->sin_addr,
			      sizeof(((struct sockaddr_in *)sa)->sin_addr));
+	}
 
-	return ipv6_addr_cmp(&((struct sockaddr_in6 *)sa)->sin6_addr,
-			     &((struct sockaddr_in6 *)sb)->sin6_addr);
+	if (sa->sa_family == AF_INET6 &&
+	    __builtin_object_size(sa, 0) >= sizeof(struct sockaddr_in6)) {
+		return ipv6_addr_cmp(&((struct sockaddr_in6 *)sa)->sin6_addr,
+				     &((struct sockaddr_in6 *)sb)->sin6_addr);
+	}
+
+	return -1;
 }
 
 static int cma_add_id_to_tree(struct rdma_id_private *node_id_priv)
@@ -1056,7 +1056,7 @@ static void read_link_down_reason(struct hfi1_devdata *dd, u8 *ldr);
 static void handle_temp_err(struct hfi1_devdata *dd);
 static void dc_shutdown(struct hfi1_devdata *dd);
 static void dc_start(struct hfi1_devdata *dd);
-static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
+static int qos_rmt_entries(unsigned int n_krcv_queues, unsigned int *mp,
			   unsigned int *np);
 static void clear_full_mgmt_pkey(struct hfi1_pportdata *ppd);
 static int wait_link_transfer_active(struct hfi1_devdata *dd, int wait_ms);
@@ -13362,7 +13362,6 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
	int ret;
	unsigned ngroups;
	int rmt_count;
-	int user_rmt_reduced;
	u32 n_usr_ctxts;
	u32 send_contexts = chip_send_contexts(dd);
	u32 rcv_contexts = chip_rcv_contexts(dd);
@@ -13421,28 +13420,34 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
					   (num_kernel_contexts + n_usr_ctxts),
					   &node_affinity.real_cpu_mask);
	/*
-	 * The RMT entries are currently allocated as shown below:
-	 * 1. QOS (0 to 128 entries);
-	 * 2. FECN (num_kernel_context - 1 + num_user_contexts +
-	 *    num_netdev_contexts);
-	 * 3. netdev (num_netdev_contexts).
-	 * It should be noted that FECN oversubscribe num_netdev_contexts
-	 * entries of RMT because both netdev and PSM could allocate any receive
-	 * context between dd->first_dyn_alloc_text and dd->num_rcv_contexts,
-	 * and PSM FECN must reserve an RMT entry for each possible PSM receive
-	 * context.
+	 * RMT entries are allocated as follows:
+	 * 1. QOS (0 to 128 entries)
+	 * 2. FECN (num_kernel_context - 1 [a] + num_user_contexts +
+	 *    num_netdev_contexts [b])
+	 * 3. netdev (NUM_NETDEV_MAP_ENTRIES)
+	 *
+	 * Notes:
+	 * [a] Kernel contexts (except control) are included in FECN if kernel
+	 *     TID_RDMA is active.
+	 * [b] Netdev and user contexts are randomly allocated from the same
+	 *     context pool, so FECN must cover all contexts in the pool.
	 */
-	rmt_count = qos_rmt_entries(dd, NULL, NULL) + (num_netdev_contexts * 2);
-	if (HFI1_CAP_IS_KSET(TID_RDMA))
-		rmt_count += num_kernel_contexts - 1;
-	if (rmt_count + n_usr_ctxts > NUM_MAP_ENTRIES) {
-		user_rmt_reduced = NUM_MAP_ENTRIES - rmt_count;
-		dd_dev_err(dd,
-			   "RMT size is reducing the number of user receive contexts from %u to %d\n",
-			   n_usr_ctxts,
-			   user_rmt_reduced);
-		/* recalculate */
-		n_usr_ctxts = user_rmt_reduced;
+	rmt_count = qos_rmt_entries(num_kernel_contexts - 1, NULL, NULL)
+		    + (HFI1_CAP_IS_KSET(TID_RDMA) ? num_kernel_contexts - 1
						  : 0)
+		    + n_usr_ctxts
+		    + num_netdev_contexts
+		    + NUM_NETDEV_MAP_ENTRIES;
+	if (rmt_count > NUM_MAP_ENTRIES) {
+		int over = rmt_count - NUM_MAP_ENTRIES;
+		/* try to squish user contexts, minimum of 1 */
+		if (over >= n_usr_ctxts) {
+			dd_dev_err(dd, "RMT overflow: reduce the requested number of contexts\n");
+			return -EINVAL;
+		}
+		dd_dev_err(dd, "RMT overflow: reducing # user contexts from %u to %u\n",
+			   n_usr_ctxts, n_usr_ctxts - over);
+		n_usr_ctxts -= over;
	}
 
	/* the first N are kernel contexts, the rest are user/netdev contexts */
@@ -14299,15 +14304,15 @@ static void clear_rsm_rule(struct hfi1_devdata *dd, u8 rule_index)
 }
 
 /* return the number of RSM map table entries that will be used for QOS */
-static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
+static int qos_rmt_entries(unsigned int n_krcv_queues, unsigned int *mp,
			   unsigned int *np)
 {
	int i;
	unsigned int m, n;
-	u8 max_by_vl = 0;
+	uint max_by_vl = 0;
 
	/* is QOS active at all? */
-	if (dd->n_krcv_queues <= MIN_KERNEL_KCTXTS ||
+	if (n_krcv_queues < MIN_KERNEL_KCTXTS ||
	    num_vls == 1 ||
	    krcvqsset <= 1)
		goto no_qos;
@@ -14365,7 +14370,7 @@ static void init_qos(struct hfi1_devdata *dd, struct rsm_map_table *rmt)
 
	if (!rmt)
		goto bail;
-	rmt_entries = qos_rmt_entries(dd, &m, &n);
+	rmt_entries = qos_rmt_entries(dd->n_krcv_queues - 1, &m, &n);
	if (rmt_entries == 0)
		goto bail;
	qpns_per_vl = 1 << m;
@@ -1712,27 +1712,29 @@ static int pdev_pri_ats_enable(struct pci_dev *pdev)
	/* Only allow access to user-accessible pages */
	ret = pci_enable_pasid(pdev, 0);
	if (ret)
-		goto out_err;
+		return ret;
 
	/* First reset the PRI state of the device */
	ret = pci_reset_pri(pdev);
	if (ret)
-		goto out_err;
+		goto out_err_pasid;
 
	/* Enable PRI */
	/* FIXME: Hardcode number of outstanding requests for now */
	ret = pci_enable_pri(pdev, 32);
	if (ret)
-		goto out_err;
+		goto out_err_pasid;
 
	ret = pci_enable_ats(pdev, PAGE_SHIFT);
	if (ret)
-		goto out_err;
+		goto out_err_pri;
 
	return 0;
 
-out_err:
+out_err_pri:
	pci_disable_pri(pdev);
 
+out_err_pasid:
	pci_disable_pasid(pdev);
 
	return ret;
@@ -2089,8 +2089,22 @@ static int __iommu_attach_group(struct iommu_domain *domain,
 
	ret = __iommu_group_for_each_dev(group, domain,
					 iommu_group_do_attach_device);
-	if (ret == 0)
+	if (ret == 0) {
		group->domain = domain;
+	} else {
+		/*
+		 * To recover from the case when certain device within the
+		 * group fails to attach to the new domain, we need force
+		 * attaching all devices back to the old domain. The old
+		 * domain is compatible for all devices in the group,
+		 * hence the iommu driver should always return success.
+		 */
+		struct iommu_domain *old_domain = group->domain;
+
+		group->domain = NULL;
+		WARN(__iommu_group_set_domain(group, old_domain),
+		     "iommu driver failed to attach a compatible domain");
+	}
 
	return ret;
 }
@@ -6,6 +6,7 @@
  *	    Laurent Pinchart (laurent.pinchart@ideasonboard.com)
  */
 
+#include <asm/barrier.h>
 #include <linux/bitops.h>
 #include <linux/kernel.h>
 #include <linux/list.h>
@@ -1509,6 +1510,10 @@ static void uvc_ctrl_status_event_work(struct work_struct *work)
 
	uvc_ctrl_status_event(w->chain, w->ctrl, w->data);
 
+	/* The barrier is needed to synchronize with uvc_status_stop(). */
+	if (smp_load_acquire(&dev->flush_status))
+		return;
+
	/* Resubmit the URB. */
	w->urb->interval = dev->int_ep->desc.bInterval;
	ret = usb_submit_urb(w->urb, GFP_KERNEL);
@ -252,14 +252,10 @@ static int uvc_parse_format(struct uvc_device *dev,
|
|||
fmtdesc = uvc_format_by_guid(&buffer[5]);
|
||||
|
||||
if (fmtdesc != NULL) {
|
||||
strscpy(format->name, fmtdesc->name,
|
||||
sizeof(format->name));
|
||||
format->fcc = fmtdesc->fcc;
|
||||
} else {
|
||||
dev_info(&streaming->intf->dev,
|
||||
"Unknown video format %pUl\n", &buffer[5]);
|
||||
snprintf(format->name, sizeof(format->name), "%pUl\n",
|
||||
&buffer[5]);
|
||||
format->fcc = 0;
|
||||
}
|
||||
|
||||
|
|
@ -271,8 +267,6 @@ static int uvc_parse_format(struct uvc_device *dev,
|
|||
*/
|
||||
if (dev->quirks & UVC_QUIRK_FORCE_Y8) {
|
||||
if (format->fcc == V4L2_PIX_FMT_YUYV) {
|
||||
strscpy(format->name, "Greyscale 8-bit (Y8 )",
|
||||
sizeof(format->name));
|
||||
format->fcc = V4L2_PIX_FMT_GREY;
|
||||
format->bpp = 8;
|
||||
width_multiplier = 2;
|
||||
|
|
@ -313,7 +307,6 @@ static int uvc_parse_format(struct uvc_device *dev,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
strscpy(format->name, "MJPEG", sizeof(format->name));
|
||||
format->fcc = V4L2_PIX_FMT_MJPEG;
|
||||
format->flags = UVC_FMT_FLAG_COMPRESSED;
|
||||
format->bpp = 0;
|
||||
|
|
@ -329,17 +322,7 @@ static int uvc_parse_format(struct uvc_device *dev,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
switch (buffer[8] & 0x7f) {
|
||||
case 0:
|
||||
strscpy(format->name, "SD-DV", sizeof(format->name));
|
||||
break;
|
||||
case 1:
|
||||
strscpy(format->name, "SDL-DV", sizeof(format->name));
|
||||
break;
|
||||
case 2:
|
||||
strscpy(format->name, "HD-DV", sizeof(format->name));
|
||||
break;
|
||||
default:
|
||||
if ((buffer[8] & 0x7f) > 2) {
|
||||
uvc_dbg(dev, DESCR,
|
||||
"device %d videostreaming interface %d: unknown DV format %u\n",
|
||||
dev->udev->devnum,
|
||||
|
|
@ -347,9 +330,6 @@ static int uvc_parse_format(struct uvc_device *dev,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
strlcat(format->name, buffer[8] & (1 << 7) ? " 60Hz" : " 50Hz",
|
||||
sizeof(format->name));
|
||||
|
||||
format->fcc = V4L2_PIX_FMT_DV;
|
||||
format->flags = UVC_FMT_FLAG_COMPRESSED | UVC_FMT_FLAG_STREAM;
|
||||
format->bpp = 0;
|
||||
|
|
@ -376,7 +356,7 @@ static int uvc_parse_format(struct uvc_device *dev,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
uvc_dbg(dev, DESCR, "Found format %s\n", format->name);
|
||||
uvc_dbg(dev, DESCR, "Found format %p4cc", &format->fcc);
|
||||
|
||||
buflen -= buffer[0];
|
||||
buffer += buffer[0];
|
||||
|
|
@ -880,10 +860,8 @@ static int uvc_parse_vendor_control(struct uvc_device *dev,
|
|||
+ n;
|
||||
memcpy(unit->extension.bmControls, &buffer[23+p], 2*n);
|
||||
|
||||
if (buffer[24+p+2*n] != 0)
|
||||
usb_string(udev, buffer[24+p+2*n], unit->name,
|
||||
sizeof(unit->name));
|
||||
else
|
||||
if (buffer[24+p+2*n] == 0 ||
|
||||
usb_string(udev, buffer[24+p+2*n], unit->name, sizeof(unit->name)) < 0)
|
||||
sprintf(unit->name, "Extension %u", buffer[3]);
|
||||
|
||||
list_add_tail(&unit->list, &dev->entities);
|
||||
|
|
@ -1007,15 +985,15 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
|
|||
memcpy(term->media.bmTransportModes, &buffer[10+n], p);
|
||||
}
|
||||
|
||||
if (buffer[7] != 0)
|
||||
usb_string(udev, buffer[7], term->name,
|
||||
sizeof(term->name));
|
||||
else if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA)
|
||||
sprintf(term->name, "Camera %u", buffer[3]);
|
||||
else if (UVC_ENTITY_TYPE(term) == UVC_ITT_MEDIA_TRANSPORT_INPUT)
|
||||
sprintf(term->name, "Media %u", buffer[3]);
|
||||
else
|
||||
sprintf(term->name, "Input %u", buffer[3]);
|
||||
if (buffer[7] == 0 ||
|
||||
usb_string(udev, buffer[7], term->name, sizeof(term->name)) < 0) {
|
||||
if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA)
|
||||
sprintf(term->name, "Camera %u", buffer[3]);
|
||||
if (UVC_ENTITY_TYPE(term) == UVC_ITT_MEDIA_TRANSPORT_INPUT)
|
||||
sprintf(term->name, "Media %u", buffer[3]);
|
||||
else
|
||||
sprintf(term->name, "Input %u", buffer[3]);
|
||||
}
|
||||
|
||||
list_add_tail(&term->list, &dev->entities);
|
||||
break;
|
||||
|
|
@ -1048,10 +1026,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
|
|||
|
||||
memcpy(term->baSourceID, &buffer[7], 1);
|
||||
|
||||
if (buffer[8] != 0)
|
||||
usb_string(udev, buffer[8], term->name,
|
||||
sizeof(term->name));
|
||||
else
|
||||
if (buffer[8] == 0 ||
|
||||
usb_string(udev, buffer[8], term->name, sizeof(term->name)) < 0)
|
||||
sprintf(term->name, "Output %u", buffer[3]);
|
||||
|
||||
list_add_tail(&term->list, &dev->entities);
|
||||
|
|
@ -1073,10 +1049,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
|
|||
|
||||
memcpy(unit->baSourceID, &buffer[5], p);
|
||||
|
||||
if (buffer[5+p] != 0)
|
||||
usb_string(udev, buffer[5+p], unit->name,
|
||||
sizeof(unit->name));
|
||||
else
|
||||
if (buffer[5+p] == 0 ||
|
||||
usb_string(udev, buffer[5+p], unit->name, sizeof(unit->name)) < 0)
|
||||
sprintf(unit->name, "Selector %u", buffer[3]);
|
||||
|
||||
list_add_tail(&unit->list, &dev->entities);
|
||||
|
|
@ -1106,10 +1080,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
|
|||
if (dev->uvc_version >= 0x0110)
|
||||
unit->processing.bmVideoStandards = buffer[9+n];
|
||||
|
||||
if (buffer[8+n] != 0)
|
||||
usb_string(udev, buffer[8+n], unit->name,
|
||||
sizeof(unit->name));
|
||||
else
|
||||
if (buffer[8+n] == 0 ||
|
||||
usb_string(udev, buffer[8+n], unit->name, sizeof(unit->name)) < 0)
|
||||
sprintf(unit->name, "Processing %u", buffer[3]);
|
||||
|
||||
list_add_tail(&unit->list, &dev->entities);
|
||||
|
|
@ -1137,10 +1109,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
|
|||
unit->extension.bmControls = (u8 *)unit + sizeof(*unit);
|
||||
memcpy(unit->extension.bmControls, &buffer[23+p], n);
|
||||
|
||||
if (buffer[23+p+n] != 0)
|
||||
usb_string(udev, buffer[23+p+n], unit->name,
|
||||
sizeof(unit->name));
|
||||
else
|
||||
if (buffer[23+p+n] == 0 ||
|
||||
usb_string(udev, buffer[23+p+n], unit->name, sizeof(unit->name)) < 0)
|
||||
sprintf(unit->name, "Extension %u", buffer[3]);
|
||||
|
||||
list_add_tail(&unit->list, &dev->entities);
|
||||
|
|
@ -2483,6 +2453,24 @@ static const struct usb_device_id uvc_ids[] = {
|
|||
.bInterfaceSubClass = 1,
|
||||
.bInterfaceProtocol = 0,
|
||||
.driver_info = (kernel_ulong_t)&uvc_quirk_probe_minmax },
|
||||
/* Logitech, Webcam C910 */
|
||||
{ .match_flags = USB_DEVICE_ID_MATCH_DEVICE
|
||||
| USB_DEVICE_ID_MATCH_INT_INFO,
|
||||
.idVendor = 0x046d,
|
||||
.idProduct = 0x0821,
|
||||
.bInterfaceClass = USB_CLASS_VIDEO,
|
||||
.bInterfaceSubClass = 1,
|
||||
.bInterfaceProtocol = 0,
|
||||
.driver_info = UVC_INFO_QUIRK(UVC_QUIRK_WAKE_AUTOSUSPEND)},
|
||||
/* Logitech, Webcam B910 */
|
||||
{ .match_flags = USB_DEVICE_ID_MATCH_DEVICE
|
||||
| USB_DEVICE_ID_MATCH_INT_INFO,
|
||||
.idVendor = 0x046d,
|
||||
.idProduct = 0x0823,
|
||||
.bInterfaceClass = USB_CLASS_VIDEO,
|
||||
.bInterfaceSubClass = 1,
|
||||
.bInterfaceProtocol = 0,
|
||||
.driver_info = UVC_INFO_QUIRK(UVC_QUIRK_WAKE_AUTOSUSPEND)},
|
||||
/* Logitech Quickcam Fusion */
|
||||
{ .match_flags = USB_DEVICE_ID_MATCH_DEVICE
|
||||
| USB_DEVICE_ID_MATCH_INT_INFO,
|
||||
|
|
|
|||
|
|
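The uvcvideo hunks above all apply one pattern: a string-descriptor index of zero and a failed usb_string() read now take the same fallback that synthesizes an entity name, where previously a failed read left the name uninitialized. A minimal userspace sketch of that control flow; read_string() is a hypothetical stand-in for usb_string(), not a kernel API:

#include <stdio.h>
#include <string.h>

/* Stand-in for usb_string(): returns < 0 on failure. Hypothetical. */
static int read_string(int index, char *buf, size_t len)
{
    if (index == 7) {               /* pretend only index 7 exists */
        strncpy(buf, "ACME Camera", len - 1);
        buf[len - 1] = '\0';
        return 0;
    }
    return -1;
}

int main(void)
{
    char name[32];
    int idx = 0, id = 3;            /* idx = descriptor index, id = entity id */

    /* Same shape as the patched code: one test covers both the
     * "no descriptor" and the "read failed" cases. */
    if (idx == 0 || read_string(idx, name, sizeof(name)) < 0)
        snprintf(name, sizeof(name), "Camera %u", id);

    printf("%s\n", name);
    return 0;
}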
@@ -37,7 +37,7 @@ static int uvc_mc_create_links(struct uvc_video_chain *chain,
             continue;
 
         remote = uvc_entity_by_id(chain->dev, entity->baSourceID[i]);
-        if (remote == NULL)
+        if (remote == NULL || remote->num_pads == 0)
             return -EINVAL;
 
         source = (UVC_ENTITY_TYPE(remote) == UVC_TT_STREAMING)
@@ -6,6 +6,7 @@
  *          Laurent Pinchart (laurent.pinchart@ideasonboard.com)
  */
 
+#include <asm/barrier.h>
 #include <linux/kernel.h>
 #include <linux/input.h>
 #include <linux/slab.h>
@@ -309,5 +310,41 @@ int uvc_status_start(struct uvc_device *dev, gfp_t flags)
 
 void uvc_status_stop(struct uvc_device *dev)
 {
+    struct uvc_ctrl_work *w = &dev->async_ctrl;
+
+    /*
+     * Prevent the asynchronous control handler from requeing the URB. The
+     * barrier is needed so the flush_status change is visible to other
+     * CPUs running the asynchronous handler before usb_kill_urb() is
+     * called below.
+     */
+    smp_store_release(&dev->flush_status, true);
+
+    /*
+     * Cancel any pending asynchronous work. If any status event was queued,
+     * process it synchronously.
+     */
+    if (cancel_work_sync(&w->work))
+        uvc_ctrl_status_event(w->chain, w->ctrl, w->data);
+
+    /* Kill the urb. */
     usb_kill_urb(dev->int_urb);
+
+    /*
+     * The URB completion handler may have queued asynchronous work. This
+     * won't resubmit the URB as flush_status is set, but it needs to be
+     * cancelled before returning or it could then race with a future
+     * uvc_status_start() call.
+     */
+    if (cancel_work_sync(&w->work))
+        uvc_ctrl_status_event(w->chain, w->ctrl, w->data);
+
+    /*
+     * From this point, there are no events on the queue and the status URB
+     * is dead. No events will be queued until uvc_status_start() is called
+     * again.
+     * The barrier is needed to make sure that flush_status is visible to
+     * uvc_ctrl_status_event_work() when uvc_status_start() will be called
+     * again.
+     */
+    smp_store_release(&dev->flush_status, false);
 }
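The hunk above (apparently uvc_status.c) has to defeat a requeue loop: the URB completion queues work, and the work handler can resubmit the URB. A rough userspace model of the ordering, set flush flag, cancel, kill the producer, cancel again, using C11 atomics; the kernel names appear only in comments, and this is a sketch of the idea, not the driver:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flush_status;   /* models dev->flush_status */
static bool work_pending;          /* models the queued &w->work item */

/* Models cancel_work_sync(): reports whether work was still queued. */
static bool cancel_work_sync_model(void)
{
    bool was_pending = work_pending;

    work_pending = false;
    return was_pending;
}

int main(void)
{
    work_pending = true;                  /* a URB completion queued work */

    atomic_store(&flush_status, true);    /* 1: handler stops requeueing  */
    if (cancel_work_sync_model())         /* 2: drain the queued event    */
        printf("process pending event synchronously\n");
    printf("usb_kill_urb()\n");           /* 3: no new completions now    */
    if (cancel_work_sync_model())         /* 4: drain work queued between */
        printf("process late event\n");   /*    steps 2 and 3             */
    atomic_store(&flush_status, false);   /* 5: safe for a future restart */
    return 0;
}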
@@ -661,8 +661,6 @@ static int uvc_ioctl_enum_fmt(struct uvc_streaming *stream,
     fmt->flags = 0;
     if (format->flags & UVC_FMT_FLAG_COMPRESSED)
         fmt->flags |= V4L2_FMT_FLAG_COMPRESSED;
-    strscpy(fmt->description, format->name, sizeof(fmt->description));
-    fmt->description[sizeof(fmt->description) - 1] = 0;
     fmt->pixelformat = format->fcc;
     return 0;
 }
@@ -1352,7 +1352,9 @@ static void uvc_video_decode_meta(struct uvc_streaming *stream,
     if (has_scr)
         memcpy(stream->clock.last_scr, scr, 6);
 
-    memcpy(&meta->length, mem, length);
+    meta->length = mem[0];
+    meta->flags = mem[1];
+    memcpy(meta->buf, &mem[2], length - 2);
     meta_buf->bytesused += length + sizeof(meta->ns) + sizeof(meta->sof);
 
     uvc_dbg(stream->dev, FRAME,
@@ -1965,6 +1967,17 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,
         "Selecting alternate setting %u (%u B/frame bandwidth)\n",
         altsetting, best_psize);
 
+    /*
+     * Some devices, namely the Logitech C910 and B910, are unable
+     * to recover from a USB autosuspend, unless the alternate
+     * setting of the streaming interface is toggled.
+     */
+    if (stream->dev->quirks & UVC_QUIRK_WAKE_AUTOSUSPEND) {
+        usb_set_interface(stream->dev->udev, intfnum,
+                  altsetting);
+        usb_set_interface(stream->dev->udev, intfnum, 0);
+    }
+
     ret = usb_set_interface(stream->dev->udev, intfnum, altsetting);
     if (ret < 0)
         return ret;
@@ -74,6 +74,7 @@
 #define UVC_QUIRK_RESTORE_CTRLS_ON_INIT 0x00000400
 #define UVC_QUIRK_FORCE_Y8 0x00000800
 #define UVC_QUIRK_FORCE_BPP 0x00001000
+#define UVC_QUIRK_WAKE_AUTOSUSPEND 0x00002000
 
 /* Format flags */
 #define UVC_FMT_FLAG_COMPRESSED 0x00000001
@@ -264,8 +265,6 @@ struct uvc_format {
     u32 fcc;
     u32 flags;
 
-    char name[32];
-
     unsigned int nframes;
     struct uvc_frame *frame;
 };
@@ -559,6 +558,7 @@ struct uvc_device {
     /* Status Interrupt Endpoint */
     struct usb_host_endpoint *int_ep;
     struct urb *int_urb;
+    bool flush_status;
     u8 *status;
     struct input_dev *input;
     char input_phys[64];
@@ -162,14 +162,36 @@ static const struct regmap_access_table rpcif_volatile_table = {
     .n_yes_ranges = ARRAY_SIZE(rpcif_volatile_ranges),
 };
 
+struct rpcif_priv {
+    struct device *dev;
+    void __iomem *base;
+    void __iomem *dirmap;
+    struct regmap *regmap;
+    struct reset_control *rstc;
+    struct platform_device *vdev;
+    size_t size;
+    enum rpcif_type type;
+    enum rpcif_data_dir dir;
+    u8 bus_size;
+    u8 xfer_size;
+    void *buffer;
+    u32 xferlen;
+    u32 smcr;
+    u32 smadr;
+    u32 command;  /* DRCMR or SMCMR */
+    u32 option;   /* DROPR or SMOPR */
+    u32 enable;   /* DRENR or SMENR */
+    u32 dummy;    /* DRDMCR or SMDMCR */
+    u32 ddr;      /* DRDRENR or SMDRENR */
+};
+
 /*
  * Custom accessor functions to ensure SM[RW]DR[01] are always accessed with
- * proper width. Requires rpcif.xfer_size to be correctly set before!
+ * proper width. Requires rpcif_priv.xfer_size to be correctly set before!
  */
 static int rpcif_reg_read(void *context, unsigned int reg, unsigned int *val)
 {
-    struct rpcif *rpc = context;
+    struct rpcif_priv *rpc = context;
 
     switch (reg) {
     case RPCIF_SMRDR0:
@@ -205,7 +227,7 @@ static int rpcif_reg_read(void *context, unsigned int reg, unsigned int *val)
 
 static int rpcif_reg_write(void *context, unsigned int reg, unsigned int val)
 {
-    struct rpcif *rpc = context;
+    struct rpcif_priv *rpc = context;
 
     switch (reg) {
     case RPCIF_SMWDR0:
@@ -252,39 +274,18 @@ static const struct regmap_config rpcif_regmap_config = {
     .volatile_table = &rpcif_volatile_table,
 };
 
-int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
+int rpcif_sw_init(struct rpcif *rpcif, struct device *dev)
 {
-    struct platform_device *pdev = to_platform_device(dev);
-    struct resource *res;
+    struct rpcif_priv *rpc = dev_get_drvdata(dev);
 
-    rpc->dev = dev;
-
-    rpc->base = devm_platform_ioremap_resource_byname(pdev, "regs");
-    if (IS_ERR(rpc->base))
-        return PTR_ERR(rpc->base);
-
-    rpc->regmap = devm_regmap_init(&pdev->dev, NULL, rpc, &rpcif_regmap_config);
-    if (IS_ERR(rpc->regmap)) {
-        dev_err(&pdev->dev,
-            "failed to init regmap for rpcif, error %ld\n",
-            PTR_ERR(rpc->regmap));
-        return PTR_ERR(rpc->regmap);
-    }
-
-    res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
-    rpc->dirmap = devm_ioremap_resource(&pdev->dev, res);
-    if (IS_ERR(rpc->dirmap))
-        return PTR_ERR(rpc->dirmap);
-    rpc->size = resource_size(res);
-
-    rpc->type = (uintptr_t)of_device_get_match_data(dev);
-    rpc->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
-
-    return PTR_ERR_OR_ZERO(rpc->rstc);
+    rpcif->dev = dev;
+    rpcif->dirmap = rpc->dirmap;
+    rpcif->size = rpc->size;
+    return 0;
 }
 EXPORT_SYMBOL(rpcif_sw_init);
 
-static void rpcif_rzg2l_timing_adjust_sdr(struct rpcif *rpc)
+static void rpcif_rzg2l_timing_adjust_sdr(struct rpcif_priv *rpc)
 {
     regmap_write(rpc->regmap, RPCIF_PHYWR, 0xa5390000);
     regmap_write(rpc->regmap, RPCIF_PHYADD, 0x80000000);
@@ -298,8 +299,9 @@ static void rpcif_rzg2l_timing_adjust_sdr(struct rpcif *rpc)
     regmap_write(rpc->regmap, RPCIF_PHYADD, 0x80000032);
 }
 
-int rpcif_hw_init(struct rpcif *rpc, bool hyperflash)
+int rpcif_hw_init(struct rpcif *rpcif, bool hyperflash)
 {
+    struct rpcif_priv *rpc = dev_get_drvdata(rpcif->dev);
     u32 dummy;
 
     pm_runtime_get_sync(rpc->dev);
@@ -360,7 +362,7 @@ int rpcif_hw_init(struct rpcif *rpc, bool hyperflash)
 }
 EXPORT_SYMBOL(rpcif_hw_init);
 
-static int wait_msg_xfer_end(struct rpcif *rpc)
+static int wait_msg_xfer_end(struct rpcif_priv *rpc)
 {
     u32 sts;
 
@@ -369,7 +371,7 @@ static int wait_msg_xfer_end(struct rpcif *rpc)
                     USEC_PER_SEC);
 }
 
-static u8 rpcif_bits_set(struct rpcif *rpc, u32 nbytes)
+static u8 rpcif_bits_set(struct rpcif_priv *rpc, u32 nbytes)
 {
     if (rpc->bus_size == 2)
         nbytes /= 2;
@@ -382,9 +384,11 @@ static u8 rpcif_bit_size(u8 buswidth)
     return buswidth > 4 ? 2 : ilog2(buswidth);
 }
 
-void rpcif_prepare(struct rpcif *rpc, const struct rpcif_op *op, u64 *offs,
+void rpcif_prepare(struct rpcif *rpcif, const struct rpcif_op *op, u64 *offs,
            size_t *len)
 {
+    struct rpcif_priv *rpc = dev_get_drvdata(rpcif->dev);
+
     rpc->smcr = 0;
     rpc->smadr = 0;
     rpc->enable = 0;
@@ -468,8 +472,9 @@ void rpcif_prepare(struct rpcif *rpc, const struct rpcif_op *op, u64 *offs,
 }
 EXPORT_SYMBOL(rpcif_prepare);
 
-int rpcif_manual_xfer(struct rpcif *rpc)
+int rpcif_manual_xfer(struct rpcif *rpcif)
 {
+    struct rpcif_priv *rpc = dev_get_drvdata(rpcif->dev);
     u32 smenr, smcr, pos = 0, max = rpc->bus_size == 2 ? 8 : 4;
     int ret = 0;
 
@@ -589,7 +594,7 @@ exit:
 err_out:
     if (reset_control_reset(rpc->rstc))
         dev_err(rpc->dev, "Failed to reset HW\n");
-    rpcif_hw_init(rpc, rpc->bus_size == 2);
+    rpcif_hw_init(rpcif, rpc->bus_size == 2);
     goto exit;
 }
 EXPORT_SYMBOL(rpcif_manual_xfer);
@@ -636,8 +641,9 @@ static void memcpy_fromio_readw(void *to,
     }
 }
 
-ssize_t rpcif_dirmap_read(struct rpcif *rpc, u64 offs, size_t len, void *buf)
+ssize_t rpcif_dirmap_read(struct rpcif *rpcif, u64 offs, size_t len, void *buf)
 {
+    struct rpcif_priv *rpc = dev_get_drvdata(rpcif->dev);
     loff_t from = offs & (rpc->size - 1);
     size_t size = rpc->size - from;
 
@@ -670,8 +676,11 @@ EXPORT_SYMBOL(rpcif_dirmap_read);
 
 static int rpcif_probe(struct platform_device *pdev)
 {
+    struct device *dev = &pdev->dev;
     struct platform_device *vdev;
     struct device_node *flash;
+    struct rpcif_priv *rpc;
+    struct resource *res;
     const char *name;
     int ret;
 
@@ -692,11 +701,40 @@ static int rpcif_probe(struct platform_device *pdev)
     }
     of_node_put(flash);
 
+    rpc = devm_kzalloc(&pdev->dev, sizeof(*rpc), GFP_KERNEL);
+    if (!rpc)
+        return -ENOMEM;
+
+    rpc->base = devm_platform_ioremap_resource_byname(pdev, "regs");
+    if (IS_ERR(rpc->base))
+        return PTR_ERR(rpc->base);
+
+    rpc->regmap = devm_regmap_init(dev, NULL, rpc, &rpcif_regmap_config);
+    if (IS_ERR(rpc->regmap)) {
+        dev_err(dev, "failed to init regmap for rpcif, error %ld\n",
+            PTR_ERR(rpc->regmap));
+        return PTR_ERR(rpc->regmap);
+    }
+
+    res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
+    rpc->dirmap = devm_ioremap_resource(dev, res);
+    if (IS_ERR(rpc->dirmap))
+        return PTR_ERR(rpc->dirmap);
+    rpc->size = resource_size(res);
+
+    rpc->type = (uintptr_t)of_device_get_match_data(dev);
+    rpc->rstc = devm_reset_control_get_exclusive(dev, NULL);
+    if (IS_ERR(rpc->rstc))
+        return PTR_ERR(rpc->rstc);
+
     vdev = platform_device_alloc(name, pdev->id);
     if (!vdev)
         return -ENOMEM;
     vdev->dev.parent = &pdev->dev;
-    platform_set_drvdata(pdev, vdev);
+
+    rpc->dev = &pdev->dev;
+    rpc->vdev = vdev;
+    platform_set_drvdata(pdev, rpc);
 
     ret = platform_device_add(vdev);
     if (ret) {
@@ -709,9 +747,9 @@ static int rpcif_probe(struct platform_device *pdev)
 
 static int rpcif_remove(struct platform_device *pdev)
 {
-    struct platform_device *vdev = platform_get_drvdata(pdev);
+    struct rpcif_priv *rpc = platform_get_drvdata(pdev);
 
-    platform_device_unregister(vdev);
+    platform_device_unregister(rpc->vdev);
 
     return 0;
 }
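The renesas-rpc-if rework above moves everything the provider needs into a rpcif_priv that only the provider allocates, leaving the public struct rpcif as a thin handle resolved through drvdata. The same opaque-handle split in plain C; the one-slot registry is a stand-in for dev_get_drvdata(), and this is a sketch of the pattern, not the driver:

#include <stdio.h>
#include <stdlib.h>

/* Public handle: only what consumers may touch. */
struct rpcif {
    void *dev;          /* key used to look up the private data */
    void *dirmap;
    size_t size;
};

/* Private data: owned by the provider, invisible to consumers. */
struct rpcif_priv {
    void *dirmap;
    size_t size;
    unsigned int xfer_size;   /* ...and many more provider-only fields */
};

/* Stand-in for dev_get_drvdata(): a one-slot registry. */
static struct rpcif_priv *registry;
static struct rpcif_priv *get_drvdata(void *dev) { (void)dev; return registry; }

static int rpcif_sw_init(struct rpcif *rpcif, void *dev)
{
    struct rpcif_priv *rpc = get_drvdata(dev);

    /* Mirror of the patched function: copy out the consumer-visible
     * fields; everything else stays private. */
    rpcif->dev = dev;
    rpcif->dirmap = rpc->dirmap;
    rpcif->size = rpc->size;
    return 0;
}

int main(void)
{
    static struct rpcif_priv priv = { .dirmap = (void *)0x1000, .size = 64 << 20 };
    struct rpcif handle;

    registry = &priv;         /* probe() would set drvdata here */
    rpcif_sw_init(&handle, NULL);
    printf("dirmap=%p size=%zu\n", handle.dirmap, handle.size);
    return 0;
}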
@@ -45,7 +45,7 @@ int arizona_clk32k_enable(struct arizona *arizona)
     if (arizona->clk32k_ref == 1) {
         switch (arizona->pdata.clk32k_src) {
         case ARIZONA_32KZ_MCLK1:
-            ret = pm_runtime_get_sync(arizona->dev);
+            ret = pm_runtime_resume_and_get(arizona->dev);
             if (ret != 0)
                 goto err_ref;
             ret = clk_prepare_enable(arizona->mclk[ARIZONA_MCLK1]);
@@ -151,7 +151,7 @@ static int mei_fwver(struct mei_cl_device *cldev)
     ret = __mei_cl_send(cldev->cl, (u8 *)&req, sizeof(req), 0,
                 MEI_CL_IO_TX_BLOCKING);
     if (ret < 0) {
-        dev_err(&cldev->dev, "Could not send ReqFWVersion cmd\n");
+        dev_err(&cldev->dev, "Could not send ReqFWVersion cmd ret = %d\n", ret);
         return ret;
     }
 
@@ -163,7 +163,7 @@ static int mei_fwver(struct mei_cl_device *cldev)
          * Should be at least one version block,
          * error out if nothing found
          */
-        dev_err(&cldev->dev, "Could not read FW version\n");
+        dev_err(&cldev->dev, "Could not read FW version ret = %d\n", bytes_recv);
         return -EIO;
     }
 
@@ -376,7 +376,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
     ret = __mei_cl_send(cl, (u8 *)&cmd, sizeof(cmd), 0,
                 MEI_CL_IO_TX_BLOCKING);
     if (ret < 0) {
-        dev_err(bus->dev, "Could not send IF version cmd\n");
+        dev_err(bus->dev, "Could not send IF version cmd ret = %d\n", ret);
         return ret;
     }
 
@@ -391,7 +391,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
     bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, &vtag,
                    0, 0);
     if (bytes_recv < 0 || (size_t)bytes_recv < if_version_length) {
-        dev_err(bus->dev, "Could not read IF version\n");
+        dev_err(bus->dev, "Could not read IF version ret = %d\n", bytes_recv);
         ret = -EIO;
         goto err;
     }
@@ -1709,7 +1709,7 @@ static void __init vmballoon_debugfs_init(struct vmballoon *b)
 static void __exit vmballoon_debugfs_exit(struct vmballoon *b)
 {
     static_key_disable(&balloon_stat_enabled.key);
-    debugfs_remove(debugfs_lookup("vmmemctl", NULL));
+    debugfs_lookup_and_remove("vmmemctl", NULL);
     kfree(b->stats);
     b->stats = NULL;
 }
@@ -468,6 +468,7 @@ static int uif_init(struct ubi_device *ubi)
         err = ubi_add_volume(ubi, ubi->volumes[i]);
         if (err) {
             ubi_err(ubi, "cannot add volume %d", i);
+            ubi->volumes[i] = NULL;
             goto out_volumes;
         }
     }
@@ -663,6 +664,12 @@ static int io_init(struct ubi_device *ubi, int max_beb_per1024)
     ubi->ec_hdr_alsize = ALIGN(UBI_EC_HDR_SIZE, ubi->hdrs_min_io_size);
     ubi->vid_hdr_alsize = ALIGN(UBI_VID_HDR_SIZE, ubi->hdrs_min_io_size);
 
+    if (ubi->vid_hdr_offset && ((ubi->vid_hdr_offset + UBI_VID_HDR_SIZE) >
+        ubi->vid_hdr_alsize)) {
+        ubi_err(ubi, "VID header offset %d too large.", ubi->vid_hdr_offset);
+        return -EINVAL;
+    }
+
     dbg_gen("min_io_size %d", ubi->min_io_size);
     dbg_gen("max_write_size %d", ubi->max_write_size);
     dbg_gen("hdrs_min_io_size %d", ubi->hdrs_min_io_size);
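The io_init() check above rejects a user-supplied VID header offset that would let the header spill past a buffer sized with ALIGN(UBI_VID_HDR_SIZE, min I/O unit). The arithmetic in isolation, with example sizes that are assumptions, not values from the patch:

#include <stdio.h>

#define ALIGN_UP(x, a)    (((x) + (a) - 1) / (a) * (a))

int main(void)
{
    unsigned int vid_hdr_size = 64;     /* stands in for UBI_VID_HDR_SIZE */
    unsigned int min_io = 512;          /* hdrs_min_io_size, example value */
    unsigned int alsize = ALIGN_UP(vid_hdr_size, min_io);    /* 512 */

    /* Offsets up to alsize - vid_hdr_size keep offset + size inside the
     * allocation; anything larger must be rejected, as the patch does. */
    for (unsigned int off = 384; off <= 512; off += 64)
        printf("offset %3u: %s\n", off,
               off + vid_hdr_size > alsize ? "-EINVAL" : "ok");
    return 0;
}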
@@ -146,13 +146,15 @@ void ubi_refill_pools(struct ubi_device *ubi)
     if (ubi->fm_anchor) {
         wl_tree_add(ubi->fm_anchor, &ubi->free);
         ubi->free_count++;
+        ubi->fm_anchor = NULL;
     }
 
-    /*
-     * All available PEBs are in ubi->free, now is the time to get
-     * the best anchor PEBs.
-     */
-    ubi->fm_anchor = ubi_wl_get_fm_peb(ubi, 1);
+    if (!ubi->fm_disabled)
+        /*
+         * All available PEBs are in ubi->free, now is the time to get
+         * the best anchor PEBs.
+         */
+        ubi->fm_anchor = ubi_wl_get_fm_peb(ubi, 1);
 
     for (;;) {
         enough = 0;
@@ -464,7 +464,7 @@ int ubi_resize_volume(struct ubi_volume_desc *desc, int reserved_pebs)
         for (i = 0; i < -pebs; i++) {
             err = ubi_eba_unmap_leb(ubi, vol, reserved_pebs + i);
             if (err)
-                goto out_acc;
+                goto out_free;
         }
         spin_lock(&ubi->volumes_lock);
         ubi->rsvd_pebs += pebs;
@@ -512,8 +512,10 @@ out_acc:
         ubi->avail_pebs += pebs;
         spin_unlock(&ubi->volumes_lock);
     }
+    return err;
+
 out_free:
-    kfree(new_eba_tbl);
+    ubi_eba_destroy_table(new_eba_tbl);
     return err;
 }
@@ -580,6 +582,7 @@ int ubi_add_volume(struct ubi_device *ubi, struct ubi_volume *vol)
     if (err) {
         ubi_err(ubi, "cannot add character device for volume %d, error %d",
             vol_id, err);
+        vol_release(&vol->dev);
         return err;
     }
 
@@ -590,15 +593,14 @@ int ubi_add_volume(struct ubi_device *ubi, struct ubi_volume *vol)
     vol->dev.groups = volume_dev_groups;
     dev_set_name(&vol->dev, "%s_%d", ubi->ubi_name, vol->vol_id);
     err = device_register(&vol->dev);
-    if (err)
-        goto out_cdev;
+    if (err) {
+        cdev_del(&vol->cdev);
+        put_device(&vol->dev);
+        return err;
+    }
 
     self_check_volumes(ubi);
     return err;
-
-out_cdev:
-    cdev_del(&vol->cdev);
-    return err;
 }
 
 /**
@@ -890,8 +890,11 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
 
     err = do_sync_erase(ubi, e1, vol_id, lnum, 0);
     if (err) {
-        if (e2)
+        if (e2) {
+            spin_lock(&ubi->wl_lock);
             wl_entry_destroy(ubi, e2);
+            spin_unlock(&ubi->wl_lock);
+        }
         goto out_ro;
     }
 
@@ -973,11 +976,11 @@ out_error:
     spin_lock(&ubi->wl_lock);
     ubi->move_from = ubi->move_to = NULL;
     ubi->move_to_put = ubi->wl_scheduled = 0;
+    wl_entry_destroy(ubi, e1);
+    wl_entry_destroy(ubi, e2);
     spin_unlock(&ubi->wl_lock);
 
     ubi_free_vid_buf(vidb);
-    wl_entry_destroy(ubi, e1);
-    wl_entry_destroy(ubi, e2);
 
 out_ro:
     ubi_ro_mode(ubi);
@@ -1130,14 +1133,18 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
         /* Re-schedule the LEB for erasure */
         err1 = schedule_erase(ubi, e, vol_id, lnum, 0, false);
         if (err1) {
+            spin_lock(&ubi->wl_lock);
             wl_entry_destroy(ubi, e);
+            spin_unlock(&ubi->wl_lock);
             err = err1;
             goto out_ro;
         }
         return err;
     }
 
+    spin_lock(&ubi->wl_lock);
     wl_entry_destroy(ubi, e);
+    spin_unlock(&ubi->wl_lock);
     if (err != -EIO)
         /*
         * If this is not %-EIO, we have no idea what to do. Scheduling
@@ -1253,6 +1260,18 @@ int ubi_wl_put_peb(struct ubi_device *ubi, int vol_id, int lnum,
 retry:
     spin_lock(&ubi->wl_lock);
     e = ubi->lookuptbl[pnum];
+    if (!e) {
+        /*
+         * This wl entry has been removed for some errors by other
+         * process (eg. wear leveling worker), corresponding process
+         * (except __erase_worker, which cannot concurrent with
+         * ubi_wl_put_peb) will set ubi ro_mode at the same time,
+         * just ignore this wl entry.
+         */
+        spin_unlock(&ubi->wl_lock);
+        up_read(&ubi->fm_protect);
+        return 0;
+    }
     if (e == ubi->move_from) {
         /*
         * User is putting the physical eraseblock which was selected to
@@ -513,7 +513,7 @@ static const char * const vsc9959_resource_names[TARGET_MAX] = {
  * SGMII/QSGMII MAC PCS can be found.
  */
 static const struct resource vsc9959_imdio_res =
-    DEFINE_RES_MEM_NAMED(0x8030, 0x8040, "imdio");
+    DEFINE_RES_MEM_NAMED(0x8030, 0x10, "imdio");
 
 static const struct reg_field vsc9959_regfields[REGFIELD_MAX] = {
     [ANA_ADVLEARN_VLAN_CHK] = REG_FIELD(ANA_ADVLEARN, 6, 6),
@@ -923,8 +923,8 @@ static int vsc9953_mdio_bus_alloc(struct ocelot *ocelot)
 
     rc = mscc_miim_setup(dev, &bus, "VSC9953 internal MDIO bus",
                  ocelot->targets[GCB],
-                 ocelot->map[GCB][GCB_MIIM_MII_STATUS & REG_MASK]);
-
+                 ocelot->map[GCB][GCB_MIIM_MII_STATUS & REG_MASK],
+                 true);
     if (rc) {
         dev_err(dev, "failed to setup MDIO bus\n");
         return rc;
@@ -765,7 +765,7 @@ static int otx2_prepare_ipv6_flow(struct ethtool_rx_flow_spec *fsp,
 
     /* NPC profile doesn't extract AH/ESP header fields */
     if ((ah_esp_mask->spi & ah_esp_hdr->spi) ||
-        (ah_esp_mask->tclass & ah_esp_mask->tclass))
+        (ah_esp_mask->tclass & ah_esp_hdr->tclass))
         return -EOPNOTSUPP;
 
     if (flow_type == AH_V6_FLOW)
@@ -10,6 +10,7 @@
 #include <net/tso.h>
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
+#include <net/ip6_checksum.h>
 
 #include "otx2_reg.h"
 #include "otx2_common.h"
@@ -699,7 +700,7 @@ static void otx2_sqe_add_ext(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
 
 static void otx2_sqe_add_mem(struct otx2_snd_queue *sq, int *offset,
                  int alg, u64 iova, int ptp_offset,
-                 u64 base_ns, int udp_csum)
+                 u64 base_ns, bool udp_csum_crt)
 {
     struct nix_sqe_mem_s *mem;
 
@@ -711,7 +712,7 @@ static void otx2_sqe_add_mem(struct otx2_snd_queue *sq, int *offset,
 
     if (ptp_offset) {
         mem->start_offset = ptp_offset;
-        mem->udp_csum_crt = udp_csum;
+        mem->udp_csum_crt = !!udp_csum_crt;
         mem->base_ns = base_ns;
         mem->step_type = 1;
     }
@@ -986,10 +987,11 @@ static bool otx2_validate_network_transport(struct sk_buff *skb)
     return false;
 }
 
-static bool otx2_ptp_is_sync(struct sk_buff *skb, int *offset, int *udp_csum)
+static bool otx2_ptp_is_sync(struct sk_buff *skb, int *offset, bool *udp_csum_crt)
 {
     struct ethhdr *eth = (struct ethhdr *)(skb->data);
     u16 nix_offload_hlen = 0, inner_vhlen = 0;
+    bool udp_hdr_present = false, is_sync;
     u8 *data = skb->data, *msgtype;
     __be16 proto = eth->h_proto;
     int network_depth = 0;
@@ -1029,45 +1031,81 @@ static bool otx2_ptp_is_sync(struct sk_buff *skb, int *offset, int *udp_csum)
         if (!otx2_validate_network_transport(skb))
             return false;
 
-        *udp_csum = 1;
         *offset = nix_offload_hlen + skb_transport_offset(skb) +
               sizeof(struct udphdr);
+        udp_hdr_present = true;
+
     }
 
     msgtype = data + *offset;
 
     /* Check PTP messageId is SYNC or not */
-    return (*msgtype & 0xf) == 0;
+    is_sync = !(*msgtype & 0xf);
+    if (is_sync)
+        *udp_csum_crt = udp_hdr_present;
+    else
+        *offset = 0;
+
+    return is_sync;
 }
 
 static void otx2_set_txtstamp(struct otx2_nic *pfvf, struct sk_buff *skb,
                   struct otx2_snd_queue *sq, int *offset)
 {
+    struct ethhdr *eth = (struct ethhdr *)(skb->data);
     struct ptpv2_tstamp *origin_tstamp;
-    int ptp_offset = 0, udp_csum = 0;
+    bool udp_csum_crt = false;
+    unsigned int udphoff;
     struct timespec64 ts;
+    int ptp_offset = 0;
+    __wsum skb_csum;
     u64 iova;
 
     if (unlikely(!skb_shinfo(skb)->gso_size &&
              (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))) {
-        if (unlikely(pfvf->flags & OTX2_FLAG_PTP_ONESTEP_SYNC)) {
-            if (otx2_ptp_is_sync(skb, &ptp_offset, &udp_csum)) {
-                origin_tstamp = (struct ptpv2_tstamp *)
-                    ((u8 *)skb->data + ptp_offset +
-                     PTP_SYNC_SEC_OFFSET);
-                ts = ns_to_timespec64(pfvf->ptp->tstamp);
-                origin_tstamp->seconds_msb = htons((ts.tv_sec >> 32) & 0xffff);
-                origin_tstamp->seconds_lsb = htonl(ts.tv_sec & 0xffffffff);
-                origin_tstamp->nanoseconds = htonl(ts.tv_nsec);
-                /* Point to correction field in PTP packet */
-                ptp_offset += 8;
+        if (unlikely(pfvf->flags & OTX2_FLAG_PTP_ONESTEP_SYNC &&
+                 otx2_ptp_is_sync(skb, &ptp_offset, &udp_csum_crt))) {
+            origin_tstamp = (struct ptpv2_tstamp *)
+                ((u8 *)skb->data + ptp_offset +
+                 PTP_SYNC_SEC_OFFSET);
+            ts = ns_to_timespec64(pfvf->ptp->tstamp);
+            origin_tstamp->seconds_msb = htons((ts.tv_sec >> 32) & 0xffff);
+            origin_tstamp->seconds_lsb = htonl(ts.tv_sec & 0xffffffff);
+            origin_tstamp->nanoseconds = htonl(ts.tv_nsec);
+            /* Point to correction field in PTP packet */
+            ptp_offset += 8;
+
+            /* When user disables hw checksum, stack calculates the csum,
+             * but it does not cover ptp timestamp which is added later.
+             * Recalculate the checksum manually considering the timestamp.
+             */
+            if (udp_csum_crt) {
+                struct udphdr *uh = udp_hdr(skb);
+
+                if (skb->ip_summed != CHECKSUM_PARTIAL && uh->check != 0) {
+                    udphoff = skb_transport_offset(skb);
+                    uh->check = 0;
+                    skb_csum = skb_checksum(skb, udphoff, skb->len - udphoff,
+                                0);
+                    if (ntohs(eth->h_proto) == ETH_P_IPV6)
+                        uh->check = csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+                                        &ipv6_hdr(skb)->daddr,
+                                        skb->len - udphoff,
+                                        ipv6_hdr(skb)->nexthdr,
+                                        skb_csum);
+                    else
+                        uh->check = csum_tcpudp_magic(ip_hdr(skb)->saddr,
+                                          ip_hdr(skb)->daddr,
+                                          skb->len - udphoff,
+                                          IPPROTO_UDP,
+                                          skb_csum);
+                }
+            }
         }
     } else {
         skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
     }
         iova = sq->timestamps->iova + (sq->head * sizeof(u64));
         otx2_sqe_add_mem(sq, offset, NIX_SENDMEMALG_E_SETTSTMP, iova,
-                 ptp_offset, pfvf->ptp->base_ns, udp_csum);
+                 ptp_offset, pfvf->ptp->base_ns, udp_csum_crt);
     } else {
         skb_tx_timestamp(skb);
     }
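After patching the origin timestamp into a one-step SYNC packet, the driver above refreshes the UDP checksum by hand, since the stack computed it before the mangling; the kernel does that with skb_checksum() plus csum_tcpudp_magic()/csum_ipv6_magic(). The underlying RFC 1071 ones'-complement sum is easy to model in portable C; the payload bytes and the mangled offset here are made up for illustration:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* RFC 1071 ones'-complement checksum over a byte buffer; the kernel's
 * csum helpers fold the pseudo-header into the same kind of sum. */
static uint16_t csum16(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)data[i] << 8 | data[i + 1];
    if (len & 1)
        sum += (uint32_t)data[len - 1] << 8;
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

int main(void)
{
    uint8_t payload[32] = "PTP SYNC ................";
    uint16_t before = csum16(payload, sizeof(payload));

    /* Mangle the packet the way the timestamp rewrite does, then
     * recompute from scratch; a stale checksum would now be invalid. */
    memcpy(payload + 9, "\x12\x34\x56\x78", 4);
    printf("before=0x%04x after=0x%04x\n", before,
           csum16(payload, sizeof(payload)));
    return 0;
}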
@@ -90,4 +90,8 @@ void mlx5_ec_cleanup(struct mlx5_core_dev *dev)
     err = mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_HOST_PF]);
     if (err)
         mlx5_core_warn(dev, "Timeout reclaiming external host PF pages err(%d)\n", err);
+
+    err = mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_VF]);
+    if (err)
+        mlx5_core_warn(dev, "Timeout reclaiming external host VFs pages err(%d)\n", err);
 }
@@ -86,7 +86,19 @@ static bool mlx5e_ptp_ts_cqe_drop(struct mlx5e_ptpsq *ptpsq, u16 skb_cc, u16 skb
     return (ptpsq->ts_cqe_ctr_mask && (skb_cc != skb_id));
 }
 
-static void mlx5e_ptp_skb_fifo_ts_cqe_resync(struct mlx5e_ptpsq *ptpsq, u16 skb_cc, u16 skb_id)
+static bool mlx5e_ptp_ts_cqe_ooo(struct mlx5e_ptpsq *ptpsq, u16 skb_id)
+{
+    u16 skb_cc = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_cc);
+    u16 skb_pc = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_pc);
+
+    if (PTP_WQE_CTR2IDX(skb_id - skb_cc) >= PTP_WQE_CTR2IDX(skb_pc - skb_cc))
+        return true;
+
+    return false;
+}
+
+static void mlx5e_ptp_skb_fifo_ts_cqe_resync(struct mlx5e_ptpsq *ptpsq, u16 skb_cc,
+                         u16 skb_id, int budget)
 {
     struct skb_shared_hwtstamps hwts = {};
     struct sk_buff *skb;
@@ -98,6 +110,7 @@ static void mlx5e_ptp_skb_fifo_ts_cqe_resync(struct mlx5e_ptpsq *ptpsq, u16 skb_
         hwts.hwtstamp = mlx5e_skb_cb_get_hwts(skb)->cqe_hwtstamp;
         skb_tstamp_tx(skb, &hwts);
         ptpsq->cq_stats->resync_cqe++;
+        napi_consume_skb(skb, budget);
         skb_cc = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_cc);
     }
 }
@@ -118,8 +131,14 @@ static void mlx5e_ptp_handle_ts_cqe(struct mlx5e_ptpsq *ptpsq,
         goto out;
     }
 
-    if (mlx5e_ptp_ts_cqe_drop(ptpsq, skb_cc, skb_id))
-        mlx5e_ptp_skb_fifo_ts_cqe_resync(ptpsq, skb_cc, skb_id);
+    if (mlx5e_ptp_ts_cqe_drop(ptpsq, skb_cc, skb_id)) {
+        if (mlx5e_ptp_ts_cqe_ooo(ptpsq, skb_id)) {
+            /* already handled by a previous resync */
+            ptpsq->cq_stats->ooo_cqe_drop++;
+            return;
+        }
+        mlx5e_ptp_skb_fifo_ts_cqe_resync(ptpsq, skb_cc, skb_id, budget);
+    }
 
     skb = mlx5e_skb_fifo_pop(&ptpsq->skb_fifo);
     hwtstamp = mlx5e_cqe_ts_to_ns(sq->ptp_cyc2time, sq->clock, get_cqe_ts(cqe));
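The new mlx5e_ptp_ts_cqe_ooo() above relies on the standard modular-arithmetic trick: with free-running u16 counters, (u16)(id - cc) is an element's distance from the consumer, and it names a live entry only when it is smaller than the current fill (u16)(pc - cc). A self-contained check of that arithmetic, including across the 16-bit wrap:

#include <stdint.h>
#include <stdio.h>

/* Returns 1 when skb_id no longer sits between consumer and producer,
 * i.e. the CQE refers to an entry a previous resync already consumed. */
static int ts_cqe_ooo(uint16_t cc, uint16_t pc, uint16_t id)
{
    return (uint16_t)(id - cc) >= (uint16_t)(pc - cc);
}

int main(void)
{
    /* cc=0xfffe, pc=0x0002 holds 4 in-flight entries across the wrap. */
    printf("%d\n", ts_cqe_ooo(0xfffe, 0x0002, 0xffff));    /* 0: in window */
    printf("%d\n", ts_cqe_ooo(0xfffe, 0x0002, 0x0002));    /* 1: stale     */
    printf("%d\n", ts_cqe_ooo(0xfffe, 0x0002, 0xfff0));    /* 1: stale     */
    return 0;
}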
@@ -81,7 +81,7 @@ void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq);
 static inline bool
 mlx5e_skb_fifo_has_room(struct mlx5e_skb_fifo *fifo)
 {
-    return (*fifo->pc - *fifo->cc) < fifo->mask;
+    return (u16)(*fifo->pc - *fifo->cc) < fifo->mask;
 }
 
 static inline bool
@@ -297,6 +297,8 @@ void mlx5e_skb_fifo_push(struct mlx5e_skb_fifo *fifo, struct sk_buff *skb)
 static inline
 struct sk_buff *mlx5e_skb_fifo_pop(struct mlx5e_skb_fifo *fifo)
 {
+    WARN_ON_ONCE(*fifo->pc == *fifo->cc);
+
     return *mlx5e_skb_fifo_get(fifo, (*fifo->cc)++);
 }
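The one-character txrx.h fix above addresses the same class of bug from the other side: *pc and *cc are u16 ring counters, and without the cast the subtraction is promoted to int, so once the counters wrap the difference goes negative and the full fifo still looks roomy. A tiny demonstration with assumed counter values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t pc = 4, cc = 0xfffd, mask = 7;   /* 7 entries in flight: full */

    /* Promoted to int: 4 - 65533 = -65529 < 7, so the full fifo
     * wrongly reports room. */
    printf("buggy: %d\n", (pc - cc) < mask);
    /* Truncated back to u16: 7 < 7 is false, the real occupancy. */
    printf("fixed: %d\n", (uint16_t)(pc - cc) < mask);
    return 0;
}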
@@ -2121,6 +2121,7 @@ static const struct counter_desc ptp_cq_stats_desc[] = {
     { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, abort_abs_diff_ns) },
     { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, resync_cqe) },
     { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, resync_event) },
+    { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, ooo_cqe_drop) },
 };
 
 static const struct counter_desc ptp_rq_stats_desc[] = {
@@ -459,6 +459,7 @@ struct mlx5e_ptp_cq_stats {
     u64 abort_abs_diff_ns;
     u64 resync_cqe;
     u64 resync_event;
+    u64 ooo_cqe_drop;
 };
 
 struct mlx5e_stats {
@@ -1043,7 +1043,8 @@ mlx5_eswitch_add_send_to_vport_rule(struct mlx5_eswitch *on_esw,
     dest.vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID;
     flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
 
-    if (rep->vport == MLX5_VPORT_UPLINK)
+    if (MLX5_CAP_ESW_FLOWTABLE(on_esw->dev, flow_source) &&
+        rep->vport == MLX5_VPORT_UPLINK)
         spec->flow_context.flow_source = MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT;
 
     flow_rule = mlx5_add_flow_rules(on_esw->fdb_table.offloads.slow_fdb,
@@ -105,6 +105,7 @@ int mlx5_geneve_tlv_option_add(struct mlx5_geneve *geneve, struct geneve_opt *op
         geneve->opt_type = opt->type;
         geneve->obj_id = res;
         geneve->refcount++;
+        res = 0;
     }
 
 unlock:
@@ -147,6 +147,10 @@ mlx5_device_disable_sriov(struct mlx5_core_dev *dev, int num_vfs, bool clear_vf)
 
     mlx5_eswitch_disable_sriov(dev->priv.eswitch, clear_vf);
 
+    /* For ECPFs, skip waiting for host VF pages until ECPF is destroyed */
+    if (mlx5_core_is_ecpf(dev))
+        return;
+
     if (mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_VF]))
         mlx5_core_warn(dev, "timeout reclaiming VFs pages\n");
 }
@@ -2894,8 +2894,10 @@ static int happy_meal_pci_probe(struct pci_dev *pdev,
         goto err_out_clear_quattro;
     }
 
-    hpreg_res = devm_request_region(&pdev->dev, pci_resource_start(pdev, 0),
-                    pci_resource_len(pdev, 0), DRV_NAME);
+    hpreg_res = devm_request_mem_region(&pdev->dev,
+                        pci_resource_start(pdev, 0),
+                        pci_resource_len(pdev, 0),
+                        DRV_NAME);
     if (!hpreg_res) {
         err = -EBUSY;
         dev_err(&pdev->dev, "Cannot obtain PCI resources, aborting.\n");
@@ -52,6 +52,7 @@ struct mscc_miim_info {
 struct mscc_miim_dev {
     struct regmap *regs;
     int mii_status_offset;
+    bool ignore_read_errors;
     struct regmap *phy_regs;
     const struct mscc_miim_info *info;
     struct clk *clk;
@@ -138,7 +139,7 @@ static int mscc_miim_read(struct mii_bus *bus, int mii_id, int regnum)
         goto out;
     }
 
-    if (val & MSCC_MIIM_DATA_ERROR) {
+    if (!miim->ignore_read_errors && !!(val & MSCC_MIIM_DATA_ERROR)) {
         ret = -EIO;
         goto out;
     }
@@ -218,7 +219,8 @@ static const struct regmap_config mscc_miim_phy_regmap_config = {
 };
 
 int mscc_miim_setup(struct device *dev, struct mii_bus **pbus, const char *name,
-            struct regmap *mii_regmap, int status_offset)
+            struct regmap *mii_regmap, int status_offset,
+            bool ignore_read_errors)
 {
     struct mscc_miim_dev *miim;
     struct mii_bus *bus;
@@ -240,6 +242,7 @@ int mscc_miim_setup(struct device *dev, struct mii_bus **pbus, const char *name,
 
     miim->regs = mii_regmap;
     miim->mii_status_offset = status_offset;
+    miim->ignore_read_errors = ignore_read_errors;
 
     *pbus = bus;
 
@@ -291,7 +294,7 @@ static int mscc_miim_probe(struct platform_device *pdev)
         return dev_err_probe(dev, PTR_ERR(phy_regmap),
                      "Unable to create phy register regmap\n");
 
-    ret = mscc_miim_setup(dev, &bus, "mscc_miim", mii_regmap, 0);
+    ret = mscc_miim_setup(dev, &bus, "mscc_miim", mii_regmap, 0, false);
     if (ret < 0) {
         dev_err(dev, "Unable to setup the MDIO bus\n");
         return ret;
@@ -672,6 +672,12 @@ int st_nci_se_io(struct nci_dev *ndev, u32 se_idx,
                 ST_NCI_EVT_TRANSMIT_DATA, apdu,
                 apdu_length);
     default:
+        /* Need to free cb_context here as at the moment we can't
+         * clearly indicate to the caller if the callback function
+         * would be called (and free it) or not. In both cases a
+         * negative value may be returned to the caller.
+         */
+        kfree(cb_context);
         return -ENODEV;
     }
 }
@@ -236,6 +236,12 @@ int st21nfca_hci_se_io(struct nfc_hci_dev *hdev, u32 se_idx,
                 ST21NFCA_EVT_TRANSMIT_DATA,
                 apdu, apdu_length);
     default:
+        /* Need to free cb_context here as at the moment we can't
+         * clearly indicate to the caller if the callback function
+         * would be called (and free it) or not. In both cases a
+         * negative value may be returned to the caller.
+         */
+        kfree(cb_context);
         return -ENODEV;
     }
 }
@@ -38,6 +38,7 @@ struct nvme_ns_info {
     bool is_shared;
     bool is_readonly;
     bool is_ready;
+    bool is_removed;
 };
 
 unsigned int admin_timeout = 60;
@@ -1439,16 +1440,8 @@ static int nvme_identify_ns(struct nvme_ctrl *ctrl, unsigned nsid,
     error = nvme_submit_sync_cmd(ctrl->admin_q, &c, *id, sizeof(**id));
     if (error) {
         dev_warn(ctrl->device, "Identify namespace failed (%d)\n", error);
-        goto out_free_id;
+        kfree(*id);
     }
 
-    error = NVME_SC_INVALID_NS | NVME_SC_DNR;
-    if ((*id)->ncap == 0) /* namespace not allocated or attached */
-        goto out_free_id;
-    return 0;
-
-out_free_id:
-    kfree(*id);
     return error;
 }
@@ -1462,6 +1455,13 @@ static int nvme_ns_info_from_identify(struct nvme_ctrl *ctrl,
     ret = nvme_identify_ns(ctrl, info->nsid, &id);
     if (ret)
         return ret;
+
+    if (id->ncap == 0) {
+        /* namespace not allocated or attached */
+        info->is_removed = true;
+        return -ENODEV;
+    }
+
     info->anagrpid = id->anagrpid;
     info->is_shared = id->nmic & NVME_NS_NMIC_SHARED;
     info->is_readonly = id->nsattr & NVME_NS_ATTR_RO;
@@ -4388,6 +4388,7 @@ static void nvme_scan_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 {
     struct nvme_ns_info info = { .nsid = nsid };
     struct nvme_ns *ns;
+    int ret;
 
     if (nvme_identify_ns_descs(ctrl, &info))
         return;
@@ -4404,19 +4405,19 @@ static void nvme_scan_ns(struct nvme_ctrl *ctrl, unsigned nsid)
     * set up a namespace. If not fall back to the legacy version.
     */
     if ((ctrl->cap & NVME_CAP_CRMS_CRIMS) ||
-        (info.ids.csi != NVME_CSI_NVM && info.ids.csi != NVME_CSI_ZNS)) {
-        if (nvme_ns_info_from_id_cs_indep(ctrl, &info))
-            return;
-    } else {
-        if (nvme_ns_info_from_identify(ctrl, &info))
-            return;
-    }
+        (info.ids.csi != NVME_CSI_NVM && info.ids.csi != NVME_CSI_ZNS))
+        ret = nvme_ns_info_from_id_cs_indep(ctrl, &info);
+    else
+        ret = nvme_ns_info_from_identify(ctrl, &info);
+
+    if (info.is_removed)
+        nvme_ns_remove_by_nsid(ctrl, nsid);
 
     /*
     * Ignore the namespace if it is not ready. We will get an AEN once it
     * becomes ready and restart the scan.
     */
-    if (!info.is_ready)
+    if (ret || !info.is_ready)
         return;
 
     ns = nvme_find_get_ns(ctrl, nsid);
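The nvme core change above moves the ncap == 0 test out of nvme_identify_ns() and into the caller, which records the condition in info->is_removed so that nvme_scan_ns() can reap the namespace even though the identify path "failed". A condensed model of that error-plus-flag flow; the identify data and error number are illustrative only:

#include <stdbool.h>
#include <stdio.h>

#define ENODEV 19

struct ns_info {
    unsigned int nsid;
    bool is_removed;
    bool is_ready;
};

/* Pretend identify data: nsid 2 reports zero capacity. */
static unsigned int identify_ncap(unsigned int nsid) { return nsid == 2 ? 0 : 8; }

static int info_from_identify(struct ns_info *info)
{
    if (identify_ncap(info->nsid) == 0) {
        /* namespace not allocated or attached */
        info->is_removed = true;
        return -ENODEV;
    }
    info->is_ready = true;
    return 0;
}

static void scan_ns(unsigned int nsid)
{
    struct ns_info info = { .nsid = nsid };
    int ret = info_from_identify(&info);

    if (info.is_removed)
        printf("nsid %u: remove stale namespace\n", nsid);
    if (ret || !info.is_ready)
        return;        /* wait for an AEN, as the driver does */
    printf("nsid %u: validate or allocate\n", nsid);
}

int main(void)
{
    scan_ns(1);
    scan_ns(2);
    return 0;
}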
@@ -189,7 +189,8 @@ nvmf_ctlr_matches_baseopts(struct nvme_ctrl *ctrl,
 
 static inline char *nvmf_ctrl_subsysnqn(struct nvme_ctrl *ctrl)
 {
-    if (!ctrl->subsys)
+    if (!ctrl->subsys ||
+        !strcmp(ctrl->opts->subsysnqn, NVME_DISC_SUBSYS_NAME))
         return ctrl->opts->subsysnqn;
     return ctrl->subsys->subnqn;
 }
@@ -2488,6 +2488,10 @@ static int nvme_tcp_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
 
     len = nvmf_get_address(ctrl, buf, size);
 
+    mutex_lock(&queue->queue_lock);
+
+    if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags))
+        goto done;
     ret = kernel_getsockname(queue->sock, (struct sockaddr *)&src_addr);
     if (ret > 0) {
         if (len > 0)
@@ -2495,6 +2499,8 @@ static int nvme_tcp_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
         len += scnprintf(buf + len, size - len, "%ssrc_addr=%pISc\n",
                 (len) ? "," : "", &src_addr);
     }
+done:
+    mutex_unlock(&queue->queue_lock);
 
     return len;
 }
@@ -15,9 +15,14 @@
 #include "../pci.h"
 
 /* Device IDs */
-#define DEV_PCIE_PORT_0 0x7a09
-#define DEV_PCIE_PORT_1 0x7a19
-#define DEV_PCIE_PORT_2 0x7a29
+#define DEV_LS2K_PCIE_PORT0 0x1a05
+#define DEV_LS7A_PCIE_PORT0 0x7a09
+#define DEV_LS7A_PCIE_PORT1 0x7a19
+#define DEV_LS7A_PCIE_PORT2 0x7a29
+#define DEV_LS7A_PCIE_PORT3 0x7a39
+#define DEV_LS7A_PCIE_PORT4 0x7a49
+#define DEV_LS7A_PCIE_PORT5 0x7a59
+#define DEV_LS7A_PCIE_PORT6 0x7a69
 
 #define DEV_LS2K_APB 0x7a02
 #define DEV_LS7A_GMAC 0x7a03
@@ -53,11 +58,11 @@ static void bridge_class_quirk(struct pci_dev *dev)
     dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL;
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
-            DEV_PCIE_PORT_0, bridge_class_quirk);
+            DEV_LS7A_PCIE_PORT0, bridge_class_quirk);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
-            DEV_PCIE_PORT_1, bridge_class_quirk);
+            DEV_LS7A_PCIE_PORT1, bridge_class_quirk);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
-            DEV_PCIE_PORT_2, bridge_class_quirk);
+            DEV_LS7A_PCIE_PORT2, bridge_class_quirk);
 
 static void system_bus_quirk(struct pci_dev *pdev)
 {
@@ -75,37 +80,33 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
             DEV_LS7A_LPC, system_bus_quirk);
 
-static void loongson_mrrs_quirk(struct pci_dev *dev)
+static void loongson_mrrs_quirk(struct pci_dev *pdev)
 {
-    struct pci_bus *bus = dev->bus;
-    struct pci_dev *bridge;
-    static const struct pci_device_id bridge_devids[] = {
-        { PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_0) },
-        { PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_1) },
-        { PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_2) },
-        { 0, },
-    };
+    /*
+     * Some Loongson PCIe ports have h/w limitations of maximum read
+     * request size. They can't handle anything larger than this. So
+     * force this limit on any devices attached under these ports.
+     */
+    struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus);
 
-    /* look for the matching bridge */
-    while (!pci_is_root_bus(bus)) {
-        bridge = bus->self;
-        bus = bus->parent;
-        /*
-         * Some Loongson PCIe ports have a h/w limitation of
-         * 256 bytes maximum read request size. They can't handle
-         * anything larger than this. So force this limit on
-         * any devices attached under these ports.
-         */
-        if (pci_match_id(bridge_devids, bridge)) {
-            if (pcie_get_readrq(dev) > 256) {
-                pci_info(dev, "limiting MRRS to 256\n");
-                pcie_set_readrq(dev, 256);
-            }
-            break;
-        }
-    }
+    bridge->no_inc_mrrs = 1;
 }
-DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+            DEV_LS2K_PCIE_PORT0, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+            DEV_LS7A_PCIE_PORT0, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+            DEV_LS7A_PCIE_PORT1, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+            DEV_LS7A_PCIE_PORT2, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+            DEV_LS7A_PCIE_PORT3, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+            DEV_LS7A_PCIE_PORT4, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+            DEV_LS7A_PCIE_PORT5, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+            DEV_LS7A_PCIE_PORT6, loongson_mrrs_quirk);
 
 static void loongson_pci_pin_quirk(struct pci_dev *pdev)
 {
@@ -1086,6 +1086,8 @@ static void quirk_cmd_compl(struct pci_dev *pdev)
 }
 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
                   PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x010e,
+                  PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0110,
                   PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400,
@@ -976,24 +976,41 @@ bool acpi_pci_power_manageable(struct pci_dev *dev)
 bool acpi_pci_bridge_d3(struct pci_dev *dev)
 {
     struct pci_dev *rpdev;
-    struct acpi_device *adev;
-    acpi_status status;
-    unsigned long long state;
+    struct acpi_device *adev, *rpadev;
     const union acpi_object *obj;
 
     if (acpi_pci_disabled || !dev->is_hotplug_bridge)
         return false;
 
-    /* Assume D3 support if the bridge is power-manageable by ACPI. */
-    if (acpi_pci_power_manageable(dev))
-        return true;
+    adev = ACPI_COMPANION(&dev->dev);
+    if (adev) {
+        /*
+         * If the bridge has _S0W, whether or not it can go into D3
+         * depends on what is returned by that object. In particular,
+         * if the power state returned by _S0W is D2 or shallower,
+         * entering D3 should not be allowed.
+         */
+        if (acpi_dev_power_state_for_wake(adev) <= ACPI_STATE_D2)
+            return false;
+
+        /*
+         * Otherwise, assume that the bridge can enter D3 so long as it
+         * is power-manageable via ACPI.
+         */
+        if (acpi_device_power_manageable(adev))
+            return true;
+    }
 
     rpdev = pcie_find_root_port(dev);
     if (!rpdev)
         return false;
 
-    adev = ACPI_COMPANION(&rpdev->dev);
-    if (!adev)
+    if (rpdev == dev)
+        rpadev = adev;
+    else
+        rpadev = ACPI_COMPANION(&rpdev->dev);
+
+    if (!rpadev)
         return false;
 
     /*
@@ -1001,15 +1018,15 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev)
     * doesn't supply a wakeup GPE via _PRW, it cannot signal hotplug
     * events from low-power states including D3hot and D3cold.
     */
-    if (!adev->wakeup.flags.valid)
+    if (!rpadev->wakeup.flags.valid)
         return false;
 
     /*
-    * If the Root Port cannot wake itself from D3hot or D3cold, we
-    * can't use D3.
+    * In the bridge-below-a-Root-Port case, evaluate _S0W for the Root Port
+    * to verify whether or not it can signal wakeup from D3.
     */
-    status = acpi_evaluate_integer(adev->handle, "_S0W", NULL, &state);
-    if (ACPI_SUCCESS(status) && state < ACPI_STATE_D3_HOT)
+    if (rpadev != adev &&
+        acpi_dev_power_state_for_wake(rpadev) <= ACPI_STATE_D2)
         return false;
 
     /*
@@ -1018,7 +1035,7 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev)
     * bridges *below* that Root Port can also signal hotplug events
     * while in D3.
     */
-    if (!acpi_dev_get_property(adev, "HotPlugSupportInD3",
+    if (!acpi_dev_get_property(rpadev, "HotPlugSupportInD3",
                    ACPI_TYPE_INTEGER, &obj) &&
         obj->integer.value == 1)
         return true;
@@ -6017,6 +6017,7 @@ int pcie_set_readrq(struct pci_dev *dev, int rq)
 {
     u16 v;
     int ret;
+    struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
 
     if (rq < 128 || rq > 4096 || !is_power_of_2(rq))
         return -EINVAL;
@@ -6035,6 +6036,15 @@ int pcie_set_readrq(struct pci_dev *dev, int rq)
 
     v = (ffs(rq) - 8) << 12;
 
+    if (bridge->no_inc_mrrs) {
+        int max_mrrs = pcie_get_readrq(dev);
+
+        if (rq > max_mrrs) {
+            pci_info(dev, "can't set Max_Read_Request_Size to %d; max is %d\n", rq, max_mrrs);
+            return -EINVAL;
+        }
+    }
+
     ret = pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
                         PCI_EXP_DEVCTL_READRQ, v);
|
|||
PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
|
||||
}
|
||||
|
||||
/*
|
||||
* Wangxun 10G/1G NICs have no ACS capability, and on multi-function
|
||||
* devices, peer-to-peer transactions are not be used between the functions.
|
||||
* So add an ACS quirk for below devices to isolate functions.
|
||||
* SFxxx 1G NICs(em).
|
||||
* RP1000/RP2000 10G NICs(sp).
|
||||
*/
|
||||
static int pci_quirk_wangxun_nic_acs(struct pci_dev *dev, u16 acs_flags)
|
||||
{
|
||||
switch (dev->device) {
|
||||
case 0x0100 ... 0x010F:
|
||||
case 0x1001:
|
||||
case 0x2001:
|
||||
return pci_acs_ctrl_enabled(acs_flags,
|
||||
PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static const struct pci_dev_acs_enabled {
|
||||
u16 vendor;
|
||||
u16 device;
|
||||
|
|
@ -4980,6 +5000,8 @@ static const struct pci_dev_acs_enabled {
|
|||
{ PCI_VENDOR_ID_NXP, 0x8d9b, pci_quirk_nxp_rp_acs },
|
||||
/* Zhaoxin Root/Downstream Ports */
|
||||
{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
|
||||
/* Wangxun nics */
|
||||
{ PCI_VENDOR_ID_WANGXUN, PCI_ANY_ID, pci_quirk_wangxun_nic_acs },
|
||||
{ 0 }
|
||||
};
|
||||
|
||||
|
|
|
|||
|
|
@ -1765,12 +1765,70 @@ static void adjust_bridge_window(struct pci_dev *bridge, struct resource *res,
|
|||
add_size = size - new_size;
|
||||
pci_dbg(bridge, "bridge window %pR shrunken by %pa\n", res,
|
||||
&add_size);
|
||||
} else {
|
||||
return;
|
||||
}
|
||||
|
||||
res->end = res->start + new_size - 1;
|
||||
remove_from_list(add_list, res);
|
||||
|
||||
/* If the resource is part of the add_list, remove it now */
|
||||
if (add_list)
|
||||
remove_from_list(add_list, res);
|
||||
}
|
||||
|
||||
static void remove_dev_resource(struct resource *avail, struct pci_dev *dev,
|
||||
struct resource *res)
|
||||
{
|
||||
resource_size_t size, align, tmp;
|
||||
|
||||
size = resource_size(res);
|
||||
if (!size)
|
||||
return;
|
||||
|
||||
align = pci_resource_alignment(dev, res);
|
||||
align = align ? ALIGN(avail->start, align) - avail->start : 0;
|
||||
tmp = align + size;
|
||||
avail->start = min(avail->start + tmp, avail->end + 1);
|
||||
}
|
||||
|
||||
static void remove_dev_resources(struct pci_dev *dev, struct resource *io,
|
||||
struct resource *mmio,
|
||||
struct resource *mmio_pref)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < PCI_NUM_RESOURCES; i++) {
|
||||
struct resource *res = &dev->resource[i];
|
||||
|
||||
if (resource_type(res) == IORESOURCE_IO) {
|
||||
remove_dev_resource(io, dev, res);
|
||||
} else if (resource_type(res) == IORESOURCE_MEM) {
|
||||
|
||||
/*
|
||||
* Make sure prefetchable memory is reduced from
|
||||
* the correct resource. Specifically we put 32-bit
|
||||
* prefetchable memory in non-prefetchable window
|
||||
* if there is an 64-bit pretchable window.
|
||||
*
|
||||
* See comments in __pci_bus_size_bridges() for
|
||||
* more information.
|
||||
*/
|
||||
if ((res->flags & IORESOURCE_PREFETCH) &&
|
||||
((res->flags & IORESOURCE_MEM_64) ==
|
||||
(mmio_pref->flags & IORESOURCE_MEM_64)))
|
||||
remove_dev_resource(mmio_pref, dev, res);
|
||||
else
|
||||
remove_dev_resource(mmio, dev, res);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* io, mmio and mmio_pref contain the total amount of bridge window space
|
||||
* available. This includes the minimal space needed to cover all the
|
||||
* existing devices on the bus and the possible extra space that can be
|
||||
* shared with the bridges.
|
||||
*/
|
||||
static void pci_bus_distribute_available_resources(struct pci_bus *bus,
|
||||
struct list_head *add_list,
|
||||
struct resource io,
|
||||
|
|
@ -1780,7 +1838,7 @@ static void pci_bus_distribute_available_resources(struct pci_bus *bus,
|
|||
unsigned int normal_bridges = 0, hotplug_bridges = 0;
|
||||
struct resource *io_res, *mmio_res, *mmio_pref_res;
|
||||
struct pci_dev *dev, *bridge = bus->self;
|
||||
resource_size_t io_per_hp, mmio_per_hp, mmio_pref_per_hp, align;
|
||||
resource_size_t io_per_b, mmio_per_b, mmio_pref_per_b, align;
|
||||
|
||||
io_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW];
|
||||
mmio_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
|
||||
|
|
@ -1824,94 +1882,88 @@ static void pci_bus_distribute_available_resources(struct pci_bus *bus,
|
|||
normal_bridges++;
|
||||
}
|
||||
|
||||
/*
|
||||
* There is only one bridge on the bus so it gets all available
|
||||
* resources which it can then distribute to the possible hotplug
|
||||
* bridges below.
|
||||
*/
|
||||
if (hotplug_bridges + normal_bridges == 1) {
|
||||
dev = list_first_entry(&bus->devices, struct pci_dev, bus_list);
|
||||
if (dev->subordinate)
|
||||
pci_bus_distribute_available_resources(dev->subordinate,
|
||||
add_list, io, mmio, mmio_pref);
|
||||
if (!(hotplug_bridges + normal_bridges))
|
||||
return;
|
||||
|
||||
/*
|
||||
* Calculate the amount of space we can forward from "bus" to any
|
||||
* downstream buses, i.e., the space left over after assigning the
|
||||
* BARs and windows on "bus".
|
||||
*/
|
||||
list_for_each_entry(dev, &bus->devices, bus_list) {
|
||||
if (!dev->is_virtfn)
|
||||
remove_dev_resources(dev, &io, &mmio, &mmio_pref);
|
||||
}
|
||||
|
||||
if (hotplug_bridges == 0)
|
||||
return;
|
||||
|
||||
/*
|
||||
* Calculate the total amount of extra resource space we can
|
||||
 	 * pass to bridges below this one. This is basically the
 	 * extra space reduced by the minimal required space for the
 	 * non-hotplug bridges.
+	 * If there is at least one hotplug bridge on this bus it gets all
+	 * the extra resource space that was left after the reductions
+	 * above.
+	 *
+	 * If there are no hotplug bridges the extra resource space is
+	 * split between non-hotplug bridges. This is to allow possible
+	 * hotplug bridges below them to get the extra space as well.
 	 */
+	if (hotplug_bridges) {
+		io_per_b = div64_ul(resource_size(&io), hotplug_bridges);
+		mmio_per_b = div64_ul(resource_size(&mmio), hotplug_bridges);
+		mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
+					   hotplug_bridges);
+	} else {
+		io_per_b = div64_ul(resource_size(&io), normal_bridges);
+		mmio_per_b = div64_ul(resource_size(&mmio), normal_bridges);
+		mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
+					   normal_bridges);
+	}
 
 	for_each_pci_bridge(dev, bus) {
 		resource_size_t used_size;
 		struct resource *res;
 
 		if (dev->is_hotplug_bridge)
 			continue;
 
 		/*
 		 * Reduce the available resource space by what the
 		 * bridge and devices below it occupy.
 		 */
 		res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
 		align = pci_resource_alignment(dev, res);
 		align = align ? ALIGN(io.start, align) - io.start : 0;
 		used_size = align + resource_size(res);
 		if (!res->parent)
 			io.start = min(io.start + used_size, io.end + 1);
 
 		res = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
 		align = pci_resource_alignment(dev, res);
 		align = align ? ALIGN(mmio.start, align) - mmio.start : 0;
 		used_size = align + resource_size(res);
 		if (!res->parent)
 			mmio.start = min(mmio.start + used_size, mmio.end + 1);
 
 		res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
 		align = pci_resource_alignment(dev, res);
 		align = align ? ALIGN(mmio_pref.start, align) -
 			mmio_pref.start : 0;
 		used_size = align + resource_size(res);
 		if (!res->parent)
 			mmio_pref.start = min(mmio_pref.start + used_size,
 					      mmio_pref.end + 1);
 	}
 
-	io_per_hp = div64_ul(resource_size(&io), hotplug_bridges);
-	mmio_per_hp = div64_ul(resource_size(&mmio), hotplug_bridges);
-	mmio_pref_per_hp = div64_ul(resource_size(&mmio_pref),
-				    hotplug_bridges);
-
 	/*
 	 * Go over devices on this bus and distribute the remaining
 	 * resource space between hotplug bridges.
 	 */
 	for_each_pci_bridge(dev, bus) {
 		struct pci_bus *b;
 
 		b = dev->subordinate;
-		if (!b || !dev->is_hotplug_bridge)
+		if (!b)
 			continue;
+		if (hotplug_bridges && !dev->is_hotplug_bridge)
+			continue;
+
+		res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
 
 		/*
-		 * Distribute available extra resources equally between
-		 * hotplug-capable downstream ports taking alignment into
-		 * account.
+		 * Make sure the split resource space is properly aligned
+		 * for bridge windows (align it down to avoid going above
+		 * what is available).
 		 */
-		io.end = io.start + io_per_hp - 1;
-		mmio.end = mmio.start + mmio_per_hp - 1;
-		mmio_pref.end = mmio_pref.start + mmio_pref_per_hp - 1;
+		align = pci_resource_alignment(dev, res);
+		io.end = align ? io.start + ALIGN_DOWN(io_per_b, align) - 1
+			       : io.start + io_per_b - 1;
+
+		/*
+		 * The x_per_b holds the extra resource space that can be
+		 * added for each bridge but there is the minimal already
+		 * reserved as well so adjust x.start down accordingly to
+		 * cover the whole space.
+		 */
+		io.start -= resource_size(res);
+
+		res = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
+		align = pci_resource_alignment(dev, res);
+		mmio.end = align ? mmio.start + ALIGN_DOWN(mmio_per_b, align) - 1
+				 : mmio.start + mmio_per_b - 1;
+		mmio.start -= resource_size(res);
+
+		res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
+		align = pci_resource_alignment(dev, res);
+		mmio_pref.end = align ? mmio_pref.start +
+					ALIGN_DOWN(mmio_pref_per_b, align) - 1
+				      : mmio_pref.start + mmio_pref_per_b - 1;
+		mmio_pref.start -= resource_size(res);
 
 		pci_bus_distribute_available_resources(b, add_list, io, mmio,
 						       mmio_pref);
 
-		io.start += io_per_hp;
-		mmio.start += mmio_per_hp;
-		mmio_pref.start += mmio_pref_per_hp;
+		io.start = io.end + 1;
+		mmio.start = mmio.end + 1;
+		mmio_pref.start = mmio_pref.end + 1;
 	}
 }
@@ -1923,6 +1975,8 @@ static void pci_bridge_distribute_available_resources(struct pci_dev *bridge,
 	if (!bridge->is_hotplug_bridge)
 		return;
 
+	pci_dbg(bridge, "distributing available resources\n");
+
 	/* Take the initial extra resources from the hotplug port */
 	available_io = bridge->resource[PCI_BRIDGE_IO_WINDOW];
 	available_mmio = bridge->resource[PCI_BRIDGE_MEM_WINDOW];
 
@@ -1934,6 +1988,54 @@
 					       available_mmio_pref);
 }
 
+static bool pci_bridge_resources_not_assigned(struct pci_dev *dev)
+{
+	const struct resource *r;
+
+	/*
+	 * If the child device's resources are not yet assigned it means we
+	 * are configuring them (not the boot firmware), so we should be
+	 * able to extend the upstream bridge resources in the same way we
+	 * do with the normal hotplug case.
+	 */
+	r = &dev->resource[PCI_BRIDGE_IO_WINDOW];
+	if (r->flags && !(r->flags & IORESOURCE_STARTALIGN))
+		return false;
+	r = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
+	if (r->flags && !(r->flags & IORESOURCE_STARTALIGN))
+		return false;
+	r = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
+	if (r->flags && !(r->flags & IORESOURCE_STARTALIGN))
+		return false;
+
+	return true;
+}
+
+static void
+pci_root_bus_distribute_available_resources(struct pci_bus *bus,
+					    struct list_head *add_list)
+{
+	struct pci_dev *dev, *bridge = bus->self;
+
+	for_each_pci_bridge(dev, bus) {
+		struct pci_bus *b;
+
+		b = dev->subordinate;
+		if (!b)
+			continue;
+
+		/*
+		 * Need to check "bridge" here too because it is NULL
+		 * in case of root bus.
+		 */
+		if (bridge && pci_bridge_resources_not_assigned(dev))
+			pci_bridge_distribute_available_resources(bridge,
+								  add_list);
+		else
+			pci_root_bus_distribute_available_resources(b, add_list);
+	}
+}
+
 /*
  * First try will not touch PCI bridge res.
  * Second and later try will clear small leaf bridge res.
@@ -1973,6 +2075,8 @@ again:
 	 */
 	__pci_bus_size_bridges(bus, add_list);
 
+	pci_root_bus_distribute_available_resources(bus, add_list);
+
 	/* Depth last, allocate resources and update the hardware. */
 	__pci_bus_assign_resources(bus, add_list, &fail_head);
 	if (add_list)
diff --git a/drivers/phy/rockchip/phy-rockchip-typec.c b/drivers/phy/rockchip/phy-rockchip-typec.c
@@ -808,9 +808,8 @@ static int tcphy_get_mode(struct rockchip_typec_phy *tcphy)
 	struct extcon_dev *edev = tcphy->extcon;
 	union extcon_property_value property;
 	unsigned int id;
-	bool ufp, dp;
 	u8 mode;
-	int ret;
+	int ret, ufp, dp;
 
 	if (!edev)
 		return MODE_DFP_USB;
 
diff --git a/drivers/ptp/ptp_private.h b/drivers/ptp/ptp_private.h
@@ -66,7 +66,7 @@ struct ptp_vclock {
 	struct hlist_node vclock_hash_node;
 	struct cyclecounter cc;
 	struct timecounter tc;
-	spinlock_t lock;	/* protects tc/cc */
+	struct mutex lock;	/* protects tc/cc */
 };
 
 /*
diff --git a/drivers/ptp/ptp_vclock.c b/drivers/ptp/ptp_vclock.c
@@ -43,16 +43,16 @@ static void ptp_vclock_hash_del(struct ptp_vclock *vclock)
 static int ptp_vclock_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
 {
 	struct ptp_vclock *vclock = info_to_vclock(ptp);
-	unsigned long flags;
 	s64 adj;
 
 	adj = (s64)scaled_ppm << PTP_VCLOCK_FADJ_SHIFT;
 	adj = div_s64(adj, PTP_VCLOCK_FADJ_DENOMINATOR);
 
-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
 	timecounter_read(&vclock->tc);
 	vclock->cc.mult = PTP_VCLOCK_CC_MULT + adj;
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);
 
 	return 0;
 }
@@ -60,11 +60,11 @@ static int ptp_vclock_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
 static int ptp_vclock_adjtime(struct ptp_clock_info *ptp, s64 delta)
 {
 	struct ptp_vclock *vclock = info_to_vclock(ptp);
-	unsigned long flags;
 
-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
 	timecounter_adjtime(&vclock->tc, delta);
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);
 
 	return 0;
 }
@@ -73,12 +73,12 @@ static int ptp_vclock_gettime(struct ptp_clock_info *ptp,
 			      struct timespec64 *ts)
 {
 	struct ptp_vclock *vclock = info_to_vclock(ptp);
-	unsigned long flags;
 	u64 ns;
 
-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
 	ns = timecounter_read(&vclock->tc);
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);
 	*ts = ns_to_timespec64(ns);
 
 	return 0;
@@ -91,7 +91,6 @@ static int ptp_vclock_gettimex(struct ptp_clock_info *ptp,
 	struct ptp_vclock *vclock = info_to_vclock(ptp);
 	struct ptp_clock *pptp = vclock->pclock;
 	struct timespec64 pts;
-	unsigned long flags;
 	int err;
 	u64 ns;
 
@@ -99,9 +98,10 @@ static int ptp_vclock_gettimex(struct ptp_clock_info *ptp,
 	if (err)
 		return err;
 
-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
 	ns = timecounter_cyc2time(&vclock->tc, timespec64_to_ns(&pts));
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);
 
 	*ts = ns_to_timespec64(ns);
 
@@ -113,11 +113,11 @@ static int ptp_vclock_settime(struct ptp_clock_info *ptp,
 {
 	struct ptp_vclock *vclock = info_to_vclock(ptp);
 	u64 ns = timespec64_to_ns(ts);
-	unsigned long flags;
 
-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
 	timecounter_init(&vclock->tc, &vclock->cc, ns);
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);
 
 	return 0;
 }
@@ -127,7 +127,6 @@ static int ptp_vclock_getcrosststamp(struct ptp_clock_info *ptp,
 {
 	struct ptp_vclock *vclock = info_to_vclock(ptp);
 	struct ptp_clock *pptp = vclock->pclock;
-	unsigned long flags;
 	int err;
 	u64 ns;
 
@@ -135,9 +134,10 @@ static int ptp_vclock_getcrosststamp(struct ptp_clock_info *ptp,
 	if (err)
 		return err;
 
-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
 	ns = timecounter_cyc2time(&vclock->tc, ktime_to_ns(xtstamp->device));
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);
 
 	xtstamp->device = ns_to_ktime(ns);
 
@@ -205,7 +205,7 @@ struct ptp_vclock *ptp_vclock_register(struct ptp_clock *pclock)
 
 	INIT_HLIST_NODE(&vclock->vclock_hash_node);
 
-	spin_lock_init(&vclock->lock);
+	mutex_init(&vclock->lock);
 
 	vclock->clock = ptp_clock_register(&vclock->info, &pclock->dev);
 	if (IS_ERR_OR_NULL(vclock->clock)) {
@@ -269,7 +269,6 @@ ktime_t ptp_convert_timestamp(const ktime_t *hwtstamp, int vclock_index)
 {
 	unsigned int hash = vclock_index % HASH_SIZE(vclock_hash);
 	struct ptp_vclock *vclock;
-	unsigned long flags;
 	u64 ns;
 	u64 vclock_ns = 0;
 
@@ -281,9 +280,10 @@ ktime_t ptp_convert_timestamp(const ktime_t *hwtstamp, int vclock_index)
 		if (vclock->clock->index != vclock_index)
 			continue;
 
-		spin_lock_irqsave(&vclock->lock, flags);
+		if (mutex_lock_interruptible(&vclock->lock))
+			break;
 		vclock_ns = timecounter_cyc2time(&vclock->tc, ns);
-		spin_unlock_irqrestore(&vclock->lock, flags);
+		mutex_unlock(&vclock->lock);
 		break;
 	}
 
diff --git a/drivers/pwm/pwm-sifive.c b/drivers/pwm/pwm-sifive.c
@@ -159,7 +159,13 @@ static int pwm_sifive_apply(struct pwm_chip *chip, struct pwm_device *pwm,
 
 	mutex_lock(&ddata->lock);
 	if (state->period != ddata->approx_period) {
-		if (ddata->user_count != 1) {
+		/*
+		 * Don't let a 2nd user change the period underneath the 1st user.
+		 * However if ddate->approx_period == 0 this is the first time we set
+		 * any period, so let whoever gets here first set the period so other
+		 * users who agree on the period won't fail.
+		 */
+		if (ddata->user_count != 1 && ddata->approx_period) {
 			mutex_unlock(&ddata->lock);
 			return -EBUSY;
 		}
diff --git a/drivers/pwm/pwm-stm32-lp.c b/drivers/pwm/pwm-stm32-lp.c
@@ -127,7 +127,7 @@ static int stm32_pwm_lp_apply(struct pwm_chip *chip, struct pwm_device *pwm,
 
 	/* ensure CMP & ARR registers are properly written */
 	ret = regmap_read_poll_timeout(priv->regmap, STM32_LPTIM_ISR, val,
-				       (val & STM32_LPTIM_CMPOK_ARROK),
+				       (val & STM32_LPTIM_CMPOK_ARROK) == STM32_LPTIM_CMPOK_ARROK,
 				       100, 1000);
 	if (ret) {
 		dev_err(priv->chip.dev, "ARR/CMP registers write issue\n");
diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
@@ -392,7 +392,7 @@ int rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
 		return err;
 	if (!rtc->ops) {
 		err = -ENODEV;
-	} else if (!test_bit(RTC_FEATURE_ALARM, rtc->features) || !rtc->ops->read_alarm) {
+	} else if (!test_bit(RTC_FEATURE_ALARM, rtc->features)) {
 		err = -EINVAL;
 	} else {
 		memset(alarm, 0, sizeof(struct rtc_wkalrm));
diff --git a/drivers/rtc/rtc-sun6i.c b/drivers/rtc/rtc-sun6i.c
@@ -136,7 +136,6 @@ struct sun6i_rtc_clk_data {
 	unsigned int fixed_prescaler : 16;
 	unsigned int has_prescaler : 1;
 	unsigned int has_out_clk : 1;
-	unsigned int export_iosc : 1;
 	unsigned int has_losc_en : 1;
 	unsigned int has_auto_swt : 1;
 };
@@ -271,10 +270,8 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
 	/* Yes, I know, this is ugly. */
 	sun6i_rtc = rtc;
 
-	/* Only read IOSC name from device tree if it is exported */
-	if (rtc->data->export_iosc)
-		of_property_read_string_index(node, "clock-output-names", 2,
-					      &iosc_name);
+	of_property_read_string_index(node, "clock-output-names", 2,
+				      &iosc_name);
 
 	rtc->int_osc = clk_hw_register_fixed_rate_with_accuracy(NULL,
 								iosc_name,
@@ -315,13 +312,10 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
 		goto err_register;
 	}
 
-	clk_data->num = 2;
+	clk_data->num = 3;
 	clk_data->hws[0] = &rtc->hw;
 	clk_data->hws[1] = __clk_get_hw(rtc->ext_losc);
-	if (rtc->data->export_iosc) {
-		clk_data->hws[2] = rtc->int_osc;
-		clk_data->num = 3;
-	}
+	clk_data->hws[2] = rtc->int_osc;
 	of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
 	return;
 
@@ -361,7 +355,6 @@ static const struct sun6i_rtc_clk_data sun8i_h3_rtc_data = {
 	.fixed_prescaler = 32,
 	.has_prescaler = 1,
 	.has_out_clk = 1,
-	.export_iosc = 1,
 };
 
 static void __init sun8i_h3_rtc_clk_init(struct device_node *node)
@@ -379,7 +372,6 @@ static const struct sun6i_rtc_clk_data sun50i_h6_rtc_data = {
 	.fixed_prescaler = 32,
 	.has_prescaler = 1,
 	.has_out_clk = 1,
-	.export_iosc = 1,
 	.has_losc_en = 1,
 	.has_auto_swt = 1,
 };
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
@@ -1516,23 +1516,22 @@ static void ipr_process_ccn(struct ipr_cmnd *ipr_cmd)
 }
 
 /**
- * strip_and_pad_whitespace - Strip and pad trailing whitespace.
- * @i:		index into buffer
- * @buf:	string to modify
+ * strip_whitespace - Strip and pad trailing whitespace.
+ * @i:		size of buffer
+ * @buf:	string to modify
  *
- * This function will strip all trailing whitespace, pad the end
- * of the string with a single space, and NULL terminate the string.
+ * This function will strip all trailing whitespace and
+ * NUL terminate the string.
  *
  * Return value:
- *	new length of string
 **/
-static int strip_and_pad_whitespace(int i, char *buf)
+static void strip_whitespace(int i, char *buf)
 {
+	if (i < 1)
+		return;
+	i--;
 	while (i && buf[i] == ' ')
 		i--;
-	buf[i+1] = ' ';
-	buf[i+2] = '\0';
-	return i + 2;
+	buf[i+1] = '\0';
 }
 
 /**
@@ -1547,19 +1546,21 @@ static int strip_and_pad_whitespace(int i, char *buf)
 static void ipr_log_vpd_compact(char *prefix, struct ipr_hostrcb *hostrcb,
 				struct ipr_vpd *vpd)
 {
-	char buffer[IPR_VENDOR_ID_LEN + IPR_PROD_ID_LEN + IPR_SERIAL_NUM_LEN + 3];
-	int i = 0;
+	char vendor_id[IPR_VENDOR_ID_LEN + 1];
+	char product_id[IPR_PROD_ID_LEN + 1];
+	char sn[IPR_SERIAL_NUM_LEN + 1];
 
-	memcpy(buffer, vpd->vpids.vendor_id, IPR_VENDOR_ID_LEN);
-	i = strip_and_pad_whitespace(IPR_VENDOR_ID_LEN - 1, buffer);
+	memcpy(vendor_id, vpd->vpids.vendor_id, IPR_VENDOR_ID_LEN);
+	strip_whitespace(IPR_VENDOR_ID_LEN, vendor_id);
 
-	memcpy(&buffer[i], vpd->vpids.product_id, IPR_PROD_ID_LEN);
-	i = strip_and_pad_whitespace(i + IPR_PROD_ID_LEN - 1, buffer);
+	memcpy(product_id, vpd->vpids.product_id, IPR_PROD_ID_LEN);
+	strip_whitespace(IPR_PROD_ID_LEN, product_id);
 
-	memcpy(&buffer[i], vpd->sn, IPR_SERIAL_NUM_LEN);
-	buffer[IPR_SERIAL_NUM_LEN + i] = '\0';
+	memcpy(sn, vpd->sn, IPR_SERIAL_NUM_LEN);
+	strip_whitespace(IPR_SERIAL_NUM_LEN, sn);
 
-	ipr_hcam_err(hostrcb, "%s VPID/SN: %s\n", prefix, buffer);
+	ipr_hcam_err(hostrcb, "%s VPID/SN: %s %s %s\n", prefix,
+		     vendor_id, product_id, sn);
 }
 
 /**
diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
@@ -955,19 +955,16 @@ struct scmd_priv {
  * @chain_buf_count: Chain buffer count
  * @chain_buf_pool: Chain buffer pool
  * @chain_sgl_list: Chain SGL list
- * @chain_bitmap_sz: Chain buffer allocator bitmap size
  * @chain_bitmap: Chain buffer allocator bitmap
  * @chain_buf_lock: Chain buffer list lock
  * @bsg_cmds: Command tracker for BSG command
  * @host_tm_cmds: Command tracker for task management commands
  * @dev_rmhs_cmds: Command tracker for device removal commands
  * @evtack_cmds: Command tracker for event ack commands
- * @devrem_bitmap_sz: Device removal bitmap size
 * @devrem_bitmap: Device removal bitmap
- * @dev_handle_bitmap_sz: Device handle bitmap size
+ * @dev_handle_bitmap_bits: Number of bits in device handle bitmap
 * @removepend_bitmap: Remove pending bitmap
 * @delayed_rmhs_list: Delayed device removal list
- * @evtack_cmds_bitmap_sz: Event Ack bitmap size
 * @evtack_cmds_bitmap: Event Ack bitmap
 * @delayed_evtack_cmds_list: Delayed event acknowledgment list
 * @ts_update_counter: Timestamp update counter
@@ -1128,7 +1125,6 @@ struct mpi3mr_ioc {
 	u32 chain_buf_count;
 	struct dma_pool *chain_buf_pool;
 	struct chain_element *chain_sgl_list;
-	u16 chain_bitmap_sz;
 	void *chain_bitmap;
 	spinlock_t chain_buf_lock;
 
@@ -1136,12 +1132,10 @@ struct mpi3mr_ioc {
 	struct mpi3mr_drv_cmd host_tm_cmds;
 	struct mpi3mr_drv_cmd dev_rmhs_cmds[MPI3MR_NUM_DEVRMCMD];
 	struct mpi3mr_drv_cmd evtack_cmds[MPI3MR_NUM_EVTACKCMD];
-	u16 devrem_bitmap_sz;
 	void *devrem_bitmap;
-	u16 dev_handle_bitmap_sz;
+	u16 dev_handle_bitmap_bits;
 	void *removepend_bitmap;
 	struct list_head delayed_rmhs_list;
-	u16 evtack_cmds_bitmap_sz;
 	void *evtack_cmds_bitmap;
 	struct list_head delayed_evtack_cmds_list;
 
diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
@@ -1128,7 +1128,6 @@ static int mpi3mr_issue_and_process_mur(struct mpi3mr_ioc *mrioc,
 static int
 mpi3mr_revalidate_factsdata(struct mpi3mr_ioc *mrioc)
 {
-	u16 dev_handle_bitmap_sz;
 	void *removepend_bitmap;
 
 	if (mrioc->facts.reply_sz > mrioc->reply_sz) {
@@ -1160,25 +1159,23 @@ mpi3mr_revalidate_factsdata(struct mpi3mr_ioc *mrioc)
 		    "\tcontroller while sas transport support is enabled at the\n"
 		    "\tdriver, please reboot the system or reload the driver\n");
 
-	dev_handle_bitmap_sz = mrioc->facts.max_devhandle / 8;
-	if (mrioc->facts.max_devhandle % 8)
-		dev_handle_bitmap_sz++;
-	if (dev_handle_bitmap_sz > mrioc->dev_handle_bitmap_sz) {
-		removepend_bitmap = krealloc(mrioc->removepend_bitmap,
-		    dev_handle_bitmap_sz, GFP_KERNEL);
+	if (mrioc->facts.max_devhandle > mrioc->dev_handle_bitmap_bits) {
+		removepend_bitmap = bitmap_zalloc(mrioc->facts.max_devhandle,
+						  GFP_KERNEL);
 		if (!removepend_bitmap) {
 			ioc_err(mrioc,
-			    "failed to increase removepend_bitmap sz from: %d to %d\n",
-			    mrioc->dev_handle_bitmap_sz, dev_handle_bitmap_sz);
+				"failed to increase removepend_bitmap bits from %d to %d\n",
+				mrioc->dev_handle_bitmap_bits,
+				mrioc->facts.max_devhandle);
 			return -EPERM;
 		}
-		memset(removepend_bitmap + mrioc->dev_handle_bitmap_sz, 0,
-		    dev_handle_bitmap_sz - mrioc->dev_handle_bitmap_sz);
+		bitmap_free(mrioc->removepend_bitmap);
 		mrioc->removepend_bitmap = removepend_bitmap;
 		ioc_info(mrioc,
-		    "increased dev_handle_bitmap_sz from %d to %d\n",
-		    mrioc->dev_handle_bitmap_sz, dev_handle_bitmap_sz);
-		mrioc->dev_handle_bitmap_sz = dev_handle_bitmap_sz;
+			 "increased bits of dev_handle_bitmap from %d to %d\n",
+			 mrioc->dev_handle_bitmap_bits,
+			 mrioc->facts.max_devhandle);
+		mrioc->dev_handle_bitmap_bits = mrioc->facts.max_devhandle;
 	}
 
 	return 0;
@@ -2957,27 +2954,18 @@ static int mpi3mr_alloc_reply_sense_bufs(struct mpi3mr_ioc *mrioc)
 	if (!mrioc->pel_abort_cmd.reply)
 		goto out_failed;
 
-	mrioc->dev_handle_bitmap_sz = mrioc->facts.max_devhandle / 8;
-	if (mrioc->facts.max_devhandle % 8)
-		mrioc->dev_handle_bitmap_sz++;
-	mrioc->removepend_bitmap = kzalloc(mrioc->dev_handle_bitmap_sz,
-	    GFP_KERNEL);
+	mrioc->dev_handle_bitmap_bits = mrioc->facts.max_devhandle;
+	mrioc->removepend_bitmap = bitmap_zalloc(mrioc->dev_handle_bitmap_bits,
+						 GFP_KERNEL);
 	if (!mrioc->removepend_bitmap)
 		goto out_failed;
 
-	mrioc->devrem_bitmap_sz = MPI3MR_NUM_DEVRMCMD / 8;
-	if (MPI3MR_NUM_DEVRMCMD % 8)
-		mrioc->devrem_bitmap_sz++;
-	mrioc->devrem_bitmap = kzalloc(mrioc->devrem_bitmap_sz,
-	    GFP_KERNEL);
+	mrioc->devrem_bitmap = bitmap_zalloc(MPI3MR_NUM_DEVRMCMD, GFP_KERNEL);
 	if (!mrioc->devrem_bitmap)
 		goto out_failed;
 
-	mrioc->evtack_cmds_bitmap_sz = MPI3MR_NUM_EVTACKCMD / 8;
-	if (MPI3MR_NUM_EVTACKCMD % 8)
-		mrioc->evtack_cmds_bitmap_sz++;
-	mrioc->evtack_cmds_bitmap = kzalloc(mrioc->evtack_cmds_bitmap_sz,
-	    GFP_KERNEL);
+	mrioc->evtack_cmds_bitmap = bitmap_zalloc(MPI3MR_NUM_EVTACKCMD,
+						  GFP_KERNEL);
 	if (!mrioc->evtack_cmds_bitmap)
 		goto out_failed;
 
@@ -3415,10 +3403,7 @@ static int mpi3mr_alloc_chain_bufs(struct mpi3mr_ioc *mrioc)
 		if (!mrioc->chain_sgl_list[i].addr)
 			goto out_failed;
 	}
-	mrioc->chain_bitmap_sz = num_chains / 8;
-	if (num_chains % 8)
-		mrioc->chain_bitmap_sz++;
-	mrioc->chain_bitmap = kzalloc(mrioc->chain_bitmap_sz, GFP_KERNEL);
+	mrioc->chain_bitmap = bitmap_zalloc(num_chains, GFP_KERNEL);
 	if (!mrioc->chain_bitmap)
 		goto out_failed;
 	return retval;
@@ -4190,10 +4175,11 @@ void mpi3mr_memset_buffers(struct mpi3mr_ioc *mrioc)
 		for (i = 0; i < MPI3MR_NUM_EVTACKCMD; i++)
 			memset(mrioc->evtack_cmds[i].reply, 0,
 			    sizeof(*mrioc->evtack_cmds[i].reply));
-		memset(mrioc->removepend_bitmap, 0, mrioc->dev_handle_bitmap_sz);
-		memset(mrioc->devrem_bitmap, 0, mrioc->devrem_bitmap_sz);
-		memset(mrioc->evtack_cmds_bitmap, 0,
-		    mrioc->evtack_cmds_bitmap_sz);
+		bitmap_clear(mrioc->removepend_bitmap, 0,
+			     mrioc->dev_handle_bitmap_bits);
+		bitmap_clear(mrioc->devrem_bitmap, 0, MPI3MR_NUM_DEVRMCMD);
+		bitmap_clear(mrioc->evtack_cmds_bitmap, 0,
+			     MPI3MR_NUM_EVTACKCMD);
 	}
 
 	for (i = 0; i < mrioc->num_queues; i++) {
@@ -4319,16 +4305,16 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc)
 		mrioc->evtack_cmds[i].reply = NULL;
 	}
 
-	kfree(mrioc->removepend_bitmap);
+	bitmap_free(mrioc->removepend_bitmap);
 	mrioc->removepend_bitmap = NULL;
 
-	kfree(mrioc->devrem_bitmap);
+	bitmap_free(mrioc->devrem_bitmap);
 	mrioc->devrem_bitmap = NULL;
 
-	kfree(mrioc->evtack_cmds_bitmap);
+	bitmap_free(mrioc->evtack_cmds_bitmap);
 	mrioc->evtack_cmds_bitmap = NULL;
 
-	kfree(mrioc->chain_bitmap);
+	bitmap_free(mrioc->chain_bitmap);
 	mrioc->chain_bitmap = NULL;
 
 	kfree(mrioc->transport_cmds.reply);
@@ -4887,9 +4873,10 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
 
 	mpi3mr_flush_delayed_cmd_lists(mrioc);
 	mpi3mr_flush_drv_cmds(mrioc);
-	memset(mrioc->devrem_bitmap, 0, mrioc->devrem_bitmap_sz);
-	memset(mrioc->removepend_bitmap, 0, mrioc->dev_handle_bitmap_sz);
-	memset(mrioc->evtack_cmds_bitmap, 0, mrioc->evtack_cmds_bitmap_sz);
+	bitmap_clear(mrioc->devrem_bitmap, 0, MPI3MR_NUM_DEVRMCMD);
+	bitmap_clear(mrioc->removepend_bitmap, 0,
+		     mrioc->dev_handle_bitmap_bits);
+	bitmap_clear(mrioc->evtack_cmds_bitmap, 0, MPI3MR_NUM_EVTACKCMD);
 	mpi3mr_flush_host_io(mrioc);
 	mpi3mr_cleanup_fwevt_list(mrioc);
 	mpi3mr_invalidate_devhandles(mrioc);
diff --git a/drivers/scsi/mpi3mr/mpi3mr_transport.c b/drivers/scsi/mpi3mr/mpi3mr_transport.c
@@ -1280,7 +1280,7 @@ void mpi3mr_sas_host_add(struct mpi3mr_ioc *mrioc)
 
 	if (mrioc->sas_hba.enclosure_handle) {
 		if (!(mpi3mr_cfg_get_enclosure_pg0(mrioc, &ioc_status,
-		    &encl_pg0, sizeof(dev_pg0),
+		    &encl_pg0, sizeof(encl_pg0),
 		    MPI3_ENCLOS_PGAD_FORM_HANDLE,
 		    mrioc->sas_hba.enclosure_handle)) &&
 		    (ioc_status == MPI3_IOCSTATUS_SUCCESS))
diff --git a/drivers/soc/mediatek/mt8186-pm-domains.h b/drivers/soc/mediatek/mt8186-pm-domains.h
@@ -304,7 +304,6 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8186[] = {
 		.ctl_offs = 0x9FC,
 		.pwr_sta_offs = 0x16C,
 		.pwr_sta2nd_offs = 0x170,
-		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
 	},
 	[MT8186_POWER_DOMAIN_ADSP_INFRA] = {
 		.name = "adsp_infra",
@@ -312,7 +311,6 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8186[] = {
 		.ctl_offs = 0x9F8,
 		.pwr_sta_offs = 0x16C,
 		.pwr_sta2nd_offs = 0x170,
-		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
 	},
 	[MT8186_POWER_DOMAIN_ADSP_TOP] = {
 		.name = "adsp_top",
@@ -332,7 +330,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8186[] = {
 				    MT8186_TOP_AXI_PROT_EN_3_CLR,
 				    MT8186_TOP_AXI_PROT_EN_3_STA),
 		},
-		.caps = MTK_SCPD_SRAM_ISO | MTK_SCPD_KEEP_DEFAULT_OFF | MTK_SCPD_ACTIVE_WAKEUP,
+		.caps = MTK_SCPD_SRAM_ISO | MTK_SCPD_ACTIVE_WAKEUP,
 	},
 };
 
diff --git a/drivers/soc/mediatek/mtk-svs.c b/drivers/soc/mediatek/mtk-svs.c
@@ -1324,7 +1324,7 @@ static int svs_init01(struct svs_platform *svsp)
 		svsb->pm_runtime_enabled_count++;
 	}
 
-	ret = pm_runtime_get_sync(svsb->opp_dev);
+	ret = pm_runtime_resume_and_get(svsb->opp_dev);
 	if (ret < 0) {
 		dev_err(svsb->dev, "mtcmos on fail: %d\n", ret);
 		goto svs_init01_resume_cpuidle;
@@ -1461,6 +1461,7 @@
 {
 	struct svs_bank *svsb;
 	unsigned long flags, time_left;
+	int ret;
 	u32 idx;
 
 	for (idx = 0; idx < svsp->bank_max; idx++) {
@@ -1479,7 +1480,8 @@
 					msecs_to_jiffies(5000));
 		if (!time_left) {
 			dev_err(svsb->dev, "init02 completion timeout\n");
-			return -EBUSY;
+			ret = -EBUSY;
+			goto out_of_init02;
 		}
 	}
 
@@ -1497,12 +1499,30 @@
 		if (svsb->type == SVSB_HIGH || svsb->type == SVSB_LOW) {
 			if (svs_sync_bank_volts_from_opp(svsb)) {
 				dev_err(svsb->dev, "sync volt fail\n");
-				return -EPERM;
+				ret = -EPERM;
+				goto out_of_init02;
 			}
 		}
 	}
 
 	return 0;
+
+out_of_init02:
+	for (idx = 0; idx < svsp->bank_max; idx++) {
+		svsb = &svsp->banks[idx];
+
+		spin_lock_irqsave(&svs_lock, flags);
+		svsp->pbank = svsb;
+		svs_switch_bank(svsp);
+		svs_writel_relaxed(svsp, SVSB_PTPEN_OFF, SVSEN);
+		svs_writel_relaxed(svsp, SVSB_INTSTS_VAL_CLEAN, INTSTS);
+		spin_unlock_irqrestore(&svs_lock, flags);
+
+		svsb->phase = SVSB_PHASE_ERROR;
+		svs_adjust_pm_opp_volts(svsb);
+	}
+
+	return ret;
 }
 
 static void svs_mon_mode(struct svs_platform *svsp)
@@ -1594,12 +1614,16 @@ static int svs_resume(struct device *dev)
 
 	ret = svs_init02(svsp);
 	if (ret)
-		goto out_of_resume;
+		goto svs_resume_reset_assert;
 
 	svs_mon_mode(svsp);
 
 	return 0;
 
+svs_resume_reset_assert:
+	dev_err(svsp->dev, "assert reset: %d\n",
+		reset_control_assert(svsp->rst));
+
 out_of_resume:
 	clk_disable_unprepare(svsp->main_clk);
 	return ret;
@@ -2385,14 +2409,6 @@ static int svs_probe(struct platform_device *pdev)
 		goto svs_probe_free_resource;
 	}
 
-	ret = devm_request_threaded_irq(svsp->dev, svsp_irq, NULL, svs_isr,
-					IRQF_ONESHOT, svsp->name, svsp);
-	if (ret) {
-		dev_err(svsp->dev, "register irq(%d) failed: %d\n",
-			svsp_irq, ret);
-		goto svs_probe_free_resource;
-	}
-
 	svsp->main_clk = devm_clk_get(svsp->dev, "main");
 	if (IS_ERR(svsp->main_clk)) {
 		dev_err(svsp->dev, "failed to get clock: %ld\n",
@@ -2414,6 +2430,14 @@ static int svs_probe(struct platform_device *pdev)
 		goto svs_probe_clk_disable;
 	}
 
+	ret = devm_request_threaded_irq(svsp->dev, svsp_irq, NULL, svs_isr,
+					IRQF_ONESHOT, svsp->name, svsp);
+	if (ret) {
+		dev_err(svsp->dev, "register irq(%d) failed: %d\n",
+			svsp_irq, ret);
+		goto svs_probe_iounmap;
+	}
+
 	ret = svs_start(svsp);
 	if (ret) {
 		dev_err(svsp->dev, "svs start fail: %d\n", ret);
diff --git a/drivers/soc/qcom/qcom_stats.c b/drivers/soc/qcom/qcom_stats.c
@@ -92,7 +92,7 @@ static int qcom_subsystem_sleep_stats_show(struct seq_file *s, void *unused)
 	/* Items are allocated lazily, so lookup pointer each time */
 	stat = qcom_smem_get(subsystem->pid, subsystem->smem_item, NULL);
 	if (IS_ERR(stat))
-		return -EIO;
+		return 0;
 
 	qcom_print_stats(s, stat);
 
@@ -170,20 +170,14 @@ static void qcom_create_soc_sleep_stat_files(struct dentry *root, void __iomem *
 static void qcom_create_subsystem_stat_files(struct dentry *root,
 					     const struct stats_config *config)
 {
-	const struct sleep_stats *stat;
 	int i;
 
 	if (!config->subsystem_stats_in_smem)
 		return;
 
-	for (i = 0; i < ARRAY_SIZE(subsystems); i++) {
-		stat = qcom_smem_get(subsystems[i].pid, subsystems[i].smem_item, NULL);
-		if (IS_ERR(stat))
-			continue;
-
+	for (i = 0; i < ARRAY_SIZE(subsystems); i++)
 		debugfs_create_file(subsystems[i].name, 0400, root, (void *)&subsystems[i],
 				    &qcom_subsystem_sleep_stats_fops);
-	}
 }
 
 static int qcom_stats_probe(struct platform_device *pdev)
diff --git a/drivers/soc/xilinx/xlnx-event-manager.c b/drivers/soc/xilinx/xlnx-event-manager.c
@@ -116,8 +116,10 @@ static int xlnx_add_cb_for_notify_event(const u32 node_id, const u32 event, cons
 	INIT_LIST_HEAD(&eve_data->cb_list_head);
 
 	cb_data = kmalloc(sizeof(*cb_data), GFP_KERNEL);
-	if (!cb_data)
+	if (!cb_data) {
+		kfree(eve_data);
 		return -ENOMEM;
+	}
 	cb_data->eve_cb = cb_fun;
 	cb_data->agent_data = data;
 
diff --git a/drivers/soundwire/bus_type.c b/drivers/soundwire/bus_type.c
@@ -105,20 +105,19 @@ static int sdw_drv_probe(struct device *dev)
 	if (ret)
 		return ret;
 
+	mutex_lock(&slave->sdw_dev_lock);
+
 	ret = drv->probe(slave, id);
 	if (ret) {
 		name = drv->name;
 		if (!name)
 			name = drv->driver.name;
+		mutex_unlock(&slave->sdw_dev_lock);
 
 		dev_err(dev, "Probe of %s failed: %d\n", name, ret);
 		dev_pm_domain_detach(dev, false);
 		return ret;
 	}
 
-	mutex_lock(&slave->sdw_dev_lock);
-
 	/* device is probed so let's read the properties now */
 	if (drv->ops && drv->ops->read_prop)
 		drv->ops->read_prop(slave);
@@ -167,14 +166,12 @@ static int sdw_drv_remove(struct device *dev)
 	int ret = 0;
 
 	mutex_lock(&slave->sdw_dev_lock);
 
 	slave->probed = false;
+	mutex_unlock(&slave->sdw_dev_lock);
 
 	if (drv->remove)
 		ret = drv->remove(slave);
 
-	mutex_unlock(&slave->sdw_dev_lock);
-
 	dev_pm_domain_detach(dev, false);
 
 	return ret;
diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
@@ -555,6 +555,29 @@ cdns_fill_msg_resp(struct sdw_cdns *cdns,
 	return SDW_CMD_OK;
 }
 
+static void cdns_read_response(struct sdw_cdns *cdns)
+{
+	u32 num_resp, cmd_base;
+	int i;
+
+	/* RX_FIFO_AVAIL can be 2 entries more than the FIFO size */
+	BUILD_BUG_ON(ARRAY_SIZE(cdns->response_buf) < CDNS_MCP_CMD_LEN + 2);
+
+	num_resp = cdns_readl(cdns, CDNS_MCP_FIFOSTAT);
+	num_resp &= CDNS_MCP_RX_FIFO_AVAIL;
+	if (num_resp > ARRAY_SIZE(cdns->response_buf)) {
+		dev_warn(cdns->dev, "RX AVAIL %d too long\n", num_resp);
+		num_resp = ARRAY_SIZE(cdns->response_buf);
+	}
+
+	cmd_base = CDNS_MCP_CMD_BASE;
+
+	for (i = 0; i < num_resp; i++) {
+		cdns->response_buf[i] = cdns_readl(cdns, cmd_base);
+		cmd_base += CDNS_MCP_CMD_WORD_LEN;
+	}
+}
+
 static enum sdw_command_response
 _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd,
 	       int offset, int count, bool defer)
@@ -596,6 +619,10 @@ _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd,
 		dev_err(cdns->dev, "IO transfer timed out, cmd %d device %d addr %x len %d\n",
 			cmd, msg->dev_num, msg->addr, msg->len);
 		msg->len = 0;
+
+		/* Drain anything in the RX_FIFO */
+		cdns_read_response(cdns);
+
 		return SDW_CMD_TIMEOUT;
 	}
 
@@ -769,22 +796,6 @@ EXPORT_SYMBOL(cdns_read_ping_status);
  * IRQ handling
  */
 
-static void cdns_read_response(struct sdw_cdns *cdns)
-{
-	u32 num_resp, cmd_base;
-	int i;
-
-	num_resp = cdns_readl(cdns, CDNS_MCP_FIFOSTAT);
-	num_resp &= CDNS_MCP_RX_FIFO_AVAIL;
-
-	cmd_base = CDNS_MCP_CMD_BASE;
-
-	for (i = 0; i < num_resp; i++) {
-		cdns->response_buf[i] = cdns_readl(cdns, cmd_base);
-		cmd_base += CDNS_MCP_CMD_WORD_LEN;
-	}
-}
-
 static int cdns_update_slave_status(struct sdw_cdns *cdns,
 				    u64 slave_intstat)
 {
Some files were not shown because too many files have changed in this diff.