Merge 6.1.69 into android14-6.1-lts
Changes in 6.1.69
    perf/x86/uncore: Don't WARN_ON_ONCE() for a broken discovery table
    r8152: add USB device driver for config selection
    r8152: add vendor/device ID pair for D-Link DUB-E250
    r8152: add vendor/device ID pair for ASUS USB-C2500
    powerpc/ftrace: Fix stack teardown in ftrace_no_trace
    ext4: fix warning in ext4_dio_write_end_io()
    ksmbd: fix memory leak in smb2_lock()
    afs: Fix refcount underflow from error handling race
    HID: lenovo: Restrict detection of patched firmware only to USB cptkbd
    net/mlx5e: Fix possible deadlock on mlx5e_tx_timeout_work
    net: ipv6: support reporting otherwise unknown prefix flags in RTM_NEWPREFIX
    qca_debug: Prevent crash on TX ring changes
    qca_debug: Fix ethtool -G iface tx behavior
    qca_spi: Fix reset behavior
    bnxt_en: Clear resource reservation during resume
    bnxt_en: Save ring error counters across reset
    bnxt_en: Fix wrong return value check in bnxt_close_nic()
    bnxt_en: Fix HWTSTAMP_FILTER_ALL packet timestamp logic
    atm: solos-pci: Fix potential deadlock on &cli_queue_lock
    atm: solos-pci: Fix potential deadlock on &tx_queue_lock
    net: vlan: introduce skb_vlan_eth_hdr()
    net: fec: correct queue selection
    octeontx2-af: fix a use-after-free in rvu_nix_register_reporters
    octeontx2-pf: Fix promisc mcam entry action
    octeontx2-af: Update RSS algorithm index
    atm: Fix Use-After-Free in do_vcc_ioctl
    net/rose: Fix Use-After-Free in rose_ioctl
    iavf: Introduce new state machines for flow director
    iavf: Handle ntuple on/off based on new state machines for flow director
    qed: Fix a potential use-after-free in qed_cxt_tables_alloc
    net: Remove acked SYN flag from packet in the transmit queue correctly
    net: ena: Destroy correct number of xdp queues upon failure
    net: ena: Fix xdp drops handling due to multibuf packets
    net: ena: Fix XDP redirection error
    stmmac: dwmac-loongson: Make sure MDIO is initialized before use
    sign-file: Fix incorrect return values check
    vsock/virtio: Fix unsigned integer wrap around in virtio_transport_has_space()
    dpaa2-switch: fix size of the dma_unmap
    dpaa2-switch: do not ask for MDB, VLAN and FDB replay
    net: stmmac: Handle disabled MDIO busses from devicetree
    appletalk: Fix Use-After-Free in atalk_ioctl
    net: atlantic: fix double free in ring reinit logic
    cred: switch to using atomic_long_t
    fuse: dax: set fc->dax to NULL in fuse_dax_conn_free()
    ALSA: hda/hdmi: add force-connect quirk for NUC5CPYB
    ALSA: hda/hdmi: add force-connect quirks for ASUSTeK Z170 variants
    ALSA: hda/realtek: Apply mute LED quirk for HP15-db
    Revert "PCI: acpiphp: Reassign resources on bridge if necessary"
    PCI: loongson: Limit MRRS to 256
    ksmbd: fix wrong name of SMB2_CREATE_ALLOCATION_SIZE
    drm/mediatek: Add spinlock for setting vblank event in atomic_begin
    x86/hyperv: Fix the detection of E820_TYPE_PRAM in a Gen2 VM
    usb: aqc111: check packet for fixup for true limit
    stmmac: dwmac-loongson: Add architecture dependency
    blk-throttle: fix lockdep warning of "cgroup_mutex or RCU read lock required!"
    blk-cgroup: bypass blkcg_deactivate_policy after destroying
    bcache: avoid oversize memory allocation by small stripe_size
    bcache: remove redundant assignment to variable cur_idx
    bcache: add code comments for bch_btree_node_get() and __bch_btree_node_alloc()
    bcache: avoid NULL checking to c->root in run_cache_set()
    nbd: fold nbd config initialization into nbd_alloc_config()
    nvme-auth: set explanation code for failure2 msgs
    nvme: catch errors from nvme_configure_metadata()
    selftests/bpf: fix bpf_loop_bench for new callback verification scheme
    LoongArch: Add dependency between vmlinuz.efi and vmlinux.efi
    LoongArch: Implement constant timer shutdown interface
    platform/x86: intel_telemetry: Fix kernel doc descriptions
    HID: glorious: fix Glorious Model I HID report
    HID: add ALWAYS_POLL quirk for Apple kb
    nbd: pass nbd_sock to nbd_read_reply() instead of index
    HID: hid-asus: reset the backlight brightness level on resume
    HID: multitouch: Add quirk for HONOR GLO-GXXX touchpad
    asm-generic: qspinlock: fix queued_spin_value_unlocked() implementation
    net: usb: qmi_wwan: claim interface 4 for ZTE MF290
    arm64: add dependency between vmlinuz.efi and Image
    HID: hid-asus: add const to read-only outgoing usb buffer
    perf: Fix perf_event_validate_size() lockdep splat
    btrfs: do not allow non subvolume root targets for snapshot
    soundwire: stream: fix NULL pointer dereference for multi_link
    ext4: prevent the normalized size from exceeding EXT_MAX_BLOCKS
    arm64: mm: Always make sw-dirty PTEs hw-dirty in pte_modify
    team: Fix use-after-free when an option instance allocation fails
    drm/amdgpu/sdma5.2: add begin/end_use ring callbacks
    dmaengine: stm32-dma: avoid bitfield overflow assertion
    mm/mglru: fix underprotected page cache
    mm/shmem: fix race in shmem_undo_range w/THP
    btrfs: free qgroup reserve when ORDERED_IOERR is set
    btrfs: don't clear qgroup reserved bit in release_folio
    drm/amdgpu: fix tear down order in amdgpu_vm_pt_free
    drm/amd/display: Disable PSR-SU on Parade 0803 TCON again
    drm/i915: Fix remapped stride with CCS on ADL+
    smb: client: fix OOB in receive_encrypted_standard()
    smb: client: fix NULL deref in asn1_ber_decoder()
    smb: client: fix OOB in smb2_query_reparse_point()
    ring-buffer: Fix memory leak of free page
    tracing: Update snapshot buffer on resize if it is allocated
    ring-buffer: Do not update before stamp when switching sub-buffers
    ring-buffer: Have saved event hold the entire event
    ring-buffer: Fix writing to the buffer with max_data_size
    ring-buffer: Fix a race in rb_time_cmpxchg() for 32 bit archs
    ring-buffer: Do not try to put back write_stamp
    ring-buffer: Have rb_time_cmpxchg() set the msb counter too
    net: tls, update curr on splice as well
    r8152: avoid to change cfg for all devices
    r8152: remove rtl_vendor_mode function
    r8152: fix the autosuspend doesn't work
    Linux 6.1.69

Change-Id: I695d1d50ca8c00ff505505918bdc59ce9d29d479
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit d3d46ac25c
112 changed files with 1121 additions and 513 deletions

Makefile (2 lines changed)
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 6
PATCHLEVEL = 1
SUBLEVEL = 68
SUBLEVEL = 69
EXTRAVERSION =
NAME = Curry Ramen

@@ -171,7 +171,7 @@ ifndef KBUILD_MIXED_TREE
all: $(notdir $(KBUILD_IMAGE))
endif

vmlinuz.efi: Image
Image vmlinuz.efi: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@

@@ -826,6 +826,12 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
	if (pte_hw_dirty(pte))
		pte = pte_mkdirty(pte);
	pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
	/*
	 * If we end up clearing hw dirtiness for a sw-dirty PTE, set hardware
	 * dirtiness again.
	 */
	if (pte_sw_dirty(pte))
		pte = pte_mkdirty(pte);
	return pte;
}

@@ -116,6 +116,8 @@ vdso_install:
all: $(notdir $(KBUILD_IMAGE))

vmlinuz.efi: vmlinux.efi

vmlinux.elf vmlinux.efi vmlinuz.efi: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) $(bootvars-y) $(boot)/$@
@@ -58,21 +58,6 @@ static int constant_set_state_oneshot(struct clock_event_device *evt)
	return 0;
}

static int constant_set_state_oneshot_stopped(struct clock_event_device *evt)
{
	unsigned long timer_config;

	raw_spin_lock(&state_lock);

	timer_config = csr_read64(LOONGARCH_CSR_TCFG);
	timer_config &= ~CSR_TCFG_EN;
	csr_write64(timer_config, LOONGARCH_CSR_TCFG);

	raw_spin_unlock(&state_lock);

	return 0;
}

static int constant_set_state_periodic(struct clock_event_device *evt)
{
	unsigned long period;

@@ -92,6 +77,16 @@ static int constant_set_state_periodic(struct clock_event_device *evt)

static int constant_set_state_shutdown(struct clock_event_device *evt)
{
	unsigned long timer_config;

	raw_spin_lock(&state_lock);

	timer_config = csr_read64(LOONGARCH_CSR_TCFG);
	timer_config &= ~CSR_TCFG_EN;
	csr_write64(timer_config, LOONGARCH_CSR_TCFG);

	raw_spin_unlock(&state_lock);

	return 0;
}

@@ -156,7 +151,7 @@ int constant_clockevent_init(void)
	cd->rating = 320;
	cd->cpumask = cpumask_of(cpu);
	cd->set_state_oneshot = constant_set_state_oneshot;
	cd->set_state_oneshot_stopped = constant_set_state_oneshot_stopped;
	cd->set_state_oneshot_stopped = constant_set_state_shutdown;
	cd->set_state_periodic = constant_set_state_periodic;
	cd->set_state_shutdown = constant_set_state_shutdown;
	cd->set_next_event = constant_timer_next_event;

@@ -62,7 +62,7 @@
	.endif

	/* Save previous stack pointer (r1) */
	addi	r8, r1, SWITCH_FRAME_SIZE
	addi	r8, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
	PPC_STL	r8, GPR1(r1)

	.if \allregs == 1

@@ -182,7 +182,7 @@ ftrace_no_trace:
	mflr	r3
	mtctr	r3
	REST_GPR(3, r1)
	addi	r1, r1, SWITCH_FRAME_SIZE
	addi	r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
	mtlr	r0
	bctr
#endif
@@ -140,13 +140,21 @@ uncore_insert_box_info(struct uncore_unit_discovery *unit,
	unsigned int *box_offset, *ids;
	int i;

	if (WARN_ON_ONCE(!unit->ctl || !unit->ctl_offset || !unit->ctr_offset))
	if (!unit->ctl || !unit->ctl_offset || !unit->ctr_offset) {
		pr_info("Invalid address is detected for uncore type %d box %d, "
			"Disable the uncore unit.\n",
			unit->box_type, unit->box_id);
		return;
	}

	if (parsed) {
		type = search_uncore_discovery_type(unit->box_type);
		if (WARN_ON_ONCE(!type))
		if (!type) {
			pr_info("A spurious uncore type %d is detected, "
				"Disable the uncore type.\n",
				unit->box_type);
			return;
		}
		/* Store the first box of each die */
		if (!type->box_ctrl_die[die])
			type->box_ctrl_die[die] = unit->ctl;

@@ -181,8 +189,12 @@ uncore_insert_box_info(struct uncore_unit_discovery *unit,
		ids[i] = type->ids[i];
		box_offset[i] = type->box_offset[i];

		if (WARN_ON_ONCE(unit->box_id == ids[i]))
		if (unit->box_id == ids[i]) {
			pr_info("Duplicate uncore type %d box ID %d is detected, "
				"Drop the duplicate uncore unit.\n",
				unit->box_type, unit->box_id);
			goto free_ids;
		}
	}
	ids[i] = unit->box_id;
	box_offset[i] = unit->ctl - type->box_ctrl;

@@ -13,6 +13,7 @@
#include <linux/io.h>
#include <asm/apic.h>
#include <asm/desc.h>
#include <asm/e820/api.h>
#include <asm/sev.h>
#include <asm/ibt.h>
#include <asm/hypervisor.h>

@@ -267,15 +268,31 @@ static int hv_cpu_die(unsigned int cpu)

static int __init hv_pci_init(void)
{
	int gen2vm = efi_enabled(EFI_BOOT);
	bool gen2vm = efi_enabled(EFI_BOOT);

	/*
	 * For Generation-2 VM, we exit from pci_arch_init() by returning 0.
	 * The purpose is to suppress the harmless warning:
	 * A Generation-2 VM doesn't support legacy PCI/PCIe, so both
	 * raw_pci_ops and raw_pci_ext_ops are NULL, and pci_subsys_init() ->
	 * pcibios_init() doesn't call pcibios_resource_survey() ->
	 * e820__reserve_resources_late(); as a result, any emulated persistent
	 * memory of E820_TYPE_PRAM (12) via the kernel parameter
	 * memmap=nn[KMG]!ss is not added into iomem_resource and hence can't be
	 * detected by register_e820_pmem(). Fix this by directly calling
	 * e820__reserve_resources_late() here: e820__reserve_resources_late()
	 * depends on e820__reserve_resources(), which has been called earlier
	 * from setup_arch(). Note: e820__reserve_resources_late() also adds
	 * any memory of E820_TYPE_PMEM (7) into iomem_resource, and
	 * acpi_nfit_register_region() -> acpi_nfit_insert_resource() ->
	 * region_intersects() returns REGION_INTERSECTS, so the memory of
	 * E820_TYPE_PMEM won't get added twice.
	 *
	 * We return 0 here so that pci_arch_init() won't print the warning:
	 * "PCI: Fatal: No config space access function found"
	 */
	if (gen2vm)
	if (gen2vm) {
		e820__reserve_resources_late();
		return 0;
	}

	/* For Generation-1 VM, we'll proceed in pci_arch_init(). */
	return 1;
@@ -462,6 +462,7 @@ static void blkg_destroy_all(struct gendisk *disk)
	struct request_queue *q = disk->queue;
	struct blkcg_gq *blkg, *n;
	int count = BLKG_DESTROY_BATCH_SIZE;
	int i;

restart:
	spin_lock_irq(&q->queue_lock);

@@ -487,6 +488,18 @@ restart:
		}
	}

	/*
	 * Mark policy deactivated since policy offline has been done, and
	 * the free is scheduled, so future blkcg_deactivate_policy() can
	 * be bypassed
	 */
	for (i = 0; i < BLKCG_MAX_POLS; i++) {
		struct blkcg_policy *pol = blkcg_policy[i];

		if (pol)
			__clear_bit(pol->plid, q->blkcg_pols);
	}

	q->root_blkg = NULL;
	spin_unlock_irq(&q->queue_lock);
}

@@ -1333,6 +1333,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
		   tg_bps_limit(tg, READ), tg_bps_limit(tg, WRITE),
		   tg_iops_limit(tg, READ), tg_iops_limit(tg, WRITE));

	rcu_read_lock();
	/*
	 * Update has_rules[] flags for the updated tg's subtree. A tg is
	 * considered to have rules if either the tg itself or any of its

@@ -1360,6 +1361,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
		this_tg->latency_target = max(this_tg->latency_target,
				parent_tg->latency_target);
	}
	rcu_read_unlock();

	/*
	 * We're already holding queue_lock and know @tg is valid. Let's
@@ -449,9 +449,9 @@ static ssize_t console_show(struct device *dev, struct device_attribute *attr,
	struct sk_buff *skb;
	unsigned int len;

	spin_lock(&card->cli_queue_lock);
	spin_lock_bh(&card->cli_queue_lock);
	skb = skb_dequeue(&card->cli_queue[SOLOS_CHAN(atmdev)]);
	spin_unlock(&card->cli_queue_lock);
	spin_unlock_bh(&card->cli_queue_lock);
	if(skb == NULL)
		return sprintf(buf, "No data.\n");

@@ -956,14 +956,14 @@ static void pclose(struct atm_vcc *vcc)
	struct pkt_hdr *header;

	/* Remove any yet-to-be-transmitted packets from the pending queue */
	spin_lock(&card->tx_queue_lock);
	spin_lock_bh(&card->tx_queue_lock);
	skb_queue_walk_safe(&card->tx_queue[port], skb, tmpskb) {
		if (SKB_CB(skb)->vcc == vcc) {
			skb_unlink(skb, &card->tx_queue[port]);
			solos_pop(vcc, skb);
		}
	}
	spin_unlock(&card->tx_queue_lock);
	spin_unlock_bh(&card->tx_queue_lock);

	skb = alloc_skb(sizeof(*header), GFP_KERNEL);
	if (!skb) {
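/*
 * Illustration only, not part of the patch above: the deadlock pattern the
 * two solos-pci changes address. When a spinlock is also taken from softirq
 * (tasklet) context, the process-context side must use the _bh variants so
 * the softirq cannot preempt it on the same CPU while the lock is held.
 * All names below are hypothetical.
 */
static DEFINE_SPINLOCK(queue_lock);

static void rx_tasklet(struct tasklet_struct *t)	/* softirq context */
{
	spin_lock(&queue_lock);
	/* ... enqueue received data ... */
	spin_unlock(&queue_lock);
}

static void consumer(void)				/* process context */
{
	spin_lock_bh(&queue_lock);	/* plain spin_lock() here could deadlock */
	/* ... dequeue data ... */
	spin_unlock_bh(&queue_lock);
}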
@@ -67,6 +67,7 @@ struct nbd_sock {
struct recv_thread_args {
	struct work_struct work;
	struct nbd_device *nbd;
	struct nbd_sock *nsock;
	int index;
};

@@ -489,15 +490,9 @@ done:
	return BLK_EH_DONE;
}

/*
 * Send or receive packet. Return a positive value on success and
 * negtive value on failue, and never return 0.
 */
static int sock_xmit(struct nbd_device *nbd, int index, int send,
		     struct iov_iter *iter, int msg_flags, int *sent)
static int __sock_xmit(struct nbd_device *nbd, struct socket *sock, int send,
		       struct iov_iter *iter, int msg_flags, int *sent)
{
	struct nbd_config *config = nbd->config;
	struct socket *sock = config->socks[index]->sock;
	int result;
	struct msghdr msg;
	unsigned int noreclaim_flag;

@@ -539,6 +534,19 @@ static int sock_xmit(struct nbd_device *nbd, int index, int send,
	return result;
}

/*
 * Send or receive packet. Return a positive value on success and
 * negtive value on failure, and never return 0.
 */
static int sock_xmit(struct nbd_device *nbd, int index, int send,
		     struct iov_iter *iter, int msg_flags, int *sent)
{
	struct nbd_config *config = nbd->config;
	struct socket *sock = config->socks[index]->sock;

	return __sock_xmit(nbd, sock, send, iter, msg_flags, sent);
}

/*
 * Different settings for sk->sk_sndtimeo can result in different return values
 * if there is a signal pending when we enter sendmsg, because reasons?

@@ -695,7 +703,7 @@ out:
	return 0;
}

static int nbd_read_reply(struct nbd_device *nbd, int index,
static int nbd_read_reply(struct nbd_device *nbd, struct socket *sock,
			  struct nbd_reply *reply)
{
	struct kvec iov = {.iov_base = reply, .iov_len = sizeof(*reply)};

@@ -704,7 +712,7 @@ static int nbd_read_reply(struct nbd_device *nbd, int index,

	reply->magic = 0;
	iov_iter_kvec(&to, ITER_DEST, &iov, 1, sizeof(*reply));
	result = sock_xmit(nbd, index, 0, &to, MSG_WAITALL, NULL);
	result = __sock_xmit(nbd, sock, 0, &to, MSG_WAITALL, NULL);
	if (result < 0) {
		if (!nbd_disconnected(nbd->config))
			dev_err(disk_to_dev(nbd->disk),

@@ -828,14 +836,14 @@ static void recv_work(struct work_struct *work)
	struct nbd_device *nbd = args->nbd;
	struct nbd_config *config = nbd->config;
	struct request_queue *q = nbd->disk->queue;
	struct nbd_sock *nsock;
	struct nbd_sock *nsock = args->nsock;
	struct nbd_cmd *cmd;
	struct request *rq;

	while (1) {
		struct nbd_reply reply;

		if (nbd_read_reply(nbd, args->index, &reply))
		if (nbd_read_reply(nbd, nsock->sock, &reply))
			break;

		/*

@@ -870,7 +878,6 @@ static void recv_work(struct work_struct *work)
		percpu_ref_put(&q->q_usage_counter);
	}

	nsock = config->socks[args->index];
	mutex_lock(&nsock->tx_lock);
	nbd_mark_nsock_dead(nbd, nsock, 1);
	mutex_unlock(&nsock->tx_lock);

@@ -1214,6 +1221,7 @@ static int nbd_reconnect_socket(struct nbd_device *nbd, unsigned long arg)
		INIT_WORK(&args->work, recv_work);
		args->index = i;
		args->nbd = nbd;
		args->nsock = nsock;
		nsock->cookie++;
		mutex_unlock(&nsock->tx_lock);
		sockfd_put(old);

@@ -1396,6 +1404,7 @@ static int nbd_start_device(struct nbd_device *nbd)
		refcount_inc(&nbd->config_refs);
		INIT_WORK(&args->work, recv_work);
		args->nbd = nbd;
		args->nsock = config->socks[i];
		args->index = i;
		queue_work(nbd->recv_workq, &args->work);
	}

@@ -1530,17 +1539,20 @@ static int nbd_ioctl(struct block_device *bdev, fmode_t mode,
	return error;
}

static struct nbd_config *nbd_alloc_config(void)
static int nbd_alloc_and_init_config(struct nbd_device *nbd)
{
	struct nbd_config *config;

	if (WARN_ON(nbd->config))
		return -EINVAL;

	if (!try_module_get(THIS_MODULE))
		return ERR_PTR(-ENODEV);
		return -ENODEV;

	config = kzalloc(sizeof(struct nbd_config), GFP_NOFS);
	if (!config) {
		module_put(THIS_MODULE);
		return ERR_PTR(-ENOMEM);
		return -ENOMEM;
	}

	atomic_set(&config->recv_threads, 0);

@@ -1548,7 +1560,10 @@ static struct nbd_config *nbd_alloc_config(void)
	init_waitqueue_head(&config->conn_wait);
	config->blksize_bits = NBD_DEF_BLKSIZE_BITS;
	atomic_set(&config->live_connections, 0);
	return config;
	nbd->config = config;
	refcount_set(&nbd->config_refs, 1);

	return 0;
}

static int nbd_open(struct block_device *bdev, fmode_t mode)

@@ -1567,21 +1582,17 @@ static int nbd_open(struct block_device *bdev, fmode_t mode)
		goto out;
	}
	if (!refcount_inc_not_zero(&nbd->config_refs)) {
		struct nbd_config *config;

		mutex_lock(&nbd->config_lock);
		if (refcount_inc_not_zero(&nbd->config_refs)) {
			mutex_unlock(&nbd->config_lock);
			goto out;
		}
		config = nbd_alloc_config();
		if (IS_ERR(config)) {
			ret = PTR_ERR(config);
		ret = nbd_alloc_and_init_config(nbd);
		if (ret) {
			mutex_unlock(&nbd->config_lock);
			goto out;
		}
		nbd->config = config;
		refcount_set(&nbd->config_refs, 1);

		refcount_inc(&nbd->refs);
		mutex_unlock(&nbd->config_lock);
		if (max_part)

@@ -1990,22 +2001,17 @@ again:
		pr_err("nbd%d already in use\n", index);
		return -EBUSY;
	}
	if (WARN_ON(nbd->config)) {
		mutex_unlock(&nbd->config_lock);
		nbd_put(nbd);
		return -EINVAL;
	}
	config = nbd_alloc_config();
	if (IS_ERR(config)) {

	ret = nbd_alloc_and_init_config(nbd);
	if (ret) {
		mutex_unlock(&nbd->config_lock);
		nbd_put(nbd);
		pr_err("couldn't allocate config\n");
		return PTR_ERR(config);
		return ret;
	}
	nbd->config = config;
	refcount_set(&nbd->config_refs, 1);
	set_bit(NBD_RT_BOUND, &config->runtime_flags);

	config = nbd->config;
	set_bit(NBD_RT_BOUND, &config->runtime_flags);
	ret = nbd_genl_size_set(info, nbd);
	if (ret)
		goto out;
@@ -1249,8 +1249,8 @@ static struct dma_async_tx_descriptor *stm32_dma_prep_dma_memcpy(
	enum dma_slave_buswidth max_width;
	struct stm32_dma_desc *desc;
	size_t xfer_count, offset;
	u32 num_sgs, best_burst, dma_burst, threshold;
	int i;
	u32 num_sgs, best_burst, threshold;
	int dma_burst, i;

	num_sgs = DIV_ROUND_UP(len, STM32_DMA_ALIGNED_MAX_DATA_ITEMS);
	desc = kzalloc(struct_size(desc, sg_req, num_sgs), GFP_NOWAIT);

@@ -1268,6 +1268,10 @@ static struct dma_async_tx_descriptor *stm32_dma_prep_dma_memcpy(
		best_burst = stm32_dma_get_best_burst(len, STM32_DMA_MAX_BURST,
						      threshold, max_width);
		dma_burst = stm32_dma_get_burst(chan, best_burst);
		if (dma_burst < 0) {
			kfree(desc);
			return NULL;
		}

		stm32_dma_clear_reg(&desc->sg_req[i].chan_reg);
		desc->sg_req[i].chan_reg.dma_scr =

@@ -631,13 +631,14 @@ static void amdgpu_vm_pt_free(struct amdgpu_vm_bo_base *entry)

	if (!entry->bo)
		return;

	entry->bo->vm_bo = NULL;
	shadow = amdgpu_bo_shadowed(entry->bo);
	if (shadow) {
		ttm_bo_set_bulk_move(&shadow->tbo, NULL);
		amdgpu_bo_unref(&shadow);
	}
	ttm_bo_set_bulk_move(&entry->bo->tbo, NULL);
	entry->bo->vm_bo = NULL;

	spin_lock(&entry->vm->status_lock);
	list_del(&entry->vm_status);
@@ -1690,6 +1690,32 @@ static void sdma_v5_2_get_clockgating_state(void *handle, u64 *flags)
		*flags |= AMD_CG_SUPPORT_SDMA_LS;
}

static void sdma_v5_2_ring_begin_use(struct amdgpu_ring *ring)
{
	struct amdgpu_device *adev = ring->adev;

	/* SDMA 5.2.3 (RMB) FW doesn't seem to properly
	 * disallow GFXOFF in some cases leading to
	 * hangs in SDMA.  Disallow GFXOFF while SDMA is active.
	 * We can probably just limit this to 5.2.3,
	 * but it shouldn't hurt for other parts since
	 * this GFXOFF will be disallowed anyway when SDMA is
	 * active, this just makes it explicit.
	 */
	amdgpu_gfx_off_ctrl(adev, false);
}

static void sdma_v5_2_ring_end_use(struct amdgpu_ring *ring)
{
	struct amdgpu_device *adev = ring->adev;

	/* SDMA 5.2.3 (RMB) FW doesn't seem to properly
	 * disallow GFXOFF in some cases leading to
	 * hangs in SDMA.  Allow GFXOFF when SDMA is complete.
	 */
	amdgpu_gfx_off_ctrl(adev, true);
}

const struct amd_ip_funcs sdma_v5_2_ip_funcs = {
	.name = "sdma_v5_2",
	.early_init = sdma_v5_2_early_init,

@@ -1738,6 +1764,8 @@ static const struct amdgpu_ring_funcs sdma_v5_2_ring_funcs = {
	.test_ib = sdma_v5_2_ring_test_ib,
	.insert_nop = sdma_v5_2_ring_insert_nop,
	.pad_ib = sdma_v5_2_ring_pad_ib,
	.begin_use = sdma_v5_2_ring_begin_use,
	.end_use = sdma_v5_2_ring_end_use,
	.emit_wreg = sdma_v5_2_ring_emit_wreg,
	.emit_reg_wait = sdma_v5_2_ring_emit_reg_wait,
	.emit_reg_write_reg_wait = sdma_v5_2_ring_emit_reg_write_reg_wait,

@@ -816,6 +816,8 @@ bool is_psr_su_specific_panel(struct dc_link *link)
		    ((dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x08) ||
		    (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x07)))
			isPSRSUSupported = false;
		else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
			isPSRSUSupported = false;
		else if (dpcd_caps->psr_info.force_psrsu_cap == 0x1)
			isPSRSUSupported = true;
	}
@@ -1441,8 +1441,20 @@ static u32 calc_plane_remap_info(const struct intel_framebuffer *fb, int color_p

		size += remap_info->size;
	} else {
		unsigned int dst_stride = plane_view_dst_stride_tiles(fb, color_plane,
								      remap_info->width);
		unsigned int dst_stride;

		/*
		 * The hardware automagically calculates the CCS AUX surface
		 * stride from the main surface stride so can't really remap a
		 * smaller subset (unless we'd remap in whole AUX page units).
		 */
		if (intel_fb_needs_pot_stride_remap(fb) &&
		    intel_fb_is_ccs_modifier(fb->base.modifier))
			dst_stride = remap_info->src_stride;
		else
			dst_stride = remap_info->width;

		dst_stride = plane_view_dst_stride_tiles(fb, color_plane, dst_stride);

		assign_chk_ovf(i915, remap_info->dst_stride, dst_stride);
		color_plane_info->mapping_stride = dst_stride *

@@ -736,6 +736,7 @@ static void mtk_drm_crtc_atomic_begin(struct drm_crtc *crtc,
									  crtc);
	struct mtk_crtc_state *mtk_crtc_state = to_mtk_crtc_state(crtc_state);
	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
	unsigned long flags;

	if (mtk_crtc->event && mtk_crtc_state->base.event)
		DRM_ERROR("new event while there is still a pending event\n");

@@ -743,7 +744,11 @@ static void mtk_drm_crtc_atomic_begin(struct drm_crtc *crtc,
	if (mtk_crtc_state->base.event) {
		mtk_crtc_state->base.event->pipe = drm_crtc_index(crtc);
		WARN_ON(drm_crtc_vblank_get(crtc) != 0);

		spin_lock_irqsave(&crtc->dev->event_lock, flags);
		mtk_crtc->event = mtk_crtc_state->base.event;
		spin_unlock_irqrestore(&crtc->dev->event_lock, flags);

		mtk_crtc_state->base.event = NULL;
	}
}
@@ -380,7 +380,7 @@ static int asus_raw_event(struct hid_device *hdev,
	return 0;
}

static int asus_kbd_set_report(struct hid_device *hdev, u8 *buf, size_t buf_size)
static int asus_kbd_set_report(struct hid_device *hdev, const u8 *buf, size_t buf_size)
{
	unsigned char *dmabuf;
	int ret;

@@ -403,7 +403,7 @@ static int asus_kbd_set_report(struct hid_device *hdev, u8 *buf, size_t buf_size

static int asus_kbd_init(struct hid_device *hdev)
{
	u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x41, 0x53, 0x55, 0x53, 0x20, 0x54,
	const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x41, 0x53, 0x55, 0x53, 0x20, 0x54,
		     0x65, 0x63, 0x68, 0x2e, 0x49, 0x6e, 0x63, 0x2e, 0x00 };
	int ret;

@@ -417,7 +417,7 @@ static int asus_kbd_init(struct hid_device *hdev)
static int asus_kbd_get_functions(struct hid_device *hdev,
				  unsigned char *kbd_func)
{
	u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 };
	const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 };
	u8 *readbuf;
	int ret;

@@ -448,7 +448,7 @@ static int asus_kbd_get_functions(struct hid_device *hdev,

static int rog_nkey_led_init(struct hid_device *hdev)
{
	u8 buf_init_start[] = { FEATURE_KBD_LED_REPORT_ID1, 0xB9 };
	const u8 buf_init_start[] = { FEATURE_KBD_LED_REPORT_ID1, 0xB9 };
	u8 buf_init2[] = { FEATURE_KBD_LED_REPORT_ID1, 0x41, 0x53, 0x55, 0x53, 0x20,
			   0x54, 0x65, 0x63, 0x68, 0x2e, 0x49, 0x6e, 0x63, 0x2e, 0x00 };
	u8 buf_init3[] = { FEATURE_KBD_LED_REPORT_ID1,

@@ -1012,6 +1012,24 @@ static int asus_start_multitouch(struct hid_device *hdev)
	return 0;
}

static int __maybe_unused asus_resume(struct hid_device *hdev) {
	struct asus_drvdata *drvdata = hid_get_drvdata(hdev);
	int ret = 0;

	if (drvdata->kbd_backlight) {
		const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0xba, 0xc5, 0xc4,
				drvdata->kbd_backlight->cdev.brightness };
		ret = asus_kbd_set_report(hdev, buf, sizeof(buf));
		if (ret < 0) {
			hid_err(hdev, "Asus failed to set keyboard backlight: %d\n", ret);
			goto asus_resume_err;
		}
	}

asus_resume_err:
	return ret;
}

static int __maybe_unused asus_reset_resume(struct hid_device *hdev)
{
	struct asus_drvdata *drvdata = hid_get_drvdata(hdev);

@@ -1303,6 +1321,7 @@ static struct hid_driver asus_driver = {
	.input_configured       = asus_input_configured,
#ifdef CONFIG_PM
	.reset_resume           = asus_reset_resume,
	.resume                 = asus_resume,
#endif
	.event                  = asus_event,
	.raw_event              = asus_raw_event
@@ -21,6 +21,10 @@ MODULE_DESCRIPTION("HID driver for Glorious PC Gaming Race mice");
 * Glorious Model O and O- specify the const flag in the consumer input
 * report descriptor, which leads to inputs being ignored. Fix this
 * by patching the descriptor.
 *
 * Glorious Model I incorrectly specifes the Usage Minimum for its
 * keyboard HID report, causing keycodes to be misinterpreted.
 * Fix this by setting Usage Minimum to 0 in that report.
 */
static __u8 *glorious_report_fixup(struct hid_device *hdev, __u8 *rdesc,
		unsigned int *rsize)

@@ -32,6 +36,10 @@ static __u8 *glorious_report_fixup(struct hid_device *hdev, __u8 *rdesc,
		rdesc[85] = rdesc[113] = rdesc[141] = \
			HID_MAIN_ITEM_VARIABLE | HID_MAIN_ITEM_RELATIVE;
	}
	if (*rsize == 156 && rdesc[41] == 1) {
		hid_info(hdev, "patching Glorious Model I keyboard report descriptor\n");
		rdesc[41] = 0;
	}
	return rdesc;
}

@@ -44,6 +52,8 @@ static void glorious_update_name(struct hid_device *hdev)
		model = "Model O"; break;
	case USB_DEVICE_ID_GLORIOUS_MODEL_D:
		model = "Model D"; break;
	case USB_DEVICE_ID_GLORIOUS_MODEL_I:
		model = "Model I"; break;
	}

	snprintf(hdev->name, sizeof(hdev->name), "%s %s", "Glorious", model);

@@ -66,10 +76,12 @@ static int glorious_probe(struct hid_device *hdev,
}

static const struct hid_device_id glorious_devices[] = {
	{ HID_USB_DEVICE(USB_VENDOR_ID_GLORIOUS,
	{ HID_USB_DEVICE(USB_VENDOR_ID_SINOWEALTH,
		USB_DEVICE_ID_GLORIOUS_MODEL_O) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_GLORIOUS,
	{ HID_USB_DEVICE(USB_VENDOR_ID_SINOWEALTH,
		USB_DEVICE_ID_GLORIOUS_MODEL_D) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_LAVIEW,
		USB_DEVICE_ID_GLORIOUS_MODEL_I) },
	{ }
};
MODULE_DEVICE_TABLE(hid, glorious_devices);

@@ -503,10 +503,6 @@
#define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_010A 0x010a
#define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_E100 0xe100

#define USB_VENDOR_ID_GLORIOUS  0x258a
#define USB_DEVICE_ID_GLORIOUS_MODEL_D 0x0033
#define USB_DEVICE_ID_GLORIOUS_MODEL_O 0x0036

#define I2C_VENDOR_ID_GOODIX		0x27c6
#define I2C_DEVICE_ID_GOODIX_01F0	0x01f0

@@ -729,6 +725,9 @@
#define USB_VENDOR_ID_LABTEC		0x1020
#define USB_DEVICE_ID_LABTEC_WIRELESS_KEYBOARD	0x0006

#define USB_VENDOR_ID_LAVIEW		0x22D4
#define USB_DEVICE_ID_GLORIOUS_MODEL_I	0x1503

#define USB_VENDOR_ID_LCPOWER		0x1241
#define USB_DEVICE_ID_LCPOWER_LC1000	0xf767

@@ -1131,6 +1130,10 @@
#define USB_VENDOR_ID_SIGMATEL		0x066F
#define USB_DEVICE_ID_SIGMATEL_STMP3780	0x3780

#define USB_VENDOR_ID_SINOWEALTH	0x258a
#define USB_DEVICE_ID_GLORIOUS_MODEL_D	0x0033
#define USB_DEVICE_ID_GLORIOUS_MODEL_O	0x0036

#define USB_VENDOR_ID_SIS_TOUCH		0x0457
#define USB_DEVICE_ID_SIS9200_TOUCH	0x9200
#define USB_DEVICE_ID_SIS817_TOUCH	0x0817
@@ -692,7 +692,8 @@ static int lenovo_event_cptkbd(struct hid_device *hdev,
		 * so set middlebutton_state to 3
		 * to never apply workaround anymore
		 */
		if (cptkbd_data->middlebutton_state == 1 &&
		if (hdev->product == USB_DEVICE_ID_LENOVO_CUSBKBD &&
				cptkbd_data->middlebutton_state == 1 &&
				usage->type == EV_REL &&
				(usage->code == REL_X || usage->code == REL_Y)) {
			cptkbd_data->middlebutton_state = 3;

@@ -2048,6 +2048,11 @@ static const struct hid_device_id mt_devices[] = {
		MT_USB_DEVICE(USB_VENDOR_ID_HANVON_ALT,
			USB_DEVICE_ID_HANVON_ALT_MULTITOUCH) },

	/* HONOR GLO-GXXX panel */
	{ .driver_data = MT_CLS_VTL,
		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
			0x347d, 0x7853) },

	/* Ilitek dual touch panel */
	{  .driver_data = MT_CLS_NSMU,
		MT_USB_DEVICE(USB_VENDOR_ID_ILITEK,

@@ -33,6 +33,7 @@ static const struct hid_device_id hid_quirks[] = {
	{ HID_USB_DEVICE(USB_VENDOR_ID_AKAI, USB_DEVICE_ID_AKAI_MPKMINI2), HID_QUIRK_NO_INIT_REPORTS },
	{ HID_USB_DEVICE(USB_VENDOR_ID_ALPS, USB_DEVICE_ID_IBM_GAMEPAD), HID_QUIRK_BADPAD },
	{ HID_USB_DEVICE(USB_VENDOR_ID_AMI, USB_DEVICE_ID_AMI_VIRT_KEYBOARD_AND_MOUSE), HID_QUIRK_ALWAYS_POLL },
	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_REVB_ANSI), HID_QUIRK_ALWAYS_POLL },
	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_2PORTKVM), HID_QUIRK_NOGET },
	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVMC), HID_QUIRK_NOGET },
	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVM), HID_QUIRK_NOGET },
@@ -265,6 +265,7 @@ struct bcache_device {
#define BCACHE_DEV_WB_RUNNING		3
#define BCACHE_DEV_RATE_DW_RUNNING	4
	int			nr_stripes;
#define BCH_MIN_STRIPE_SZ		((4 << 20) >> SECTOR_SHIFT)
	unsigned int		stripe_size;
	atomic_t		*stripe_sectors_dirty;
	unsigned long		*full_dirty_stripes;

@@ -974,6 +974,9 @@ err:
 *
 * The btree node will have either a read or a write lock held, depending on
 * level and op->lock.
 *
 * Note: Only error code or btree pointer will be returned, it is unncessary
 * for callers to check NULL pointer.
 */
struct btree *bch_btree_node_get(struct cache_set *c, struct btree_op *op,
				 struct bkey *k, int level, bool write,

@@ -1085,6 +1088,10 @@ retry:
	mutex_unlock(&b->c->bucket_lock);
}

/*
 * Only error code or btree pointer will be returned, it is unncessary for
 * callers to check NULL pointer.
 */
struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op,
				     int level, bool wait,
				     struct btree *parent)

@@ -905,6 +905,8 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,

	if (!d->stripe_size)
		d->stripe_size = 1 << 31;
	else if (d->stripe_size < BCH_MIN_STRIPE_SZ)
		d->stripe_size = roundup(BCH_MIN_STRIPE_SZ, d->stripe_size);

	n = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
	if (!n || n > max_stripes) {

@@ -2017,7 +2019,7 @@ static int run_cache_set(struct cache_set *c)
		c->root = bch_btree_node_get(c, NULL, k,
					     j->btree_level,
					     true, NULL);
		if (IS_ERR_OR_NULL(c->root))
		if (IS_ERR(c->root))
			goto err;

		list_del_init(&c->root->list);

@@ -913,7 +913,7 @@ static int bch_dirty_init_thread(void *arg)
	int cur_idx, prev_idx, skip_nr;

	k = p = NULL;
	cur_idx = prev_idx = 0;
	prev_idx = 0;

	bch_btree_iter_init(&c->root->keys, &iter, NULL);
	k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad);
@@ -328,9 +328,6 @@ static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq,
	 * compare it to the stored version, just create the meta
	 */
	if (io_sq->disable_meta_caching) {
		if (unlikely(!ena_tx_ctx->meta_valid))
			return -EINVAL;

		*have_meta = true;
		return ena_com_create_meta(io_sq, ena_meta);
	}

@@ -74,6 +74,8 @@ static void ena_unmap_tx_buff(struct ena_ring *tx_ring,
			      struct ena_tx_buffer *tx_info);
static int ena_create_io_tx_queues_in_range(struct ena_adapter *adapter,
					     int first_index, int count);
static void ena_free_all_io_tx_resources_in_range(struct ena_adapter *adapter,
						   int first_index, int count);

/* Increase a stat by cnt while holding syncp seqlock on 32bit machines */
static void ena_increase_stat(u64 *statp, u64 cnt,

@@ -457,23 +459,22 @@ static void ena_init_all_xdp_queues(struct ena_adapter *adapter)

static int ena_setup_and_create_all_xdp_queues(struct ena_adapter *adapter)
{
	u32 xdp_first_ring = adapter->xdp_first_ring;
	u32 xdp_num_queues = adapter->xdp_num_queues;
	int rc = 0;

	rc = ena_setup_tx_resources_in_range(adapter, adapter->xdp_first_ring,
					     adapter->xdp_num_queues);
	rc = ena_setup_tx_resources_in_range(adapter, xdp_first_ring, xdp_num_queues);
	if (rc)
		goto setup_err;

	rc = ena_create_io_tx_queues_in_range(adapter,
					      adapter->xdp_first_ring,
					      adapter->xdp_num_queues);
	rc = ena_create_io_tx_queues_in_range(adapter, xdp_first_ring, xdp_num_queues);
	if (rc)
		goto create_err;

	return 0;

create_err:
	ena_free_all_io_tx_resources(adapter);
	ena_free_all_io_tx_resources_in_range(adapter, xdp_first_ring, xdp_num_queues);
setup_err:
	return rc;
}

@@ -1617,20 +1618,23 @@ static void ena_set_rx_hash(struct ena_ring *rx_ring,
	}
}

static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp)
static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp, u16 num_descs)
{
	struct ena_rx_buffer *rx_info;
	int ret;

	/* XDP multi-buffer packets not supported */
	if (unlikely(num_descs > 1)) {
		netdev_err_once(rx_ring->adapter->netdev,
				"xdp: dropped unsupported multi-buffer packets\n");
		ena_increase_stat(&rx_ring->rx_stats.xdp_drop, 1, &rx_ring->syncp);
		return ENA_XDP_DROP;
	}

	rx_info = &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id];
	xdp_prepare_buff(xdp, page_address(rx_info->page),
			 rx_info->page_offset,
			 rx_ring->ena_bufs[0].len, false);
	/* If for some reason we received a bigger packet than
	 * we expect, then we simply drop it
	 */
	if (unlikely(rx_ring->ena_bufs[0].len > ENA_XDP_MAX_MTU))
		return ENA_XDP_DROP;

	ret = ena_xdp_execute(rx_ring, xdp);

@@ -1699,7 +1703,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
			  ena_rx_ctx.l4_proto, ena_rx_ctx.hash);

		if (ena_xdp_present_ring(rx_ring))
			xdp_verdict = ena_xdp_handle_buff(rx_ring, &xdp);
			xdp_verdict = ena_xdp_handle_buff(rx_ring, &xdp, ena_rx_ctx.descs);

		/* allocate skb and fill it */
		if (xdp_verdict == ENA_XDP_PASS)
@@ -938,11 +938,14 @@ void aq_ring_free(struct aq_ring_s *self)
		return;

	kfree(self->buff_ring);
	self->buff_ring = NULL;

	if (self->dx_ring)
	if (self->dx_ring) {
		dma_free_coherent(aq_nic_get_dev(self->aq_nic),
				  self->size * self->dx_size, self->dx_ring,
				  self->dx_ring_pa);
		self->dx_ring = NULL;
	}
}

unsigned int aq_ring_fill_stats_data(struct aq_ring_s *self, u64 *data)
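/*
 * Illustration only, not part of the atlantic patch above: the double-free
 * fix follows the usual free-and-NULL idiom -- after releasing a buffer,
 * clear the pointer so a second pass through the teardown path becomes a
 * no-op instead of freeing the same memory again. Self-contained userspace
 * sketch; the structure and names are hypothetical.
 */
#include <stdlib.h>

struct ring {
	void *buff;
};

static void ring_free(struct ring *r)
{
	free(r->buff);
	r->buff = NULL;	/* calling ring_free() again is now harmless */
}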
@@ -1923,8 +1923,7 @@ u16 bnx2x_select_queue(struct net_device *dev, struct sk_buff *skb,

		/* Skip VLAN tag if present */
		if (ether_type == ETH_P_8021Q) {
			struct vlan_ethhdr *vhdr =
				(struct vlan_ethhdr *)skb->data;
			struct vlan_ethhdr *vhdr = skb_vlan_eth_hdr(skb);

			ether_type = ntohs(vhdr->h_vlan_encapsulated_proto);
		}
@@ -1796,6 +1796,21 @@ static void bnxt_deliver_skb(struct bnxt *bp, struct bnxt_napi *bnapi,
	napi_gro_receive(&bnapi->napi, skb);
}

static bool bnxt_rx_ts_valid(struct bnxt *bp, u32 flags,
			     struct rx_cmp_ext *rxcmp1, u32 *cmpl_ts)
{
	u32 ts = le32_to_cpu(rxcmp1->rx_cmp_timestamp);

	if (BNXT_PTP_RX_TS_VALID(flags))
		goto ts_valid;
	if (!bp->ptp_all_rx_tstamp || !ts || !BNXT_ALL_RX_TS_VALID(flags))
		return false;

ts_valid:
	*cmpl_ts = ts;
	return true;
}

/* returns the following:
 * 1 - 1 packet successfully received
 * 0 - successful TPA_START, packet not completed yet

@@ -1821,6 +1836,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
	struct sk_buff *skb;
	struct xdp_buff xdp;
	u32 flags, misc;
	u32 cmpl_ts;
	void *data;
	int rc = 0;

@@ -2043,10 +2059,8 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
		}
	}

	if (unlikely((flags & RX_CMP_FLAGS_ITYPES_MASK) ==
		     RX_CMP_FLAGS_ITYPE_PTP_W_TS) || bp->ptp_all_rx_tstamp) {
	if (bnxt_rx_ts_valid(bp, flags, rxcmp1, &cmpl_ts)) {
		if (bp->flags & BNXT_FLAG_CHIP_P5) {
			u32 cmpl_ts = le32_to_cpu(rxcmp1->rx_cmp_timestamp);
			u64 ns, ts;

			if (!bnxt_get_rx_ts_p5(bp, &ts, cmpl_ts)) {

@@ -10708,8 +10722,10 @@ static void __bnxt_close_nic(struct bnxt *bp, bool irq_re_init,
	bnxt_free_skbs(bp);

	/* Save ring stats before shutdown */
	if (bp->bnapi && irq_re_init)
	if (bp->bnapi && irq_re_init) {
		bnxt_get_ring_stats(bp, &bp->net_stats_prev);
		bnxt_get_ring_err_stats(bp, &bp->ring_err_stats_prev);
	}
	if (irq_re_init) {
		bnxt_free_irq(bp);
		bnxt_del_napi(bp);

@@ -10717,10 +10733,8 @@ static void __bnxt_close_nic(struct bnxt *bp, bool irq_re_init,
	bnxt_free_mem(bp, irq_re_init);
}

int bnxt_close_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
void bnxt_close_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
{
	int rc = 0;

	if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state)) {
		/* If we get here, it means firmware reset is in progress
		 * while we are trying to close.  We can safely proceed with

@@ -10735,15 +10749,18 @@ int bnxt_close_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)

#ifdef CONFIG_BNXT_SRIOV
	if (bp->sriov_cfg) {
		int rc;

		rc = wait_event_interruptible_timeout(bp->sriov_cfg_wait,
						      !bp->sriov_cfg,
						      BNXT_SRIOV_CFG_WAIT_TMO);
		if (rc)
			netdev_warn(bp->dev, "timeout waiting for SRIOV config operation to complete!\n");
		if (!rc)
			netdev_warn(bp->dev, "timeout waiting for SRIOV config operation to complete, proceeding to close!\n");
		else if (rc < 0)
			netdev_warn(bp->dev, "SRIOV config operation interrupted, proceeding to close!\n");
	}
#endif
	__bnxt_close_nic(bp, irq_re_init, link_re_init);
	return rc;
}

static int bnxt_close(struct net_device *dev)

@@ -10958,6 +10975,34 @@ bnxt_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
	clear_bit(BNXT_STATE_READ_STATS, &bp->state);
}

static void bnxt_get_one_ring_err_stats(struct bnxt *bp,
					struct bnxt_total_ring_err_stats *stats,
					struct bnxt_cp_ring_info *cpr)
{
	struct bnxt_sw_stats *sw_stats = &cpr->sw_stats;
	u64 *hw_stats = cpr->stats.sw_stats;

	stats->rx_total_l4_csum_errors += sw_stats->rx.rx_l4_csum_errors;
	stats->rx_total_resets += sw_stats->rx.rx_resets;
	stats->rx_total_buf_errors += sw_stats->rx.rx_buf_errors;
	stats->rx_total_oom_discards += sw_stats->rx.rx_oom_discards;
	stats->rx_total_netpoll_discards += sw_stats->rx.rx_netpoll_discards;
	stats->rx_total_ring_discards +=
		BNXT_GET_RING_STATS64(hw_stats, rx_discard_pkts);
	stats->tx_total_ring_discards +=
		BNXT_GET_RING_STATS64(hw_stats, tx_discard_pkts);
	stats->total_missed_irqs += sw_stats->cmn.missed_irqs;
}

void bnxt_get_ring_err_stats(struct bnxt *bp,
			     struct bnxt_total_ring_err_stats *stats)
{
	int i;

	for (i = 0; i < bp->cp_nr_rings; i++)
		bnxt_get_one_ring_err_stats(bp, stats, &bp->bnapi[i]->cp_ring);
}

static bool bnxt_mc_list_updated(struct bnxt *bp, u32 *rx_mask)
{
	struct net_device *dev = bp->dev;

@@ -13882,6 +13927,8 @@ static int bnxt_resume(struct device *device)
	if (rc)
		goto resume_exit;

	bnxt_clear_reservations(bp, true);

	if (bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, false)) {
		rc = -ENODEV;
		goto resume_exit;
@@ -160,7 +160,7 @@ struct rx_cmp {
	 #define RX_CMP_FLAGS_ERROR				(1 << 6)
	 #define RX_CMP_FLAGS_PLACEMENT				(7 << 7)
	 #define RX_CMP_FLAGS_RSS_VALID				(1 << 10)
	 #define RX_CMP_FLAGS_UNUSED				(1 << 11)
	 #define RX_CMP_FLAGS_PKT_METADATA_PRESENT		(1 << 11)
	 #define RX_CMP_FLAGS_ITYPES_SHIFT			12
	 #define RX_CMP_FLAGS_ITYPES_MASK			0xf000
	 #define RX_CMP_FLAGS_ITYPE_UNKNOWN			(0 << 12)

@@ -187,6 +187,12 @@ struct rx_cmp {
	__le32 rx_cmp_rss_hash;
};

#define BNXT_PTP_RX_TS_VALID(flags)				\
	(((flags) & RX_CMP_FLAGS_ITYPES_MASK) == RX_CMP_FLAGS_ITYPE_PTP_W_TS)

#define BNXT_ALL_RX_TS_VALID(flags)				\
	!((flags) & RX_CMP_FLAGS_PKT_METADATA_PRESENT)

#define RX_CMP_HASH_VALID(rxcmp)				\
	((rxcmp)->rx_cmp_len_flags_type & cpu_to_le32(RX_CMP_FLAGS_RSS_VALID))

@@ -950,6 +956,17 @@ struct bnxt_sw_stats {
	struct bnxt_cmn_sw_stats cmn;
};

struct bnxt_total_ring_err_stats {
	u64			rx_total_l4_csum_errors;
	u64			rx_total_resets;
	u64			rx_total_buf_errors;
	u64			rx_total_oom_discards;
	u64			rx_total_netpoll_discards;
	u64			rx_total_ring_discards;
	u64			tx_total_ring_discards;
	u64			total_missed_irqs;
};

struct bnxt_stats_mem {
	u64			*sw_stats;
	u64			*hw_masks;

@@ -2007,6 +2024,8 @@ struct bnxt {
	u8			pri2cos_idx[8];
	u8			pri2cos_valid;

	struct bnxt_total_ring_err_stats ring_err_stats_prev;

	u16			hwrm_max_req_len;
	u16			hwrm_max_ext_req_len;
	unsigned int		hwrm_cmd_timeout;

@@ -2330,7 +2349,9 @@ int bnxt_open_nic(struct bnxt *, bool, bool);
int bnxt_half_open_nic(struct bnxt *bp);
void bnxt_half_close_nic(struct bnxt *bp);
void bnxt_reenable_sriov(struct bnxt *bp);
int bnxt_close_nic(struct bnxt *, bool, bool);
void bnxt_close_nic(struct bnxt *, bool, bool);
void bnxt_get_ring_err_stats(struct bnxt *bp,
			     struct bnxt_total_ring_err_stats *stats);
int bnxt_dbg_hwrm_rd_reg(struct bnxt *bp, u32 reg_off, u16 num_words,
			 u32 *reg_buf);
void bnxt_fw_exception(struct bnxt *bp);
@@ -478,15 +478,8 @@ static int bnxt_dl_reload_down(struct devlink *dl, bool netns_change,
			return -ENODEV;
		}
		bnxt_ulp_stop(bp);
		if (netif_running(bp->dev)) {
			rc = bnxt_close_nic(bp, true, true);
			if (rc) {
				NL_SET_ERR_MSG_MOD(extack, "Failed to close");
				dev_close(bp->dev);
				rtnl_unlock();
				break;
			}
		}
		if (netif_running(bp->dev))
			bnxt_close_nic(bp, true, true);
		bnxt_vf_reps_free(bp);
		rc = bnxt_hwrm_func_drv_unrgtr(bp);
		if (rc) {

@@ -164,9 +164,8 @@ static int bnxt_set_coalesce(struct net_device *dev,
reset_coalesce:
	if (test_bit(BNXT_STATE_OPEN, &bp->state)) {
		if (update_stats) {
			rc = bnxt_close_nic(bp, true, false);
			if (!rc)
				rc = bnxt_open_nic(bp, true, false);
			bnxt_close_nic(bp, true, false);
			rc = bnxt_open_nic(bp, true, false);
		} else {
			rc = bnxt_hwrm_set_coal(bp);
		}

@@ -956,12 +955,7 @@ static int bnxt_set_channels(struct net_device *dev,
			 * before PF unload
			 */
		}
		rc = bnxt_close_nic(bp, true, false);
		if (rc) {
			netdev_err(bp->dev, "Set channel failure rc :%x\n",
				   rc);
			return rc;
		}
		bnxt_close_nic(bp, true, false);
	}

	if (sh) {

@@ -3634,12 +3628,7 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
		bnxt_run_fw_tests(bp, test_mask, &test_results);
	} else {
		bnxt_ulp_stop(bp);
		rc = bnxt_close_nic(bp, true, false);
		if (rc) {
			etest->flags |= ETH_TEST_FL_FAILED;
			bnxt_ulp_start(bp, rc);
			return;
		}
		bnxt_close_nic(bp, true, false);
		bnxt_run_fw_tests(bp, test_mask, &test_results);

		buf[BNXT_MACLPBK_TEST_IDX] = 1;

@@ -506,9 +506,8 @@ static int bnxt_hwrm_ptp_cfg(struct bnxt *bp)

	if (netif_running(bp->dev)) {
		if (ptp->rx_filter == HWTSTAMP_FILTER_ALL) {
			rc = bnxt_close_nic(bp, false, false);
			if (!rc)
				rc = bnxt_open_nic(bp, false, false);
			bnxt_close_nic(bp, false, false);
			rc = bnxt_open_nic(bp, false, false);
		} else {
			bnxt_ptp_cfg_tstamp_filters(bp);
		}
@@ -1125,7 +1125,7 @@ static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter,
						  struct be_wrb_params
						  *wrb_params)
{
	struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data;
	struct vlan_ethhdr *veh = skb_vlan_eth_hdr(skb);
	unsigned int eth_hdr_len;
	struct iphdr *ip;

@@ -139,7 +139,8 @@ int dpaa2_switch_acl_entry_add(struct dpaa2_switch_filter_block *filter_block,
	err = dpsw_acl_add_entry(ethsw->mc_io, 0, ethsw->dpsw_handle,
				 filter_block->acl_id, acl_entry_cfg);

	dma_unmap_single(dev, acl_entry_cfg->key_iova, sizeof(cmd_buff),
	dma_unmap_single(dev, acl_entry_cfg->key_iova,
			 DPAA2_ETHSW_PORT_ACL_CMD_BUF_SIZE,
			 DMA_TO_DEVICE);
	if (err) {
		dev_err(dev, "dpsw_acl_add_entry() failed %d\n", err);

@@ -181,8 +182,8 @@ dpaa2_switch_acl_entry_remove(struct dpaa2_switch_filter_block *block,
	err = dpsw_acl_remove_entry(ethsw->mc_io, 0, ethsw->dpsw_handle,
				    block->acl_id, acl_entry_cfg);

	dma_unmap_single(dev, acl_entry_cfg->key_iova, sizeof(cmd_buff),
			 DMA_TO_DEVICE);
	dma_unmap_single(dev, acl_entry_cfg->key_iova,
			 DPAA2_ETHSW_PORT_ACL_CMD_BUF_SIZE, DMA_TO_DEVICE);
	if (err) {
		dev_err(dev, "dpsw_acl_remove_entry() failed %d\n", err);
		kfree(cmd_buff);

@@ -1978,9 +1978,6 @@ static int dpaa2_switch_port_attr_set_event(struct net_device *netdev,
	return notifier_from_errno(err);
}

static struct notifier_block dpaa2_switch_port_switchdev_nb;
static struct notifier_block dpaa2_switch_port_switchdev_blocking_nb;

static int dpaa2_switch_port_bridge_join(struct net_device *netdev,
					 struct net_device *upper_dev,
					 struct netlink_ext_ack *extack)

@@ -2023,9 +2020,7 @@ static int dpaa2_switch_port_bridge_join(struct net_device *netdev,
		goto err_egress_flood;

	err = switchdev_bridge_port_offload(netdev, netdev, NULL,
					    &dpaa2_switch_port_switchdev_nb,
					    &dpaa2_switch_port_switchdev_blocking_nb,
					    false, extack);
					    NULL, NULL, false, extack);
	if (err)
		goto err_switchdev_offload;

@@ -2059,9 +2054,7 @@ static int dpaa2_switch_port_restore_rxvlan(struct net_device *vdev, int vid, vo

static void dpaa2_switch_port_pre_bridge_leave(struct net_device *netdev)
{
	switchdev_bridge_port_unoffload(netdev, NULL,
					&dpaa2_switch_port_switchdev_nb,
					&dpaa2_switch_port_switchdev_blocking_nb);
	switchdev_bridge_port_unoffload(netdev, NULL, NULL, NULL);
}

static int dpaa2_switch_port_bridge_leave(struct net_device *netdev)
@@ -3541,31 +3541,26 @@ static int fec_set_features(struct net_device *netdev,
	return 0;
}

static u16 fec_enet_get_raw_vlan_tci(struct sk_buff *skb)
{
	struct vlan_ethhdr *vhdr;
	unsigned short vlan_TCI = 0;

	if (skb->protocol == htons(ETH_P_ALL)) {
		vhdr = (struct vlan_ethhdr *)(skb->data);
		vlan_TCI = ntohs(vhdr->h_vlan_TCI);
	}

	return vlan_TCI;
}

static u16 fec_enet_select_queue(struct net_device *ndev, struct sk_buff *skb,
				 struct net_device *sb_dev)
{
	struct fec_enet_private *fep = netdev_priv(ndev);
	u16 vlan_tag;
	u16 vlan_tag = 0;

	if (!(fep->quirks & FEC_QUIRK_HAS_AVB))
		return netdev_pick_tx(ndev, skb, NULL);

	vlan_tag = fec_enet_get_raw_vlan_tci(skb);
	if (!vlan_tag)
	/* VLAN is present in the payload.*/
	if (eth_type_vlan(skb->protocol)) {
		struct vlan_ethhdr *vhdr = skb_vlan_eth_hdr(skb);

		vlan_tag = ntohs(vhdr->h_vlan_TCI);
	/* VLAN is present in the skb but not yet pushed in the payload.*/
	} else if (skb_vlan_tag_present(skb)) {
		vlan_tag = skb->vlan_tci;
	} else {
		return vlan_tag;
	}

	return fec_enet_vlan_pri_to_queue[vlan_tag >> 13];
}
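/*
 * Note (not part of the diff excerpt): the fec, bnx2x, benet, hns3 and i40e
 * hunks in this merge all switch from casting skb->data to the helper added
 * by "net: vlan: introduce skb_vlan_eth_hdr()". The helper's definition is
 * not shown in this excerpt; judging from the call sites it is most likely
 * a thin wrapper along these lines (sketch, assumed definition):
 */
static inline struct vlan_ethhdr *skb_vlan_eth_hdr(const struct sk_buff *skb)
{
	/* return the VLAN Ethernet header at the start of the frame */
	return (struct vlan_ethhdr *)skb_mac_header(skb);
}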
@@ -1532,7 +1532,7 @@ static int hns3_handle_vtags(struct hns3_enet_ring *tx_ring,
		if (unlikely(rc < 0))
			return rc;

		vhdr = (struct vlan_ethhdr *)skb->data;
		vhdr = skb_vlan_eth_hdr(skb);
		vhdr->h_vlan_TCI |= cpu_to_be16((skb->priority << VLAN_PRIO_SHIFT)
				& VLAN_PRIO_MASK);

@@ -2986,7 +2986,7 @@ static inline int i40e_tx_prepare_vlan_flags(struct sk_buff *skb,
		rc = skb_cow_head(skb, 0);
		if (rc < 0)
			return rc;
		vhdr = (struct vlan_ethhdr *)skb->data;
		vhdr = skb_vlan_eth_hdr(skb);
		vhdr->h_vlan_TCI = htons(tx_flags >>
					 I40E_TX_FLAGS_VLAN_SHIFT);
	} else {
@@ -303,6 +303,7 @@ struct iavf_adapter {
#define IAVF_FLAG_QUEUES_DISABLED BIT(17)
#define IAVF_FLAG_SETUP_NETDEV_FEATURES BIT(18)
#define IAVF_FLAG_REINIT_MSIX_NEEDED BIT(20)
#define IAVF_FLAG_FDIR_ENABLED BIT(21)
/* duplicates for common code */
#define IAVF_FLAG_DCB_ENABLED 0
/* flags for admin queue service task */

@@ -1063,7 +1063,7 @@ iavf_get_ethtool_fdir_entry(struct iavf_adapter *adapter,
struct iavf_fdir_fltr *rule = NULL;
int ret = 0;

if (!FDIR_FLTR_SUPPORT(adapter))
if (!(adapter->flags & IAVF_FLAG_FDIR_ENABLED))
return -EOPNOTSUPP;

spin_lock_bh(&adapter->fdir_fltr_lock);

@@ -1205,7 +1205,7 @@ iavf_get_fdir_fltr_ids(struct iavf_adapter *adapter, struct ethtool_rxnfc *cmd,
unsigned int cnt = 0;
int val = 0;

if (!FDIR_FLTR_SUPPORT(adapter))
if (!(adapter->flags & IAVF_FLAG_FDIR_ENABLED))
return -EOPNOTSUPP;

cmd->data = IAVF_MAX_FDIR_FILTERS;
@@ -1397,7 +1397,7 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
int count = 50;
int err;

if (!FDIR_FLTR_SUPPORT(adapter))
if (!(adapter->flags & IAVF_FLAG_FDIR_ENABLED))
return -EOPNOTSUPP;

if (fsp->flow_type & FLOW_MAC_EXT)

@@ -1438,12 +1438,16 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
spin_lock_bh(&adapter->fdir_fltr_lock);
iavf_fdir_list_add_fltr(adapter, fltr);
adapter->fdir_active_fltr++;
fltr->state = IAVF_FDIR_FLTR_ADD_REQUEST;
adapter->aq_required |= IAVF_FLAG_AQ_ADD_FDIR_FILTER;
if (adapter->link_up) {
fltr->state = IAVF_FDIR_FLTR_ADD_REQUEST;
adapter->aq_required |= IAVF_FLAG_AQ_ADD_FDIR_FILTER;
} else {
fltr->state = IAVF_FDIR_FLTR_INACTIVE;
}
spin_unlock_bh(&adapter->fdir_fltr_lock);

mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0);

if (adapter->link_up)
mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0);
ret:
if (err && fltr)
kfree(fltr);

@@ -1465,7 +1469,7 @@ static int iavf_del_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
struct iavf_fdir_fltr *fltr = NULL;
int err = 0;

if (!FDIR_FLTR_SUPPORT(adapter))
if (!(adapter->flags & IAVF_FLAG_FDIR_ENABLED))
return -EOPNOTSUPP;

spin_lock_bh(&adapter->fdir_fltr_lock);

@@ -1474,6 +1478,11 @@ static int iavf_del_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
if (fltr->state == IAVF_FDIR_FLTR_ACTIVE) {
fltr->state = IAVF_FDIR_FLTR_DEL_REQUEST;
adapter->aq_required |= IAVF_FLAG_AQ_DEL_FDIR_FILTER;
} else if (fltr->state == IAVF_FDIR_FLTR_INACTIVE) {
list_del(&fltr->list);
kfree(fltr);
adapter->fdir_active_fltr--;
fltr = NULL;
} else {
err = -EBUSY;
}

@@ -1782,7 +1791,7 @@ static int iavf_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
ret = 0;
break;
case ETHTOOL_GRXCLSRLCNT:
if (!FDIR_FLTR_SUPPORT(adapter))
if (!(adapter->flags & IAVF_FLAG_FDIR_ENABLED))
break;
spin_lock_bh(&adapter->fdir_fltr_lock);
cmd->rule_cnt = adapter->fdir_active_fltr;
@@ -6,12 +6,25 @@

struct iavf_adapter;

/* State of Flow Director filter */
/* State of Flow Director filter
*
* *_REQUEST states are used to mark filter to be sent to PF driver to perform
* an action (either add or delete filter). *_PENDING states are an indication
* that request was sent to PF and the driver is waiting for response.
*
* Both DELETE and DISABLE states are being used to delete a filter in PF.
* The difference is that after a successful response filter in DEL_PENDING
* state is being deleted from VF driver as well and filter in DIS_PENDING state
* is being changed to INACTIVE state.
*/
enum iavf_fdir_fltr_state_t {
IAVF_FDIR_FLTR_ADD_REQUEST, /* User requests to add filter */
IAVF_FDIR_FLTR_ADD_PENDING, /* Filter pending add by the PF */
IAVF_FDIR_FLTR_DEL_REQUEST, /* User requests to delete filter */
IAVF_FDIR_FLTR_DEL_PENDING, /* Filter pending delete by the PF */
IAVF_FDIR_FLTR_DIS_REQUEST, /* Filter scheduled to be disabled */
IAVF_FDIR_FLTR_DIS_PENDING, /* Filter pending disable by the PF */
IAVF_FDIR_FLTR_INACTIVE, /* Filter inactive on link down */
IAVF_FDIR_FLTR_ACTIVE, /* Filter is active */
};
@ -1368,18 +1368,20 @@ static void iavf_clear_cloud_filters(struct iavf_adapter *adapter)
|
|||
**/
|
||||
static void iavf_clear_fdir_filters(struct iavf_adapter *adapter)
|
||||
{
|
||||
struct iavf_fdir_fltr *fdir, *fdirtmp;
|
||||
struct iavf_fdir_fltr *fdir;
|
||||
|
||||
/* remove all Flow Director filters */
|
||||
spin_lock_bh(&adapter->fdir_fltr_lock);
|
||||
list_for_each_entry_safe(fdir, fdirtmp, &adapter->fdir_list_head,
|
||||
list) {
|
||||
list_for_each_entry(fdir, &adapter->fdir_list_head, list) {
|
||||
if (fdir->state == IAVF_FDIR_FLTR_ADD_REQUEST) {
|
||||
list_del(&fdir->list);
|
||||
kfree(fdir);
|
||||
adapter->fdir_active_fltr--;
|
||||
} else {
|
||||
fdir->state = IAVF_FDIR_FLTR_DEL_REQUEST;
|
||||
/* Cancel a request, keep filter as inactive */
|
||||
fdir->state = IAVF_FDIR_FLTR_INACTIVE;
|
||||
} else if (fdir->state == IAVF_FDIR_FLTR_ADD_PENDING ||
|
||||
fdir->state == IAVF_FDIR_FLTR_ACTIVE) {
|
||||
/* Disable filters which are active or have a pending
|
||||
* request to PF to be added
|
||||
*/
|
||||
fdir->state = IAVF_FDIR_FLTR_DIS_REQUEST;
|
||||
}
|
||||
}
|
||||
spin_unlock_bh(&adapter->fdir_fltr_lock);
|
||||
|
|
@ -4210,6 +4212,33 @@ static int iavf_setup_tc(struct net_device *netdev, enum tc_setup_type type,
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* iavf_restore_fdir_filters
|
||||
* @adapter: board private structure
|
||||
*
|
||||
* Restore existing FDIR filters when VF netdev comes back up.
|
||||
**/
|
||||
static void iavf_restore_fdir_filters(struct iavf_adapter *adapter)
|
||||
{
|
||||
struct iavf_fdir_fltr *f;
|
||||
|
||||
spin_lock_bh(&adapter->fdir_fltr_lock);
|
||||
list_for_each_entry(f, &adapter->fdir_list_head, list) {
|
||||
if (f->state == IAVF_FDIR_FLTR_DIS_REQUEST) {
|
||||
/* Cancel a request, keep filter as active */
|
||||
f->state = IAVF_FDIR_FLTR_ACTIVE;
|
||||
} else if (f->state == IAVF_FDIR_FLTR_DIS_PENDING ||
|
||||
f->state == IAVF_FDIR_FLTR_INACTIVE) {
|
||||
/* Add filters which are inactive or have a pending
|
||||
* request to PF to be deleted
|
||||
*/
|
||||
f->state = IAVF_FDIR_FLTR_ADD_REQUEST;
|
||||
adapter->aq_required |= IAVF_FLAG_AQ_ADD_FDIR_FILTER;
|
||||
}
|
||||
}
|
||||
spin_unlock_bh(&adapter->fdir_fltr_lock);
|
||||
}
|
||||
|
||||
/**
|
||||
* iavf_open - Called when a network interface is made active
|
||||
* @netdev: network interface device structure
|
||||
|
|
@ -4277,8 +4306,9 @@ static int iavf_open(struct net_device *netdev)
|
|||
|
||||
spin_unlock_bh(&adapter->mac_vlan_list_lock);
|
||||
|
||||
/* Restore VLAN filters that were removed with IFF_DOWN */
|
||||
/* Restore filters that were removed with IFF_DOWN */
|
||||
iavf_restore_filters(adapter);
|
||||
iavf_restore_fdir_filters(adapter);
|
||||
|
||||
iavf_configure(adapter);
|
||||
|
||||
|
|
@ -4415,6 +4445,49 @@ static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
|
|||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* iavf_disable_fdir - disable Flow Director and clear existing filters
|
||||
* @adapter: board private structure
|
||||
**/
|
||||
static void iavf_disable_fdir(struct iavf_adapter *adapter)
|
||||
{
|
||||
struct iavf_fdir_fltr *fdir, *fdirtmp;
|
||||
bool del_filters = false;
|
||||
|
||||
adapter->flags &= ~IAVF_FLAG_FDIR_ENABLED;
|
||||
|
||||
/* remove all Flow Director filters */
|
||||
spin_lock_bh(&adapter->fdir_fltr_lock);
|
||||
list_for_each_entry_safe(fdir, fdirtmp, &adapter->fdir_list_head,
|
||||
list) {
|
||||
if (fdir->state == IAVF_FDIR_FLTR_ADD_REQUEST ||
|
||||
fdir->state == IAVF_FDIR_FLTR_INACTIVE) {
|
||||
/* Delete filters not registered in PF */
|
||||
list_del(&fdir->list);
|
||||
kfree(fdir);
|
||||
adapter->fdir_active_fltr--;
|
||||
} else if (fdir->state == IAVF_FDIR_FLTR_ADD_PENDING ||
|
||||
fdir->state == IAVF_FDIR_FLTR_DIS_REQUEST ||
|
||||
fdir->state == IAVF_FDIR_FLTR_ACTIVE) {
|
||||
/* Filters registered in PF, schedule their deletion */
|
||||
fdir->state = IAVF_FDIR_FLTR_DEL_REQUEST;
|
||||
del_filters = true;
|
||||
} else if (fdir->state == IAVF_FDIR_FLTR_DIS_PENDING) {
|
||||
/* Request to delete filter already sent to PF, change
|
||||
* state to DEL_PENDING to delete filter after PF's
|
||||
* response, not set as INACTIVE
|
||||
*/
|
||||
fdir->state = IAVF_FDIR_FLTR_DEL_PENDING;
|
||||
}
|
||||
}
|
||||
spin_unlock_bh(&adapter->fdir_fltr_lock);
|
||||
|
||||
if (del_filters) {
|
||||
adapter->aq_required |= IAVF_FLAG_AQ_DEL_FDIR_FILTER;
|
||||
mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0);
|
||||
}
|
||||
}
|
||||
|
||||
#define NETIF_VLAN_OFFLOAD_FEATURES (NETIF_F_HW_VLAN_CTAG_RX | \
|
||||
NETIF_F_HW_VLAN_CTAG_TX | \
|
||||
NETIF_F_HW_VLAN_STAG_RX | \
|
||||
|
|
@ -4437,6 +4510,13 @@ static int iavf_set_features(struct net_device *netdev,
|
|||
iavf_set_vlan_offload_features(adapter, netdev->features,
|
||||
features);
|
||||
|
||||
if ((netdev->features & NETIF_F_NTUPLE) ^ (features & NETIF_F_NTUPLE)) {
|
||||
if (features & NETIF_F_NTUPLE)
|
||||
adapter->flags |= IAVF_FLAG_FDIR_ENABLED;
|
||||
else
|
||||
iavf_disable_fdir(adapter);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
@ -4732,6 +4812,9 @@ static netdev_features_t iavf_fix_features(struct net_device *netdev,
|
|||
{
|
||||
struct iavf_adapter *adapter = netdev_priv(netdev);
|
||||
|
||||
if (!FDIR_FLTR_SUPPORT(adapter))
|
||||
features &= ~NETIF_F_NTUPLE;
|
||||
|
||||
return iavf_fix_netdev_vlan_features(adapter, features);
|
||||
}
|
||||
|
||||
|
|
@ -4849,6 +4932,12 @@ int iavf_process_config(struct iavf_adapter *adapter)
|
|||
if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)
|
||||
netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
|
||||
|
||||
if (FDIR_FLTR_SUPPORT(adapter)) {
|
||||
netdev->hw_features |= NETIF_F_NTUPLE;
|
||||
netdev->features |= NETIF_F_NTUPLE;
|
||||
adapter->flags |= IAVF_FLAG_FDIR_ENABLED;
|
||||
}
|
||||
|
||||
netdev->priv_flags |= IFF_UNICAST_FLT;
|
||||
|
||||
/* Do not turn on offloads when they are requested to be turned off.
|
||||
|
|
|
|||
|
|
@ -1752,8 +1752,8 @@ void iavf_add_fdir_filter(struct iavf_adapter *adapter)
|
|||
**/
|
||||
void iavf_del_fdir_filter(struct iavf_adapter *adapter)
|
||||
{
|
||||
struct virtchnl_fdir_del f = {};
|
||||
struct iavf_fdir_fltr *fdir;
|
||||
struct virtchnl_fdir_del f;
|
||||
bool process_fltr = false;
|
||||
int len;
|
||||
|
||||
|
|
@ -1770,11 +1770,16 @@ void iavf_del_fdir_filter(struct iavf_adapter *adapter)
|
|||
list_for_each_entry(fdir, &adapter->fdir_list_head, list) {
|
||||
if (fdir->state == IAVF_FDIR_FLTR_DEL_REQUEST) {
|
||||
process_fltr = true;
|
||||
memset(&f, 0, len);
|
||||
f.vsi_id = fdir->vc_add_msg.vsi_id;
|
||||
f.flow_id = fdir->flow_id;
|
||||
fdir->state = IAVF_FDIR_FLTR_DEL_PENDING;
|
||||
break;
|
||||
} else if (fdir->state == IAVF_FDIR_FLTR_DIS_REQUEST) {
|
||||
process_fltr = true;
|
||||
f.vsi_id = fdir->vc_add_msg.vsi_id;
|
||||
f.flow_id = fdir->flow_id;
|
||||
fdir->state = IAVF_FDIR_FLTR_DIS_PENDING;
|
||||
break;
|
||||
}
|
||||
}
|
||||
spin_unlock_bh(&adapter->fdir_fltr_lock);
|
||||
|
|
@ -1918,6 +1923,48 @@ static void iavf_netdev_features_vlan_strip_set(struct net_device *netdev,
|
|||
netdev->features &= ~NETIF_F_HW_VLAN_CTAG_RX;
|
||||
}
|
||||
|
||||
/**
|
||||
* iavf_activate_fdir_filters - Reactivate all FDIR filters after a reset
|
||||
* @adapter: private adapter structure
|
||||
*
|
||||
* Called after a reset to re-add all FDIR filters and delete some of them
|
||||
* if they were pending to be deleted.
|
||||
*/
|
||||
static void iavf_activate_fdir_filters(struct iavf_adapter *adapter)
|
||||
{
|
||||
struct iavf_fdir_fltr *f, *ftmp;
|
||||
bool add_filters = false;
|
||||
|
||||
spin_lock_bh(&adapter->fdir_fltr_lock);
|
||||
list_for_each_entry_safe(f, ftmp, &adapter->fdir_list_head, list) {
|
||||
if (f->state == IAVF_FDIR_FLTR_ADD_REQUEST ||
|
||||
f->state == IAVF_FDIR_FLTR_ADD_PENDING ||
|
||||
f->state == IAVF_FDIR_FLTR_ACTIVE) {
|
||||
/* All filters and requests have been removed in PF,
|
||||
* restore them
|
||||
*/
|
||||
f->state = IAVF_FDIR_FLTR_ADD_REQUEST;
|
||||
add_filters = true;
|
||||
} else if (f->state == IAVF_FDIR_FLTR_DIS_REQUEST ||
|
||||
f->state == IAVF_FDIR_FLTR_DIS_PENDING) {
|
||||
/* Link down state, leave filters as inactive */
|
||||
f->state = IAVF_FDIR_FLTR_INACTIVE;
|
||||
} else if (f->state == IAVF_FDIR_FLTR_DEL_REQUEST ||
|
||||
f->state == IAVF_FDIR_FLTR_DEL_PENDING) {
|
||||
/* Delete filters that were pending to be deleted, the
|
||||
* list on PF is already cleared after a reset
|
||||
*/
|
||||
list_del(&f->list);
|
||||
kfree(f);
|
||||
adapter->fdir_active_fltr--;
|
||||
}
|
||||
}
|
||||
spin_unlock_bh(&adapter->fdir_fltr_lock);
|
||||
|
||||
if (add_filters)
|
||||
adapter->aq_required |= IAVF_FLAG_AQ_ADD_FDIR_FILTER;
|
||||
}
|
||||
|
||||
/**
|
||||
* iavf_virtchnl_completion
|
||||
* @adapter: adapter structure
|
||||
|
|
@ -2095,7 +2142,8 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
|
|||
spin_lock_bh(&adapter->fdir_fltr_lock);
|
||||
list_for_each_entry(fdir, &adapter->fdir_list_head,
|
||||
list) {
|
||||
if (fdir->state == IAVF_FDIR_FLTR_DEL_PENDING) {
|
||||
if (fdir->state == IAVF_FDIR_FLTR_DEL_PENDING ||
|
||||
fdir->state == IAVF_FDIR_FLTR_DIS_PENDING) {
|
||||
fdir->state = IAVF_FDIR_FLTR_ACTIVE;
|
||||
dev_info(&adapter->pdev->dev, "Failed to del Flow Director filter, error %s\n",
|
||||
iavf_stat_str(&adapter->hw,
|
||||
|
|
@ -2232,6 +2280,8 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
|
|||
|
||||
spin_unlock_bh(&adapter->mac_vlan_list_lock);
|
||||
|
||||
iavf_activate_fdir_filters(adapter);
|
||||
|
||||
iavf_parse_vf_resource_msg(adapter);
|
||||
|
||||
/* negotiated VIRTCHNL_VF_OFFLOAD_VLAN_V2, so wait for the
|
||||
|
|
@ -2421,7 +2471,9 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
|
|||
list_for_each_entry_safe(fdir, fdir_tmp, &adapter->fdir_list_head,
|
||||
list) {
|
||||
if (fdir->state == IAVF_FDIR_FLTR_DEL_PENDING) {
|
||||
if (del_fltr->status == VIRTCHNL_FDIR_SUCCESS) {
|
||||
if (del_fltr->status == VIRTCHNL_FDIR_SUCCESS ||
|
||||
del_fltr->status ==
|
||||
VIRTCHNL_FDIR_FAILURE_RULE_NONEXIST) {
|
||||
dev_info(&adapter->pdev->dev, "Flow Director filter with location %u is deleted\n",
|
||||
fdir->loc);
|
||||
list_del(&fdir->list);
|
||||
|
|
@ -2433,6 +2485,17 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
|
|||
del_fltr->status);
|
||||
iavf_print_fdir_fltr(adapter, fdir);
|
||||
}
|
||||
} else if (fdir->state == IAVF_FDIR_FLTR_DIS_PENDING) {
|
||||
if (del_fltr->status == VIRTCHNL_FDIR_SUCCESS ||
|
||||
del_fltr->status ==
|
||||
VIRTCHNL_FDIR_FAILURE_RULE_NONEXIST) {
|
||||
fdir->state = IAVF_FDIR_FLTR_INACTIVE;
|
||||
} else {
|
||||
fdir->state = IAVF_FDIR_FLTR_ACTIVE;
|
||||
dev_info(&adapter->pdev->dev, "Failed to disable Flow Director filter with status: %d\n",
|
||||
del_fltr->status);
|
||||
iavf_print_fdir_fltr(adapter, fdir);
|
||||
}
|
||||
}
|
||||
}
|
||||
spin_unlock_bh(&adapter->fdir_fltr_lock);
|
||||
|
|
|
|||
|
|
@@ -8822,7 +8822,7 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct sk_buff *skb,

if (skb_cow_head(skb, 0))
goto out_drop;
vhdr = (struct vlan_ethhdr *)skb->data;
vhdr = skb_vlan_eth_hdr(skb);
vhdr->h_vlan_TCI = htons(tx_flags >>
IXGBE_TX_FLAGS_VLAN_SHIFT);
} else {
@ -642,7 +642,7 @@ static int rvu_nix_register_reporters(struct rvu_devlink *rvu_dl)
|
|||
|
||||
rvu_dl->devlink_wq = create_workqueue("rvu_devlink_wq");
|
||||
if (!rvu_dl->devlink_wq)
|
||||
goto err;
|
||||
return -ENOMEM;
|
||||
|
||||
INIT_WORK(&rvu_reporters->intr_work, rvu_nix_intr_work);
|
||||
INIT_WORK(&rvu_reporters->gen_work, rvu_nix_gen_work);
|
||||
|
|
@ -650,9 +650,6 @@ static int rvu_nix_register_reporters(struct rvu_devlink *rvu_dl)
|
|||
INIT_WORK(&rvu_reporters->ras_work, rvu_nix_ras_work);
|
||||
|
||||
return 0;
|
||||
err:
|
||||
rvu_nix_health_reporters_destroy(rvu_dl);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
static int rvu_nix_health_reporters_create(struct rvu_devlink *rvu_dl)
|
||||
|
|
|
|||
|
|
@ -671,6 +671,7 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
|
|||
int blkaddr, ucast_idx, index;
|
||||
struct nix_rx_action action = { 0 };
|
||||
u64 relaxed_mask;
|
||||
u8 flow_key_alg;
|
||||
|
||||
if (!hw->cap.nix_rx_multicast && is_cgx_vf(rvu, pcifunc))
|
||||
return;
|
||||
|
|
@ -701,6 +702,8 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
|
|||
action.op = NIX_RX_ACTIONOP_UCAST;
|
||||
}
|
||||
|
||||
flow_key_alg = action.flow_key_alg;
|
||||
|
||||
/* RX_ACTION set to MCAST for CGX PF's */
|
||||
if (hw->cap.nix_rx_multicast && pfvf->use_mce_list &&
|
||||
is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc))) {
|
||||
|
|
@ -740,7 +743,7 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
|
|||
req.vf = pcifunc;
|
||||
req.index = action.index;
|
||||
req.match_id = action.match_id;
|
||||
req.flow_key_alg = action.flow_key_alg;
|
||||
req.flow_key_alg = flow_key_alg;
|
||||
|
||||
rvu_mbox_handler_npc_install_flow(rvu, &req, &rsp);
|
||||
}
|
||||
|
|
@ -854,6 +857,7 @@ void rvu_npc_install_allmulti_entry(struct rvu *rvu, u16 pcifunc, int nixlf,
|
|||
u8 mac_addr[ETH_ALEN] = { 0 };
|
||||
struct nix_rx_action action = { 0 };
|
||||
struct rvu_pfvf *pfvf;
|
||||
u8 flow_key_alg;
|
||||
u16 vf_func;
|
||||
|
||||
/* Only CGX PF/VF can add allmulticast entry */
|
||||
|
|
@ -888,6 +892,7 @@ void rvu_npc_install_allmulti_entry(struct rvu *rvu, u16 pcifunc, int nixlf,
|
|||
*(u64 *)&action = npc_get_mcam_action(rvu, mcam,
|
||||
blkaddr, ucast_idx);
|
||||
|
||||
flow_key_alg = action.flow_key_alg;
|
||||
if (action.op != NIX_RX_ACTIONOP_RSS) {
|
||||
*(u64 *)&action = 0;
|
||||
action.op = NIX_RX_ACTIONOP_UCAST;
|
||||
|
|
@ -924,7 +929,7 @@ void rvu_npc_install_allmulti_entry(struct rvu *rvu, u16 pcifunc, int nixlf,
|
|||
req.vf = pcifunc | vf_func;
|
||||
req.index = action.index;
|
||||
req.match_id = action.match_id;
|
||||
req.flow_key_alg = action.flow_key_alg;
|
||||
req.flow_key_alg = flow_key_alg;
|
||||
|
||||
rvu_mbox_handler_npc_install_flow(rvu, &req, &rsp);
|
||||
}
|
||||
|
|
@ -990,11 +995,38 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
|
|||
mutex_unlock(&mcam->lock);
|
||||
}
|
||||
|
||||
static void npc_update_rx_action_with_alg_idx(struct rvu *rvu, struct nix_rx_action action,
|
||||
struct rvu_pfvf *pfvf, int mcam_index, int blkaddr,
|
||||
int alg_idx)
|
||||
|
||||
{
|
||||
struct npc_mcam *mcam = &rvu->hw->mcam;
|
||||
struct rvu_hwinfo *hw = rvu->hw;
|
||||
int bank, op_rss;
|
||||
|
||||
if (!is_mcam_entry_enabled(rvu, mcam, blkaddr, mcam_index))
|
||||
return;
|
||||
|
||||
op_rss = (!hw->cap.nix_rx_multicast || !pfvf->use_mce_list);
|
||||
|
||||
bank = npc_get_bank(mcam, mcam_index);
|
||||
mcam_index &= (mcam->banksize - 1);
|
||||
|
||||
/* If Rx action is MCAST update only RSS algorithm index */
|
||||
if (!op_rss) {
|
||||
*(u64 *)&action = rvu_read64(rvu, blkaddr,
|
||||
NPC_AF_MCAMEX_BANKX_ACTION(mcam_index, bank));
|
||||
|
||||
action.flow_key_alg = alg_idx;
|
||||
}
|
||||
rvu_write64(rvu, blkaddr,
|
||||
NPC_AF_MCAMEX_BANKX_ACTION(mcam_index, bank), *(u64 *)&action);
|
||||
}
|
||||
|
||||
void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf,
|
||||
int group, int alg_idx, int mcam_index)
|
||||
{
|
||||
struct npc_mcam *mcam = &rvu->hw->mcam;
|
||||
struct rvu_hwinfo *hw = rvu->hw;
|
||||
struct nix_rx_action action;
|
||||
int blkaddr, index, bank;
|
||||
struct rvu_pfvf *pfvf;
|
||||
|
|
@ -1050,15 +1082,16 @@ void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf,
|
|||
/* If PF's promiscuous entry is enabled,
|
||||
* Set RSS action for that entry as well
|
||||
*/
|
||||
if ((!hw->cap.nix_rx_multicast || !pfvf->use_mce_list) &&
|
||||
is_mcam_entry_enabled(rvu, mcam, blkaddr, index)) {
|
||||
bank = npc_get_bank(mcam, index);
|
||||
index &= (mcam->banksize - 1);
|
||||
npc_update_rx_action_with_alg_idx(rvu, action, pfvf, index, blkaddr,
|
||||
alg_idx);
|
||||
|
||||
rvu_write64(rvu, blkaddr,
|
||||
NPC_AF_MCAMEX_BANKX_ACTION(index, bank),
|
||||
*(u64 *)&action);
|
||||
}
|
||||
index = npc_get_nixlf_mcam_index(mcam, pcifunc,
|
||||
nixlf, NIXLF_ALLMULTI_ENTRY);
|
||||
/* If PF's allmulti entry is enabled,
|
||||
* Set RSS action for that entry as well
|
||||
*/
|
||||
npc_update_rx_action_with_alg_idx(rvu, action, pfvf, index, blkaddr,
|
||||
alg_idx);
|
||||
}
|
||||
|
||||
void npc_enadis_default_mce_entry(struct rvu *rvu, u16 pcifunc,
|
||||
|
|
|
|||
|
|
@ -1638,6 +1638,21 @@ static void otx2_free_hw_resources(struct otx2_nic *pf)
|
|||
mutex_unlock(&mbox->lock);
|
||||
}
|
||||
|
||||
static bool otx2_promisc_use_mce_list(struct otx2_nic *pfvf)
|
||||
{
|
||||
int vf;
|
||||
|
||||
/* The AF driver will determine whether to allow the VF netdev or not */
|
||||
if (is_otx2_vf(pfvf->pcifunc))
|
||||
return true;
|
||||
|
||||
/* check if there are any trusted VFs associated with the PF netdev */
|
||||
for (vf = 0; vf < pci_num_vf(pfvf->pdev); vf++)
|
||||
if (pfvf->vf_configs[vf].trusted)
|
||||
return true;
|
||||
return false;
|
||||
}
|
||||
|
||||
static void otx2_do_set_rx_mode(struct otx2_nic *pf)
|
||||
{
|
||||
struct net_device *netdev = pf->netdev;
|
||||
|
|
@ -1670,7 +1685,8 @@ static void otx2_do_set_rx_mode(struct otx2_nic *pf)
|
|||
if (netdev->flags & (IFF_ALLMULTI | IFF_MULTICAST))
|
||||
req->mode |= NIX_RX_MODE_ALLMULTI;
|
||||
|
||||
req->mode |= NIX_RX_MODE_USE_MCE;
|
||||
if (otx2_promisc_use_mce_list(pf))
|
||||
req->mode |= NIX_RX_MODE_USE_MCE;
|
||||
|
||||
otx2_sync_mbox_msg(&pf->mbox);
|
||||
mutex_unlock(&pf->mbox.lock);
|
||||
|
|
@ -2634,11 +2650,14 @@ static int otx2_ndo_set_vf_trust(struct net_device *netdev, int vf,
|
|||
pf->vf_configs[vf].trusted = enable;
|
||||
rc = otx2_set_vf_permissions(pf, vf, OTX2_TRUSTED_VF);
|
||||
|
||||
if (rc)
|
||||
if (rc) {
|
||||
pf->vf_configs[vf].trusted = !enable;
|
||||
else
|
||||
} else {
|
||||
netdev_info(pf->netdev, "VF %d is %strusted\n",
|
||||
vf, enable ? "" : "not ");
|
||||
otx2_set_rx_mode(netdev);
|
||||
}
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@ -847,6 +847,7 @@ enum {
|
|||
MLX5E_STATE_DESTROYING,
|
||||
MLX5E_STATE_XDP_TX_ENABLED,
|
||||
MLX5E_STATE_XDP_ACTIVE,
|
||||
MLX5E_STATE_CHANNELS_ACTIVE,
|
||||
};
|
||||
|
||||
struct mlx5e_modify_sq_param {
|
||||
|
|
|
|||
|
|
@ -2586,6 +2586,7 @@ void mlx5e_close_channels(struct mlx5e_channels *chs)
|
|||
{
|
||||
int i;
|
||||
|
||||
ASSERT_RTNL();
|
||||
if (chs->ptp) {
|
||||
mlx5e_ptp_close(chs->ptp);
|
||||
chs->ptp = NULL;
|
||||
|
|
@ -2865,17 +2866,29 @@ void mlx5e_activate_priv_channels(struct mlx5e_priv *priv)
|
|||
if (mlx5e_is_vport_rep(priv))
|
||||
mlx5e_rep_activate_channels(priv);
|
||||
|
||||
set_bit(MLX5E_STATE_CHANNELS_ACTIVE, &priv->state);
|
||||
|
||||
mlx5e_wait_channels_min_rx_wqes(&priv->channels);
|
||||
|
||||
if (priv->rx_res)
|
||||
mlx5e_rx_res_channels_activate(priv->rx_res, &priv->channels);
|
||||
}
|
||||
|
||||
static void mlx5e_cancel_tx_timeout_work(struct mlx5e_priv *priv)
|
||||
{
|
||||
WARN_ON_ONCE(test_bit(MLX5E_STATE_CHANNELS_ACTIVE, &priv->state));
|
||||
if (current_work() != &priv->tx_timeout_work)
|
||||
cancel_work_sync(&priv->tx_timeout_work);
|
||||
}
|
||||
|
||||
void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv)
|
||||
{
|
||||
if (priv->rx_res)
|
||||
mlx5e_rx_res_channels_deactivate(priv->rx_res);
|
||||
|
||||
clear_bit(MLX5E_STATE_CHANNELS_ACTIVE, &priv->state);
|
||||
mlx5e_cancel_tx_timeout_work(priv);
|
||||
|
||||
if (mlx5e_is_vport_rep(priv))
|
||||
mlx5e_rep_deactivate_channels(priv);
|
||||
|
||||
|
|
@ -4617,8 +4630,17 @@ static void mlx5e_tx_timeout_work(struct work_struct *work)
|
|||
struct net_device *netdev = priv->netdev;
|
||||
int i;
|
||||
|
||||
rtnl_lock();
|
||||
mutex_lock(&priv->state_lock);
|
||||
/* Take rtnl_lock to ensure no change in netdev->real_num_tx_queues
|
||||
* through this flow. However, channel closing flows have to wait for
|
||||
* this work to finish while holding rtnl lock too. So either get the
|
||||
* lock or find that channels are being closed for other reason and
|
||||
* this work is not relevant anymore.
|
||||
*/
|
||||
while (!rtnl_trylock()) {
|
||||
if (!test_bit(MLX5E_STATE_CHANNELS_ACTIVE, &priv->state))
|
||||
return;
|
||||
msleep(20);
|
||||
}
|
||||
|
||||
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
|
||||
goto unlock;
|
||||
|
|
@ -4637,7 +4659,6 @@ static void mlx5e_tx_timeout_work(struct work_struct *work)
|
|||
}
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&priv->state_lock);
|
||||
rtnl_unlock();
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@ -1862,7 +1862,7 @@ netxen_tso_check(struct net_device *netdev,
|
|||
|
||||
if (protocol == cpu_to_be16(ETH_P_8021Q)) {
|
||||
|
||||
vh = (struct vlan_ethhdr *)skb->data;
|
||||
vh = skb_vlan_eth_hdr(skb);
|
||||
protocol = vh->h_vlan_encapsulated_proto;
|
||||
flags = FLAGS_VLAN_TAGGED;
|
||||
|
||||
|
|
|
|||
|
|
@ -933,6 +933,7 @@ static void qed_ilt_shadow_free(struct qed_hwfn *p_hwfn)
|
|||
p_dma->virt_addr = NULL;
|
||||
}
|
||||
kfree(p_mngr->ilt_shadow);
|
||||
p_mngr->ilt_shadow = NULL;
|
||||
}
|
||||
|
||||
static int qed_ilt_blk_alloc(struct qed_hwfn *p_hwfn,
|
||||
|
|
|
|||
|
|
@ -318,7 +318,7 @@ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
|
|||
|
||||
if (adapter->flags & QLCNIC_VLAN_FILTERING) {
|
||||
if (protocol == ETH_P_8021Q) {
|
||||
vh = (struct vlan_ethhdr *)skb->data;
|
||||
vh = skb_vlan_eth_hdr(skb);
|
||||
vlan_id = ntohs(vh->h_vlan_TCI);
|
||||
} else if (skb_vlan_tag_present(skb)) {
|
||||
vlan_id = skb_vlan_tag_get(skb);
|
||||
|
|
@ -468,7 +468,7 @@ static int qlcnic_tx_pkt(struct qlcnic_adapter *adapter,
|
|||
u32 producer = tx_ring->producer;
|
||||
|
||||
if (protocol == ETH_P_8021Q) {
|
||||
vh = (struct vlan_ethhdr *)skb->data;
|
||||
vh = skb_vlan_eth_hdr(skb);
|
||||
flags = QLCNIC_FLAGS_VLAN_TAGGED;
|
||||
vlan_tci = ntohs(vh->h_vlan_TCI);
|
||||
protocol = ntohs(vh->h_vlan_encapsulated_proto);
|
||||
|
|
|
|||
|
|
@ -30,6 +30,8 @@
|
|||
|
||||
#define QCASPI_MAX_REGS 0x20
|
||||
|
||||
#define QCASPI_RX_MAX_FRAMES 4
|
||||
|
||||
static const u16 qcaspi_spi_regs[] = {
|
||||
SPI_REG_BFR_SIZE,
|
||||
SPI_REG_WRBUF_SPC_AVA,
|
||||
|
|
@ -252,9 +254,9 @@ qcaspi_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ring,
|
|||
{
|
||||
struct qcaspi *qca = netdev_priv(dev);
|
||||
|
||||
ring->rx_max_pending = 4;
|
||||
ring->rx_max_pending = QCASPI_RX_MAX_FRAMES;
|
||||
ring->tx_max_pending = TX_RING_MAX_LEN;
|
||||
ring->rx_pending = 4;
|
||||
ring->rx_pending = QCASPI_RX_MAX_FRAMES;
|
||||
ring->tx_pending = qca->txr.count;
|
||||
}
|
||||
|
||||
|
|
@ -263,22 +265,21 @@ qcaspi_set_ringparam(struct net_device *dev, struct ethtool_ringparam *ring,
|
|||
struct kernel_ethtool_ringparam *kernel_ring,
|
||||
struct netlink_ext_ack *extack)
|
||||
{
|
||||
const struct net_device_ops *ops = dev->netdev_ops;
|
||||
struct qcaspi *qca = netdev_priv(dev);
|
||||
|
||||
if ((ring->rx_pending) ||
|
||||
if (ring->rx_pending != QCASPI_RX_MAX_FRAMES ||
|
||||
(ring->rx_mini_pending) ||
|
||||
(ring->rx_jumbo_pending))
|
||||
return -EINVAL;
|
||||
|
||||
if (netif_running(dev))
|
||||
ops->ndo_stop(dev);
|
||||
if (qca->spi_thread)
|
||||
kthread_park(qca->spi_thread);
|
||||
|
||||
qca->txr.count = max_t(u32, ring->tx_pending, TX_RING_MIN_LEN);
|
||||
qca->txr.count = min_t(u16, qca->txr.count, TX_RING_MAX_LEN);
|
||||
|
||||
if (netif_running(dev))
|
||||
ops->ndo_open(dev);
|
||||
if (qca->spi_thread)
|
||||
kthread_unpark(qca->spi_thread);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -581,6 +581,18 @@ qcaspi_spi_thread(void *data)
|
|||
netdev_info(qca->net_dev, "SPI thread created\n");
|
||||
while (!kthread_should_stop()) {
|
||||
set_current_state(TASK_INTERRUPTIBLE);
|
||||
if (kthread_should_park()) {
|
||||
netif_tx_disable(qca->net_dev);
|
||||
netif_carrier_off(qca->net_dev);
|
||||
qcaspi_flush_tx_ring(qca);
|
||||
kthread_parkme();
|
||||
if (qca->sync == QCASPI_SYNC_READY) {
|
||||
netif_carrier_on(qca->net_dev);
|
||||
netif_wake_queue(qca->net_dev);
|
||||
}
|
||||
continue;
|
||||
}
|
||||
|
||||
if ((qca->intr_req == qca->intr_svc) &&
|
||||
!qca->txr.skb[qca->txr.head])
|
||||
schedule();
|
||||
|
|
@ -609,11 +621,17 @@ qcaspi_spi_thread(void *data)
|
|||
if (intr_cause & SPI_INT_CPU_ON) {
|
||||
qcaspi_qca7k_sync(qca, QCASPI_EVENT_CPUON);
|
||||
|
||||
/* Frame decoding in progress */
|
||||
if (qca->frm_handle.state != qca->frm_handle.init)
|
||||
qca->net_dev->stats.rx_dropped++;
|
||||
|
||||
qcafrm_fsm_init_spi(&qca->frm_handle);
|
||||
qca->stats.device_reset++;
|
||||
|
||||
/* not synced. */
|
||||
if (qca->sync != QCASPI_SYNC_READY)
|
||||
continue;
|
||||
|
||||
qca->stats.device_reset++;
|
||||
netif_wake_queue(qca->net_dev);
|
||||
netif_carrier_on(qca->net_dev);
|
||||
}
|
||||
|
|
|
|||
|
|
@ -147,7 +147,7 @@ static __be16 efx_tso_check_protocol(struct sk_buff *skb)
|
|||
EFX_WARN_ON_ONCE_PARANOID(((struct ethhdr *)skb->data)->h_proto !=
|
||||
protocol);
|
||||
if (protocol == htons(ETH_P_8021Q)) {
|
||||
struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data;
|
||||
struct vlan_ethhdr *veh = skb_vlan_eth_hdr(skb);
|
||||
|
||||
protocol = veh->h_vlan_encapsulated_proto;
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -256,7 +256,7 @@ config DWMAC_INTEL

config DWMAC_LOONGSON
tristate "Loongson PCI DWMAC support"
default MACH_LOONGSON64
depends on STMMAC_ETH && PCI
depends on (MACH_LOONGSON64 || COMPILE_TEST) && STMMAC_ETH && PCI
depends on COMMON_CLK
help
This selects the LOONGSON PCI bus support for the stmmac driver,
@ -68,17 +68,15 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
|
|||
if (!plat)
|
||||
return -ENOMEM;
|
||||
|
||||
plat->mdio_bus_data = devm_kzalloc(&pdev->dev,
|
||||
sizeof(*plat->mdio_bus_data),
|
||||
GFP_KERNEL);
|
||||
if (!plat->mdio_bus_data)
|
||||
return -ENOMEM;
|
||||
|
||||
plat->mdio_node = of_get_child_by_name(np, "mdio");
|
||||
if (plat->mdio_node) {
|
||||
dev_info(&pdev->dev, "Found MDIO subnode\n");
|
||||
|
||||
plat->mdio_bus_data = devm_kzalloc(&pdev->dev,
|
||||
sizeof(*plat->mdio_bus_data),
|
||||
GFP_KERNEL);
|
||||
if (!plat->mdio_bus_data) {
|
||||
ret = -ENOMEM;
|
||||
goto err_put_node;
|
||||
}
|
||||
plat->mdio_bus_data->needs_reset = true;
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@ -4566,13 +4566,10 @@ dma_map_err:
|
|||
|
||||
static void stmmac_rx_vlan(struct net_device *dev, struct sk_buff *skb)
|
||||
{
|
||||
struct vlan_ethhdr *veth;
|
||||
__be16 vlan_proto;
|
||||
struct vlan_ethhdr *veth = skb_vlan_eth_hdr(skb);
|
||||
__be16 vlan_proto = veth->h_vlan_proto;
|
||||
u16 vlanid;
|
||||
|
||||
veth = (struct vlan_ethhdr *)skb->data;
|
||||
vlan_proto = veth->h_vlan_proto;
|
||||
|
||||
if ((vlan_proto == htons(ETH_P_8021Q) &&
|
||||
dev->features & NETIF_F_HW_VLAN_CTAG_RX) ||
|
||||
(vlan_proto == htons(ETH_P_8021AD) &&
|
||||
|
|
|
|||
|
|
@ -483,7 +483,11 @@ int stmmac_mdio_register(struct net_device *ndev)
|
|||
new_bus->parent = priv->device;
|
||||
|
||||
err = of_mdiobus_register(new_bus, mdio_node);
|
||||
if (err != 0) {
|
||||
if (err == -ENODEV) {
|
||||
err = 0;
|
||||
dev_info(dev, "MDIO bus is disabled\n");
|
||||
goto bus_register_fail;
|
||||
} else if (err) {
|
||||
dev_err_probe(dev, err, "Cannot register the MDIO bus\n");
|
||||
goto bus_register_fail;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -285,8 +285,10 @@ static int __team_options_register(struct team *team,
|
|||
return 0;
|
||||
|
||||
inst_rollback:
|
||||
for (i--; i >= 0; i--)
|
||||
for (i--; i >= 0; i--) {
|
||||
__team_option_inst_del_option(team, dst_opts[i]);
|
||||
list_del(&dst_opts[i]->list);
|
||||
}
|
||||
|
||||
i = option_count;
|
||||
alloc_rollback:
|
||||
|
|
|
|||
|
|
@ -1079,17 +1079,17 @@ static int aqc111_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
|
|||
u16 pkt_count = 0;
|
||||
u64 desc_hdr = 0;
|
||||
u16 vlan_tag = 0;
|
||||
u32 skb_len = 0;
|
||||
u32 skb_len;
|
||||
|
||||
if (!skb)
|
||||
goto err;
|
||||
|
||||
if (skb->len == 0)
|
||||
skb_len = skb->len;
|
||||
if (skb_len < sizeof(desc_hdr))
|
||||
goto err;
|
||||
|
||||
skb_len = skb->len;
|
||||
/* RX Descriptor Header */
|
||||
skb_trim(skb, skb->len - sizeof(desc_hdr));
|
||||
skb_trim(skb, skb_len - sizeof(desc_hdr));
|
||||
desc_hdr = le64_to_cpup((u64 *)skb_tail_pointer(skb));
|
||||
|
||||
/* Check these packets */
|
||||
|
|
|
|||
|
|
@ -1288,6 +1288,7 @@ static const struct usb_device_id products[] = {
|
|||
{QMI_FIXED_INTF(0x19d2, 0x0168, 4)},
|
||||
{QMI_FIXED_INTF(0x19d2, 0x0176, 3)},
|
||||
{QMI_FIXED_INTF(0x19d2, 0x0178, 3)},
|
||||
{QMI_FIXED_INTF(0x19d2, 0x0189, 4)}, /* ZTE MF290 */
|
||||
{QMI_FIXED_INTF(0x19d2, 0x0191, 4)}, /* ZTE EuFi890 */
|
||||
{QMI_FIXED_INTF(0x19d2, 0x0199, 1)}, /* ZTE MF820S */
|
||||
{QMI_FIXED_INTF(0x19d2, 0x0200, 1)},
|
||||
|
|
|
|||
|
|
@ -8288,43 +8288,6 @@ static bool rtl_check_vendor_ok(struct usb_interface *intf)
|
|||
return true;
|
||||
}
|
||||
|
||||
static bool rtl_vendor_mode(struct usb_interface *intf)
|
||||
{
|
||||
struct usb_host_interface *alt = intf->cur_altsetting;
|
||||
struct usb_device *udev;
|
||||
struct usb_host_config *c;
|
||||
int i, num_configs;
|
||||
|
||||
if (alt->desc.bInterfaceClass == USB_CLASS_VENDOR_SPEC)
|
||||
return rtl_check_vendor_ok(intf);
|
||||
|
||||
/* The vendor mode is not always config #1, so to find it out. */
|
||||
udev = interface_to_usbdev(intf);
|
||||
c = udev->config;
|
||||
num_configs = udev->descriptor.bNumConfigurations;
|
||||
if (num_configs < 2)
|
||||
return false;
|
||||
|
||||
for (i = 0; i < num_configs; (i++, c++)) {
|
||||
struct usb_interface_descriptor *desc = NULL;
|
||||
|
||||
if (c->desc.bNumInterfaces > 0)
|
||||
desc = &c->intf_cache[0]->altsetting->desc;
|
||||
else
|
||||
continue;
|
||||
|
||||
if (desc->bInterfaceClass == USB_CLASS_VENDOR_SPEC) {
|
||||
usb_driver_set_configuration(udev, c->desc.bConfigurationValue);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (i == num_configs)
|
||||
dev_err(&intf->dev, "Unexpected Device\n");
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static int rtl8152_pre_reset(struct usb_interface *intf)
|
||||
{
|
||||
struct r8152 *tp = usb_get_intfdata(intf);
|
||||
|
|
@ -9556,9 +9519,8 @@ static int rtl_fw_init(struct r8152 *tp)
|
|||
return 0;
|
||||
}
|
||||
|
||||
u8 rtl8152_get_version(struct usb_interface *intf)
|
||||
static u8 __rtl_get_hw_ver(struct usb_device *udev)
|
||||
{
|
||||
struct usb_device *udev = interface_to_usbdev(intf);
|
||||
u32 ocp_data = 0;
|
||||
__le32 *tmp;
|
||||
u8 version;
|
||||
|
|
@ -9628,10 +9590,19 @@ u8 rtl8152_get_version(struct usb_interface *intf)
|
|||
break;
|
||||
default:
|
||||
version = RTL_VER_UNKNOWN;
|
||||
dev_info(&intf->dev, "Unknown version 0x%04x\n", ocp_data);
|
||||
dev_info(&udev->dev, "Unknown version 0x%04x\n", ocp_data);
|
||||
break;
|
||||
}
|
||||
|
||||
return version;
|
||||
}
|
||||
|
||||
u8 rtl8152_get_version(struct usb_interface *intf)
|
||||
{
|
||||
u8 version;
|
||||
|
||||
version = __rtl_get_hw_ver(interface_to_usbdev(intf));
|
||||
|
||||
dev_dbg(&intf->dev, "Detected version 0x%04x\n", version);
|
||||
|
||||
return version;
|
||||
|
|
@ -9675,7 +9646,10 @@ static int rtl8152_probe(struct usb_interface *intf,
|
|||
if (version == RTL_VER_UNKNOWN)
|
||||
return -ENODEV;
|
||||
|
||||
if (!rtl_vendor_mode(intf))
|
||||
if (intf->cur_altsetting->desc.bInterfaceClass != USB_CLASS_VENDOR_SPEC)
|
||||
return -ENODEV;
|
||||
|
||||
if (!rtl_check_vendor_ok(intf))
|
||||
return -ENODEV;
|
||||
|
||||
usb_reset_device(udev);
|
||||
|
|
@ -9875,43 +9849,37 @@ static void rtl8152_disconnect(struct usb_interface *intf)
|
|||
}
|
||||
}
|
||||
|
||||
#define REALTEK_USB_DEVICE(vend, prod) { \
|
||||
USB_DEVICE_INTERFACE_CLASS(vend, prod, USB_CLASS_VENDOR_SPEC), \
|
||||
}, \
|
||||
{ \
|
||||
USB_DEVICE_AND_INTERFACE_INFO(vend, prod, USB_CLASS_COMM, \
|
||||
USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), \
|
||||
}
|
||||
|
||||
/* table of devices that work with this driver */
|
||||
static const struct usb_device_id rtl8152_table[] = {
|
||||
/* Realtek */
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8050),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8053),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8152),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8155),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8156),
|
||||
{ USB_DEVICE(VENDOR_ID_REALTEK, 0x8050) },
|
||||
{ USB_DEVICE(VENDOR_ID_REALTEK, 0x8053) },
|
||||
{ USB_DEVICE(VENDOR_ID_REALTEK, 0x8152) },
|
||||
{ USB_DEVICE(VENDOR_ID_REALTEK, 0x8153) },
|
||||
{ USB_DEVICE(VENDOR_ID_REALTEK, 0x8155) },
|
||||
{ USB_DEVICE(VENDOR_ID_REALTEK, 0x8156) },
|
||||
|
||||
/* Microsoft */
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07ab),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07c6),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0c5e),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x304f),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3054),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3062),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3069),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3082),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x721e),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff),
|
||||
REALTEK_USB_DEVICE(VENDOR_ID_TPLINK, 0x0601),
|
||||
{ USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07ab) },
|
||||
{ USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07c6) },
|
||||
{ USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927) },
|
||||
{ USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0c5e) },
|
||||
{ USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x304f) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x3054) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x3062) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x3069) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x3082) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x7205) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x720c) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x7214) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x721e) },
|
||||
{ USB_DEVICE(VENDOR_ID_LENOVO, 0xa387) },
|
||||
{ USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041) },
|
||||
{ USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff) },
|
||||
{ USB_DEVICE(VENDOR_ID_TPLINK, 0x0601) },
|
||||
{ USB_DEVICE(VENDOR_ID_DLINK, 0xb301) },
|
||||
{ USB_DEVICE(VENDOR_ID_ASUS, 0x1976) },
|
||||
{}
|
||||
};
|
||||
|
||||
|
|
@ -9931,7 +9899,68 @@ static struct usb_driver rtl8152_driver = {
|
|||
.disable_hub_initiated_lpm = 1,
|
||||
};
|
||||
|
||||
module_usb_driver(rtl8152_driver);
|
||||
static int rtl8152_cfgselector_probe(struct usb_device *udev)
|
||||
{
|
||||
struct usb_host_config *c;
|
||||
int i, num_configs;
|
||||
|
||||
/* Switch the device to vendor mode, if and only if the vendor mode
|
||||
* driver supports it.
|
||||
*/
|
||||
if (__rtl_get_hw_ver(udev) == RTL_VER_UNKNOWN)
|
||||
return 0;
|
||||
|
||||
/* The vendor mode is not always config #1, so to find it out. */
|
||||
c = udev->config;
|
||||
num_configs = udev->descriptor.bNumConfigurations;
|
||||
for (i = 0; i < num_configs; (i++, c++)) {
|
||||
struct usb_interface_descriptor *desc = NULL;
|
||||
|
||||
if (!c->desc.bNumInterfaces)
|
||||
continue;
|
||||
desc = &c->intf_cache[0]->altsetting->desc;
|
||||
if (desc->bInterfaceClass == USB_CLASS_VENDOR_SPEC)
|
||||
break;
|
||||
}
|
||||
|
||||
if (i == num_configs)
|
||||
return -ENODEV;
|
||||
|
||||
if (usb_set_configuration(udev, c->desc.bConfigurationValue)) {
|
||||
dev_err(&udev->dev, "Failed to set configuration %d\n",
|
||||
c->desc.bConfigurationValue);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct usb_device_driver rtl8152_cfgselector_driver = {
|
||||
.name = MODULENAME "-cfgselector",
|
||||
.probe = rtl8152_cfgselector_probe,
|
||||
.id_table = rtl8152_table,
|
||||
.generic_subclass = 1,
|
||||
.supports_autosuspend = 1,
|
||||
};
|
||||
|
||||
static int __init rtl8152_driver_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = usb_register_device_driver(&rtl8152_cfgselector_driver, THIS_MODULE);
|
||||
if (ret)
|
||||
return ret;
|
||||
return usb_register(&rtl8152_driver);
|
||||
}
|
||||
|
||||
static void __exit rtl8152_driver_exit(void)
|
||||
{
|
||||
usb_deregister(&rtl8152_driver);
|
||||
usb_deregister_device_driver(&rtl8152_cfgselector_driver);
|
||||
}
|
||||
|
||||
module_init(rtl8152_driver_init);
|
||||
module_exit(rtl8152_driver_exit);
|
||||
|
||||
MODULE_AUTHOR(DRIVER_AUTHOR);
|
||||
MODULE_DESCRIPTION(DRIVER_DESC);
|
||||
|
|
|
|||
|
|
@@ -834,6 +834,8 @@ static void nvme_queue_auth_work(struct work_struct *work)
}

fail2:
if (chap->status == 0)
chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED;
dev_dbg(ctrl->device, "%s: qid %d send failure2, status %x\n",
__func__, chap->qid, chap->status);
tl = nvme_auth_set_dhchap_failure2_data(ctrl, chap);
@ -1845,16 +1845,18 @@ set_pi:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static void nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id)
|
||||
static int nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id)
|
||||
{
|
||||
struct nvme_ctrl *ctrl = ns->ctrl;
|
||||
int ret;
|
||||
|
||||
if (nvme_init_ms(ns, id))
|
||||
return;
|
||||
ret = nvme_init_ms(ns, id);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ns->features &= ~(NVME_NS_METADATA_SUPPORTED | NVME_NS_EXT_LBAS);
|
||||
if (!ns->ms || !(ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
|
||||
return;
|
||||
return 0;
|
||||
|
||||
if (ctrl->ops->flags & NVME_F_FABRICS) {
|
||||
/*
|
||||
|
|
@ -1863,7 +1865,7 @@ static void nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id)
|
|||
* remap the separate metadata buffer from the block layer.
|
||||
*/
|
||||
if (WARN_ON_ONCE(!(id->flbas & NVME_NS_FLBAS_META_EXT)))
|
||||
return;
|
||||
return 0;
|
||||
|
||||
ns->features |= NVME_NS_EXT_LBAS;
|
||||
|
||||
|
|
@ -1890,6 +1892,7 @@ static void nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id)
|
|||
else
|
||||
ns->features |= NVME_NS_METADATA_SUPPORTED;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
|
||||
|
|
@ -2070,7 +2073,11 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
|
|||
ns->lba_shift = id->lbaf[lbaf].ds;
|
||||
nvme_set_queue_limits(ns->ctrl, ns->queue);
|
||||
|
||||
nvme_configure_metadata(ns, id);
|
||||
ret = nvme_configure_metadata(ns, id);
|
||||
if (ret < 0) {
|
||||
blk_mq_unfreeze_queue(ns->disk->queue);
|
||||
goto out;
|
||||
}
|
||||
nvme_set_chunk_sectors(ns, id);
|
||||
nvme_update_disk_info(ns->disk, ns, id);
|
||||
|
||||
|
|
|
|||
|
|
@ -80,13 +80,49 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
|
|||
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
|
||||
DEV_LS7A_LPC, system_bus_quirk);
|
||||
|
||||
/*
|
||||
* Some Loongson PCIe ports have hardware limitations on their Maximum Read
|
||||
* Request Size. They can't handle anything larger than this. Sane
|
||||
* firmware will set proper MRRS at boot, so we only need no_inc_mrrs for
|
||||
* bridges. However, some MIPS Loongson firmware doesn't set MRRS properly,
|
||||
* so we have to enforce maximum safe MRRS, which is 256 bytes.
|
||||
*/
|
||||
#ifdef CONFIG_MIPS
|
||||
static void loongson_set_min_mrrs_quirk(struct pci_dev *pdev)
|
||||
{
|
||||
struct pci_bus *bus = pdev->bus;
|
||||
struct pci_dev *bridge;
|
||||
static const struct pci_device_id bridge_devids[] = {
|
||||
{ PCI_VDEVICE(LOONGSON, DEV_LS2K_PCIE_PORT0) },
|
||||
{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT0) },
|
||||
{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT1) },
|
||||
{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT2) },
|
||||
{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT3) },
|
||||
{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT4) },
|
||||
{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT5) },
|
||||
{ PCI_VDEVICE(LOONGSON, DEV_LS7A_PCIE_PORT6) },
|
||||
{ 0, },
|
||||
};
|
||||
|
||||
/* look for the matching bridge */
|
||||
while (!pci_is_root_bus(bus)) {
|
||||
bridge = bus->self;
|
||||
bus = bus->parent;
|
||||
|
||||
if (pci_match_id(bridge_devids, bridge)) {
|
||||
if (pcie_get_readrq(pdev) > 256) {
|
||||
pci_info(pdev, "limiting MRRS to 256\n");
|
||||
pcie_set_readrq(pdev, 256);
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, loongson_set_min_mrrs_quirk);
|
||||
#endif
|
||||
|
||||
static void loongson_mrrs_quirk(struct pci_dev *pdev)
|
||||
{
|
||||
/*
|
||||
* Some Loongson PCIe ports have h/w limitations of maximum read
|
||||
* request size. They can't handle anything larger than this. So
|
||||
* force this limit on any devices attached under these ports.
|
||||
*/
|
||||
struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus);
|
||||
|
||||
bridge->no_inc_mrrs = 1;
|
||||
|
|
|
|||
|
|
@ -504,15 +504,12 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
|
|||
if (pass && dev->subordinate) {
|
||||
check_hotplug_bridge(slot, dev);
|
||||
pcibios_resource_survey_bus(dev->subordinate);
|
||||
if (pci_is_root_bus(bus))
|
||||
__pci_bus_size_bridges(dev->subordinate, &add_list);
|
||||
__pci_bus_size_bridges(dev->subordinate,
|
||||
&add_list);
|
||||
}
|
||||
}
|
||||
}
|
||||
if (pci_is_root_bus(bus))
|
||||
__pci_bus_assign_resources(bus, &add_list, NULL);
|
||||
else
|
||||
pci_assign_unassigned_bridge_resources(bus->self);
|
||||
__pci_bus_assign_resources(bus, &add_list, NULL);
|
||||
}
|
||||
|
||||
acpiphp_sanitize_bus(bus);
|
||||
|
|
|
|||
|
|
@@ -102,7 +102,7 @@ static const struct telemetry_core_ops telm_defpltops = {
/**
* telemetry_update_events() - Update telemetry Configuration
* @pss_evtconfig: PSS related config. No change if num_evts = 0.
* @pss_evtconfig: IOSS related config. No change if num_evts = 0.
* @ioss_evtconfig: IOSS related config. No change if num_evts = 0.
*
* This API updates the IOSS & PSS Telemetry configuration. Old config
* is overwritten. Call telemetry_reset_events when logging is over

@@ -176,7 +176,7 @@ EXPORT_SYMBOL_GPL(telemetry_reset_events);
/**
* telemetry_get_eventconfig() - Returns the pss and ioss events enabled
* @pss_evtconfig: Pointer to PSS related configuration.
* @pss_evtconfig: Pointer to IOSS related configuration.
* @ioss_evtconfig: Pointer to IOSS related configuration.
* @pss_len: Number of u32 elements allocated for pss_evtconfig array
* @ioss_len: Number of u32 elements allocated for ioss_evtconfig array
*
@ -744,14 +744,15 @@ error_1:
|
|||
* sdw_ml_sync_bank_switch: Multilink register bank switch
|
||||
*
|
||||
* @bus: SDW bus instance
|
||||
* @multi_link: whether this is a multi-link stream with hardware-based sync
|
||||
*
|
||||
* Caller function should free the buffers on error
|
||||
*/
|
||||
static int sdw_ml_sync_bank_switch(struct sdw_bus *bus)
|
||||
static int sdw_ml_sync_bank_switch(struct sdw_bus *bus, bool multi_link)
|
||||
{
|
||||
unsigned long time_left;
|
||||
|
||||
if (!bus->multi_link)
|
||||
if (!multi_link)
|
||||
return 0;
|
||||
|
||||
/* Wait for completion of transfer */
|
||||
|
|
@ -848,7 +849,7 @@ static int do_bank_switch(struct sdw_stream_runtime *stream)
|
|||
bus->bank_switch_timeout = DEFAULT_BANK_SWITCH_TIMEOUT;
|
||||
|
||||
/* Check if bank switch was successful */
|
||||
ret = sdw_ml_sync_bank_switch(bus);
|
||||
ret = sdw_ml_sync_bank_switch(bus, multi_link);
|
||||
if (ret < 0) {
|
||||
dev_err(bus->dev,
|
||||
"multi link bank switch failed: %d\n", ret);
|
||||
|
|
|
|||
|
|
@ -349,7 +349,7 @@ static s32 gdm_lte_tx_nic_type(struct net_device *dev, struct sk_buff *skb)
|
|||
/* Get ethernet protocol */
|
||||
eth = (struct ethhdr *)skb->data;
|
||||
if (ntohs(eth->h_proto) == ETH_P_8021Q) {
|
||||
vlan_eth = (struct vlan_ethhdr *)skb->data;
|
||||
vlan_eth = skb_vlan_eth_hdr(skb);
|
||||
mac_proto = ntohs(vlan_eth->h_vlan_encapsulated_proto);
|
||||
network_data = skb->data + VLAN_ETH_HLEN;
|
||||
nic_type |= NIC_TYPE_F_VLAN;
|
||||
|
|
@ -435,7 +435,7 @@ static netdev_tx_t gdm_lte_tx(struct sk_buff *skb, struct net_device *dev)
|
|||
* driver based on the NIC mac
|
||||
*/
|
||||
if (nic_type & NIC_TYPE_F_VLAN) {
|
||||
struct vlan_ethhdr *vlan_eth = (struct vlan_ethhdr *)skb->data;
|
||||
struct vlan_ethhdr *vlan_eth = skb_vlan_eth_hdr(skb);
|
||||
|
||||
nic->vlan_id = ntohs(vlan_eth->h_vlan_TCI) & VLAN_VID_MASK;
|
||||
data_buf = skb->data + (VLAN_ETH_HLEN - ETH_HLEN);
|
||||
|
|
|
|||
|
|
@ -424,7 +424,7 @@ error_kill_call:
|
|||
if (call->async) {
|
||||
if (cancel_work_sync(&call->async_work))
|
||||
afs_put_call(call);
|
||||
afs_put_call(call);
|
||||
afs_set_call_complete(call, ret, 0);
|
||||
}
|
||||
|
||||
ac->error = ret;
|
||||
|
|
|
|||
|
|
@ -3400,7 +3400,8 @@ static int try_release_extent_state(struct extent_io_tree *tree,
|
|||
ret = 0;
|
||||
} else {
|
||||
u32 clear_bits = ~(EXTENT_LOCKED | EXTENT_NODATASUM |
|
||||
EXTENT_DELALLOC_NEW | EXTENT_CTLBITS);
|
||||
EXTENT_DELALLOC_NEW | EXTENT_CTLBITS |
|
||||
EXTENT_QGROUP_RESERVED);
|
||||
|
||||
/*
|
||||
* At this point we can safely clear everything except the
|
||||
|
|
|
|||
|
|
@ -2182,6 +2182,15 @@ static noinline int __btrfs_ioctl_snap_create(struct file *file,
|
|||
* are limited to own subvolumes only
|
||||
*/
|
||||
ret = -EPERM;
|
||||
} else if (btrfs_ino(BTRFS_I(src_inode)) != BTRFS_FIRST_FREE_OBJECTID) {
|
||||
/*
|
||||
* Snapshots must be made with the src_inode referring
|
||||
* to the subvolume inode, otherwise the permission
|
||||
* checking above is useless because we may have
|
||||
* permission on a lower directory but not the subvol
|
||||
* itself.
|
||||
*/
|
||||
ret = -EINVAL;
|
||||
} else {
|
||||
ret = btrfs_mksnapshot(&file->f_path, mnt_userns,
|
||||
name, namelen,
|
||||
|
|
|
|||
|
|
@ -544,7 +544,9 @@ void btrfs_remove_ordered_extent(struct btrfs_inode *btrfs_inode,
|
|||
release = entry->disk_num_bytes;
|
||||
else
|
||||
release = entry->num_bytes;
|
||||
btrfs_delalloc_release_metadata(btrfs_inode, release, false);
|
||||
btrfs_delalloc_release_metadata(btrfs_inode, release,
|
||||
test_bit(BTRFS_ORDERED_IOERR,
|
||||
&entry->flags));
|
||||
}
|
||||
|
||||
percpu_counter_add_batch(&fs_info->ordered_bytes, -entry->num_bytes,
|
||||
|
|
|
|||
|
|
@@ -339,9 +339,10 @@ static void ext4_inode_extension_cleanup(struct inode *inode, ssize_t count)
return;
}
/*
* If i_disksize got extended due to writeback of delalloc blocks while
* the DIO was running we could fail to cleanup the orphan list in
* ext4_handle_inode_extension(). Do it now.
* If i_disksize got extended either due to writeback of delalloc
* blocks or extending truncate while the DIO was running we could fail
* to cleanup the orphan list in ext4_handle_inode_extension(). Do it
* now.
*/
if (!list_empty(&EXT4_I(inode)->i_orphan) && inode->i_nlink) {
handle_t *handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);

@@ -376,10 +377,11 @@ static int ext4_dio_write_end_io(struct kiocb *iocb, ssize_t size,
* blocks. But the code in ext4_iomap_alloc() is careful to use
* zeroed/unwritten extents if this is possible; thus we won't leave
* uninitialized blocks in a file even if we didn't succeed in writing
* as much as we intended.
* as much as we intended. Also we can race with truncate or write
* expanding the file so we have to be a bit careful here.
*/
WARN_ON_ONCE(i_size_read(inode) < READ_ONCE(EXT4_I(inode)->i_disksize));
if (pos + size <= READ_ONCE(EXT4_I(inode)->i_disksize))
if (pos + size <= READ_ONCE(EXT4_I(inode)->i_disksize) &&
pos + size <= i_size_read(inode))
return size;
return ext4_handle_inode_extension(inode, pos, size);
}
@ -4110,6 +4110,10 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
|
|||
start = max(start, rounddown(ac->ac_o_ex.fe_logical,
|
||||
(ext4_lblk_t)EXT4_BLOCKS_PER_GROUP(ac->ac_sb)));
|
||||
|
||||
/* avoid unnecessary preallocation that may trigger assertions */
|
||||
if (start + size > EXT_MAX_BLOCKS)
|
||||
size = EXT_MAX_BLOCKS - start;
|
||||
|
||||
/* don't cover already allocated blocks in selected range */
|
||||
if (ar->pleft && start <= ar->lleft) {
|
||||
size -= ar->lleft + 1 - start;
|
||||
|
|
|
|||
|
|
@ -1224,6 +1224,7 @@ void fuse_dax_conn_free(struct fuse_conn *fc)
|
|||
if (fc->dax) {
|
||||
fuse_free_dax_mem_ranges(&fc->dax->free_ranges);
|
||||
kfree(fc->dax);
|
||||
fc->dax = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@ -313,6 +313,9 @@ static const bool has_smb2_data_area[NUMBER_OF_SMB2_COMMANDS] = {
|
|||
char *
|
||||
smb2_get_data_area_len(int *off, int *len, struct smb2_hdr *shdr)
|
||||
{
|
||||
const int max_off = 4096;
|
||||
const int max_len = 128 * 1024;
|
||||
|
||||
*off = 0;
|
||||
*len = 0;
|
||||
|
||||
|
|
@ -384,29 +387,20 @@ smb2_get_data_area_len(int *off, int *len, struct smb2_hdr *shdr)
|
|||
* Invalid length or offset probably means data area is invalid, but
|
||||
* we have little choice but to ignore the data area in this case.
|
||||
*/
|
||||
if (*off > 4096) {
|
||||
cifs_dbg(VFS, "offset %d too large, data area ignored\n", *off);
|
||||
*len = 0;
|
||||
*off = 0;
|
||||
} else if (*off < 0) {
|
||||
cifs_dbg(VFS, "negative offset %d to data invalid ignore data area\n",
|
||||
*off);
|
||||
if (unlikely(*off < 0 || *off > max_off ||
|
||||
*len < 0 || *len > max_len)) {
|
||||
cifs_dbg(VFS, "%s: invalid data area (off=%d len=%d)\n",
|
||||
__func__, *off, *len);
|
||||
*off = 0;
|
||||
*len = 0;
|
||||
} else if (*len < 0) {
|
||||
cifs_dbg(VFS, "negative data length %d invalid, data area ignored\n",
|
||||
*len);
|
||||
*len = 0;
|
||||
} else if (*len > 128 * 1024) {
|
||||
cifs_dbg(VFS, "data area larger than 128K: %d\n", *len);
|
||||
} else if (*off == 0) {
|
||||
*len = 0;
|
||||
}
|
||||
|
||||
/* return pointer to beginning of data area, ie offset from SMB start */
|
||||
if ((*off != 0) && (*len != 0))
|
||||
if (*off > 0 && *len > 0)
|
||||
return (char *)shdr + *off;
|
||||
else
|
||||
return NULL;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/*
|
||||
|
|
|
|||
|
|
@@ -3122,7 +3122,7 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
 	struct kvec close_iov[1];
 	struct smb2_ioctl_rsp *ioctl_rsp;
 	struct reparse_data_buffer *reparse_buf;
-	u32 plen;
+	u32 off, count, len;
 
 	cifs_dbg(FYI, "%s: path: %s\n", __func__, full_path);
 
@@ -3202,16 +3202,22 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
 	 */
 	if (rc == 0) {
+		/* See MS-FSCC 2.3.23 */
+		off = le32_to_cpu(ioctl_rsp->OutputOffset);
+		count = le32_to_cpu(ioctl_rsp->OutputCount);
+		if (check_add_overflow(off, count, &len) ||
+		    len > rsp_iov[1].iov_len) {
+			cifs_tcon_dbg(VFS, "%s: invalid ioctl: off=%d count=%d\n",
+				      __func__, off, count);
+			rc = -EIO;
+			goto query_rp_exit;
+		}
+
-		reparse_buf = (struct reparse_data_buffer *)
-				((char *)ioctl_rsp +
-				 le32_to_cpu(ioctl_rsp->OutputOffset));
-		plen = le32_to_cpu(ioctl_rsp->OutputCount);
-
-		if (plen + le32_to_cpu(ioctl_rsp->OutputOffset) >
-		    rsp_iov[1].iov_len) {
-			cifs_tcon_dbg(FYI, "srv returned invalid ioctl len: %d\n",
-				      plen);
+		reparse_buf = (void *)((u8 *)ioctl_rsp + off);
+		len = sizeof(*reparse_buf);
+		if (count < len ||
+		    count < le16_to_cpu(reparse_buf->ReparseDataLength) + len) {
+			cifs_tcon_dbg(VFS, "%s: invalid ioctl: off=%d count=%d\n",
+				      __func__, off, count);
 			rc = -EIO;
 			goto query_rp_exit;
 		}

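The reparse-tag hunk above bounds the server-supplied OutputOffset/OutputCount pair with check_add_overflow() before the reply is dereferenced. Below is a minimal user-space sketch of that pattern; region_ok() and its constants are invented for the illustration, and __builtin_add_overflow (the GCC/Clang primitive the kernel helper is typically built on) stands in for check_add_overflow().

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: validate an (offset, count) pair read from an
 * untrusted reply against the size of the buffer that actually arrived. */
static bool region_ok(uint32_t off, uint32_t count, size_t buf_len)
{
	uint32_t end;

	/* off + count must not wrap and must stay inside the reply buffer */
	if (__builtin_add_overflow(off, count, &end))
		return false;
	return end <= buf_len;
}

int main(void)
{
	printf("%d\n", region_ok(16, 64, 4096));          /* 1: fits */
	printf("%d\n", region_ok(4000, 200, 4096));       /* 0: past the end */
	printf("%d\n", region_ok(UINT32_MAX, 16, 4096));  /* 0: wraps around */
	return 0;
}
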
@@ -5065,6 +5071,7 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
 	struct smb2_hdr *shdr;
 	unsigned int pdu_length = server->pdu_size;
 	unsigned int buf_size;
+	unsigned int next_cmd;
 	struct mid_q_entry *mid_entry;
 	int next_is_large;
 	char *next_buffer = NULL;
 
@@ -5093,14 +5100,15 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
 	next_is_large = server->large_buf;
 one_more:
 	shdr = (struct smb2_hdr *)buf;
-	if (shdr->NextCommand) {
+	next_cmd = le32_to_cpu(shdr->NextCommand);
+	if (next_cmd) {
+		if (WARN_ON_ONCE(next_cmd > pdu_length))
+			return -1;
 		if (next_is_large)
 			next_buffer = (char *)cifs_buf_get();
 		else
 			next_buffer = (char *)cifs_small_buf_get();
-		memcpy(next_buffer,
-		       buf + le32_to_cpu(shdr->NextCommand),
-		       pdu_length - le32_to_cpu(shdr->NextCommand));
+		memcpy(next_buffer, buf + next_cmd, pdu_length - next_cmd);
 	}
 
 	mid_entry = smb2_find_mid(server, buf);
 
@@ -5124,8 +5132,8 @@ one_more:
 	else
 		ret = cifs_handle_standard(server, mid_entry);
 
-	if (ret == 0 && shdr->NextCommand) {
-		pdu_length -= le32_to_cpu(shdr->NextCommand);
+	if (ret == 0 && next_cmd) {
+		pdu_length -= next_cmd;
 		server->large_buf = next_is_large;
 		if (next_is_large)
 			server->bigbuf = buf = next_buffer;

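The receive_encrypted_standard() hunk above refuses to follow a NextCommand offset that points past the bytes actually received before doing the memcpy(). A hedged user-space sketch of the same idea follows; the record layout and walk_chain() are invented for the example and are not the SMB2 wire format.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: walk a chain of variable-length records where each
 * record stores the offset of the next one. A next offset that escapes
 * the received buffer is rejected before anything is copied or followed. */
struct record_hdr {
	uint32_t next;        /* offset of the next record, 0 = last */
	uint32_t payload_len;
};

static int walk_chain(const uint8_t *buf, size_t total)
{
	size_t off = 0;

	while (off + sizeof(struct record_hdr) <= total) {
		struct record_hdr hdr;

		memcpy(&hdr, buf + off, sizeof(hdr));
		printf("record at %zu, payload %u bytes\n", off, hdr.payload_len);
		if (hdr.next == 0)
			return 0;
		/* reject a next offset that escapes the received buffer */
		if (hdr.next <= off || hdr.next > total - sizeof(struct record_hdr))
			return -1;
		off = hdr.next;
	}
	return -1;
}

int main(void)
{
	uint8_t buf[64] = { 0 };
	struct record_hdr a = { .next = 24, .payload_len = 16 };
	struct record_hdr b = { .next = 0,  .payload_len = 8 };

	memcpy(buf, &a, sizeof(a));
	memcpy(buf + 24, &b, sizeof(b));
	printf("walk: %d\n", walk_chain(buf, sizeof(buf)));
	return 0;
}
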
@@ -1116,7 +1116,7 @@ struct smb2_change_notify_rsp {
 #define SMB2_CREATE_SD_BUFFER "SecD" /* security descriptor */
 #define SMB2_CREATE_DURABLE_HANDLE_REQUEST "DHnQ"
 #define SMB2_CREATE_DURABLE_HANDLE_RECONNECT "DHnC"
-#define SMB2_CREATE_ALLOCATION_SIZE "AISi"
+#define SMB2_CREATE_ALLOCATION_SIZE "AlSi"
 #define SMB2_CREATE_QUERY_MAXIMAL_ACCESS_REQUEST "MxAc"
 #define SMB2_CREATE_TIMEWARP_REQUEST "TWrp"
 #define SMB2_CREATE_QUERY_ON_DISK_ID "QFid"

@@ -7135,6 +7135,7 @@ skip:
 					      smb2_remove_blocked_lock,
 					      argv);
 			if (rc) {
+				kfree(argv);
 				err = -ENOMEM;
 				goto out;
 			}

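The smb2_lock() hunk above frees argv when the call that would have consumed it fails. The sketch below shows that general ownership rule under an assumed helper, register_work(), which is invented for the illustration.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: a setup helper that takes ownership of its argument
 * only when it succeeds; on failure the caller still owns the buffer and
 * must free it, or it leaks. */
struct work {
	void *argv;
};

static int register_work(struct work *w, void *argv)
{
	(void)argv;
	/* pretend an internal step failed before argv was stored anywhere */
	memset(w, 0, sizeof(*w));
	return -1;
}

int main(void)
{
	struct work w;
	void *argv = malloc(128);

	if (!argv)
		return 1;

	if (register_work(&w, argv)) {
		free(argv);   /* failure path: ownership never transferred */
		fprintf(stderr, "register_work failed, argv freed\n");
		return 1;
	}

	free(w.argv);         /* success path: the work object owns argv */
	return 0;
}
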
@@ -70,7 +70,7 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
  */
 static __always_inline int queued_spin_value_unlocked(struct qspinlock lock)
 {
-	return !atomic_read(&lock.val);
+	return !lock.val.counter;
 }
 
 /**

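queued_spin_value_unlocked() now tests the already-loaded copy of the lock word directly instead of issuing atomic_read() on the on-stack copy. The sketch below mirrors that split between one atomic load and plain inspection of the snapshot; struct tiny_lock and its helpers are made up for the example and are not the kernel's qspinlock.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct tiny_lock {
	atomic_int val;
};

struct tiny_lock_snapshot {
	int val;   /* plain copy: no further atomic access is needed or wanted */
};

/* The only atomic access: load the live lock word once. */
static struct tiny_lock_snapshot tiny_lock_load(struct tiny_lock *lock)
{
	struct tiny_lock_snapshot s;

	s.val = atomic_load_explicit(&lock->val, memory_order_relaxed);
	return s;
}

/* Operate on the value we already hold, instead of treating the snapshot
 * as if it were the live lock. */
static bool tiny_lock_value_unlocked(struct tiny_lock_snapshot s)
{
	return s.val == 0;
}

int main(void)
{
	struct tiny_lock lock;
	struct tiny_lock_snapshot snap;

	atomic_init(&lock.val, 0);
	snap = tiny_lock_load(&lock);
	printf("unlocked: %s\n", tiny_lock_value_unlocked(snap) ? "yes" : "no");

	atomic_store(&lock.val, 1);
	snap = tiny_lock_load(&lock);
	printf("unlocked: %s\n", tiny_lock_value_unlocked(snap) ? "yes" : "no");
	return 0;
}
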
@@ -108,7 +108,7 @@ static inline int groups_search(const struct group_info *group_info, kgid_t grp)
  * same context as task->real_cred.
  */
 struct cred {
-	atomic_t	usage;
+	atomic_long_t	usage;
 #ifdef CONFIG_DEBUG_CREDENTIALS
 	atomic_t	subscribers;	/* number of processes subscribed */
 	void		*put_addr;
@@ -228,7 +228,7 @@ static inline bool cap_ambient_invariant_ok(const struct cred *cred)
  */
 static inline struct cred *get_new_cred(struct cred *cred)
 {
-	atomic_inc(&cred->usage);
+	atomic_long_inc(&cred->usage);
 	return cred;
 }
 
@@ -260,7 +260,7 @@ static inline const struct cred *get_cred_rcu(const struct cred *cred)
 	struct cred *nonconst_cred = (struct cred *) cred;
 	if (!cred)
 		return NULL;
-	if (!atomic_inc_not_zero(&nonconst_cred->usage))
+	if (!atomic_long_inc_not_zero(&nonconst_cred->usage))
 		return NULL;
 	validate_creds(cred);
 	nonconst_cred->non_rcu = 0;
@@ -284,7 +284,7 @@ static inline void put_cred(const struct cred *_cred)
 
 	if (cred) {
 		validate_creds(cred);
-		if (atomic_dec_and_test(&(cred)->usage))
+		if (atomic_long_dec_and_test(&(cred)->usage))
 			__put_cred(cred);
 	}
 }

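The cred changes widen the usage counter from atomic_t to atomic_long_t, and the kernel/cred.c hunks further down switch the debug format specifiers to %ld to match. Here is a small user-space sketch of the same get/put pattern using C11's atomic_long; struct obj, obj_get() and obj_put() are invented names for the illustration.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
	atomic_long usage;
	char payload[32];
};

static struct obj *obj_new(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	if (o)
		atomic_init(&o->usage, 1);
	return o;
}

static struct obj *obj_get(struct obj *o)
{
	atomic_fetch_add_explicit(&o->usage, 1, memory_order_relaxed);
	return o;
}

static void obj_put(struct obj *o)
{
	/* fetch_sub returns the previous value; 1 means this was the last ref */
	if (atomic_fetch_sub_explicit(&o->usage, 1, memory_order_acq_rel) == 1)
		free(o);
}

int main(void)
{
	struct obj *o = obj_new();

	if (!o)
		return 1;
	obj_get(o);                                   /* second reference */
	printf("usage=%ld\n", atomic_load(&o->usage)); /* long, hence %ld */
	obj_put(o);
	obj_put(o);                                   /* frees here */
	return 0;
}
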
@@ -62,6 +62,14 @@ static inline struct vlan_ethhdr *vlan_eth_hdr(const struct sk_buff *skb)
 	return (struct vlan_ethhdr *)skb_mac_header(skb);
 }
 
+/* Prefer this version in TX path, instead of
+ * skb_reset_mac_header() + vlan_eth_hdr()
+ */
+static inline struct vlan_ethhdr *skb_vlan_eth_hdr(const struct sk_buff *skb)
+{
+	return (struct vlan_ethhdr *)skb->data;
+}
+
 #define VLAN_PRIO_MASK		0xe000 /* Priority Code Point */
 #define VLAN_PRIO_SHIFT		13
 #define VLAN_CFI_MASK		0x1000 /* Canonical Format Indicator / Drop Eligible Indicator */
@@ -531,7 +539,7 @@ static inline void __vlan_hwaccel_put_tag(struct sk_buff *skb,
  */
 static inline int __vlan_get_tag(const struct sk_buff *skb, u16 *vlan_tci)
 {
-	struct vlan_ethhdr *veth = (struct vlan_ethhdr *)skb->data;
+	struct vlan_ethhdr *veth = skb_vlan_eth_hdr(skb);
 
 	if (!eth_type_vlan(veth->h_vlan_proto))
 		return -EINVAL;
@@ -732,7 +740,7 @@ static inline bool skb_vlan_tagged_multi(struct sk_buff *skb)
 		if (unlikely(!pskb_may_pull(skb, VLAN_ETH_HLEN)))
 			return false;
 
-		veh = (struct vlan_ethhdr *)skb->data;
+		veh = skb_vlan_eth_hdr(skb);
 		protocol = veh->h_vlan_encapsulated_proto;
 	}
 

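skb_vlan_eth_hdr() simply casts skb->data because on the transmit path the MAC header sits at the start of the packet data. The sketch below parses a VLAN tag from a raw frame buffer under that same assumption; the struct, constants and frame contents are local stand-ins rather than the kernel's definitions.

#include <arpa/inet.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define ETH_ALEN	6
#define ETH_P_8021Q	0x8100
#define ETH_P_8021AD	0x88A8

/* Minimal stand-in for struct vlan_ethhdr. */
struct vlan_eth_hdr {
	uint8_t  h_dest[ETH_ALEN];
	uint8_t  h_source[ETH_ALEN];
	uint16_t h_vlan_proto;              /* TPID, network byte order */
	uint16_t h_vlan_TCI;                /* PCP/DEI/VID */
	uint16_t h_vlan_encapsulated_proto;
} __attribute__((packed));

/* Like skb_vlan_eth_hdr(): the frame is assumed to start at buf. */
static const struct vlan_eth_hdr *vlan_eth_hdr_from_buf(const void *buf)
{
	return (const struct vlan_eth_hdr *)buf;
}

static bool frame_is_vlan(const void *buf, size_t len, uint16_t *vid)
{
	const struct vlan_eth_hdr *veth;
	uint16_t proto;

	if (len < sizeof(*veth))
		return false;
	veth = vlan_eth_hdr_from_buf(buf);
	proto = ntohs(veth->h_vlan_proto);
	if (proto != ETH_P_8021Q && proto != ETH_P_8021AD)
		return false;
	*vid = ntohs(veth->h_vlan_TCI) & 0x0fff;   /* 12-bit VLAN ID */
	return true;
}

int main(void)
{
	/* dst, src, TPID 0x8100, TCI with VID 42, inner proto IPv4 */
	uint8_t frame[18] = { 0 };
	uint16_t vid;

	frame[12] = 0x81; frame[13] = 0x00;
	frame[14] = 0x00; frame[15] = 42;
	frame[16] = 0x08; frame[17] = 0x00;

	if (frame_is_vlan(frame, sizeof(frame), &vid))
		printf("VLAN frame, vid=%u\n", vid);
	return 0;
}
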
@@ -231,22 +231,27 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio,
 	if (folio_test_unevictable(folio) || !lrugen->enabled)
 		return false;
 	/*
-	 * There are three common cases for this page:
-	 * 1. If it's hot, e.g., freshly faulted in or previously hot and
-	 *    migrated, add it to the youngest generation.
-	 * 2. If it's cold but can't be evicted immediately, i.e., an anon page
-	 *    not in swapcache or a dirty page pending writeback, add it to the
-	 *    second oldest generation.
-	 * 3. Everything else (clean, cold) is added to the oldest generation.
+	 * There are four common cases for this page:
+	 * 1. If it's hot, i.e., freshly faulted in, add it to the youngest
+	 *    generation, and it's protected over the rest below.
+	 * 2. If it can't be evicted immediately, i.e., a dirty page pending
+	 *    writeback, add it to the second youngest generation.
+	 * 3. If it should be evicted first, e.g., cold and clean from
+	 *    folio_rotate_reclaimable(), add it to the oldest generation.
+	 * 4. Everything else falls between 2 & 3 above and is added to the
+	 *    second oldest generation if it's considered inactive, or the
+	 *    oldest generation otherwise. See lru_gen_is_active().
 	 */
 	if (folio_test_active(folio))
 		seq = lrugen->max_seq;
 	else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
 		 (folio_test_reclaim(folio) &&
 		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
-		seq = lrugen->min_seq[type] + 1;
-	else
+		seq = lrugen->max_seq - 1;
+	else if (reclaiming || lrugen->min_seq[type] + MIN_NR_GENS >= lrugen->max_seq)
 		seq = lrugen->min_seq[type];
+	else
+		seq = lrugen->min_seq[type] + 1;
 
 	gen = lru_gen_from_seq(seq);
 	flags = (gen + 1UL) << LRU_GEN_PGOFF;

@@ -29,6 +29,8 @@
 #define VENDOR_ID_LINKSYS		0x13b1
 #define VENDOR_ID_NVIDIA		0x0955
 #define VENDOR_ID_TPLINK		0x2357
+#define VENDOR_ID_DLINK			0x2001
+#define VENDOR_ID_ASUS			0x0b05
 
 #if IS_REACHABLE(CONFIG_USB_RTL8152)
 extern u8 rtl8152_get_version(struct usb_interface *intf);

@@ -31,17 +31,22 @@ struct prefix_info {
 	__u8			length;
 	__u8			prefix_len;
 
+	union __packed {
+		__u8		flags;
+		struct __packed {
 #if defined(__BIG_ENDIAN_BITFIELD)
-	__u8			onlink : 1,
+			__u8	onlink : 1,
 			autoconf : 1,
 			reserved : 6;
 #elif defined(__LITTLE_ENDIAN_BITFIELD)
-	__u8			reserved : 6,
+			__u8	reserved : 6,
 			autoconf : 1,
 			onlink : 1;
 #else
 #error "Please fix <asm/byteorder.h>"
 #endif
+		};
+	};
 	__be32			valid;
 	__be32			prefered;
 	__be32			reserved2;
@@ -49,6 +54,9 @@ struct prefix_info {
 	struct in6_addr		prefix;
 };
 
+/* rfc4861 4.6.2: IPv6 PIO is 32 bytes in size */
+static_assert(sizeof(struct prefix_info) == 32);
+
 #include <linux/ipv6.h>
 #include <linux/netdevice.h>
 #include <net/if_inet6.h>

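The struct prefix_info change overlays the individual onlink/autoconf bits with a whole flags octet via an anonymous union and pins the structure size with static_assert(). A hedged user-space sketch of the same layout trick follows; GCC/Clang bit-field behaviour and byte-order macros are assumed, and the names are invented for the example.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct pio_flags {
	union {
		uint8_t flags;              /* the whole octet, reported as-is */
		struct {
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
			uint8_t onlink : 1,
				autoconf : 1,
				reserved : 6;
#else
			uint8_t reserved : 6,
				autoconf : 1,
				onlink : 1;
#endif
		};
	};
};

/* the union must not change the size of the byte it overlays */
static_assert(sizeof(struct pio_flags) == 1, "pio_flags must stay one octet");

int main(void)
{
	struct pio_flags f = { 0 };

	f.onlink = 1;
	f.autoconf = 1;
	printf("raw flags octet: 0x%02x\n", f.flags);
	return 0;
}
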
@@ -22,10 +22,6 @@
 #define IF_RS_SENT	0x10
 #define IF_READY	0x80000000
 
-/* prefix flags */
-#define IF_PREFIX_ONLINK	0x01
-#define IF_PREFIX_AUTOCONF	0x02
-
 enum {
 	INET6_IFADDR_STATE_PREDAD,
 	INET6_IFADDR_STATE_DAD,

@@ -101,17 +101,17 @@ static void put_cred_rcu(struct rcu_head *rcu)
 
 #ifdef CONFIG_DEBUG_CREDENTIALS
 	if (cred->magic != CRED_MAGIC_DEAD ||
-	    atomic_read(&cred->usage) != 0 ||
+	    atomic_long_read(&cred->usage) != 0 ||
 	    read_cred_subscribers(cred) != 0)
 		panic("CRED: put_cred_rcu() sees %p with"
-		      " mag %x, put %p, usage %d, subscr %d\n",
+		      " mag %x, put %p, usage %ld, subscr %d\n",
 		      cred, cred->magic, cred->put_addr,
-		      atomic_read(&cred->usage),
+		      atomic_long_read(&cred->usage),
 		      read_cred_subscribers(cred));
 #else
-	if (atomic_read(&cred->usage) != 0)
-		panic("CRED: put_cred_rcu() sees %p with usage %d\n",
-		      cred, atomic_read(&cred->usage));
+	if (atomic_long_read(&cred->usage) != 0)
+		panic("CRED: put_cred_rcu() sees %p with usage %ld\n",
+		      cred, atomic_long_read(&cred->usage));
 #endif
 
 	security_cred_free(cred);
@@ -136,11 +136,11 @@ static void put_cred_rcu(struct rcu_head *rcu)
  */
 void __put_cred(struct cred *cred)
 {
-	kdebug("__put_cred(%p{%d,%d})", cred,
-	       atomic_read(&cred->usage),
+	kdebug("__put_cred(%p{%ld,%d})", cred,
+	       atomic_long_read(&cred->usage),
 	       read_cred_subscribers(cred));
 
-	BUG_ON(atomic_read(&cred->usage) != 0);
+	BUG_ON(atomic_long_read(&cred->usage) != 0);
 #ifdef CONFIG_DEBUG_CREDENTIALS
 	BUG_ON(read_cred_subscribers(cred) != 0);
 	cred->magic = CRED_MAGIC_DEAD;
@@ -163,8 +163,8 @@ void exit_creds(struct task_struct *tsk)
 {
 	struct cred *cred;
 
-	kdebug("exit_creds(%u,%p,%p,{%d,%d})", tsk->pid, tsk->real_cred, tsk->cred,
-	       atomic_read(&tsk->cred->usage),
+	kdebug("exit_creds(%u,%p,%p,{%ld,%d})", tsk->pid, tsk->real_cred, tsk->cred,
+	       atomic_long_read(&tsk->cred->usage),
 	       read_cred_subscribers(tsk->cred));
 
 	cred = (struct cred *) tsk->real_cred;
@@ -224,7 +224,7 @@ struct cred *cred_alloc_blank(void)
 	if (!new)
 		return NULL;
 
-	atomic_set(&new->usage, 1);
+	atomic_long_set(&new->usage, 1);
 #ifdef CONFIG_DEBUG_CREDENTIALS
 	new->magic = CRED_MAGIC;
 #endif
@@ -270,7 +270,7 @@ struct cred *prepare_creds(void)
 	memcpy(new, old, sizeof(struct cred));
 
 	new->non_rcu = 0;
-	atomic_set(&new->usage, 1);
+	atomic_long_set(&new->usage, 1);
 	set_cred_subscribers(new, 0);
 	get_group_info(new->group_info);
 	get_uid(new->user);
@@ -358,8 +358,8 @@ int copy_creds(struct task_struct *p, unsigned long clone_flags)
 		p->real_cred = get_cred(p->cred);
 		get_cred(p->cred);
 		alter_cred_subscribers(p->cred, 2);
-		kdebug("share_creds(%p{%d,%d})",
-		       p->cred, atomic_read(&p->cred->usage),
+		kdebug("share_creds(%p{%ld,%d})",
+		       p->cred, atomic_long_read(&p->cred->usage),
 		       read_cred_subscribers(p->cred));
 		inc_rlimit_ucounts(task_ucounts(p), UCOUNT_RLIMIT_NPROC, 1);
 		return 0;
@@ -452,8 +452,8 @@ int commit_creds(struct cred *new)
 	struct task_struct *task = current;
 	const struct cred *old = task->real_cred;
 
-	kdebug("commit_creds(%p{%d,%d})", new,
-	       atomic_read(&new->usage),
+	kdebug("commit_creds(%p{%ld,%d})", new,
+	       atomic_long_read(&new->usage),
 	       read_cred_subscribers(new));
 
 	BUG_ON(task->cred != old);
@@ -462,7 +462,7 @@ int commit_creds(struct cred *new)
 	validate_creds(old);
 	validate_creds(new);
 #endif
-	BUG_ON(atomic_read(&new->usage) < 1);
+	BUG_ON(atomic_long_read(&new->usage) < 1);
 
 	get_cred(new); /* we will require a ref for the subj creds too */
 
@@ -536,14 +536,14 @@ EXPORT_SYMBOL(commit_creds);
  */
 void abort_creds(struct cred *new)
 {
-	kdebug("abort_creds(%p{%d,%d})", new,
-	       atomic_read(&new->usage),
+	kdebug("abort_creds(%p{%ld,%d})", new,
+	       atomic_long_read(&new->usage),
 	       read_cred_subscribers(new));
 
 #ifdef CONFIG_DEBUG_CREDENTIALS
 	BUG_ON(read_cred_subscribers(new) != 0);
 #endif
-	BUG_ON(atomic_read(&new->usage) < 1);
+	BUG_ON(atomic_long_read(&new->usage) < 1);
 	put_cred(new);
 }
 EXPORT_SYMBOL(abort_creds);
@@ -559,8 +559,8 @@ const struct cred *override_creds(const struct cred *new)
 {
 	const struct cred *old = current->cred;
 
-	kdebug("override_creds(%p{%d,%d})", new,
-	       atomic_read(&new->usage),
+	kdebug("override_creds(%p{%ld,%d})", new,
+	       atomic_long_read(&new->usage),
 	       read_cred_subscribers(new));
 
 	validate_creds(old);
@@ -583,8 +583,8 @@ const struct cred *override_creds(const struct cred *new)
 	trace_android_rvh_override_creds(current, new);
 	alter_cred_subscribers(old, -1);
 
-	kdebug("override_creds() = %p{%d,%d}", old,
-	       atomic_read(&old->usage),
+	kdebug("override_creds() = %p{%ld,%d}", old,
+	       atomic_long_read(&old->usage),
 	       read_cred_subscribers(old));
 	return old;
 }
@@ -601,8 +601,8 @@ void revert_creds(const struct cred *old)
 {
 	const struct cred *override = current->cred;
 
-	kdebug("revert_creds(%p{%d,%d})", old,
-	       atomic_read(&old->usage),
+	kdebug("revert_creds(%p{%ld,%d})", old,
+	       atomic_long_read(&old->usage),
 	       read_cred_subscribers(old));
 
 	validate_creds(old);
@@ -735,7 +735,7 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
 
 	*new = *old;
 	new->non_rcu = 0;
-	atomic_set(&new->usage, 1);
+	atomic_long_set(&new->usage, 1);
 	set_cred_subscribers(new, 0);
 	get_uid(new->user);
 	get_user_ns(new->user_ns);
@@ -849,8 +849,8 @@ static void dump_invalid_creds(const struct cred *cred, const char *label,
 	       cred == tsk->cred ? "[eff]" : "");
 	printk(KERN_ERR "CRED: ->magic=%x, put_addr=%p\n",
 	       cred->magic, cred->put_addr);
-	printk(KERN_ERR "CRED: ->usage=%d, subscr=%d\n",
-	       atomic_read(&cred->usage),
+	printk(KERN_ERR "CRED: ->usage=%ld, subscr=%d\n",
+	       atomic_long_read(&cred->usage),
 	       read_cred_subscribers(cred));
 	printk(KERN_ERR "CRED: ->*uid = { %d,%d,%d,%d }\n",
 	       from_kuid_munged(&init_user_ns, cred->uid),
@@ -922,9 +922,9 @@ EXPORT_SYMBOL(__validate_process_creds);
  */
 void validate_creds_for_do_exit(struct task_struct *tsk)
 {
-	kdebug("validate_creds_for_do_exit(%p,%p{%d,%d})",
+	kdebug("validate_creds_for_do_exit(%p,%p{%ld,%d})",
 	       tsk->real_cred, tsk->cred,
-	       atomic_read(&tsk->cred->usage),
+	       atomic_long_read(&tsk->cred->usage),
 	       read_cred_subscribers(tsk->cred));
 
 	__validate_process_creds(tsk, __FILE__, __LINE__);

@@ -1945,6 +1945,16 @@ static bool perf_event_validate_size(struct perf_event *event)
 				   group_leader->nr_siblings + 1) > 16*1024)
 		return false;
 
+	/*
+	 * When creating a new group leader, group_leader->ctx is initialized
+	 * after the size has been validated, but we cannot safely use
+	 * for_each_sibling_event() until group_leader->ctx is set. A new group
+	 * leader cannot have any siblings yet, so we can safely skip checking
+	 * the non-existent siblings.
+	 */
+	if (event == group_leader)
+		return true;
+
 	for_each_sibling_event(sibling, group_leader) {
 		if (__perf_event_read_size(sibling->attr.read_format,
 					   group_leader->nr_siblings + 1) > 16*1024)

@@ -672,6 +672,9 @@ static int rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
 	unsigned long cnt2, top2, bottom2, msb2;
 	u64 val;
 
+	/* Any interruptions in this function should cause a failure */
+	cnt = local_read(&t->cnt);
+
 	/* The cmpxchg always fails if it interrupted an update */
 	if (!__rb_time_read(t, &val, &cnt2))
 		return false;
@@ -679,17 +682,18 @@ static int rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
 	if (val != expect)
 		return false;
 
-	cnt = local_read(&t->cnt);
 	if ((cnt & 3) != cnt2)
 		return false;
 
 	cnt2 = cnt + 1;
 
 	rb_time_split(val, &top, &bottom, &msb);
+	msb = rb_time_val_cnt(msb, cnt);
 	top = rb_time_val_cnt(top, cnt);
 	bottom = rb_time_val_cnt(bottom, cnt);
 
 	rb_time_split(set, &top2, &bottom2, &msb2);
+	msb2 = rb_time_val_cnt(msb2, cnt);
 	top2 = rb_time_val_cnt(top2, cnt2);
 	bottom2 = rb_time_val_cnt(bottom2, cnt2);
 
@@ -1772,6 +1776,8 @@ static void rb_free_cpu_buffer(struct ring_buffer_per_cpu *cpu_buffer)
 		free_buffer_page(bpage);
 	}
 
+	free_page((unsigned long)cpu_buffer->free_page);
+
 	kfree(cpu_buffer);
 }
 
@@ -2400,7 +2406,7 @@ rb_iter_head_event(struct ring_buffer_iter *iter)
 	 */
 	barrier();
 
-	if ((iter->head + length) > commit || length > BUF_MAX_DATA_SIZE)
+	if ((iter->head + length) > commit || length > BUF_PAGE_SIZE)
 		/* Writer corrupted the read? */
 		goto reset;
 
@@ -3575,7 +3581,10 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
 	 * absolute timestamp.
 	 * Don't bother if this is the start of a new page (w == 0).
 	 */
-	if (unlikely(!a_ok || !b_ok || (info->before != info->after && w))) {
+	if (!w) {
+		/* Use the sub-buffer timestamp */
+		info->delta = 0;
+	} else if (unlikely(!a_ok || !b_ok || info->before != info->after)) {
 		info->add_timestamp |= RB_ADD_STAMP_FORCE | RB_ADD_STAMP_EXTEND;
 		info->length += RB_LEN_TIME_EXTEND;
 	} else {
@@ -3598,26 +3607,19 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
 
 	/* See if we shot pass the end of this buffer page */
 	if (unlikely(write > BUF_PAGE_SIZE)) {
-		/* before and after may now different, fix it up*/
-		b_ok = rb_time_read(&cpu_buffer->before_stamp, &info->before);
-		a_ok = rb_time_read(&cpu_buffer->write_stamp, &info->after);
-		if (a_ok && b_ok && info->before != info->after)
-			(void)rb_time_cmpxchg(&cpu_buffer->before_stamp,
-					      info->before, info->after);
-		if (a_ok && b_ok)
-			check_buffer(cpu_buffer, info, CHECK_FULL_PAGE);
+		check_buffer(cpu_buffer, info, CHECK_FULL_PAGE);
 		return rb_move_tail(cpu_buffer, tail, info);
 	}
 
 	if (likely(tail == w)) {
-		u64 save_before;
-		bool s_ok;
-
 		/* Nothing interrupted us between A and C */
 /*D*/		rb_time_set(&cpu_buffer->write_stamp, info->ts);
-		barrier();
-/*E*/		s_ok = rb_time_read(&cpu_buffer->before_stamp, &save_before);
-		RB_WARN_ON(cpu_buffer, !s_ok);
+		/*
+		 * If something came in between C and D, the write stamp
+		 * may now not be in sync. But that's fine as the before_stamp
+		 * will be different and then next event will just be forced
+		 * to use an absolute timestamp.
+		 */
 		if (likely(!(info->add_timestamp &
 			     (RB_ADD_STAMP_FORCE | RB_ADD_STAMP_ABSOLUTE))))
 			/* This did not interrupt any time update */
@@ -3625,24 +3627,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
 		else
 			/* Just use full timestamp for interrupting event */
 			info->delta = info->ts;
-		barrier();
-		check_buffer(cpu_buffer, info, tail);
-		if (unlikely(info->ts != save_before)) {
-			/* SLOW PATH - Interrupted between C and E */
-
-			a_ok = rb_time_read(&cpu_buffer->write_stamp, &info->after);
-			RB_WARN_ON(cpu_buffer, !a_ok);
-
-			/* Write stamp must only go forward */
-			if (save_before > info->after) {
-				/*
-				 * We do not care about the result, only that
-				 * it gets updated atomically.
-				 */
-				(void)rb_time_cmpxchg(&cpu_buffer->write_stamp,
-						      info->after, save_before);
-			}
-		}
 	} else {
 		u64 ts;
 		/* SLOW PATH - Interrupted between A and C */
@@ -3733,6 +3718,8 @@ rb_reserve_next_event(struct trace_buffer *buffer,
 	if (ring_buffer_time_stamp_abs(cpu_buffer->buffer)) {
 		add_ts_default = RB_ADD_STAMP_ABSOLUTE;
 		info.length += RB_LEN_TIME_EXTEND;
+		if (info.length > BUF_MAX_DATA_SIZE)
+			goto out_fail;
 	} else {
 		add_ts_default = RB_ADD_STAMP_NONE;
 	}
@@ -5315,7 +5302,8 @@ ring_buffer_read_prepare(struct trace_buffer *buffer, int cpu, gfp_t flags)
 	if (!iter)
 		return NULL;
 
-	iter->event = kmalloc(BUF_MAX_DATA_SIZE, flags);
+	/* Holds the entire event: data and meta data */
+	iter->event = kmalloc(BUF_PAGE_SIZE, flags);
 	if (!iter->event) {
 		kfree(iter);
 		return NULL;

@@ -6254,7 +6254,7 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
 	if (!tr->array_buffer.buffer)
 		return 0;
 
-	/* Do not allow tracing while resizng ring buffer */
+	/* Do not allow tracing while resizing ring buffer */
 	tracing_stop_tr(tr);
 
 	ret = ring_buffer_resize(tr->array_buffer.buffer, size, cpu);
@@ -6262,7 +6262,7 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
 		goto out_start;
 
 #ifdef CONFIG_TRACER_MAX_TRACE
-	if (!tr->current_trace->use_max_tr)
+	if (!tr->allocated_snapshot)
 		goto out;
 
 	ret = ring_buffer_resize(tr->max_buffer.buffer, size, cpu);

mm/shmem.c
@@ -1029,7 +1029,24 @@ whole_folios:
 			}
 			VM_BUG_ON_FOLIO(folio_test_writeback(folio),
 					folio);
-			truncate_inode_folio(mapping, folio);
+
+			if (!folio_test_large(folio)) {
+				truncate_inode_folio(mapping, folio);
+			} else if (truncate_inode_partial_folio(folio, lstart, lend)) {
+				/*
+				 * If we split a page, reset the loop so
+				 * that we pick up the new sub pages.
+				 * Otherwise the THP was entirely
+				 * dropped or the target range was
+				 * zeroed, so just continue the loop as
+				 * is.
+				 */
+				if (!folio_test_large(folio)) {
+					folio_unlock(folio);
+					index = start;
+					break;
+				}
+			}
 		}
 		index = folio->index + folio_nr_pages(folio) - 1;
 		folio_unlock(folio);

@@ -4881,7 +4881,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* protected */
-	if (tier > tier_idx) {
+	if (tier > tier_idx || refs == BIT(LRU_REFS_WIDTH)) {
 		int hist = lru_hist_from_seq(lrugen->min_seq[type]);
 
 		gen = folio_inc_gen(lruvec, folio, false);

@@ -291,10 +291,10 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 	 * 1. For pages accessed through page tables, hotter pages pushed out
 	 *    hot pages which refaulted immediately.
 	 * 2. For pages accessed multiple times through file descriptors,
-	 *    numbers of accesses might have been out of the range.
+	 *    they would have been protected by sort_folio().
 	 */
-	if (lru_gen_in_fault() || refs == BIT(LRU_REFS_WIDTH)) {
-		folio_set_workingset(folio);
+	if (lru_gen_in_fault() || refs >= BIT(LRU_REFS_WIDTH) - 1) {
+		set_mask_bits(&folio->flags, 0, LRU_REFS_MASK | BIT(PG_workingset));
 		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + type, delta);
 	}
 unlock: