Merge 05a59d7979 ("Merge git://git.kernel.org:/pub/scm/linux/kernel/git/netdev/net") into android-mainline

Steps on the way to 5.12-rc3

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Id7fa18f2ce8728123f1720b7ce9a9843a4f08dac

commit d892ad24b8
197 changed files with 1753 additions and 1525 deletions

@@ -1988,7 +1988,7 @@ netif_carrier.
 If use_carrier is 0, then the MII monitor will first query the
 device's (via ioctl) MII registers and check the link state. If that
 request fails (not just that it returns carrier down), then the MII
-monitor will make an ethtool ETHOOL_GLINK request to attempt to obtain
+monitor will make an ethtool ETHTOOL_GLINK request to attempt to obtain
 the same information. If both methods fail (i.e., the driver either
 does not support or had some error in processing both the MII register
 and ethtool requests), then the MII monitor will assume the link is

@@ -142,73 +142,13 @@ Please send incremental versions on top of what has been merged in order to fix
 the patches the way they would look like if your latest patch series was to be
 merged.
 
-How can I tell what patches are queued up for backporting to the various stable releases?
------------------------------------------------------------------------------------------
-Normally Greg Kroah-Hartman collects stable commits himself, but for
-networking, Dave collects up patches he deems critical for the
-networking subsystem, and then hands them off to Greg.
-
-There is a patchworks queue that you can see here:
-
-  https://patchwork.kernel.org/bundle/netdev/stable/?state=*
-
-It contains the patches which Dave has selected, but not yet handed off
-to Greg. If Greg already has the patch, then it will be here:
-
-  https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
-
-A quick way to find whether the patch is in this stable-queue is to
-simply clone the repo, and then git grep the mainline commit ID, e.g.
-::
-
-  stable-queue$ git grep -l 284041ef21fdf2e
-  releases/3.0.84/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  releases/3.4.51/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  releases/3.9.8/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  stable/stable-queue$
-
-I see a network patch and I think it should be backported to stable. Should I request it via stable@vger.kernel.org like the references in the kernel's Documentation/process/stable-kernel-rules.rst file say?
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-No, not for networking. Check the stable queues as per above first
-to see if it is already queued. If not, then send a mail to netdev,
-listing the upstream commit ID and why you think it should be a stable
-candidate.
-
-Before you jump to go do the above, do note that the normal stable rules
-in :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
-still apply. So you need to explicitly indicate why it is a critical
-fix and exactly what users are impacted. In addition, you need to
-convince yourself that you *really* think it has been overlooked,
-vs. having been considered and rejected.
-
-Generally speaking, the longer it has had a chance to "soak" in
-mainline, the better the odds that it is an OK candidate for stable. So
-scrambling to request a commit be added the day after it appears should
-be avoided.
-
-I have created a network patch and I think it should be backported to stable. Should I add a Cc: stable@vger.kernel.org like the references in the kernel's Documentation/ directory say?
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-No. See above answer. In short, if you think it really belongs in
-stable, then ensure you write a decent commit log that describes who
-gets impacted by the bug fix and how it manifests itself, and when the
-bug was introduced. If you do that properly, then the commit will get
-handled appropriately and most likely get put in the patchworks stable
-queue if it really warrants it.
-
-If you think there is some valid information relating to it being in
-stable that does *not* belong in the commit log, then use the three dash
-marker line as described in
-:ref:`Documentation/process/submitting-patches.rst <the_canonical_patch_format>`
-to temporarily embed that information into the patch that you send.
-
-Are all networking bug fixes backported to all stable releases?
+Are there special rules regarding stable submissions on netdev?
 ---------------------------------------------------------------
-Due to capacity, Dave could only take care of the backports for the
-last two stable releases. For earlier stable releases, each stable
-branch maintainer is supposed to take care of them. If you find any
-patch is missing from an earlier stable branch, please notify
-stable@vger.kernel.org with either a commit ID or a formal patch
-backported, and CC Dave and other relevant networking developers.
+While it used to be the case that netdev submissions were not supposed
+to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer
+the case today. Please follow the standard stable rules in
+:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`,
+and make sure you include appropriate Fixes tags!
 
 Is the comment style convention different for the networking content?
 ---------------------------------------------------------------------

@@ -35,12 +35,6 @@ Rules on what kind of patches are accepted, and which ones are not, into the
 Procedure for submitting patches to the -stable tree
 ----------------------------------------------------
 
-- If the patch covers files in net/ or drivers/net please follow netdev stable
-  submission guidelines as described in
-  :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
-  after first checking the stable networking queue at
-  https://patchwork.kernel.org/bundle/netdev/stable/?state=*
-  to ensure the requested patch is not already queued up.
 - Security patches should not be handled (solely) by the -stable review
   process but should follow the procedures in
   :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.

@@ -250,11 +250,6 @@ should also read
 :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
 in addition to this file.
 
-Note, however, that some subsystem maintainers want to come to their own
-conclusions on which patches should go to the stable trees. The networking
-maintainer, in particular, would rather not see individual developers
-adding lines like the above to their patches.
-
 If changes affect userland-kernel interfaces, please send the MAN-PAGES
 maintainer (as listed in the MAINTAINERS file) a man-pages patch, or at
 least a notification of the change, so that some information makes its way

@@ -10723,7 +10723,8 @@ F: drivers/net/ethernet/marvell/mvpp2/
 
 MARVELL MWIFIEX WIRELESS DRIVER
 M: Amitkumar Karwar <amitkarwar@gmail.com>
-M: Ganapathi Bhat <ganapathi.bhat@nxp.com>
+M: Ganapathi Bhat <ganapathi017@gmail.com>
+M: Sharvari Harisangam <sharvari.harisangam@nxp.com>
 M: Xinming Hu <huxinming820@gmail.com>
 L: linux-wireless@vger.kernel.org
 S: Maintained

@@ -14,6 +14,7 @@
 
 #include <asm/addrspace.h>
 #include <asm/unaligned.h>
+#include <asm-generic/vmlinux.lds.h>
 
 /*
  * These two variables specify the free mem region

@@ -120,6 +121,13 @@ void decompress_kernel(unsigned long boot_heap_start)
 	/* last four bytes is always image size in little endian */
 	image_size = get_unaligned_le32((void *)&__image_end - 4);
 
+	/* The device tree's address must be properly aligned */
+	image_size = ALIGN(image_size, STRUCT_ALIGNMENT);
+
+	puts("Copy device tree to address ");
+	puthex(VMLINUX_LOAD_ADDRESS_ULL + image_size);
+	puts("\n");
+
 	/* copy dtb to where the booted kernel will expect it */
 	memcpy((void *)VMLINUX_LOAD_ADDRESS_ULL + image_size,
 	       __appended_dtb, dtb_size);

@@ -12,8 +12,8 @@ AFLAGS_chacha-core.o += -O2 # needed to fill branch delay slots
 obj-$(CONFIG_CRYPTO_POLY1305_MIPS) += poly1305-mips.o
 poly1305-mips-y := poly1305-core.o poly1305-glue.o
 
-perlasm-flavour-$(CONFIG_CPU_MIPS32) := o32
-perlasm-flavour-$(CONFIG_CPU_MIPS64) := 64
+perlasm-flavour-$(CONFIG_32BIT) := o32
+perlasm-flavour-$(CONFIG_64BIT) := 64
 
 quiet_cmd_perlasm = PERLASM $@
       cmd_perlasm = $(PERL) $(<) $(perlasm-flavour-y) $(@)

@@ -24,8 +24,11 @@ extern void (*board_ebase_setup)(void);
 extern void (*board_cache_error_setup)(void);
 
 extern int register_nmi_notifier(struct notifier_block *nb);
+extern void reserve_exception_space(phys_addr_t addr, unsigned long size);
 extern char except_vec_nmi[];
 
+#define VECTORSPACING 0x100	/* for EI/VI mode */
+
 #define nmi_notifier(fn, pri) \
 ({ \
 	static struct notifier_block fn##_nb = { \

@@ -26,6 +26,7 @@
 #include <asm/elf.h>
 #include <asm/pgtable-bits.h>
 #include <asm/spram.h>
+#include <asm/traps.h>
 #include <linux/uaccess.h>
 
 #include "fpu-probe.h"

@@ -1628,6 +1629,7 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
 		c->cputype = CPU_BMIPS3300;
 		__cpu_name[cpu] = "Broadcom BMIPS3300";
 		set_elf_platform(cpu, "bmips3300");
+		reserve_exception_space(0x400, VECTORSPACING * 64);
 		break;
 	case PRID_IMP_BMIPS43XX: {
 		int rev = c->processor_id & PRID_REV_MASK;

@@ -1638,6 +1640,7 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
 			__cpu_name[cpu] = "Broadcom BMIPS4380";
 			set_elf_platform(cpu, "bmips4380");
 			c->options |= MIPS_CPU_RIXI;
+			reserve_exception_space(0x400, VECTORSPACING * 64);
 		} else {
 			c->cputype = CPU_BMIPS4350;
 			__cpu_name[cpu] = "Broadcom BMIPS4350";

@@ -1654,6 +1657,7 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
 		__cpu_name[cpu] = "Broadcom BMIPS5000";
 		set_elf_platform(cpu, "bmips5000");
 		c->options |= MIPS_CPU_ULRI | MIPS_CPU_RIXI;
+		reserve_exception_space(0x1000, VECTORSPACING * 64);
 		break;
 	}
 	}

@@ -2133,6 +2137,8 @@ void cpu_probe(void)
 	if (cpu == 0)
 		__ua_limit = ~((1ull << cpu_vmbits) - 1);
 #endif
+
+	reserve_exception_space(0, 0x1000);
 }
 
 void cpu_report(void)

@@ -21,6 +21,7 @@
 #include <asm/fpu.h>
 #include <asm/mipsregs.h>
 #include <asm/elf.h>
+#include <asm/traps.h>
 
 #include "fpu-probe.h"
 

@@ -158,6 +159,8 @@ void cpu_probe(void)
 		cpu_set_fpu_opts(c);
 	else
 		cpu_set_nofpu_opts(c);
+
+	reserve_exception_space(0, 0x400);
 }
 
 void cpu_report(void)

@@ -2009,13 +2009,16 @@ void __noreturn nmi_exception_handler(struct pt_regs *regs)
 	nmi_exit();
 }
 
-#define VECTORSPACING 0x100 /* for EI/VI mode */
-
 unsigned long ebase;
 EXPORT_SYMBOL_GPL(ebase);
 unsigned long exception_handlers[32];
 unsigned long vi_handlers[64];
 
+void reserve_exception_space(phys_addr_t addr, unsigned long size)
+{
+	memblock_reserve(addr, size);
+}
+
 void __init *set_except_vector(int n, void *addr)
 {
 	unsigned long handler = (unsigned long) addr;

@@ -2367,10 +2370,7 @@ void __init trap_init(void)
 
 	if (!cpu_has_mips_r2_r6) {
 		ebase = CAC_BASE;
-		ebase_pa = virt_to_phys((void *)ebase);
 		vec_size = 0x400;
-
-		memblock_reserve(ebase_pa, vec_size);
 	} else {
 		if (cpu_has_veic || cpu_has_vint)
 			vec_size = 0x200 + VECTORSPACING*64;

@@ -145,6 +145,7 @@ SECTIONS
 	}
 
 #ifdef CONFIG_MIPS_ELF_APPENDED_DTB
+	STRUCT_ALIGN();
 	.appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
-		*(.appended_dtb)
+		KEEP(*(.appended_dtb))

@@ -172,6 +173,11 @@ SECTIONS
 #endif
 
 #ifdef CONFIG_MIPS_RAW_APPENDED_DTB
+	.fill : {
+		FILL(0);
+		BYTE(0);
+		. = ALIGN(8);
+	}
 	__appended_dtb = .;
 	/* leave space for appended DTB */
 	. += 0x100000;

@ -93,7 +93,7 @@ CONFIG_NETDEVICES=y
|
|||
CONFIG_NET_ETHERNET=y
|
||||
CONFIG_MII=m
|
||||
CONFIG_SUNLANCE=m
|
||||
CONFIG_HAPPYMEAL=m
|
||||
CONFIG_HAPPYMEAL=y
|
||||
CONFIG_SUNGEM=m
|
||||
CONFIG_SUNVNET=m
|
||||
CONFIG_LDMVSW=m
|
||||
|
|
@ -234,9 +234,7 @@ CONFIG_CRYPTO_TWOFISH=m
|
|||
CONFIG_CRC16=m
|
||||
CONFIG_LIBCRC32C=m
|
||||
CONFIG_VCC=m
|
||||
CONFIG_ATA=y
|
||||
CONFIG_PATA_CMD64X=y
|
||||
CONFIG_HAPPYMEAL=y
|
||||
CONFIG_IP_PNP=y
|
||||
CONFIG_IP_PNP_DHCP=y
|
||||
CONFIG_DEVTMPFS=y
|
||||
|
|
|
|||
|
|
@ -8,7 +8,6 @@
|
|||
|
||||
#include <asm/ptrace.h>
|
||||
#include <asm/processor.h>
|
||||
#include <asm/extable_64.h>
|
||||
#include <asm/spitfire.h>
|
||||
#include <asm/adi.h>
|
||||
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef __ASM_EXTABLE64_H
|
||||
#define __ASM_EXTABLE64_H
|
||||
#ifndef __ASM_EXTABLE_H
|
||||
#define __ASM_EXTABLE_H
|
||||
/*
|
||||
* The exception table consists of pairs of addresses: the first is the
|
||||
* address of an instruction that is allowed to fault, and the second is
|
||||
|
|
@ -50,16 +50,12 @@ struct thread_struct {
|
|||
unsigned long fsr;
|
||||
unsigned long fpqdepth;
|
||||
struct fpq fpqueue[16];
|
||||
unsigned long flags;
|
||||
mm_segment_t current_ds;
|
||||
};
|
||||
|
||||
#define SPARC_FLAG_KTHREAD 0x1 /* task is a kernel thread */
|
||||
#define SPARC_FLAG_UNALIGNED 0x2 /* is allowed to do unaligned accesses */
|
||||
|
||||
#define INIT_THREAD { \
|
||||
.flags = SPARC_FLAG_KTHREAD, \
|
||||
.current_ds = KERNEL_DS, \
|
||||
.kregs = (struct pt_regs *)(init_stack+THREAD_SIZE)-1 \
|
||||
}
|
||||
|
||||
/* Do necessary setup to start up a newly executed thread. */
|
||||
|
|
|
|||
|
|
@ -118,6 +118,7 @@ struct thread_info {
|
|||
.task = &tsk, \
|
||||
.current_ds = ASI_P, \
|
||||
.preempt_count = INIT_PREEMPT_COUNT, \
|
||||
.kregs = (struct pt_regs *)(init_stack+THREAD_SIZE)-1 \
|
||||
}
|
||||
|
||||
/* how to get the thread information struct from C */
|
||||
|
|
|
|||
|
|
@ -1,6 +1,9 @@
|
|||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef ___ASM_SPARC_UACCESS_H
|
||||
#define ___ASM_SPARC_UACCESS_H
|
||||
|
||||
#include <asm/extable.h>
|
||||
|
||||
#if defined(__sparc__) && defined(__arch64__)
|
||||
#include <asm/uaccess_64.h>
|
||||
#else
|
||||
|
|
|
|||
|
|
@ -13,9 +13,6 @@
|
|||
|
||||
#include <asm/processor.h>
|
||||
|
||||
#define ARCH_HAS_SORT_EXTABLE
|
||||
#define ARCH_HAS_SEARCH_EXTABLE
|
||||
|
||||
/* Sparc is not segmented, however we need to be able to fool access_ok()
|
||||
* when doing system calls from kernel mode legitimately.
|
||||
*
|
||||
|
|
@ -40,36 +37,6 @@
|
|||
#define __access_ok(addr, size) (__user_ok((addr) & get_fs().seg, (size)))
|
||||
#define access_ok(addr, size) __access_ok((unsigned long)(addr), size)
|
||||
|
||||
/*
|
||||
* The exception table consists of pairs of addresses: the first is the
|
||||
* address of an instruction that is allowed to fault, and the second is
|
||||
* the address at which the program should continue. No registers are
|
||||
* modified, so it is entirely up to the continuation code to figure out
|
||||
* what to do.
|
||||
*
|
||||
* All the routines below use bits of fixup code that are out of line
|
||||
* with the main instruction path. This means when everything is well,
|
||||
* we don't even have to jump over them. Further, they do not intrude
|
||||
* on our cache or tlb entries.
|
||||
*
|
||||
* There is a special way how to put a range of potentially faulting
|
||||
* insns (like twenty ldd/std's with now intervening other instructions)
|
||||
* You specify address of first in insn and 0 in fixup and in the next
|
||||
* exception_table_entry you specify last potentially faulting insn + 1
|
||||
* and in fixup the routine which should handle the fault.
|
||||
* That fixup code will get
|
||||
* (faulting_insn_address - first_insn_in_the_range_address)/4
|
||||
* in %g2 (ie. index of the faulting instruction in the range).
|
||||
*/
|
||||
|
||||
struct exception_table_entry
|
||||
{
|
||||
unsigned long insn, fixup;
|
||||
};
|
||||
|
||||
/* Returns 0 if exception not found and fixup otherwise. */
|
||||
unsigned long search_extables_range(unsigned long addr, unsigned long *g2);
|
||||
|
||||
/* Uh, these should become the main single-value transfer routines..
|
||||
* They automatically use the right size if we just have the right
|
||||
* pointer type..
|
||||
|
|
@ -252,12 +219,7 @@ static inline unsigned long __clear_user(void __user *addr, unsigned long size)
|
|||
unsigned long ret;
|
||||
|
||||
__asm__ __volatile__ (
|
||||
".section __ex_table,#alloc\n\t"
|
||||
".align 4\n\t"
|
||||
".word 1f,3\n\t"
|
||||
".previous\n\t"
|
||||
"mov %2, %%o1\n"
|
||||
"1:\n\t"
|
||||
"call __bzero\n\t"
|
||||
" mov %1, %%o0\n\t"
|
||||
"mov %%o0, %0\n"
|
||||
|
|
|
|||
|
|
@ -10,7 +10,6 @@
|
|||
#include <linux/string.h>
|
||||
#include <asm/asi.h>
|
||||
#include <asm/spitfire.h>
|
||||
#include <asm/extable_64.h>
|
||||
|
||||
#include <asm/processor.h>
|
||||
|
||||
|
|
|
|||
|
|
@ -515,7 +515,7 @@ continue_boot:
|
|||
|
||||
/* I want a kernel stack NOW! */
|
||||
set init_thread_union, %g1
|
||||
set (THREAD_SIZE - STACKFRAME_SZ), %g2
|
||||
set (THREAD_SIZE - STACKFRAME_SZ - TRACEREG_SZ), %g2
|
||||
add %g1, %g2, %sp
|
||||
mov 0, %fp /* And for good luck */
|
||||
|
||||
|
|
|
|||
|
|
@ -706,7 +706,7 @@ tlb_fixup_done:
|
|||
wr %g0, ASI_P, %asi
|
||||
mov 1, %g1
|
||||
sllx %g1, THREAD_SHIFT, %g1
|
||||
sub %g1, (STACKFRAME_SZ + STACK_BIAS), %g1
|
||||
sub %g1, (STACKFRAME_SZ + STACK_BIAS + TRACEREG_SZ), %g1
|
||||
add %g6, %g1, %sp
|
||||
|
||||
/* Set per-cpu pointer initially to zero, this makes
|
||||
|
|
|
|||
|
|
@ -216,16 +216,6 @@ void flush_thread(void)
|
|||
clear_thread_flag(TIF_USEDFPU);
|
||||
#endif
|
||||
}
|
||||
|
||||
/* This task is no longer a kernel thread. */
|
||||
if (current->thread.flags & SPARC_FLAG_KTHREAD) {
|
||||
current->thread.flags &= ~SPARC_FLAG_KTHREAD;
|
||||
|
||||
/* We must fixup kregs as well. */
|
||||
/* XXX This was not fixed for ti for a while, worked. Unused? */
|
||||
current->thread.kregs = (struct pt_regs *)
|
||||
(task_stack_page(current) + (THREAD_SIZE - TRACEREG_SZ));
|
||||
}
|
||||
}
|
||||
|
||||
static inline struct sparc_stackf __user *
|
||||
|
|
@ -313,7 +303,6 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
|
|||
extern int nwindows;
|
||||
unsigned long psr;
|
||||
memset(new_stack, 0, STACKFRAME_SZ + TRACEREG_SZ);
|
||||
p->thread.flags |= SPARC_FLAG_KTHREAD;
|
||||
p->thread.current_ds = KERNEL_DS;
|
||||
ti->kpc = (((unsigned long) ret_from_kernel_thread) - 0x8);
|
||||
childregs->u_regs[UREG_G1] = sp; /* function */
|
||||
|
|
@ -325,7 +314,6 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
|
|||
}
|
||||
memcpy(new_stack, (char *)regs - STACKFRAME_SZ, STACKFRAME_SZ + TRACEREG_SZ);
|
||||
childregs->u_regs[UREG_FP] = sp;
|
||||
p->thread.flags &= ~SPARC_FLAG_KTHREAD;
|
||||
p->thread.current_ds = USER_DS;
|
||||
ti->kpc = (((unsigned long) ret_from_fork) - 0x8);
|
||||
ti->kpsr = current->thread.fork_kpsr | PSR_PIL;
|
||||
|
|
|
|||
|
|
@ -266,7 +266,6 @@ static __init void leon_patch(void)
|
|||
}
|
||||
|
||||
struct tt_entry *sparc_ttable;
|
||||
static struct pt_regs fake_swapper_regs;
|
||||
|
||||
/* Called from head_32.S - before we have setup anything
|
||||
* in the kernel. Be very careful with what you do here.
|
||||
|
|
@ -363,8 +362,6 @@ void __init setup_arch(char **cmdline_p)
|
|||
(*(linux_dbvec->teach_debugger))();
|
||||
}
|
||||
|
||||
init_task.thread.kregs = &fake_swapper_regs;
|
||||
|
||||
/* Run-time patch instructions to match the cpu model */
|
||||
per_cpu_patch();
|
||||
|
||||
|
|
|
|||
|
|
@ -165,8 +165,6 @@ extern int root_mountflags;
|
|||
|
||||
char reboot_command[COMMAND_LINE_SIZE];
|
||||
|
||||
static struct pt_regs fake_swapper_regs = { { 0, }, 0, 0, 0, 0 };
|
||||
|
||||
static void __init per_cpu_patch(void)
|
||||
{
|
||||
struct cpuid_patch_entry *p;
|
||||
|
|
@ -661,8 +659,6 @@ void __init setup_arch(char **cmdline_p)
|
|||
rd_image_start = ram_flags & RAMDISK_IMAGE_START_MASK;
|
||||
#endif
|
||||
|
||||
task_thread_info(&init_task)->kregs = &fake_swapper_regs;
|
||||
|
||||
#ifdef CONFIG_IP_PNP
|
||||
if (!ic_set_manually) {
|
||||
phandle chosen = prom_finddevice("/chosen");
|
||||
|
|
|
|||
|
|
@ -275,14 +275,13 @@ bool is_no_fault_exception(struct pt_regs *regs)
|
|||
asi = (regs->tstate >> 24); /* saved %asi */
|
||||
else
|
||||
asi = (insn >> 5); /* immediate asi */
|
||||
if ((asi & 0xf2) == ASI_PNF) {
|
||||
if (insn & 0x1000000) { /* op3[5:4]=3 */
|
||||
handle_ldf_stq(insn, regs);
|
||||
return true;
|
||||
} else if (insn & 0x200000) { /* op3[2], stores */
|
||||
if ((asi & 0xf6) == ASI_PNF) {
|
||||
if (insn & 0x200000) /* op3[2], stores */
|
||||
return false;
|
||||
}
|
||||
handle_ld_nf(insn, regs);
|
||||
if (insn & 0x1000000) /* op3[5:4]=3 (fp) */
|
||||
handle_ldf_stq(insn, regs);
|
||||
else
|
||||
handle_ld_nf(insn, regs);
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -16,6 +16,7 @@
|
|||
#include <linux/uaccess.h>
|
||||
#include <linux/smp.h>
|
||||
#include <linux/perf_event.h>
|
||||
#include <linux/extable.h>
|
||||
|
||||
#include <asm/setup.h>
|
||||
|
||||
|
|
@ -213,10 +214,10 @@ static inline int ok_for_kernel(unsigned int insn)
|
|||
|
||||
static void kernel_mna_trap_fault(struct pt_regs *regs, unsigned int insn)
|
||||
{
|
||||
unsigned long g2 = regs->u_regs [UREG_G2];
|
||||
unsigned long fixup = search_extables_range(regs->pc, &g2);
|
||||
const struct exception_table_entry *entry;
|
||||
|
||||
if (!fixup) {
|
||||
entry = search_exception_tables(regs->pc);
|
||||
if (!entry) {
|
||||
unsigned long address = compute_effective_address(regs, insn);
|
||||
if(address < PAGE_SIZE) {
|
||||
printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference in mna handler");
|
||||
|
|
@ -232,9 +233,8 @@ static void kernel_mna_trap_fault(struct pt_regs *regs, unsigned int insn)
|
|||
die_if_kernel("Oops", regs);
|
||||
/* Not reached */
|
||||
}
|
||||
regs->pc = fixup;
|
||||
regs->pc = entry->fixup;
|
||||
regs->npc = regs->pc + 4;
|
||||
regs->u_regs [UREG_G2] = g2;
|
||||
}
|
||||
|
||||
asmlinkage void kernel_unaligned_trap(struct pt_regs *regs, unsigned int insn)
|
||||
|
|
@ -274,103 +274,9 @@ asmlinkage void kernel_unaligned_trap(struct pt_regs *regs, unsigned int insn)
|
|||
}
|
||||
}
|
||||
|
||||
static inline int ok_for_user(struct pt_regs *regs, unsigned int insn,
|
||||
enum direction dir)
|
||||
{
|
||||
unsigned int reg;
|
||||
int size = ((insn >> 19) & 3) == 3 ? 8 : 4;
|
||||
|
||||
if ((regs->pc | regs->npc) & 3)
|
||||
return 0;
|
||||
|
||||
/* Must access_ok() in all the necessary places. */
|
||||
#define WINREG_ADDR(regnum) \
|
||||
((void __user *)(((unsigned long *)regs->u_regs[UREG_FP])+(regnum)))
|
||||
|
||||
reg = (insn >> 25) & 0x1f;
|
||||
if (reg >= 16) {
|
||||
if (!access_ok(WINREG_ADDR(reg - 16), size))
|
||||
return -EFAULT;
|
||||
}
|
||||
reg = (insn >> 14) & 0x1f;
|
||||
if (reg >= 16) {
|
||||
if (!access_ok(WINREG_ADDR(reg - 16), size))
|
||||
return -EFAULT;
|
||||
}
|
||||
if (!(insn & 0x2000)) {
|
||||
reg = (insn & 0x1f);
|
||||
if (reg >= 16) {
|
||||
if (!access_ok(WINREG_ADDR(reg - 16), size))
|
||||
return -EFAULT;
|
||||
}
|
||||
}
|
||||
#undef WINREG_ADDR
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void user_mna_trap_fault(struct pt_regs *regs, unsigned int insn)
|
||||
asmlinkage void user_unaligned_trap(struct pt_regs *regs, unsigned int insn)
|
||||
{
|
||||
send_sig_fault(SIGBUS, BUS_ADRALN,
|
||||
(void __user *)safe_compute_effective_address(regs, insn),
|
||||
0, current);
|
||||
}
|
||||
|
||||
asmlinkage void user_unaligned_trap(struct pt_regs *regs, unsigned int insn)
|
||||
{
|
||||
enum direction dir;
|
||||
|
||||
if(!(current->thread.flags & SPARC_FLAG_UNALIGNED) ||
|
||||
(((insn >> 30) & 3) != 3))
|
||||
goto kill_user;
|
||||
dir = decode_direction(insn);
|
||||
if(!ok_for_user(regs, insn, dir)) {
|
||||
goto kill_user;
|
||||
} else {
|
||||
int err, size = decode_access_size(insn);
|
||||
unsigned long addr;
|
||||
|
||||
if(floating_point_load_or_store_p(insn)) {
|
||||
printk("User FPU load/store unaligned unsupported.\n");
|
||||
goto kill_user;
|
||||
}
|
||||
|
||||
addr = compute_effective_address(regs, insn);
|
||||
perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
|
||||
switch(dir) {
|
||||
case load:
|
||||
err = do_int_load(fetch_reg_addr(((insn>>25)&0x1f),
|
||||
regs),
|
||||
size, (unsigned long *) addr,
|
||||
decode_signedness(insn));
|
||||
break;
|
||||
|
||||
case store:
|
||||
err = do_int_store(((insn>>25)&0x1f), size,
|
||||
(unsigned long *) addr, regs);
|
||||
break;
|
||||
|
||||
case both:
|
||||
/*
|
||||
* This was supported in 2.4. However, we question
|
||||
* the value of SWAP instruction across word boundaries.
|
||||
*/
|
||||
printk("Unaligned SWAP unsupported.\n");
|
||||
err = -EFAULT;
|
||||
break;
|
||||
|
||||
default:
|
||||
unaligned_panic("Impossible user unaligned trap.");
|
||||
goto out;
|
||||
}
|
||||
if (err)
|
||||
goto kill_user;
|
||||
else
|
||||
advance(regs);
|
||||
goto out;
|
||||
}
|
||||
|
||||
kill_user:
|
||||
user_mna_trap_fault(regs, insn);
|
||||
out:
|
||||
;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -155,13 +155,6 @@ cpout: retl ! get outta here
|
|||
.text; \
|
||||
.align 4
|
||||
|
||||
#define EXT(start,end) \
|
||||
.section __ex_table,ALLOC; \
|
||||
.align 4; \
|
||||
.word start, 0, end, cc_fault; \
|
||||
.text; \
|
||||
.align 4
|
||||
|
||||
/* This aligned version executes typically in 8.5 superscalar cycles, this
|
||||
* is the best I can do. I say 8.5 because the final add will pair with
|
||||
* the next ldd in the main unrolled loop. Thus the pipe is always full.
|
||||
|
|
@ -169,20 +162,20 @@ cpout: retl ! get outta here
|
|||
* please check the fixup code below as well.
|
||||
*/
|
||||
#define CSUMCOPY_BIGCHUNK_ALIGNED(src, dst, sum, off, t0, t1, t2, t3, t4, t5, t6, t7) \
|
||||
ldd [src + off + 0x00], t0; \
|
||||
ldd [src + off + 0x08], t2; \
|
||||
EX(ldd [src + off + 0x00], t0); \
|
||||
EX(ldd [src + off + 0x08], t2); \
|
||||
addxcc t0, sum, sum; \
|
||||
ldd [src + off + 0x10], t4; \
|
||||
EX(ldd [src + off + 0x10], t4); \
|
||||
addxcc t1, sum, sum; \
|
||||
ldd [src + off + 0x18], t6; \
|
||||
EX(ldd [src + off + 0x18], t6); \
|
||||
addxcc t2, sum, sum; \
|
||||
std t0, [dst + off + 0x00]; \
|
||||
EX(std t0, [dst + off + 0x00]); \
|
||||
addxcc t3, sum, sum; \
|
||||
std t2, [dst + off + 0x08]; \
|
||||
EX(std t2, [dst + off + 0x08]); \
|
||||
addxcc t4, sum, sum; \
|
||||
std t4, [dst + off + 0x10]; \
|
||||
EX(std t4, [dst + off + 0x10]); \
|
||||
addxcc t5, sum, sum; \
|
||||
std t6, [dst + off + 0x18]; \
|
||||
EX(std t6, [dst + off + 0x18]); \
|
||||
addxcc t6, sum, sum; \
|
||||
addxcc t7, sum, sum;
|
||||
|
||||
|
|
@ -191,39 +184,39 @@ cpout: retl ! get outta here
|
|||
* Viking MXCC into streaming mode. Ho hum...
|
||||
*/
|
||||
#define CSUMCOPY_BIGCHUNK(src, dst, sum, off, t0, t1, t2, t3, t4, t5, t6, t7) \
|
||||
ldd [src + off + 0x00], t0; \
|
||||
ldd [src + off + 0x08], t2; \
|
||||
ldd [src + off + 0x10], t4; \
|
||||
ldd [src + off + 0x18], t6; \
|
||||
st t0, [dst + off + 0x00]; \
|
||||
EX(ldd [src + off + 0x00], t0); \
|
||||
EX(ldd [src + off + 0x08], t2); \
|
||||
EX(ldd [src + off + 0x10], t4); \
|
||||
EX(ldd [src + off + 0x18], t6); \
|
||||
EX(st t0, [dst + off + 0x00]); \
|
||||
addxcc t0, sum, sum; \
|
||||
st t1, [dst + off + 0x04]; \
|
||||
EX(st t1, [dst + off + 0x04]); \
|
||||
addxcc t1, sum, sum; \
|
||||
st t2, [dst + off + 0x08]; \
|
||||
EX(st t2, [dst + off + 0x08]); \
|
||||
addxcc t2, sum, sum; \
|
||||
st t3, [dst + off + 0x0c]; \
|
||||
EX(st t3, [dst + off + 0x0c]); \
|
||||
addxcc t3, sum, sum; \
|
||||
st t4, [dst + off + 0x10]; \
|
||||
EX(st t4, [dst + off + 0x10]); \
|
||||
addxcc t4, sum, sum; \
|
||||
st t5, [dst + off + 0x14]; \
|
||||
EX(st t5, [dst + off + 0x14]); \
|
||||
addxcc t5, sum, sum; \
|
||||
st t6, [dst + off + 0x18]; \
|
||||
EX(st t6, [dst + off + 0x18]); \
|
||||
addxcc t6, sum, sum; \
|
||||
st t7, [dst + off + 0x1c]; \
|
||||
EX(st t7, [dst + off + 0x1c]); \
|
||||
addxcc t7, sum, sum;
|
||||
|
||||
/* Yuck, 6 superscalar cycles... */
|
||||
#define CSUMCOPY_LASTCHUNK(src, dst, sum, off, t0, t1, t2, t3) \
|
||||
ldd [src - off - 0x08], t0; \
|
||||
ldd [src - off - 0x00], t2; \
|
||||
EX(ldd [src - off - 0x08], t0); \
|
||||
EX(ldd [src - off - 0x00], t2); \
|
||||
addxcc t0, sum, sum; \
|
||||
st t0, [dst - off - 0x08]; \
|
||||
EX(st t0, [dst - off - 0x08]); \
|
||||
addxcc t1, sum, sum; \
|
||||
st t1, [dst - off - 0x04]; \
|
||||
EX(st t1, [dst - off - 0x04]); \
|
||||
addxcc t2, sum, sum; \
|
||||
st t2, [dst - off - 0x00]; \
|
||||
EX(st t2, [dst - off - 0x00]); \
|
||||
addxcc t3, sum, sum; \
|
||||
st t3, [dst - off + 0x04];
|
||||
EX(st t3, [dst - off + 0x04]);
|
||||
|
||||
/* Handle the end cruft code out of band for better cache patterns. */
|
||||
cc_end_cruft:
|
||||
|
|
@ -331,7 +324,6 @@ __csum_partial_copy_sparc_generic:
|
|||
CSUMCOPY_BIGCHUNK(%o0,%o1,%g7,0x20,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
|
||||
CSUMCOPY_BIGCHUNK(%o0,%o1,%g7,0x40,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
|
||||
CSUMCOPY_BIGCHUNK(%o0,%o1,%g7,0x60,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
|
||||
10: EXT(5b, 10b) ! note for exception handling
|
||||
sub %g1, 128, %g1 ! detract from length
|
||||
addx %g0, %g7, %g7 ! add in last carry bit
|
||||
andcc %g1, 0xffffff80, %g0 ! more to csum?
|
||||
|
|
@ -356,8 +348,7 @@ cctbl: CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x68,%g2,%g3,%g4,%g5)
|
|||
CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x28,%g2,%g3,%g4,%g5)
|
||||
CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x18,%g2,%g3,%g4,%g5)
|
||||
CSUMCOPY_LASTCHUNK(%o0,%o1,%g7,0x08,%g2,%g3,%g4,%g5)
|
||||
12: EXT(cctbl, 12b) ! note for exception table handling
|
||||
addx %g0, %g7, %g7
|
||||
12: addx %g0, %g7, %g7
|
||||
andcc %o3, 0xf, %g0 ! check for low bits set
|
||||
ccte: bne cc_end_cruft ! something left, handle it out of band
|
||||
andcc %o3, 8, %g0 ! begin checks for that code
|
||||
|
|
@ -367,7 +358,6 @@ ccdbl: CSUMCOPY_BIGCHUNK_ALIGNED(%o0,%o1,%g7,0x00,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o
|
|||
CSUMCOPY_BIGCHUNK_ALIGNED(%o0,%o1,%g7,0x20,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
|
||||
CSUMCOPY_BIGCHUNK_ALIGNED(%o0,%o1,%g7,0x40,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
|
||||
CSUMCOPY_BIGCHUNK_ALIGNED(%o0,%o1,%g7,0x60,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
|
||||
11: EXT(ccdbl, 11b) ! note for exception table handling
|
||||
sub %g1, 128, %g1 ! detract from length
|
||||
addx %g0, %g7, %g7 ! add in last carry bit
|
||||
andcc %g1, 0xffffff80, %g0 ! more to csum?
|
||||
|
|
|
|||
|
|
@ -21,98 +21,134 @@
|
|||
/* Work around cpp -rob */
|
||||
#define ALLOC #alloc
|
||||
#define EXECINSTR #execinstr
|
||||
|
||||
#define EX_ENTRY(l1, l2) \
|
||||
.section __ex_table,ALLOC; \
|
||||
.align 4; \
|
||||
.word l1, l2; \
|
||||
.text;
|
||||
|
||||
#define EX(x,y,a,b) \
|
||||
98: x,y; \
|
||||
.section .fixup,ALLOC,EXECINSTR; \
|
||||
.align 4; \
|
||||
99: ba fixupretl; \
|
||||
a, b, %g3; \
|
||||
.section __ex_table,ALLOC; \
|
||||
.align 4; \
|
||||
.word 98b, 99b; \
|
||||
.text; \
|
||||
.align 4
|
||||
99: retl; \
|
||||
a, b, %o0; \
|
||||
EX_ENTRY(98b, 99b)
|
||||
|
||||
#define EX2(x,y,c,d,e,a,b) \
|
||||
98: x,y; \
|
||||
.section .fixup,ALLOC,EXECINSTR; \
|
||||
.align 4; \
|
||||
99: c, d, e; \
|
||||
ba fixupretl; \
|
||||
a, b, %g3; \
|
||||
.section __ex_table,ALLOC; \
|
||||
.align 4; \
|
||||
.word 98b, 99b; \
|
||||
.text; \
|
||||
.align 4
|
||||
retl; \
|
||||
a, b, %o0; \
|
||||
EX_ENTRY(98b, 99b)
|
||||
|
||||
#define EXO2(x,y) \
|
||||
98: x, y; \
|
||||
.section __ex_table,ALLOC; \
|
||||
.align 4; \
|
||||
.word 98b, 97f; \
|
||||
.text; \
|
||||
.align 4
|
||||
EX_ENTRY(98b, 97f)
|
||||
|
||||
#define EXT(start,end,handler) \
|
||||
.section __ex_table,ALLOC; \
|
||||
.align 4; \
|
||||
.word start, 0, end, handler; \
|
||||
.text; \
|
||||
.align 4
|
||||
#define LD(insn, src, offset, reg, label) \
|
||||
98: insn [%src + (offset)], %reg; \
|
||||
.section .fixup,ALLOC,EXECINSTR; \
|
||||
99: ba label; \
|
||||
mov offset, %g5; \
|
||||
EX_ENTRY(98b, 99b)
|
||||
|
||||
/* Please do not change following macros unless you change logic used
|
||||
* in .fixup at the end of this file as well
|
||||
*/
|
||||
#define ST(insn, dst, offset, reg, label) \
|
||||
98: insn %reg, [%dst + (offset)]; \
|
||||
.section .fixup,ALLOC,EXECINSTR; \
|
||||
99: ba label; \
|
||||
mov offset, %g5; \
|
||||
EX_ENTRY(98b, 99b)
|
||||
|
||||
/* Both these macros have to start with exactly the same insn */
|
||||
/* left: g7 + (g1 % 128) - offset */
|
||||
#define MOVE_BIGCHUNK(src, dst, offset, t0, t1, t2, t3, t4, t5, t6, t7) \
|
||||
ldd [%src + (offset) + 0x00], %t0; \
|
||||
ldd [%src + (offset) + 0x08], %t2; \
|
||||
ldd [%src + (offset) + 0x10], %t4; \
|
||||
ldd [%src + (offset) + 0x18], %t6; \
|
||||
st %t0, [%dst + (offset) + 0x00]; \
|
||||
st %t1, [%dst + (offset) + 0x04]; \
|
||||
st %t2, [%dst + (offset) + 0x08]; \
|
||||
st %t3, [%dst + (offset) + 0x0c]; \
|
||||
st %t4, [%dst + (offset) + 0x10]; \
|
||||
st %t5, [%dst + (offset) + 0x14]; \
|
||||
st %t6, [%dst + (offset) + 0x18]; \
|
||||
st %t7, [%dst + (offset) + 0x1c];
|
||||
LD(ldd, src, offset + 0x00, t0, bigchunk_fault) \
|
||||
LD(ldd, src, offset + 0x08, t2, bigchunk_fault) \
|
||||
LD(ldd, src, offset + 0x10, t4, bigchunk_fault) \
|
||||
LD(ldd, src, offset + 0x18, t6, bigchunk_fault) \
|
||||
ST(st, dst, offset + 0x00, t0, bigchunk_fault) \
|
||||
ST(st, dst, offset + 0x04, t1, bigchunk_fault) \
|
||||
ST(st, dst, offset + 0x08, t2, bigchunk_fault) \
|
||||
ST(st, dst, offset + 0x0c, t3, bigchunk_fault) \
|
||||
ST(st, dst, offset + 0x10, t4, bigchunk_fault) \
|
||||
ST(st, dst, offset + 0x14, t5, bigchunk_fault) \
|
||||
ST(st, dst, offset + 0x18, t6, bigchunk_fault) \
|
||||
ST(st, dst, offset + 0x1c, t7, bigchunk_fault)
|
||||
|
||||
/* left: g7 + (g1 % 128) - offset */
|
||||
#define MOVE_BIGALIGNCHUNK(src, dst, offset, t0, t1, t2, t3, t4, t5, t6, t7) \
|
||||
ldd [%src + (offset) + 0x00], %t0; \
|
||||
ldd [%src + (offset) + 0x08], %t2; \
|
||||
ldd [%src + (offset) + 0x10], %t4; \
|
||||
ldd [%src + (offset) + 0x18], %t6; \
|
||||
std %t0, [%dst + (offset) + 0x00]; \
|
||||
std %t2, [%dst + (offset) + 0x08]; \
|
||||
std %t4, [%dst + (offset) + 0x10]; \
|
||||
std %t6, [%dst + (offset) + 0x18];
|
||||
LD(ldd, src, offset + 0x00, t0, bigchunk_fault) \
|
||||
LD(ldd, src, offset + 0x08, t2, bigchunk_fault) \
|
||||
LD(ldd, src, offset + 0x10, t4, bigchunk_fault) \
|
||||
LD(ldd, src, offset + 0x18, t6, bigchunk_fault) \
|
||||
ST(std, dst, offset + 0x00, t0, bigchunk_fault) \
|
||||
ST(std, dst, offset + 0x08, t2, bigchunk_fault) \
|
||||
ST(std, dst, offset + 0x10, t4, bigchunk_fault) \
|
||||
ST(std, dst, offset + 0x18, t6, bigchunk_fault)
|
||||
|
||||
.section .fixup,#alloc,#execinstr
|
||||
bigchunk_fault:
|
||||
sub %g7, %g5, %o0
|
||||
and %g1, 127, %g1
|
||||
retl
|
||||
add %o0, %g1, %o0
|
||||
|
||||
/* left: offset + 16 + (g1 % 16) */
|
||||
#define MOVE_LASTCHUNK(src, dst, offset, t0, t1, t2, t3) \
|
||||
ldd [%src - (offset) - 0x10], %t0; \
|
||||
ldd [%src - (offset) - 0x08], %t2; \
|
||||
st %t0, [%dst - (offset) - 0x10]; \
|
||||
st %t1, [%dst - (offset) - 0x0c]; \
|
||||
st %t2, [%dst - (offset) - 0x08]; \
|
||||
st %t3, [%dst - (offset) - 0x04];
|
||||
LD(ldd, src, -(offset + 0x10), t0, lastchunk_fault) \
|
||||
LD(ldd, src, -(offset + 0x08), t2, lastchunk_fault) \
|
||||
ST(st, dst, -(offset + 0x10), t0, lastchunk_fault) \
|
||||
ST(st, dst, -(offset + 0x0c), t1, lastchunk_fault) \
|
||||
ST(st, dst, -(offset + 0x08), t2, lastchunk_fault) \
|
||||
ST(st, dst, -(offset + 0x04), t3, lastchunk_fault)
|
||||
|
||||
.section .fixup,#alloc,#execinstr
|
||||
lastchunk_fault:
|
||||
and %g1, 15, %g1
|
||||
retl
|
||||
sub %g1, %g5, %o0
|
||||
|
||||
/* left: o3 + (o2 % 16) - offset */
|
||||
#define MOVE_HALFCHUNK(src, dst, offset, t0, t1, t2, t3) \
|
||||
lduh [%src + (offset) + 0x00], %t0; \
|
||||
lduh [%src + (offset) + 0x02], %t1; \
|
||||
lduh [%src + (offset) + 0x04], %t2; \
|
||||
lduh [%src + (offset) + 0x06], %t3; \
|
||||
sth %t0, [%dst + (offset) + 0x00]; \
|
||||
sth %t1, [%dst + (offset) + 0x02]; \
|
||||
sth %t2, [%dst + (offset) + 0x04]; \
|
||||
sth %t3, [%dst + (offset) + 0x06];
|
||||
LD(lduh, src, offset + 0x00, t0, halfchunk_fault) \
|
||||
LD(lduh, src, offset + 0x02, t1, halfchunk_fault) \
|
||||
LD(lduh, src, offset + 0x04, t2, halfchunk_fault) \
|
||||
LD(lduh, src, offset + 0x06, t3, halfchunk_fault) \
|
||||
ST(sth, dst, offset + 0x00, t0, halfchunk_fault) \
|
||||
ST(sth, dst, offset + 0x02, t1, halfchunk_fault) \
|
||||
ST(sth, dst, offset + 0x04, t2, halfchunk_fault) \
|
||||
ST(sth, dst, offset + 0x06, t3, halfchunk_fault)
|
||||
|
||||
/* left: o3 + (o2 % 16) + offset + 2 */
|
||||
#define MOVE_SHORTCHUNK(src, dst, offset, t0, t1) \
|
||||
ldub [%src - (offset) - 0x02], %t0; \
|
||||
ldub [%src - (offset) - 0x01], %t1; \
|
||||
stb %t0, [%dst - (offset) - 0x02]; \
|
||||
stb %t1, [%dst - (offset) - 0x01];
|
||||
LD(ldub, src, -(offset + 0x02), t0, halfchunk_fault) \
|
||||
LD(ldub, src, -(offset + 0x01), t1, halfchunk_fault) \
|
||||
ST(stb, dst, -(offset + 0x02), t0, halfchunk_fault) \
|
||||
ST(stb, dst, -(offset + 0x01), t1, halfchunk_fault)
|
||||
|
||||
.section .fixup,#alloc,#execinstr
|
||||
halfchunk_fault:
|
||||
and %o2, 15, %o2
|
||||
sub %o3, %g5, %o3
|
||||
retl
|
||||
add %o2, %o3, %o0
|
||||
|
||||
/* left: offset + 2 + (o2 % 2) */
|
||||
#define MOVE_LAST_SHORTCHUNK(src, dst, offset, t0, t1) \
|
||||
LD(ldub, src, -(offset + 0x02), t0, last_shortchunk_fault) \
|
||||
LD(ldub, src, -(offset + 0x01), t1, last_shortchunk_fault) \
|
||||
ST(stb, dst, -(offset + 0x02), t0, last_shortchunk_fault) \
|
||||
ST(stb, dst, -(offset + 0x01), t1, last_shortchunk_fault)
|
||||
|
||||
.section .fixup,#alloc,#execinstr
|
||||
last_shortchunk_fault:
|
||||
and %o2, 1, %o2
|
||||
retl
|
||||
sub %o2, %g5, %o0
|
||||
|
||||
.text
|
||||
.align 4
|
||||
|
|
@ -182,8 +218,6 @@ __copy_user: /* %o0=dst %o1=src %o2=len */
|
|||
MOVE_BIGCHUNK(o1, o0, 0x20, o2, o3, o4, o5, g2, g3, g4, g5)
|
||||
MOVE_BIGCHUNK(o1, o0, 0x40, o2, o3, o4, o5, g2, g3, g4, g5)
|
||||
MOVE_BIGCHUNK(o1, o0, 0x60, o2, o3, o4, o5, g2, g3, g4, g5)
|
||||
80:
|
||||
EXT(5b, 80b, 50f)
|
||||
subcc %g7, 128, %g7
|
||||
add %o1, 128, %o1
|
||||
bne 5b
|
||||
|
|
@ -201,7 +235,6 @@ __copy_user: /* %o0=dst %o1=src %o2=len */
|
|||
jmpl %o5 + %lo(copy_user_table_end), %g0
|
||||
add %o0, %g7, %o0
|
||||
|
||||
copy_user_table:
|
||||
MOVE_LASTCHUNK(o1, o0, 0x60, g2, g3, g4, g5)
|
||||
MOVE_LASTCHUNK(o1, o0, 0x50, g2, g3, g4, g5)
|
||||
MOVE_LASTCHUNK(o1, o0, 0x40, g2, g3, g4, g5)
|
||||
|
|
@ -210,7 +243,6 @@ copy_user_table:
|
|||
MOVE_LASTCHUNK(o1, o0, 0x10, g2, g3, g4, g5)
|
||||
MOVE_LASTCHUNK(o1, o0, 0x00, g2, g3, g4, g5)
|
||||
copy_user_table_end:
|
||||
EXT(copy_user_table, copy_user_table_end, 51f)
|
||||
be copy_user_last7
|
||||
andcc %g1, 4, %g0
|
||||
|
||||
|
|
@ -250,8 +282,6 @@ ldd_std:
|
|||
MOVE_BIGALIGNCHUNK(o1, o0, 0x20, o2, o3, o4, o5, g2, g3, g4, g5)
|
||||
MOVE_BIGALIGNCHUNK(o1, o0, 0x40, o2, o3, o4, o5, g2, g3, g4, g5)
|
||||
MOVE_BIGALIGNCHUNK(o1, o0, 0x60, o2, o3, o4, o5, g2, g3, g4, g5)
|
||||
81:
|
||||
EXT(ldd_std, 81b, 52f)
|
||||
subcc %g7, 128, %g7
|
||||
add %o1, 128, %o1
|
||||
bne ldd_std
|
||||
|
|
@ -290,8 +320,6 @@ cannot_optimize:
|
|||
10:
|
||||
MOVE_HALFCHUNK(o1, o0, 0x00, g2, g3, g4, g5)
|
||||
MOVE_HALFCHUNK(o1, o0, 0x08, g2, g3, g4, g5)
|
||||
82:
|
||||
EXT(10b, 82b, 53f)
|
||||
subcc %o3, 0x10, %o3
|
||||
add %o1, 0x10, %o1
|
||||
bne 10b
|
||||
|
|
@ -308,8 +336,6 @@ byte_chunk:
|
|||
MOVE_SHORTCHUNK(o1, o0, -0x0c, g2, g3)
|
||||
MOVE_SHORTCHUNK(o1, o0, -0x0e, g2, g3)
|
||||
MOVE_SHORTCHUNK(o1, o0, -0x10, g2, g3)
|
||||
83:
|
||||
EXT(byte_chunk, 83b, 54f)
|
||||
subcc %o3, 0x10, %o3
|
||||
add %o1, 0x10, %o1
|
||||
bne byte_chunk
|
||||
|
|
@ -325,16 +351,14 @@ short_end:
|
|||
add %o1, %o3, %o1
|
||||
jmpl %o5 + %lo(short_table_end), %g0
|
||||
andcc %o2, 1, %g0
|
||||
84:
|
||||
MOVE_SHORTCHUNK(o1, o0, 0x0c, g2, g3)
|
||||
MOVE_SHORTCHUNK(o1, o0, 0x0a, g2, g3)
|
||||
MOVE_SHORTCHUNK(o1, o0, 0x08, g2, g3)
|
||||
MOVE_SHORTCHUNK(o1, o0, 0x06, g2, g3)
|
||||
MOVE_SHORTCHUNK(o1, o0, 0x04, g2, g3)
|
||||
MOVE_SHORTCHUNK(o1, o0, 0x02, g2, g3)
|
||||
MOVE_SHORTCHUNK(o1, o0, 0x00, g2, g3)
|
||||
MOVE_LAST_SHORTCHUNK(o1, o0, 0x0c, g2, g3)
|
||||
MOVE_LAST_SHORTCHUNK(o1, o0, 0x0a, g2, g3)
|
||||
MOVE_LAST_SHORTCHUNK(o1, o0, 0x08, g2, g3)
|
||||
MOVE_LAST_SHORTCHUNK(o1, o0, 0x06, g2, g3)
|
||||
MOVE_LAST_SHORTCHUNK(o1, o0, 0x04, g2, g3)
|
||||
MOVE_LAST_SHORTCHUNK(o1, o0, 0x02, g2, g3)
|
||||
MOVE_LAST_SHORTCHUNK(o1, o0, 0x00, g2, g3)
|
||||
short_table_end:
|
||||
EXT(84b, short_table_end, 55f)
|
||||
be 1f
|
||||
nop
|
||||
EX(ldub [%o1], %g2, add %g0, 1)
|
||||
|
|
@ -363,123 +387,8 @@ short_aligned_end:
|
|||
.section .fixup,#alloc,#execinstr
|
||||
.align 4
|
||||
97:
|
||||
mov %o2, %g3
|
||||
fixupretl:
|
||||
retl
|
||||
mov %g3, %o0
|
||||
|
||||
/* exception routine sets %g2 to (broken_insn - first_insn)>>2 */
|
||||
50:
|
||||
/* This magic counts how many bytes are left when crash in MOVE_BIGCHUNK
|
||||
* happens. This is derived from the amount ldd reads, st stores, etc.
|
||||
* x = g2 % 12;
|
||||
* g3 = g1 + g7 - ((g2 / 12) * 32 + (x < 4) ? 0 : (x - 4) * 4);
|
||||
* o0 += (g2 / 12) * 32;
|
||||
*/
|
||||
cmp %g2, 12
|
||||
add %o0, %g7, %o0
|
||||
bcs 1f
|
||||
cmp %g2, 24
|
||||
bcs 2f
|
||||
cmp %g2, 36
|
||||
bcs 3f
|
||||
nop
|
||||
sub %g2, 12, %g2
|
||||
sub %g7, 32, %g7
|
||||
3: sub %g2, 12, %g2
|
||||
sub %g7, 32, %g7
|
||||
2: sub %g2, 12, %g2
|
||||
sub %g7, 32, %g7
|
||||
1: cmp %g2, 4
|
||||
bcs,a 60f
|
||||
clr %g2
|
||||
sub %g2, 4, %g2
|
||||
sll %g2, 2, %g2
|
||||
60: and %g1, 0x7f, %g3
|
||||
sub %o0, %g7, %o0
|
||||
add %g3, %g7, %g3
|
||||
ba fixupretl
|
||||
sub %g3, %g2, %g3
|
||||
51:
|
||||
/* i = 41 - g2; j = i % 6;
|
||||
* g3 = (g1 & 15) + (i / 6) * 16 + (j < 4) ? (j + 1) * 4 : 16;
|
||||
* o0 -= (i / 6) * 16 + 16;
|
||||
*/
|
||||
neg %g2
|
||||
and %g1, 0xf, %g1
|
||||
add %g2, 41, %g2
|
||||
add %o0, %g1, %o0
|
||||
1: cmp %g2, 6
|
||||
bcs,a 2f
|
||||
cmp %g2, 4
|
||||
add %g1, 16, %g1
|
||||
b 1b
|
||||
sub %g2, 6, %g2
|
||||
2: bcc,a 2f
|
||||
mov 16, %g2
|
||||
inc %g2
|
||||
sll %g2, 2, %g2
|
||||
2: add %g1, %g2, %g3
|
||||
ba fixupretl
|
||||
sub %o0, %g3, %o0
|
||||
52:
|
||||
/* g3 = g1 + g7 - (g2 / 8) * 32 + (g2 & 4) ? (g2 & 3) * 8 : 0;
|
||||
o0 += (g2 / 8) * 32 */
|
||||
andn %g2, 7, %g4
|
||||
add %o0, %g7, %o0
|
||||
andcc %g2, 4, %g0
|
||||
and %g2, 3, %g2
|
||||
sll %g4, 2, %g4
|
||||
sll %g2, 3, %g2
|
||||
bne 60b
|
||||
sub %g7, %g4, %g7
|
||||
ba 60b
|
||||
clr %g2
|
||||
53:
|
||||
/* g3 = o3 + (o2 & 15) - (g2 & 8) - (g2 & 4) ? (g2 & 3) * 2 : 0;
|
||||
o0 += (g2 & 8) */
|
||||
and %g2, 3, %g4
|
||||
andcc %g2, 4, %g0
|
||||
and %g2, 8, %g2
|
||||
sll %g4, 1, %g4
|
||||
be 1f
|
||||
add %o0, %g2, %o0
|
||||
add %g2, %g4, %g2
|
||||
1: and %o2, 0xf, %g3
|
||||
add %g3, %o3, %g3
|
||||
ba fixupretl
|
||||
sub %g3, %g2, %g3
|
||||
54:
|
||||
/* g3 = o3 + (o2 & 15) - (g2 / 4) * 2 - (g2 & 2) ? (g2 & 1) : 0;
|
||||
o0 += (g2 / 4) * 2 */
|
||||
srl %g2, 2, %o4
|
||||
and %g2, 1, %o5
|
||||
srl %g2, 1, %g2
|
||||
add %o4, %o4, %o4
|
||||
and %o5, %g2, %o5
|
||||
and %o2, 0xf, %o2
|
||||
add %o0, %o4, %o0
|
||||
sub %o3, %o5, %o3
|
||||
sub %o2, %o4, %o2
|
||||
ba fixupretl
|
||||
add %o2, %o3, %g3
|
||||
55:
|
||||
/* i = 27 - g2;
|
||||
g3 = (o2 & 1) + i / 4 * 2 + !(i & 3);
|
||||
o0 -= i / 4 * 2 + 1 */
|
||||
neg %g2
|
||||
and %o2, 1, %o2
|
||||
add %g2, 27, %g2
|
||||
srl %g2, 2, %o5
|
||||
andcc %g2, 3, %g0
|
||||
mov 1, %g2
|
||||
add %o5, %o5, %o5
|
||||
be,a 1f
|
||||
clr %g2
|
||||
1: add %g2, %o5, %g3
|
||||
sub %o0, %g3, %o0
|
||||
ba fixupretl
|
||||
add %g3, %o2, %g3
|
||||
mov %o2, %o0
|
||||
|
||||
.globl __copy_user_end
|
||||
__copy_user_end:
|
||||
|
|
|
|||
|
|
@ -19,7 +19,7 @@
|
|||
98: x,y; \
|
||||
.section .fixup,ALLOC,EXECINSTR; \
|
||||
.align 4; \
|
||||
99: ba 30f; \
|
||||
99: retl; \
|
||||
a, b, %o0; \
|
||||
.section __ex_table,ALLOC; \
|
||||
.align 4; \
|
||||
|
|
@ -27,35 +27,44 @@
|
|||
.text; \
|
||||
.align 4
|
||||
|
||||
#define EXT(start,end,handler) \
|
||||
#define STORE(source, base, offset, n) \
|
||||
98: std source, [base + offset + n]; \
|
||||
.section .fixup,ALLOC,EXECINSTR; \
|
||||
.align 4; \
|
||||
99: ba 30f; \
|
||||
sub %o3, n - offset, %o3; \
|
||||
.section __ex_table,ALLOC; \
|
||||
.align 4; \
|
||||
.word start, 0, end, handler; \
|
||||
.word 98b, 99b; \
|
||||
.text; \
|
||||
.align 4
|
||||
.align 4;
|
||||
|
||||
#define STORE_LAST(source, base, offset, n) \
|
||||
EX(std source, [base - offset - n], \
|
||||
add %o1, offset + n);
|
||||
|
||||
/* Please don't change these macros, unless you change the logic
|
||||
* in the .fixup section below as well.
|
||||
* Store 64 bytes at (BASE + OFFSET) using value SOURCE. */
|
||||
#define ZERO_BIG_BLOCK(base, offset, source) \
|
||||
std source, [base + offset + 0x00]; \
|
||||
std source, [base + offset + 0x08]; \
|
||||
std source, [base + offset + 0x10]; \
|
||||
std source, [base + offset + 0x18]; \
|
||||
std source, [base + offset + 0x20]; \
|
||||
std source, [base + offset + 0x28]; \
|
||||
std source, [base + offset + 0x30]; \
|
||||
std source, [base + offset + 0x38];
|
||||
#define ZERO_BIG_BLOCK(base, offset, source) \
|
||||
STORE(source, base, offset, 0x00); \
|
||||
STORE(source, base, offset, 0x08); \
|
||||
STORE(source, base, offset, 0x10); \
|
||||
STORE(source, base, offset, 0x18); \
|
||||
STORE(source, base, offset, 0x20); \
|
||||
STORE(source, base, offset, 0x28); \
|
||||
STORE(source, base, offset, 0x30); \
|
||||
STORE(source, base, offset, 0x38);
|
||||
|
||||
#define ZERO_LAST_BLOCKS(base, offset, source) \
|
||||
std source, [base - offset - 0x38]; \
|
||||
std source, [base - offset - 0x30]; \
|
||||
std source, [base - offset - 0x28]; \
|
||||
std source, [base - offset - 0x20]; \
|
||||
std source, [base - offset - 0x18]; \
|
||||
std source, [base - offset - 0x10]; \
|
||||
std source, [base - offset - 0x08]; \
|
||||
std source, [base - offset - 0x00];
|
||||
STORE_LAST(source, base, offset, 0x38); \
|
||||
STORE_LAST(source, base, offset, 0x30); \
|
||||
STORE_LAST(source, base, offset, 0x28); \
|
||||
STORE_LAST(source, base, offset, 0x20); \
|
||||
STORE_LAST(source, base, offset, 0x18); \
|
||||
STORE_LAST(source, base, offset, 0x10); \
|
||||
STORE_LAST(source, base, offset, 0x08); \
|
||||
STORE_LAST(source, base, offset, 0x00);
|
||||
|
||||
.text
|
||||
.align 4
|
||||
|
|
@ -68,8 +77,6 @@ __bzero_begin:
|
|||
.globl memset
|
||||
EXPORT_SYMBOL(__bzero)
|
||||
EXPORT_SYMBOL(memset)
|
||||
.globl __memset_start, __memset_end
|
||||
__memset_start:
|
||||
memset:
|
||||
mov %o0, %g1
|
||||
mov 1, %g4
|
||||
|
|
@ -122,8 +129,6 @@ __bzero:
|
|||
ZERO_BIG_BLOCK(%o0, 0x00, %g2)
|
||||
subcc %o3, 128, %o3
|
||||
ZERO_BIG_BLOCK(%o0, 0x40, %g2)
|
||||
11:
|
||||
EXT(10b, 11b, 20f)
|
||||
bne 10b
|
||||
add %o0, 128, %o0
|
||||
|
||||
|
|
@ -138,11 +143,9 @@ __bzero:
|
|||
jmp %o4
|
||||
add %o0, %o2, %o0
|
||||
|
||||
12:
|
||||
ZERO_LAST_BLOCKS(%o0, 0x48, %g2)
|
||||
ZERO_LAST_BLOCKS(%o0, 0x08, %g2)
|
||||
13:
|
||||
EXT(12b, 13b, 21f)
|
||||
be 8f
|
||||
andcc %o1, 4, %g0
|
||||
|
||||
|
|
@ -182,37 +185,13 @@ __bzero:
|
|||
5:
|
||||
retl
|
||||
clr %o0
|
||||
__memset_end:
|
||||
|
||||
.section .fixup,#alloc,#execinstr
|
||||
.align 4
|
||||
20:
|
||||
cmp %g2, 8
|
||||
bleu 1f
|
||||
and %o1, 0x7f, %o1
|
||||
sub %g2, 9, %g2
|
||||
add %o3, 64, %o3
|
||||
1:
|
||||
sll %g2, 3, %g2
|
||||
add %o3, %o1, %o0
|
||||
b 30f
|
||||
sub %o0, %g2, %o0
|
||||
21:
|
||||
mov 8, %o0
|
||||
and %o1, 7, %o1
|
||||
sub %o0, %g2, %o0
|
||||
sll %o0, 3, %o0
|
||||
b 30f
|
||||
add %o0, %o1, %o0
|
||||
30:
|
||||
/* %o4 is faulting address, %o5 is %pc where fault occurred */
|
||||
save %sp, -104, %sp
|
||||
mov %i5, %o0
|
||||
mov %i7, %o1
|
||||
call lookup_fault
|
||||
mov %i4, %o2
|
||||
ret
|
||||
restore
|
||||
and %o1, 0x7f, %o1
|
||||
retl
|
||||
add %o3, %o1, %o0
|
||||
|
||||
.globl __bzero_end
|
||||
__bzero_end:
|
||||
|
|
|
|||
|
|
@ -8,7 +8,7 @@ ccflags-y := -Werror
|
|||
obj-$(CONFIG_SPARC64) += ultra.o tlb.o tsb.o
|
||||
obj-y += fault_$(BITS).o
|
||||
obj-y += init_$(BITS).o
|
||||
obj-$(CONFIG_SPARC32) += extable.o srmmu.o iommu.o io-unit.o
|
||||
obj-$(CONFIG_SPARC32) += srmmu.o iommu.o io-unit.o
|
||||
obj-$(CONFIG_SPARC32) += srmmu_access.o
|
||||
obj-$(CONFIG_SPARC32) += hypersparc.o viking.o tsunami.o swift.o
|
||||
obj-$(CONFIG_SPARC32) += leon_mm.o
|
||||
|
|
|
|||
|
|
@ -1,107 +0,0 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* linux/arch/sparc/mm/extable.c
|
||||
*/
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <linux/extable.h>
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
void sort_extable(struct exception_table_entry *start,
|
||||
struct exception_table_entry *finish)
|
||||
{
|
||||
}
|
||||
|
||||
/* Caller knows they are in a range if ret->fixup == 0 */
|
||||
const struct exception_table_entry *
|
||||
search_extable(const struct exception_table_entry *base,
|
||||
const size_t num,
|
||||
unsigned long value)
|
||||
{
|
||||
int i;
|
||||
|
||||
/* Single insn entries are encoded as:
|
||||
* word 1: insn address
|
||||
* word 2: fixup code address
|
||||
*
|
||||
* Range entries are encoded as:
|
||||
* word 1: first insn address
|
||||
* word 2: 0
|
||||
* word 3: last insn address + 4 bytes
|
||||
* word 4: fixup code address
|
||||
*
|
||||
* Deleted entries are encoded as:
|
||||
* word 1: unused
|
||||
* word 2: -1
|
||||
*
|
||||
* See asm/uaccess.h for more details.
|
||||
*/
|
||||
|
||||
/* 1. Try to find an exact match. */
|
||||
for (i = 0; i < num; i++) {
|
||||
if (base[i].fixup == 0) {
|
||||
/* A range entry, skip both parts. */
|
||||
i++;
|
||||
continue;
|
||||
}
|
||||
|
||||
/* A deleted entry; see trim_init_extable */
|
||||
if (base[i].fixup == -1)
|
||||
continue;
|
||||
|
||||
if (base[i].insn == value)
|
||||
return &base[i];
|
||||
}
|
||||
|
||||
/* 2. Try to find a range match. */
|
||||
for (i = 0; i < (num - 1); i++) {
|
||||
if (base[i].fixup)
|
||||
continue;
|
||||
|
||||
if (base[i].insn <= value && base[i + 1].insn > value)
|
||||
return &base[i];
|
||||
|
||||
i++;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_MODULES
|
||||
/* We could memmove them around; easier to mark the trimmed ones. */
|
||||
void trim_init_extable(struct module *m)
|
||||
{
|
||||
unsigned int i;
|
||||
bool range;
|
||||
|
||||
for (i = 0; i < m->num_exentries; i += range ? 2 : 1) {
|
||||
range = m->extable[i].fixup == 0;
|
||||
|
||||
if (within_module_init(m->extable[i].insn, m)) {
|
||||
m->extable[i].fixup = -1;
|
||||
if (range)
|
||||
m->extable[i+1].fixup = -1;
|
||||
}
|
||||
if (range)
|
||||
i++;
|
||||
}
|
||||
}
|
||||
#endif /* CONFIG_MODULES */
|
||||
|
||||
/* Special extable search, which handles ranges. Returns fixup */
|
||||
unsigned long search_extables_range(unsigned long addr, unsigned long *g2)
|
||||
{
|
||||
const struct exception_table_entry *entry;
|
||||
|
||||
entry = search_exception_tables(addr);
|
||||
if (!entry)
|
||||
return 0;
|
||||
|
||||
/* Inside range? Fix g2 and return correct fixup */
|
||||
if (!entry->fixup) {
|
||||
*g2 = (addr - entry->insn) / 4;
return (entry + 1)->fixup;
}

return entry->fixup;
}

@@ -23,6 +23,7 @@
#include <linux/interrupt.h>
#include <linux/kdebug.h>
#include <linux/uaccess.h>
#include <linux/extable.h>

#include <asm/page.h>
#include <asm/openprom.h>

@@ -54,54 +55,6 @@ static void __noreturn unhandled_fault(unsigned long address,
die_if_kernel("Oops", regs);
}

asmlinkage int lookup_fault(unsigned long pc, unsigned long ret_pc,
unsigned long address)
{
struct pt_regs regs;
unsigned long g2;
unsigned int insn;
int i;

i = search_extables_range(ret_pc, &g2);
switch (i) {
case 3:
/* load & store will be handled by fixup */
return 3;

case 1:
/* store will be handled by fixup, load will bump out */
/* for _to_ macros */
insn = *((unsigned int *) pc);
if ((insn >> 21) & 1)
return 1;
break;

case 2:
/* load will be handled by fixup, store will bump out */
/* for _from_ macros */
insn = *((unsigned int *) pc);
if (!((insn >> 21) & 1) || ((insn>>19)&0x3f) == 15)
return 2;
break;

default:
break;
}

memset(&regs, 0, sizeof(regs));
regs.pc = pc;
regs.npc = pc + 4;
__asm__ __volatile__(
"rd %%psr, %0\n\t"
"nop\n\t"
"nop\n\t"
"nop\n" : "=r" (regs.psr));
unhandled_fault(address, current, &regs);

/* Not reached */
return 0;
}

static inline void
show_signal_msg(struct pt_regs *regs, int sig, int code,
unsigned long address, struct task_struct *tsk)

@@ -162,8 +115,6 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
struct vm_area_struct *vma;
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm;
unsigned int fixup;
unsigned long g2;
int from_user = !(regs->psr & PSR_PS);
int code;
vm_fault_t fault;

@@ -281,30 +232,19 @@ bad_area_nosemaphore:

/* Is this in ex_table? */
no_context:
g2 = regs->u_regs[UREG_G2];
if (!from_user) {
fixup = search_extables_range(regs->pc, &g2);
/* Values below 10 are reserved for other things */
if (fixup > 10) {
extern const unsigned int __memset_start[];
extern const unsigned int __memset_end[];
const struct exception_table_entry *entry;

entry = search_exception_tables(regs->pc);
#ifdef DEBUG_EXCEPTIONS
printk("Exception: PC<%08lx> faddr<%08lx>\n",
regs->pc, address);
printk("EX_TABLE: insn<%08lx> fixup<%08x> g2<%08lx>\n",
regs->pc, fixup, g2);
printk("Exception: PC<%08lx> faddr<%08lx>\n",
regs->pc, address);
printk("EX_TABLE: insn<%08lx> fixup<%08x>\n",
regs->pc, entry->fixup);
#endif
if ((regs->pc >= (unsigned long)__memset_start &&
regs->pc < (unsigned long)__memset_end)) {
regs->u_regs[UREG_I4] = address;
regs->u_regs[UREG_I5] = regs->pc;
}
regs->u_regs[UREG_G2] = g2;
regs->pc = fixup;
regs->npc = regs->pc + 4;
return;
}
regs->pc = entry->fixup;
regs->npc = regs->pc + 4;
return;
}

unhandled_fault(address, tsk, regs);

@@ -1,7 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* fault_32.c - visible as they are called from assembler */
asmlinkage int lookup_fault(unsigned long pc, unsigned long ret_pc,
unsigned long address);
asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
unsigned long address);
@@ -1349,6 +1349,7 @@ st: if (is_imm8(insn->off))
insn->imm == (BPF_XOR | BPF_FETCH)) {
u8 *branch_target;
bool is64 = BPF_SIZE(insn->code) == BPF_DW;
u32 real_src_reg = src_reg;

/*
* Can't be implemented with a single x86 insn.

@@ -1357,6 +1358,9 @@ st: if (is_imm8(insn->off))

/* Will need RAX as a CMPXCHG operand so save R0 */
emit_mov_reg(&prog, true, BPF_REG_AX, BPF_REG_0);
if (src_reg == BPF_REG_0)
real_src_reg = BPF_REG_AX;

branch_target = prog;
/* Load old value */
emit_ldx(&prog, BPF_SIZE(insn->code),

@@ -1366,9 +1370,9 @@ st: if (is_imm8(insn->off))
* put the result in the AUX_REG.
*/
emit_mov_reg(&prog, is64, AUX_REG, BPF_REG_0);
maybe_emit_mod(&prog, AUX_REG, src_reg, is64);
maybe_emit_mod(&prog, AUX_REG, real_src_reg, is64);
EMIT2(simple_alu_opcodes[BPF_OP(insn->imm)],
add_2reg(0xC0, AUX_REG, src_reg));
add_2reg(0xC0, AUX_REG, real_src_reg));
/* Attempt to swap in new value */
err = emit_atomic(&prog, BPF_CMPXCHG,
dst_reg, AUX_REG, insn->off,

@@ -1381,7 +1385,7 @@ st: if (is_imm8(insn->off))
*/
EMIT2(X86_JNE, -(prog - branch_target) - 2);
/* Return the pre-modification value */
emit_mov_reg(&prog, is64, src_reg, BPF_REG_0);
emit_mov_reg(&prog, is64, real_src_reg, BPF_REG_0);
/* Restore R0 after clobbering RAX */
emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX);
break;
@@ -767,7 +767,7 @@ config CRYPTO_POLY1305_X86_64

config CRYPTO_POLY1305_MIPS
tristate "Poly1305 authenticator algorithm (MIPS optimized)"
depends on CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
depends on MIPS
select CRYPTO_ARCH_HAVE_LIB_POLY1305

config CRYPTO_MD4
@@ -2260,7 +2260,8 @@ out:
return rc;

err_eni_release:
eni_do_release(dev);
dev->phy = NULL;
iounmap(ENI_DEV(dev)->ioaddr);
err_unregister:
atm_dev_deregister(dev);
err_free_consistent:
@@ -262,7 +262,7 @@ static int idt77105_start(struct atm_dev *dev)
{
unsigned long flags;

if (!(dev->dev_data = kmalloc(sizeof(struct idt77105_priv),GFP_KERNEL)))
if (!(dev->phy_data = kmalloc(sizeof(struct idt77105_priv),GFP_KERNEL)))
return -ENOMEM;
PRIV(dev)->dev = dev;
spin_lock_irqsave(&idt77105_priv_lock, flags);

@@ -337,7 +337,7 @@ static int idt77105_stop(struct atm_dev *dev)
else
idt77105_all = walk->next;
dev->phy = NULL;
dev->dev_data = NULL;
dev->phy_data = NULL;
kfree(walk);
break;
}
@@ -2233,6 +2233,7 @@ static int lanai_dev_open(struct atm_dev *atmdev)
conf1_write(lanai);
#endif
iounmap(lanai->base);
lanai->base = NULL;
error_pci:
pci_disable_device(lanai->pci);
error:

@@ -2245,6 +2246,8 @@ static int lanai_dev_open(struct atm_dev *atmdev)
static void lanai_dev_close(struct atm_dev *atmdev)
{
struct lanai_dev *lanai = (struct lanai_dev *) atmdev->dev_data;
if (lanai->base==NULL)
return;
printk(KERN_INFO DEV_LABEL "(itf %d): shutting down interface\n",
lanai->number);
lanai_timed_poll_stop(lanai);

@@ -2552,7 +2555,7 @@ static int lanai_init_one(struct pci_dev *pci,
struct atm_dev *atmdev;
int result;

lanai = kmalloc(sizeof(*lanai), GFP_KERNEL);
lanai = kzalloc(sizeof(*lanai), GFP_KERNEL);
if (lanai == NULL) {
printk(KERN_ERR DEV_LABEL
": couldn't allocate dev_data structure!\n");
@@ -211,7 +211,7 @@ static void uPD98402_int(struct atm_dev *dev)
static int uPD98402_start(struct atm_dev *dev)
{
DPRINTK("phy_start\n");
if (!(dev->dev_data = kmalloc(sizeof(struct uPD98402_priv),GFP_KERNEL)))
if (!(dev->phy_data = kmalloc(sizeof(struct uPD98402_priv),GFP_KERNEL)))
return -ENOMEM;
spin_lock_init(&PRIV(dev)->lock);
memset(&PRIV(dev)->sonet_stats,0,sizeof(struct k_sonet_stats));
@@ -113,8 +113,29 @@ MODULE_DEVICE_TABLE(i2c, pca953x_id);
#ifdef CONFIG_GPIO_PCA953X_IRQ

#include <linux/dmi.h>
#include <linux/gpio.h>
#include <linux/list.h>

static const struct acpi_gpio_params pca953x_irq_gpios = { 0, 0, true };

static const struct acpi_gpio_mapping pca953x_acpi_irq_gpios[] = {
{ "irq-gpios", &pca953x_irq_gpios, 1, ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER },
{ }
};

static int pca953x_acpi_get_irq(struct device *dev)
{
int ret;

ret = devm_acpi_dev_add_driver_gpios(dev, pca953x_acpi_irq_gpios);
if (ret)
dev_warn(dev, "can't add GPIO ACPI mapping\n");

ret = acpi_dev_gpio_irq_get_by(ACPI_COMPANION(dev), "irq-gpios", 0);
if (ret < 0)
return ret;

dev_info(dev, "ACPI interrupt quirk (IRQ %d)\n", ret);
return ret;
}

static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {
{

@@ -133,59 +154,6 @@ static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {
},
{}
};

#ifdef CONFIG_ACPI
static int pca953x_acpi_get_pin(struct acpi_resource *ares, void *data)
{
struct acpi_resource_gpio *agpio;
int *pin = data;

if (acpi_gpio_get_irq_resource(ares, &agpio))
*pin = agpio->pin_table[0];
return 1;
}

static int pca953x_acpi_find_pin(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
int pin = -ENOENT, ret;
LIST_HEAD(r);

ret = acpi_dev_get_resources(adev, &r, pca953x_acpi_get_pin, &pin);
acpi_dev_free_resource_list(&r);
if (ret < 0)
return ret;

return pin;
}
#else
static inline int pca953x_acpi_find_pin(struct device *dev) { return -ENXIO; }
#endif

static int pca953x_acpi_get_irq(struct device *dev)
{
int pin, ret;

pin = pca953x_acpi_find_pin(dev);
if (pin < 0)
return pin;

dev_info(dev, "Applying ACPI interrupt quirk (GPIO %d)\n", pin);

if (!gpio_is_valid(pin))
return -EINVAL;

ret = gpio_request(pin, "pca953x interrupt");
if (ret)
return ret;

ret = gpio_to_irq(pin);

/* When pin is used as an IRQ, no need to keep it requested */
gpio_free(pin);

return ret;
}
#endif

static const struct acpi_device_id pca953x_acpi_ids[] = {
@@ -174,7 +174,7 @@ static void acpi_gpiochip_request_irq(struct acpi_gpio_chip *acpi_gpio,
int ret, value;

ret = request_threaded_irq(event->irq, NULL, event->handler,
event->irqflags, "ACPI:Event", event);
event->irqflags | IRQF_ONESHOT, "ACPI:Event", event);
if (ret) {
dev_err(acpi_gpio->chip->parent,
"Failed to setup interrupt handler for %d\n",

@@ -677,6 +677,7 @@ static int acpi_populate_gpio_lookup(struct acpi_resource *ares, void *data)
if (!lookup->desc) {
const struct acpi_resource_gpio *agpio = &ares->data.gpio;
bool gpioint = agpio->connection_type == ACPI_RESOURCE_GPIO_TYPE_INT;
struct gpio_desc *desc;
u16 pin_index;

if (lookup->info.quirks & ACPI_GPIO_QUIRK_ONLY_GPIOIO && gpioint)

@@ -689,8 +690,12 @@ static int acpi_populate_gpio_lookup(struct acpi_resource *ares, void *data)
if (pin_index >= agpio->pin_table_length)
return 1;

lookup->desc = acpi_get_gpiod(agpio->resource_source.string_ptr,
if (lookup->info.quirks & ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER)
desc = gpio_to_desc(agpio->pin_table[pin_index]);
else
desc = acpi_get_gpiod(agpio->resource_source.string_ptr,
agpio->pin_table[pin_index]);
lookup->desc = desc;
lookup->info.pin_config = agpio->pin_config;
lookup->info.debounce = agpio->debounce_timeout;
lookup->info.gpioint = gpioint;

@@ -940,8 +945,9 @@ struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
}

/**
* acpi_dev_gpio_irq_get() - Find GpioInt and translate it to Linux IRQ number
* acpi_dev_gpio_irq_get_by() - Find GpioInt and translate it to Linux IRQ number
* @adev: pointer to a ACPI device to get IRQ from
* @name: optional name of GpioInt resource
* @index: index of GpioInt resource (starting from %0)
*
* If the device has one or more GpioInt resources, this function can be

@@ -951,9 +957,12 @@ struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
* The function is idempotent, though each time it runs it will configure GPIO
* pin direction according to the flags in GpioInt resource.
*
* The function takes optional @name parameter. If the resource has a property
* name, then only those will be taken into account.
*
* Return: Linux IRQ number (> %0) on success, negative errno on failure.
*/
int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int index)
{
int idx, i;
unsigned int irq_flags;

@@ -963,7 +972,7 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
struct acpi_gpio_info info;
struct gpio_desc *desc;

desc = acpi_get_gpiod_by_index(adev, NULL, i, &info);
desc = acpi_get_gpiod_by_index(adev, name, i, &info);

/* Ignore -EPROBE_DEFER, it only matters if idx matches */
if (IS_ERR(desc) && PTR_ERR(desc) != -EPROBE_DEFER)

@@ -1008,7 +1017,7 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
}
return -ENOENT;
}
EXPORT_SYMBOL_GPL(acpi_dev_gpio_irq_get);
EXPORT_SYMBOL_GPL(acpi_dev_gpio_irq_get_by);

static acpi_status
acpi_gpio_adr_space_handler(u32 function, acpi_physical_address address,
@@ -367,22 +367,18 @@ static int gpiochip_set_desc_names(struct gpio_chip *gc)
*
* Looks for device property "gpio-line-names" and if it exists assigns
* GPIO line names for the chip. The memory allocated for the assigned
* names belong to the underlying software node and should not be released
* names belong to the underlying firmware node and should not be released
* by the caller.
*/
static int devprop_gpiochip_set_names(struct gpio_chip *chip)
{
struct gpio_device *gdev = chip->gpiodev;
struct device *dev = chip->parent;
struct fwnode_handle *fwnode = dev_fwnode(&gdev->dev);
const char **names;
int ret, i;
int count;

/* GPIO chip may not have a parent device whose properties we inspect. */
if (!dev)
return 0;

count = device_property_string_array_count(dev, "gpio-line-names");
count = fwnode_property_string_array_count(fwnode, "gpio-line-names");
if (count < 0)
return 0;

@@ -396,7 +392,7 @@ static int devprop_gpiochip_set_names(struct gpio_chip *chip)
if (!names)
return -ENOMEM;

ret = device_property_read_string_array(dev, "gpio-line-names",
ret = fwnode_property_read_string_array(fwnode, "gpio-line-names",
names, count);
if (ret < 0) {
dev_warn(&gdev->dev, "failed to read GPIO line names\n");

@@ -474,9 +470,13 @@ EXPORT_SYMBOL_GPL(gpiochip_line_is_valid);

static void gpiodevice_release(struct device *dev)
{
struct gpio_device *gdev = dev_get_drvdata(dev);
struct gpio_device *gdev = container_of(dev, struct gpio_device, dev);
unsigned long flags;

spin_lock_irqsave(&gpio_lock, flags);
list_del(&gdev->list);
spin_unlock_irqrestore(&gpio_lock, flags);

ida_free(&gpio_ida, gdev->id);
kfree_const(gdev->label);
kfree(gdev->descs);

@@ -605,7 +605,6 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
goto err_free_ida;

device_initialize(&gdev->dev);
dev_set_drvdata(&gdev->dev, gdev);
if (gc->parent && gc->parent->driver)
gdev->owner = gc->parent->driver->owner;
else if (gc->owner)
@@ -94,7 +94,7 @@ config WIREGUARD
select CRYPTO_BLAKE2S_ARM if ARM
select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
select CRYPTO_POLY1305_MIPS if MIPS
help
WireGuard is a secure, fast, and easy to use replacement for IPSec
that uses modern cryptography and clever networking tricks. It's
@@ -3978,11 +3978,15 @@ static int bond_neigh_init(struct neighbour *n)

rcu_read_lock();
slave = bond_first_slave_rcu(bond);
if (!slave)
if (!slave) {
ret = -EINVAL;
goto out;
}
slave_ops = slave->dev->netdev_ops;
if (!slave_ops->ndo_neigh_setup)
if (!slave_ops->ndo_neigh_setup) {
ret = -EINVAL;
goto out;
}

/* TODO: find another way [1] to implement this.
* Passing a zeroed structure is fragile,
@@ -701,7 +701,7 @@ static int flexcan_chip_freeze(struct flexcan_priv *priv)
u32 reg;

reg = priv->read(&regs->mcr);
reg |= FLEXCAN_MCR_HALT;
reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT;
priv->write(reg, &regs->mcr);

while (timeout-- && !(priv->read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))

@@ -1480,10 +1480,13 @@ static int flexcan_chip_start(struct net_device *dev)

flexcan_set_bittiming(dev);

/* set freeze, halt */
err = flexcan_chip_freeze(priv);
if (err)
goto out_chip_disable;

/* MCR
*
* enable freeze
* halt now
* only supervisor access
* enable warning int
* enable individual RX masking

@@ -1492,9 +1495,8 @@ static int flexcan_chip_start(struct net_device *dev)
*/
reg_mcr = priv->read(&regs->mcr);
reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ | FLEXCAN_MCR_IDAM_C |
FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
reg_mcr |= FLEXCAN_MCR_SUPV | FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ |
FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);

/* MCR
*

@@ -1865,10 +1867,14 @@ static int register_flexcandev(struct net_device *dev)
if (err)
goto out_chip_disable;

/* set freeze, halt and activate FIFO, restrict register access */
/* set freeze, halt */
err = flexcan_chip_freeze(priv);
if (err)
goto out_chip_disable;

/* activate FIFO, restrict register access */
reg = priv->read(&regs->mcr);
reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT |
FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
reg |= FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
priv->write(reg, &regs->mcr);

/* Currently we only support newer versions of this core
@@ -237,14 +237,14 @@ static int tcan4x5x_init(struct m_can_classdev *cdev)
if (ret)
return ret;

/* Zero out the MCAN buffers */
m_can_init_ram(cdev);

ret = regmap_update_bits(tcan4x5x->regmap, TCAN4X5X_CONFIG,
TCAN4X5X_MODE_SEL_MASK, TCAN4X5X_MODE_NORMAL);
if (ret)
return ret;

/* Zero out the MCAN buffers */
m_can_init_ram(cdev);

return ret;
}
@@ -335,8 +335,6 @@ static void mcp251xfd_ring_init(struct mcp251xfd_priv *priv)
u8 len;
int i, j;

netdev_reset_queue(priv->ndev);

/* TEF */
tef_ring = priv->tef;
tef_ring->head = 0;

@@ -1249,8 +1247,7 @@ mcp251xfd_handle_tefif_recover(const struct mcp251xfd_priv *priv, const u32 seq)

static int
mcp251xfd_handle_tefif_one(struct mcp251xfd_priv *priv,
const struct mcp251xfd_hw_tef_obj *hw_tef_obj,
unsigned int *frame_len_ptr)
const struct mcp251xfd_hw_tef_obj *hw_tef_obj)
{
struct net_device_stats *stats = &priv->ndev->stats;
u32 seq, seq_masked, tef_tail_masked;

@@ -1272,8 +1269,7 @@ mcp251xfd_handle_tefif_one(struct mcp251xfd_priv *priv,
stats->tx_bytes +=
can_rx_offload_get_echo_skb(&priv->offload,
mcp251xfd_get_tef_tail(priv),
hw_tef_obj->ts,
frame_len_ptr);
hw_tef_obj->ts, NULL);
stats->tx_packets++;
priv->tef->tail++;

@@ -1331,7 +1327,6 @@ mcp251xfd_tef_obj_read(const struct mcp251xfd_priv *priv,
static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
{
struct mcp251xfd_hw_tef_obj hw_tef_obj[MCP251XFD_TX_OBJ_NUM_MAX];
unsigned int total_frame_len = 0;
u8 tef_tail, len, l;
int err, i;

@@ -1353,9 +1348,7 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
}

for (i = 0; i < len; i++) {
unsigned int frame_len;

err = mcp251xfd_handle_tefif_one(priv, &hw_tef_obj[i], &frame_len);
err = mcp251xfd_handle_tefif_one(priv, &hw_tef_obj[i]);
/* -EAGAIN means the Sequence Number in the TEF
* doesn't match our tef_tail. This can happen if we
* read the TEF objects too early. Leave loop let the

@@ -1365,8 +1358,6 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
goto out_netif_wake_queue;
if (err)
return err;

total_frame_len += frame_len;
}

out_netif_wake_queue:

@@ -1397,7 +1388,6 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
return err;

tx_ring->tail += len;
netdev_completed_queue(priv->ndev, len, total_frame_len);

err = mcp251xfd_check_tef_tail(priv);
if (err)

@@ -2443,7 +2433,6 @@ static netdev_tx_t mcp251xfd_start_xmit(struct sk_buff *skb,
struct mcp251xfd_priv *priv = netdev_priv(ndev);
struct mcp251xfd_tx_ring *tx_ring = priv->tx;
struct mcp251xfd_tx_obj *tx_obj;
unsigned int frame_len;
u8 tx_head;
int err;

@@ -2462,9 +2451,7 @@ static netdev_tx_t mcp251xfd_start_xmit(struct sk_buff *skb,
if (mcp251xfd_get_tx_free(tx_ring) == 0)
netif_stop_queue(ndev);

frame_len = can_skb_get_frame_len(skb);
can_put_echo_skb(skb, ndev, tx_head, frame_len);
netdev_sent_queue(priv->ndev, frame_len);
can_put_echo_skb(skb, ndev, tx_head, 0);

err = mcp251xfd_tx_obj_write(priv, tx_obj);
if (err)
@@ -406,7 +406,7 @@ static int bcm_sf2_sw_rst(struct bcm_sf2_priv *priv)
/* The watchdog reset does not work on 7278, we need to hit the
* "external" reset line through the reset controller.
*/
if (priv->type == BCM7278_DEVICE_ID && !IS_ERR(priv->rcdev)) {
if (priv->type == BCM7278_DEVICE_ID) {
ret = reset_control_assert(priv->rcdev);
if (ret)
return ret;

@@ -1265,7 +1265,7 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev)

priv->rcdev = devm_reset_control_get_optional_exclusive(&pdev->dev,
"switch");
if (PTR_ERR(priv->rcdev) == -EPROBE_DEFER)
if (IS_ERR(priv->rcdev))
return PTR_ERR(priv->rcdev);

/* Auto-detection using standard registers will not work, so

@@ -1426,7 +1426,7 @@ static int bcm_sf2_sw_remove(struct platform_device *pdev)
bcm_sf2_mdio_unregister(priv);
clk_disable_unprepare(priv->clk_mdiv);
clk_disable_unprepare(priv->clk);
if (priv->type == BCM7278_DEVICE_ID && !IS_ERR(priv->rcdev))
if (priv->type == BCM7278_DEVICE_ID)
reset_control_assert(priv->rcdev);

return 0;
@@ -1624,6 +1624,7 @@ mtk_get_tag_protocol(struct dsa_switch *ds, int port,
}
}

#ifdef CONFIG_GPIOLIB
static inline u32
mt7530_gpio_to_bit(unsigned int offset)
{

@@ -1726,6 +1727,7 @@ mt7530_setup_gpio(struct mt7530_priv *priv)

return devm_gpiochip_add_data(dev, gc, priv);
}
#endif /* CONFIG_GPIOLIB */

static int
mt7530_setup(struct dsa_switch *ds)

@@ -1868,11 +1870,13 @@ mt7530_setup(struct dsa_switch *ds)
}
}

#ifdef CONFIG_GPIOLIB
if (of_property_read_bool(priv->dev->of_node, "gpio-controller")) {
ret = mt7530_setup_gpio(priv);
if (ret)
return ret;
}
#endif /* CONFIG_GPIOLIB */

mt7530_setup_port5(ds, interface);
@@ -1922,7 +1922,7 @@ out_unlock_ptp:
speed = SPEED_1000;
else if (bmcr & BMCR_SPEED100)
speed = SPEED_100;
else if (bmcr & BMCR_SPEED10)
else
speed = SPEED_10;

sja1105_sgmii_pcs_force_speed(priv, speed);

@@ -3369,14 +3369,14 @@ static int sja1105_port_ucast_bcast_flood(struct sja1105_private *priv, int to,
if (flags.val & BR_FLOOD)
priv->ucast_egress_floods |= BIT(to);
else
priv->ucast_egress_floods |= BIT(to);
priv->ucast_egress_floods &= ~BIT(to);
}

if (flags.mask & BR_BCAST_FLOOD) {
if (flags.val & BR_BCAST_FLOOD)
priv->bcast_egress_floods |= BIT(to);
else
priv->bcast_egress_floods |= BIT(to);
priv->bcast_egress_floods &= ~BIT(to);
}

return sja1105_manage_flood_domains(priv);
@@ -528,7 +528,10 @@ static int xrs700x_hsr_join(struct dsa_switch *ds, int port,
return -EOPNOTSUPP;

dsa_hsr_foreach_port(dp, ds, hsr) {
partner = dp;
if (dp->index != port) {
partner = dp;
break;
}
}

/* We can't enable redundancy on the switch until both

@@ -582,7 +585,10 @@ static int xrs700x_hsr_leave(struct dsa_switch *ds, int port,
unsigned int val;

dsa_hsr_foreach_port(dp, ds, hsr) {
partner = dp;
if (dp->index != port) {
partner = dp;
break;
}
}

if (!partner)
@@ -1894,13 +1894,16 @@ static int alx_resume(struct device *dev)

if (!netif_running(alx->dev))
return 0;
netif_device_attach(alx->dev);

rtnl_lock();
err = __alx_open(alx, true);
rtnl_unlock();
if (err)
return err;

return err;
netif_device_attach(alx->dev);

return 0;
}

static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
@@ -592,6 +592,9 @@ static int bcm4908_enet_poll(struct napi_struct *napi, int weight)
bcm4908_enet_intrs_on(enet);
}

/* Hardware could disable ring if it run out of descriptors */
bcm4908_enet_dma_rx_ring_enable(enet, &enet->rx_ring);

return handled;
}
@ -8556,10 +8556,18 @@ static void bnxt_setup_inta(struct bnxt *bp)
|
|||
bp->irq_tbl[0].handler = bnxt_inta;
|
||||
}
|
||||
|
||||
static int bnxt_init_int_mode(struct bnxt *bp);
|
||||
|
||||
static int bnxt_setup_int_mode(struct bnxt *bp)
|
||||
{
|
||||
int rc;
|
||||
|
||||
if (!bp->irq_tbl) {
|
||||
rc = bnxt_init_int_mode(bp);
|
||||
if (rc || !bp->irq_tbl)
|
||||
return rc ?: -ENODEV;
|
||||
}
|
||||
|
||||
if (bp->flags & BNXT_FLAG_USING_MSIX)
|
||||
bnxt_setup_msix(bp);
|
||||
else
|
||||
|
|
@ -8744,7 +8752,7 @@ static int bnxt_init_inta(struct bnxt *bp)
|
|||
|
||||
static int bnxt_init_int_mode(struct bnxt *bp)
|
||||
{
|
||||
int rc = 0;
|
||||
int rc = -ENODEV;
|
||||
|
||||
if (bp->flags & BNXT_FLAG_MSIX_CAP)
|
||||
rc = bnxt_init_msix(bp);
|
||||
|
|
@ -9514,7 +9522,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
|
|||
{
|
||||
struct hwrm_func_drv_if_change_output *resp = bp->hwrm_cmd_resp_addr;
|
||||
struct hwrm_func_drv_if_change_input req = {0};
|
||||
bool resc_reinit = false, fw_reset = false;
|
||||
bool fw_reset = !bp->irq_tbl;
|
||||
bool resc_reinit = false;
|
||||
int rc, retry = 0;
|
||||
u32 flags = 0;
|
||||
|
||||
|
|
@ -9557,6 +9566,7 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
|
|||
|
||||
if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state) && !fw_reset) {
|
||||
netdev_err(bp->dev, "RESET_DONE not set during FW reset.\n");
|
||||
set_bit(BNXT_STATE_ABORT_ERR, &bp->state);
|
||||
return -ENODEV;
|
||||
}
|
||||
if (resc_reinit || fw_reset) {
|
||||
|
|
@ -9890,6 +9900,9 @@ static int bnxt_reinit_after_abort(struct bnxt *bp)
|
|||
if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
|
||||
return -EBUSY;
|
||||
|
||||
if (bp->dev->reg_state == NETREG_UNREGISTERED)
|
||||
return -ENODEV;
|
||||
|
||||
rc = bnxt_fw_init_one(bp);
|
||||
if (!rc) {
|
||||
bnxt_clear_int_mode(bp);
|
||||
|
|
|
|||
|
|
@ -3954,6 +3954,13 @@ static int macb_init(struct platform_device *pdev)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static const struct macb_usrio_config macb_default_usrio = {
|
||||
.mii = MACB_BIT(MII),
|
||||
.rmii = MACB_BIT(RMII),
|
||||
.rgmii = GEM_BIT(RGMII),
|
||||
.refclk = MACB_BIT(CLKEN),
|
||||
};
|
||||
|
||||
#if defined(CONFIG_OF)
|
||||
/* 1518 rounded up */
|
||||
#define AT91ETHER_MAX_RBUFF_SZ 0x600
|
||||
|
|
@ -4439,13 +4446,6 @@ static int fu540_c000_init(struct platform_device *pdev)
|
|||
return macb_init(pdev);
|
||||
}
|
||||
|
||||
static const struct macb_usrio_config macb_default_usrio = {
|
||||
.mii = MACB_BIT(MII),
|
||||
.rmii = MACB_BIT(RMII),
|
||||
.rgmii = GEM_BIT(RGMII),
|
||||
.refclk = MACB_BIT(CLKEN),
|
||||
};
|
||||
|
||||
static const struct macb_usrio_config sama7g5_usrio = {
|
||||
.mii = 0,
|
||||
.rmii = 1,
|
||||
|
|
@ -4594,6 +4594,7 @@ static const struct macb_config default_gem_config = {
|
|||
.dma_burst_length = 16,
|
||||
.clk_init = macb_clk_init,
|
||||
.init = macb_init,
|
||||
.usrio = &macb_default_usrio,
|
||||
.jumbo_max_len = 10240,
|
||||
};
|
||||
|
||||
|
|
|
|||
|
|
@ -672,7 +672,7 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
|
|||
if (tx_info->pending_close) {
|
||||
spin_unlock(&tx_info->lock);
|
||||
if (!status) {
|
||||
/* it's a late success, tcb status is establised,
|
||||
/* it's a late success, tcb status is established,
|
||||
* mark it close.
|
||||
*/
|
||||
chcr_ktls_mark_tcb_close(tx_info);
|
||||
|
|
@ -930,7 +930,7 @@ chcr_ktls_get_tx_flits(u32 nr_frags, unsigned int key_ctx_len)
|
|||
}
|
||||
|
||||
/*
|
||||
* chcr_ktls_check_tcp_options: To check if there is any TCP option availbale
|
||||
* chcr_ktls_check_tcp_options: To check if there is any TCP option available
|
||||
* other than timestamp.
|
||||
* @skb - skb contains partial record..
|
||||
* return: 1 / 0
|
||||
|
|
@ -1115,7 +1115,7 @@ static int chcr_ktls_xmit_wr_complete(struct sk_buff *skb,
|
|||
}
|
||||
|
||||
if (unlikely(credits < ETHTXQ_STOP_THRES)) {
|
||||
/* Credits are below the threshold vaues, stop the queue after
|
||||
/* Credits are below the threshold values, stop the queue after
|
||||
* injecting the Work Request for this packet.
|
||||
*/
|
||||
chcr_eth_txq_stop(q);
|
||||
|
|
@ -2006,7 +2006,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
|
|||
|
||||
/* TCP segments can be in received either complete or partial.
|
||||
* chcr_end_part_handler will handle cases if complete record or end
|
||||
* part of the record is received. Incase of partial end part of record,
|
||||
* part of the record is received. In case of partial end part of record,
|
||||
* we will send the complete record again.
|
||||
*/
|
||||
|
||||
|
|
|
|||
|
|
@ -133,6 +133,8 @@ struct board_info {
|
|||
u32 wake_state;
|
||||
|
||||
int ip_summed;
|
||||
|
||||
struct regulator *power_supply;
|
||||
};
|
||||
|
||||
/* debug code */
|
||||
|
|
@ -1449,7 +1451,7 @@ dm9000_probe(struct platform_device *pdev)
|
|||
if (ret) {
|
||||
dev_err(dev, "failed to request reset gpio %d: %d\n",
|
||||
reset_gpios, ret);
|
||||
return -ENODEV;
|
||||
goto out_regulator_disable;
|
||||
}
|
||||
|
||||
/* According to manual PWRST# Low Period Min 1ms */
|
||||
|
|
@ -1461,8 +1463,10 @@ dm9000_probe(struct platform_device *pdev)
|
|||
|
||||
if (!pdata) {
|
||||
pdata = dm9000_parse_dt(&pdev->dev);
|
||||
if (IS_ERR(pdata))
|
||||
return PTR_ERR(pdata);
|
||||
if (IS_ERR(pdata)) {
|
||||
ret = PTR_ERR(pdata);
|
||||
goto out_regulator_disable;
|
||||
}
|
||||
}
|
||||
|
||||
/* Init network device */
|
||||
|
|
@ -1479,6 +1483,8 @@ dm9000_probe(struct platform_device *pdev)
|
|||
|
||||
db->dev = &pdev->dev;
|
||||
db->ndev = ndev;
|
||||
if (!IS_ERR(power))
|
||||
db->power_supply = power;
|
||||
|
||||
spin_lock_init(&db->lock);
|
||||
mutex_init(&db->addr_lock);
|
||||
|
|
@ -1501,7 +1507,7 @@ dm9000_probe(struct platform_device *pdev)
|
|||
goto out;
|
||||
}
|
||||
|
||||
db->irq_wake = platform_get_irq(pdev, 1);
|
||||
db->irq_wake = platform_get_irq_optional(pdev, 1);
|
||||
if (db->irq_wake >= 0) {
|
||||
dev_dbg(db->dev, "wakeup irq %d\n", db->irq_wake);
|
||||
|
||||
|
|
@ -1703,6 +1709,10 @@ out:
|
|||
dm9000_release_board(pdev, db);
|
||||
free_netdev(ndev);
|
||||
|
||||
out_regulator_disable:
|
||||
if (!IS_ERR(power))
|
||||
regulator_disable(power);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
|
@ -1760,10 +1770,13 @@ static int
|
|||
dm9000_drv_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct net_device *ndev = platform_get_drvdata(pdev);
|
||||
struct board_info *dm = to_dm9000_board(ndev);
|
||||
|
||||
unregister_netdev(ndev);
|
||||
dm9000_release_board(pdev, netdev_priv(ndev));
|
||||
dm9000_release_board(pdev, dm);
|
||||
free_netdev(ndev); /* free device structure */
|
||||
if (dm->power_supply)
|
||||
regulator_disable(dm->power_supply);
|
||||
|
||||
dev_dbg(&pdev->dev, "released and freed device\n");
|
||||
return 0;
|
||||
|
|
|
|||
|
|
@ -281,6 +281,8 @@ static int enetc_poll(struct napi_struct *napi, int budget)
|
|||
int work_done;
|
||||
int i;
|
||||
|
||||
enetc_lock_mdio();
|
||||
|
||||
for (i = 0; i < v->count_tx_rings; i++)
|
||||
if (!enetc_clean_tx_ring(&v->tx_ring[i], budget))
|
||||
complete = false;
|
||||
|
|
@ -291,8 +293,10 @@ static int enetc_poll(struct napi_struct *napi, int budget)
|
|||
if (work_done)
|
||||
v->rx_napi_work = true;
|
||||
|
||||
if (!complete)
|
||||
if (!complete) {
|
||||
enetc_unlock_mdio();
|
||||
return budget;
|
||||
}
|
||||
|
||||
napi_complete_done(napi, work_done);
|
||||
|
||||
|
|
@ -301,8 +305,6 @@ static int enetc_poll(struct napi_struct *napi, int budget)
|
|||
|
||||
v->rx_napi_work = false;
|
||||
|
||||
enetc_lock_mdio();
|
||||
|
||||
/* enable interrupts */
|
||||
enetc_wr_reg_hot(v->rbier, ENETC_RBIER_RXTIE);
|
||||
|
||||
|
|
@ -327,8 +329,8 @@ static void enetc_get_tx_tstamp(struct enetc_hw *hw, union enetc_tx_bd *txbd,
|
|||
{
|
||||
u32 lo, hi, tstamp_lo;
|
||||
|
||||
lo = enetc_rd(hw, ENETC_SICTR0);
|
||||
hi = enetc_rd(hw, ENETC_SICTR1);
|
||||
lo = enetc_rd_hot(hw, ENETC_SICTR0);
|
||||
hi = enetc_rd_hot(hw, ENETC_SICTR1);
|
||||
tstamp_lo = le32_to_cpu(txbd->wb.tstamp);
|
||||
if (lo <= tstamp_lo)
|
||||
hi -= 1;
|
||||
|
|
@ -342,6 +344,12 @@ static void enetc_tstamp_tx(struct sk_buff *skb, u64 tstamp)
|
|||
if (skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) {
|
||||
memset(&shhwtstamps, 0, sizeof(shhwtstamps));
|
||||
shhwtstamps.hwtstamp = ns_to_ktime(tstamp);
|
||||
/* Ensure skb_mstamp_ns, which might have been populated with
|
||||
* the txtime, is not mistaken for a software timestamp,
|
||||
* because this will prevent the dispatch of our hardware
|
||||
* timestamp to the socket.
|
||||
*/
|
||||
skb->tstamp = ktime_set(0, 0);
|
||||
skb_tstamp_tx(skb, &shhwtstamps);
|
||||
}
|
||||
}
|
||||
|
|
@ -358,9 +366,7 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
|
|||
i = tx_ring->next_to_clean;
|
||||
tx_swbd = &tx_ring->tx_swbd[i];
|
||||
|
||||
enetc_lock_mdio();
|
||||
bds_to_clean = enetc_bd_ready_count(tx_ring, i);
|
||||
enetc_unlock_mdio();
|
||||
|
||||
do_tstamp = false;
|
||||
|
||||
|
|
@ -403,8 +409,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
|
|||
tx_swbd = tx_ring->tx_swbd;
|
||||
}
|
||||
|
||||
enetc_lock_mdio();
|
||||
|
||||
/* BD iteration loop end */
|
||||
if (is_eof) {
|
||||
tx_frm_cnt++;
|
||||
|
|
@ -415,8 +419,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
|
|||
|
||||
if (unlikely(!bds_to_clean))
|
||||
bds_to_clean = enetc_bd_ready_count(tx_ring, i);
|
||||
|
||||
enetc_unlock_mdio();
|
||||
}
|
||||
|
||||
tx_ring->next_to_clean = i;
|
||||
|
|
@ -527,9 +529,8 @@ static void enetc_get_rx_tstamp(struct net_device *ndev,
|
|||
static void enetc_get_offloads(struct enetc_bdr *rx_ring,
|
||||
union enetc_rx_bd *rxbd, struct sk_buff *skb)
|
||||
{
|
||||
#ifdef CONFIG_FSL_ENETC_PTP_CLOCK
|
||||
struct enetc_ndev_priv *priv = netdev_priv(rx_ring->ndev);
|
||||
#endif
|
||||
|
||||
/* TODO: hashing */
|
||||
if (rx_ring->ndev->features & NETIF_F_RXCSUM) {
|
||||
u16 inet_csum = le16_to_cpu(rxbd->r.inet_csum);
|
||||
|
|
@ -538,12 +539,31 @@ static void enetc_get_offloads(struct enetc_bdr *rx_ring,
|
|||
skb->ip_summed = CHECKSUM_COMPLETE;
|
||||
}
|
||||
|
||||
/* copy VLAN to skb, if one is extracted, for now we assume it's a
|
||||
* standard TPID, but HW also supports custom values
|
||||
*/
|
||||
if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN)
|
||||
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
|
||||
le16_to_cpu(rxbd->r.vlan_opt));
|
||||
if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN) {
|
||||
__be16 tpid = 0;
|
||||
|
||||
switch (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_TPID) {
|
||||
case 0:
|
||||
tpid = htons(ETH_P_8021Q);
|
||||
break;
|
||||
case 1:
|
||||
tpid = htons(ETH_P_8021AD);
|
||||
break;
|
||||
case 2:
|
||||
tpid = htons(enetc_port_rd(&priv->si->hw,
|
||||
ENETC_PCVLANR1));
|
||||
break;
|
||||
case 3:
|
||||
tpid = htons(enetc_port_rd(&priv->si->hw,
|
||||
ENETC_PCVLANR2));
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
__vlan_hwaccel_put_tag(skb, tpid, le16_to_cpu(rxbd->r.vlan_opt));
|
||||
}
|
||||
|
||||
#ifdef CONFIG_FSL_ENETC_PTP_CLOCK
|
||||
if (priv->active_offloads & ENETC_F_RX_TSTAMP)
|
||||
enetc_get_rx_tstamp(rx_ring->ndev, rxbd, skb);
|
||||
|
|
@ -660,8 +680,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
|
|||
u32 bd_status;
|
||||
u16 size;
|
||||
|
||||
enetc_lock_mdio();
|
||||
|
||||
if (cleaned_cnt >= ENETC_RXBD_BUNDLE) {
|
||||
int count = enetc_refill_rx_ring(rx_ring, cleaned_cnt);
|
||||
|
||||
|
|
@ -672,19 +690,15 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
|
|||
|
||||
rxbd = enetc_rxbd(rx_ring, i);
|
||||
bd_status = le32_to_cpu(rxbd->r.lstatus);
|
||||
if (!bd_status) {
|
||||
enetc_unlock_mdio();
|
||||
if (!bd_status)
|
||||
break;
|
||||
}
|
||||
|
||||
enetc_wr_reg_hot(rx_ring->idr, BIT(rx_ring->index));
|
||||
dma_rmb(); /* for reading other rxbd fields */
|
||||
size = le16_to_cpu(rxbd->r.buf_len);
|
||||
skb = enetc_map_rx_buff_to_skb(rx_ring, i, size);
|
||||
if (!skb) {
|
||||
enetc_unlock_mdio();
|
||||
if (!skb)
|
||||
break;
|
||||
}
|
||||
|
||||
enetc_get_offloads(rx_ring, rxbd, skb);
|
||||
|
||||
|
|
@ -696,7 +710,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
|
|||
|
||||
if (unlikely(bd_status &
|
||||
ENETC_RXBD_LSTATUS(ENETC_RXBD_ERR_MASK))) {
|
||||
enetc_unlock_mdio();
|
||||
dev_kfree_skb(skb);
|
||||
while (!(bd_status & ENETC_RXBD_LSTATUS_F)) {
|
||||
dma_rmb();
|
||||
|
|
@ -736,8 +749,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
|
|||
|
||||
enetc_process_skb(rx_ring, skb);
|
||||
|
||||
enetc_unlock_mdio();
|
||||
|
||||
napi_gro_receive(napi, skb);
|
||||
|
||||
rx_frm_cnt++;
|
||||
|
|
@ -984,7 +995,7 @@ static void enetc_free_rxtx_rings(struct enetc_ndev_priv *priv)
|
|||
enetc_free_tx_ring(priv->tx_ring[i]);
|
||||
}
|
||||
|
||||
static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
{
|
||||
int size = cbdr->bd_count * sizeof(struct enetc_cbd);
|
||||
|
||||
|
|
@ -1005,7 +1016,7 @@ static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
{
|
||||
int size = cbdr->bd_count * sizeof(struct enetc_cbd);
|
||||
|
||||
|
|
@ -1013,7 +1024,7 @@ static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|||
cbdr->bd_base = NULL;
|
||||
}
|
||||
|
||||
static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
||||
void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
||||
{
|
||||
/* set CBDR cache attributes */
|
||||
enetc_wr(hw, ENETC_SICAR2,
|
||||
|
|
@ -1033,7 +1044,7 @@ static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
|||
cbdr->cir = hw->reg + ENETC_SICBDRCIR;
|
||||
}
|
||||
|
||||
static void enetc_clear_cbdr(struct enetc_hw *hw)
|
||||
void enetc_clear_cbdr(struct enetc_hw *hw)
|
||||
{
|
||||
enetc_wr(hw, ENETC_SICBDRMR, 0);
|
||||
}
|
||||
|
|
@ -1058,13 +1069,12 @@ static int enetc_setup_default_rss_table(struct enetc_si *si, int num_groups)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int enetc_configure_si(struct enetc_ndev_priv *priv)
|
||||
int enetc_configure_si(struct enetc_ndev_priv *priv)
|
||||
{
|
||||
struct enetc_si *si = priv->si;
|
||||
struct enetc_hw *hw = &si->hw;
|
||||
int err;
|
||||
|
||||
enetc_setup_cbdr(hw, &si->cbd_ring);
|
||||
/* set SI cache attributes */
|
||||
enetc_wr(hw, ENETC_SICAR0,
|
||||
ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT);
|
||||
|
|
@ -1112,6 +1122,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
|
|||
if (err)
|
||||
return err;
|
||||
|
||||
enetc_setup_cbdr(&si->hw, &si->cbd_ring);
|
||||
|
||||
priv->cls_rules = kcalloc(si->num_fs_entries, sizeof(*priv->cls_rules),
|
||||
GFP_KERNEL);
|
||||
if (!priv->cls_rules) {
|
||||
|
|
@ -1119,14 +1131,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
|
|||
goto err_alloc_cls;
|
||||
}
|
||||
|
||||
err = enetc_configure_si(priv);
|
||||
if (err)
|
||||
goto err_config_si;
|
||||
|
||||
return 0;
|
||||
|
||||
err_config_si:
|
||||
kfree(priv->cls_rules);
|
||||
err_alloc_cls:
|
||||
enetc_clear_cbdr(&si->hw);
|
||||
enetc_free_cbdr(priv->dev, &si->cbd_ring);
|
||||
|
|
@ -1212,7 +1218,8 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
|
|||
rx_ring->idr = hw->reg + ENETC_SIRXIDR;
|
||||
|
||||
enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring));
|
||||
enetc_wr(hw, ENETC_SIRXIDR, rx_ring->next_to_use);
|
||||
/* update ENETC's consumer index */
|
||||
enetc_rxbdr_wr(hw, idx, ENETC_RBCIR, rx_ring->next_to_use);
|
||||
|
||||
/* enable ring */
|
||||
enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr);
|
||||
|
|
|
|||
|
|
@ -292,6 +292,7 @@ void enetc_get_si_caps(struct enetc_si *si);
|
|||
void enetc_init_si_rings_params(struct enetc_ndev_priv *priv);
|
||||
int enetc_alloc_si_resources(struct enetc_ndev_priv *priv);
|
||||
void enetc_free_si_resources(struct enetc_ndev_priv *priv);
|
||||
int enetc_configure_si(struct enetc_ndev_priv *priv);
|
||||
|
||||
int enetc_open(struct net_device *ndev);
|
||||
int enetc_close(struct net_device *ndev);
|
||||
|
|
@ -309,6 +310,10 @@ int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
|
|||
void enetc_set_ethtool_ops(struct net_device *ndev);
|
||||
|
||||
/* control buffer descriptor ring (CBDR) */
|
||||
int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
|
||||
void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
|
||||
void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr);
|
||||
void enetc_clear_cbdr(struct enetc_hw *hw);
|
||||
int enetc_set_mac_flt_entry(struct enetc_si *si, int index,
|
||||
char *mac_addr, int si_map);
|
||||
int enetc_clear_mac_flt_entry(struct enetc_si *si, int index);
|
||||
|
|
|
|||
|
|
@ -172,6 +172,8 @@ enum enetc_bdr_type {TX, RX};
|
|||
#define ENETC_PSIPMAR0(n) (0x0100 + (n) * 0x8) /* n = SI index */
|
||||
#define ENETC_PSIPMAR1(n) (0x0104 + (n) * 0x8)
|
||||
#define ENETC_PVCLCTR 0x0208
|
||||
#define ENETC_PCVLANR1 0x0210
|
||||
#define ENETC_PCVLANR2 0x0214
|
||||
#define ENETC_VLAN_TYPE_C BIT(0)
|
||||
#define ENETC_VLAN_TYPE_S BIT(1)
|
||||
#define ENETC_PVCLCTR_OVTPIDL(bmp) ((bmp) & 0xff) /* VLAN_TYPE */
|
||||
|
|
@ -232,14 +234,23 @@ enum enetc_bdr_type {TX, RX};
|
|||
#define ENETC_PM0_MAXFRM 0x8014
|
||||
#define ENETC_SET_TX_MTU(val) ((val) << 16)
|
||||
#define ENETC_SET_MAXFRM(val) ((val) & 0xffff)
|
||||
#define ENETC_PM0_RX_FIFO 0x801c
|
||||
#define ENETC_PM0_RX_FIFO_VAL 1
|
||||
|
||||
#define ENETC_PM_IMDIO_BASE 0x8030
|
||||
|
||||
#define ENETC_PM0_IF_MODE 0x8300
|
||||
#define ENETC_PMO_IFM_RG BIT(2)
|
||||
#define ENETC_PM0_IFM_RG BIT(2)
|
||||
#define ENETC_PM0_IFM_RLP (BIT(5) | BIT(11))
|
||||
#define ENETC_PM0_IFM_RGAUTO (BIT(15) | ENETC_PMO_IFM_RG | BIT(1))
|
||||
#define ENETC_PM0_IFM_XGMII BIT(12)
|
||||
#define ENETC_PM0_IFM_EN_AUTO BIT(15)
|
||||
#define ENETC_PM0_IFM_SSP_MASK GENMASK(14, 13)
|
||||
#define ENETC_PM0_IFM_SSP_1000 (2 << 13)
|
||||
#define ENETC_PM0_IFM_SSP_100 (0 << 13)
|
||||
#define ENETC_PM0_IFM_SSP_10 (1 << 13)
|
||||
#define ENETC_PM0_IFM_FULL_DPX BIT(12)
|
||||
#define ENETC_PM0_IFM_IFMODE_MASK GENMASK(1, 0)
|
||||
#define ENETC_PM0_IFM_IFMODE_XGMII 0
|
||||
#define ENETC_PM0_IFM_IFMODE_GMII 2
|
||||
#define ENETC_PSIDCAPR 0x1b08
|
||||
#define ENETC_PSIDCAPR_MSK GENMASK(15, 0)
|
||||
#define ENETC_PSFCAPR 0x1b18
|
||||
|
|
@ -453,6 +464,8 @@ static inline u64 _enetc_rd_reg64_wa(void __iomem *reg)
|
|||
#define enetc_wr_reg(reg, val) _enetc_wr_reg_wa((reg), (val))
|
||||
#define enetc_rd(hw, off) enetc_rd_reg((hw)->reg + (off))
|
||||
#define enetc_wr(hw, off, val) enetc_wr_reg((hw)->reg + (off), val)
|
||||
#define enetc_rd_hot(hw, off) enetc_rd_reg_hot((hw)->reg + (off))
|
||||
#define enetc_wr_hot(hw, off, val) enetc_wr_reg_hot((hw)->reg + (off), val)
|
||||
#define enetc_rd64(hw, off) _enetc_rd_reg64_wa((hw)->reg + (off))
|
||||
/* port register accessors - PF only */
|
||||
#define enetc_port_rd(hw, off) enetc_rd_reg((hw)->port + (off))
|
||||
|
|
@ -568,6 +581,7 @@ union enetc_rx_bd {
|
|||
#define ENETC_RXBD_LSTATUS(flags) ((flags) << 16)
|
||||
#define ENETC_RXBD_FLAG_VLAN BIT(9)
|
||||
#define ENETC_RXBD_FLAG_TSTMP BIT(10)
|
||||
#define ENETC_RXBD_FLAG_TPID GENMASK(1, 0)
|
||||
|
||||
#define ENETC_MAC_ADDR_FILT_CNT 8 /* # of supported entries per port */
|
||||
#define EMETC_MAC_ADDR_FILT_RES 3 /* # of reserved entries at the beginning */
|
||||
|
|
|
|||
|
|
@ -190,7 +190,6 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
|
|||
{
|
||||
struct enetc_ndev_priv *priv = netdev_priv(ndev);
|
||||
struct enetc_pf *pf = enetc_si_priv(priv->si);
|
||||
char vlan_promisc_simap = pf->vlan_promisc_simap;
|
||||
struct enetc_hw *hw = &priv->si->hw;
|
||||
bool uprom = false, mprom = false;
|
||||
struct enetc_mac_filter *filter;
|
||||
|
|
@ -203,16 +202,12 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
|
|||
psipmr = ENETC_PSIPMR_SET_UP(0) | ENETC_PSIPMR_SET_MP(0);
|
||||
uprom = true;
|
||||
mprom = true;
|
||||
/* Enable VLAN promiscuous mode for SI0 (PF) */
|
||||
vlan_promisc_simap |= BIT(0);
|
||||
} else if (ndev->flags & IFF_ALLMULTI) {
|
||||
/* enable multi cast promisc mode for SI0 (PF) */
|
||||
psipmr = ENETC_PSIPMR_SET_MP(0);
|
||||
mprom = true;
|
||||
}
|
||||
|
||||
enetc_set_vlan_promisc(&pf->si->hw, vlan_promisc_simap);
|
||||
|
||||
/* first 2 filter entries belong to PF */
|
||||
if (!uprom) {
|
||||
/* Update unicast filters */
|
||||
|
|
@ -320,7 +315,7 @@ static void enetc_set_loopback(struct net_device *ndev, bool en)
|
|||
u32 reg;
|
||||
|
||||
reg = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
|
||||
if (reg & ENETC_PMO_IFM_RG) {
|
||||
if (reg & ENETC_PM0_IFM_RG) {
|
||||
/* RGMII mode */
|
||||
reg = (reg & ~ENETC_PM0_IFM_RLP) |
|
||||
(en ? ENETC_PM0_IFM_RLP : 0);
|
||||
|
|
@ -495,17 +490,30 @@ static void enetc_configure_port_mac(struct enetc_hw *hw)
|
|||
|
||||
enetc_port_wr(hw, ENETC_PM1_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN |
|
||||
ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC);
|
||||
|
||||
/* On LS1028A, the MAC RX FIFO defaults to 2, which is too high
|
||||
* and may lead to RX lock-up under traffic. Set it to 1 instead,
|
||||
* as recommended by the hardware team.
|
||||
*/
|
||||
enetc_port_wr(hw, ENETC_PM0_RX_FIFO, ENETC_PM0_RX_FIFO_VAL);
|
||||
}
|
||||
|
||||
static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode)
|
||||
{
|
||||
/* set auto-speed for RGMII */
|
||||
if (enetc_port_rd(hw, ENETC_PM0_IF_MODE) & ENETC_PMO_IFM_RG ||
|
||||
phy_interface_mode_is_rgmii(phy_mode))
|
||||
enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_RGAUTO);
|
||||
u32 val;
|
||||
|
||||
if (phy_mode == PHY_INTERFACE_MODE_USXGMII)
|
||||
enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_XGMII);
|
||||
if (phy_interface_mode_is_rgmii(phy_mode)) {
|
||||
val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
|
||||
val &= ~ENETC_PM0_IFM_EN_AUTO;
|
||||
val &= ENETC_PM0_IFM_IFMODE_MASK;
|
||||
val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG;
|
||||
enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
|
||||
}
|
||||
|
||||
if (phy_mode == PHY_INTERFACE_MODE_USXGMII) {
|
||||
val = ENETC_PM0_IFM_FULL_DPX | ENETC_PM0_IFM_IFMODE_XGMII;
|
||||
enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
|
||||
}
|
||||
}
|
||||
|
||||
static void enetc_mac_enable(struct enetc_hw *hw, bool en)
|
||||
|
|
@ -937,6 +945,34 @@ static void enetc_pl_mac_config(struct phylink_config *config,
|
|||
phylink_set_pcs(priv->phylink, &pf->pcs->pcs);
|
||||
}
|
||||
|
||||
static void enetc_force_rgmii_mac(struct enetc_hw *hw, int speed, int duplex)
|
||||
{
|
||||
u32 old_val, val;
|
||||
|
||||
old_val = val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
|
||||
|
||||
if (speed == SPEED_1000) {
|
||||
val &= ~ENETC_PM0_IFM_SSP_MASK;
|
||||
val |= ENETC_PM0_IFM_SSP_1000;
|
||||
} else if (speed == SPEED_100) {
|
||||
val &= ~ENETC_PM0_IFM_SSP_MASK;
|
||||
val |= ENETC_PM0_IFM_SSP_100;
|
||||
} else if (speed == SPEED_10) {
|
||||
val &= ~ENETC_PM0_IFM_SSP_MASK;
|
||||
val |= ENETC_PM0_IFM_SSP_10;
|
||||
}
|
||||
|
||||
if (duplex == DUPLEX_FULL)
|
||||
val |= ENETC_PM0_IFM_FULL_DPX;
|
||||
else
|
||||
val &= ~ENETC_PM0_IFM_FULL_DPX;
|
||||
|
||||
if (val == old_val)
|
||||
return;
|
||||
|
||||
enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
|
||||
}
|
||||
|
||||
static void enetc_pl_mac_link_up(struct phylink_config *config,
|
||||
struct phy_device *phy, unsigned int mode,
|
||||
phy_interface_t interface, int speed,
|
||||
|
|
@ -949,6 +985,10 @@ static void enetc_pl_mac_link_up(struct phylink_config *config,
|
|||
if (priv->active_offloads & ENETC_F_QBV)
|
||||
enetc_sched_speed_set(priv, speed);
|
||||
|
||||
if (!phylink_autoneg_inband(mode) &&
|
||||
phy_interface_mode_is_rgmii(interface))
|
||||
enetc_force_rgmii_mac(&pf->si->hw, speed, duplex);
|
||||
|
||||
enetc_mac_enable(&pf->si->hw, true);
|
||||
}
|
||||
|
||||
|
|
@ -1041,6 +1081,26 @@ static int enetc_init_port_rss_memory(struct enetc_si *si)
|
|||
return err;
|
||||
}
|
||||
|
||||
static void enetc_init_unused_port(struct enetc_si *si)
|
||||
{
|
||||
struct device *dev = &si->pdev->dev;
|
||||
struct enetc_hw *hw = &si->hw;
|
||||
int err;
|
||||
|
||||
si->cbd_ring.bd_count = ENETC_CBDR_DEFAULT_SIZE;
|
||||
err = enetc_alloc_cbdr(dev, &si->cbd_ring);
|
||||
if (err)
|
||||
return;
|
||||
|
||||
enetc_setup_cbdr(hw, &si->cbd_ring);
|
||||
|
||||
enetc_init_port_rfs_memory(si);
|
||||
enetc_init_port_rss_memory(si);
|
||||
|
||||
enetc_clear_cbdr(hw);
|
||||
enetc_free_cbdr(dev, &si->cbd_ring);
|
||||
}
|
||||
|
||||
static int enetc_pf_probe(struct pci_dev *pdev,
|
||||
const struct pci_device_id *ent)
|
||||
{
|
||||
|
|
@ -1051,11 +1111,6 @@ static int enetc_pf_probe(struct pci_dev *pdev,
|
|||
struct enetc_pf *pf;
|
||||
int err;
|
||||
|
||||
if (node && !of_device_is_available(node)) {
|
||||
dev_info(&pdev->dev, "device is disabled, skipping\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
err = enetc_pci_probe(pdev, KBUILD_MODNAME, sizeof(*pf));
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "PCI probing failed\n");
|
||||
|
|
@ -1069,6 +1124,13 @@ static int enetc_pf_probe(struct pci_dev *pdev,
|
|||
goto err_map_pf_space;
|
||||
}
|
||||
|
||||
if (node && !of_device_is_available(node)) {
|
||||
enetc_init_unused_port(si);
|
||||
dev_info(&pdev->dev, "device is disabled, skipping\n");
|
||||
err = -ENODEV;
|
||||
goto err_device_disabled;
|
||||
}
|
||||
|
||||
pf = enetc_si_priv(si);
|
||||
pf->si = si;
|
||||
pf->total_vfs = pci_sriov_get_totalvfs(pdev);
|
||||
|
|
@ -1108,6 +1170,12 @@ static int enetc_pf_probe(struct pci_dev *pdev,
|
|||
goto err_init_port_rss;
|
||||
}
|
||||
|
||||
err = enetc_configure_si(priv);
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "Failed to configure SI\n");
|
||||
goto err_config_si;
|
||||
}
|
||||
|
||||
err = enetc_alloc_msix(priv);
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "MSIX alloc failed\n");
|
||||
|
|
@ -1136,6 +1204,7 @@ err_phylink_create:
|
|||
enetc_mdiobus_destroy(pf);
|
||||
err_mdiobus_create:
|
||||
enetc_free_msix(priv);
|
||||
err_config_si:
|
||||
err_init_port_rss:
|
||||
err_init_port_rfs:
|
||||
err_alloc_msix:
|
||||
|
|
@ -1144,6 +1213,7 @@ err_alloc_si_res:
|
|||
si->ndev = NULL;
|
||||
free_netdev(ndev);
|
||||
err_alloc_netdev:
|
||||
err_device_disabled:
|
||||
err_map_pf_space:
|
||||
enetc_pci_remove(pdev);
|
||||
|
||||
|
|
|
|||
|
|
@ -171,6 +171,12 @@ static int enetc_vf_probe(struct pci_dev *pdev,
|
|||
goto err_alloc_si_res;
|
||||
}
|
||||
|
||||
err = enetc_configure_si(priv);
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "Failed to configure SI\n");
|
||||
goto err_config_si;
|
||||
}
|
||||
|
||||
err = enetc_alloc_msix(priv);
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "MSIX alloc failed\n");
|
||||
|
|
@ -187,6 +193,7 @@ static int enetc_vf_probe(struct pci_dev *pdev,
|
|||
|
||||
err_reg_netdev:
|
||||
enetc_free_msix(priv);
|
||||
err_config_si:
|
||||
err_alloc_msix:
|
||||
enetc_free_si_resources(priv);
|
||||
err_alloc_si_res:
|
||||
|
|
|
|||
|
|
@ -377,9 +377,16 @@ static int fec_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
|
|||
u64 ns;
|
||||
unsigned long flags;
|
||||
|
||||
mutex_lock(&adapter->ptp_clk_mutex);
|
||||
/* Check the ptp clock */
|
||||
if (!adapter->ptp_clk_on) {
|
||||
mutex_unlock(&adapter->ptp_clk_mutex);
|
||||
return -EINVAL;
|
||||
}
|
||||
spin_lock_irqsave(&adapter->tmreg_lock, flags);
|
||||
ns = timecounter_read(&adapter->tc);
|
||||
spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
|
||||
mutex_unlock(&adapter->ptp_clk_mutex);
|
||||
|
||||
*ts = ns_to_timespec64(ns);
|
||||
|
||||
|
|
|
|||
|
|
@ -2390,6 +2390,10 @@ static bool gfar_add_rx_frag(struct gfar_rx_buff *rxb, u32 lstatus,
|
|||
if (lstatus & BD_LFLAG(RXBD_LAST))
|
||||
size -= skb->len;
|
||||
|
||||
WARN(size < 0, "gianfar: rx fragment size underflow");
|
||||
if (size < 0)
|
||||
return false;
|
||||
|
||||
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
|
||||
rxb->page_offset + RXBUF_ALIGNMENT,
|
||||
size, GFAR_RXB_TRUESIZE);
|
||||
|
|
@ -2552,6 +2556,17 @@ static int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue,
|
|||
if (lstatus & BD_LFLAG(RXBD_EMPTY))
|
||||
break;
|
||||
|
||||
/* lost RXBD_LAST descriptor due to overrun */
|
||||
if (skb &&
|
||||
(lstatus & BD_LFLAG(RXBD_FIRST))) {
|
||||
/* discard faulty buffer */
|
||||
dev_kfree_skb(skb);
|
||||
skb = NULL;
|
||||
rx_queue->stats.rx_dropped++;
|
||||
|
||||
/* can continue normally */
|
||||
}
|
||||
|
||||
/* order rx buffer descriptor reads */
|
||||
rmb();
|
||||
|
||||
|
|
|
|||
|
|
@ -1663,8 +1663,10 @@ static int hns_nic_clear_all_rx_fetch(struct net_device *ndev)
|
|||
for (j = 0; j < fetch_num; j++) {
|
||||
/* alloc one skb and init */
|
||||
skb = hns_assemble_skb(ndev);
|
||||
if (!skb)
|
||||
if (!skb) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
rd = &tx_ring_data(priv, skb->queue_mapping);
|
||||
hns_nic_net_xmit_hw(ndev, skb, rd);
|
||||
|
||||
|
|
|
|||
|
|
@ -1053,16 +1053,16 @@ struct hclge_fd_tcam_config_3_cmd {
|
|||
#define HCLGE_FD_AD_DROP_B 0
|
||||
#define HCLGE_FD_AD_DIRECT_QID_B 1
|
||||
#define HCLGE_FD_AD_QID_S 2
|
||||
#define HCLGE_FD_AD_QID_M GENMASK(12, 2)
|
||||
#define HCLGE_FD_AD_QID_M GENMASK(11, 2)
|
||||
#define HCLGE_FD_AD_USE_COUNTER_B 12
|
||||
#define HCLGE_FD_AD_COUNTER_NUM_S 13
|
||||
#define HCLGE_FD_AD_COUNTER_NUM_M GENMASK(20, 13)
|
||||
#define HCLGE_FD_AD_NXT_STEP_B 20
|
||||
#define HCLGE_FD_AD_NXT_KEY_S 21
|
||||
#define HCLGE_FD_AD_NXT_KEY_M GENMASK(26, 21)
|
||||
#define HCLGE_FD_AD_NXT_KEY_M GENMASK(25, 21)
|
||||
#define HCLGE_FD_AD_WR_RULE_ID_B 0
|
||||
#define HCLGE_FD_AD_RULE_ID_S 1
|
||||
#define HCLGE_FD_AD_RULE_ID_M GENMASK(13, 1)
|
||||
#define HCLGE_FD_AD_RULE_ID_M GENMASK(12, 1)
|
||||
#define HCLGE_FD_AD_TC_OVRD_B 16
|
||||
#define HCLGE_FD_AD_TC_SIZE_S 17
|
||||
#define HCLGE_FD_AD_TC_SIZE_M GENMASK(20, 17)
|
||||
|
|
|
|||
|
|
@ -5245,9 +5245,9 @@ static bool hclge_fd_convert_tuple(u32 tuple_bit, u8 *key_x, u8 *key_y,
|
|||
case BIT(INNER_SRC_MAC):
|
||||
for (i = 0; i < ETH_ALEN; i++) {
|
||||
calc_x(key_x[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
|
||||
rule->tuples.src_mac[i]);
|
||||
rule->tuples_mask.src_mac[i]);
|
||||
calc_y(key_y[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
|
||||
rule->tuples.src_mac[i]);
|
||||
rule->tuples_mask.src_mac[i]);
|
||||
}
|
||||
|
||||
return true;
|
||||
|
|
@ -6330,8 +6330,7 @@ static void hclge_fd_get_ext_info(struct ethtool_rx_flow_spec *fs,
|
|||
fs->h_ext.vlan_tci = cpu_to_be16(rule->tuples.vlan_tag1);
|
||||
fs->m_ext.vlan_tci =
|
||||
rule->unused_tuple & BIT(INNER_VLAN_TAG_FST) ?
|
||||
cpu_to_be16(VLAN_VID_MASK) :
|
||||
cpu_to_be16(rule->tuples_mask.vlan_tag1);
|
||||
0 : cpu_to_be16(rule->tuples_mask.vlan_tag1);
|
||||
}
|
||||
|
||||
if (fs->flow_type & FLOW_MAC_EXT) {
|
||||
|
|
|
|||
|
|
@ -1905,10 +1905,9 @@ static int ibmvnic_set_mac(struct net_device *netdev, void *p)
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;

if (adapter->state != VNIC_PROBED) {
ether_addr_copy(adapter->mac_addr, addr->sa_data);
ether_addr_copy(adapter->mac_addr, addr->sa_data);
if (adapter->state != VNIC_PROBED)
rc = __ibmvnic_set_mac(netdev, addr->sa_data);
}

return rc;
}

@ -5218,16 +5217,14 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
{
struct device *dev = &adapter->vdev->dev;
unsigned long timeout = msecs_to_jiffies(20000);
u64 old_num_rx_queues, old_num_tx_queues;
u64 old_num_rx_queues = adapter->req_rx_queues;
u64 old_num_tx_queues = adapter->req_tx_queues;
int rc;

adapter->from_passive_init = false;

if (reset) {
old_num_rx_queues = adapter->req_rx_queues;
old_num_tx_queues = adapter->req_tx_queues;
if (reset)
reinit_completion(&adapter->init_done);
}

adapter->init_done_rc = 0;
rc = ibmvnic_send_crq_init(adapter);

@ -5410,9 +5407,9 @@ static void ibmvnic_remove(struct vio_dev *dev)
* after setting state, so __ibmvnic_reset() which is called
* from the flush_work() below, can make progress.
*/
spin_lock_irqsave(&adapter->rwi_lock, flags);
spin_lock(&adapter->rwi_lock);
adapter->state = VNIC_REMOVING;
spin_unlock_irqrestore(&adapter->rwi_lock, flags);
spin_unlock(&adapter->rwi_lock);

spin_unlock_irqrestore(&adapter->state_lock, flags);

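Note on the ibmvnic_remove() hunk: the rwi_lock is taken while state_lock is already held with spin_lock_irqsave(..., flags), so re-saving into the same flags variable for the inner lock would clobber the interrupt state that the outer unlock later restores; taking the inner lock with plain spin_lock() avoids that. A rough userspace analogue of the clobbering pattern, with made-up helpers standing in for the kernel primitives:

    #include <stdio.h>

    /* Pretend irq_state == 1 means "interrupts enabled". lock_save() saves
     * the current state into *flags and disables; unlock_restore() puts the
     * saved state back. Reusing one flags variable for a nested lock_save()
     * would overwrite the outer saved state with "disabled".
     */
    static unsigned long irq_state = 1;

    static void lock_save(unsigned long *flags) { *flags = irq_state; irq_state = 0; }
    static void unlock_restore(unsigned long flags) { irq_state = flags; }

    int main(void)
    {
        unsigned long outer;

        lock_save(&outer);     /* outer lock: saves "enabled", disables     */
        /* inner lock here is taken without saving, as in the fixed code    */
        /* ... critical section ... */
        unlock_restore(outer); /* restores the state the outer lock saved   */

        printf("irq state after unlock: %lu (expected 1)\n", irq_state);
        return 0;
    }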
@ -1776,7 +1776,8 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
goto err_alloc;
}

if (iavf_process_config(adapter))
err = iavf_process_config(adapter);
if (err)
goto err_alloc;
adapter->current_op = VIRTCHNL_OP_UNKNOWN;

@ -575,6 +575,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
return -EINVAL;
}

if (xs->props.mode != XFRM_MODE_TRANSPORT) {
netdev_err(dev, "Unsupported mode for ipsec offload\n");
return -EINVAL;
}

if (ixgbe_ipsec_check_mgmt_ip(xs)) {
netdev_err(dev, "IPsec IP addr clash with mgmt filters\n");
return -EINVAL;

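Note on the ixgbe hunk above (and the matching ixgbevf hunk further down): the offload path only handles transport-mode SAs, so anything else is now rejected up front instead of being programmed into hardware incorrectly. A small userspace analogue of the added check, with illustrative names rather than the driver's:

    #include <stdio.h>
    #include <errno.h>

    enum demo_xfrm_mode { DEMO_MODE_TRANSPORT, DEMO_MODE_TUNNEL };

    /* Reject unsupported modes before doing any SA setup, mirroring the
     * diff's early -EINVAL return.
     */
    static int demo_add_sa(enum demo_xfrm_mode mode)
    {
        if (mode != DEMO_MODE_TRANSPORT) {
            fprintf(stderr, "Unsupported mode for ipsec offload\n");
            return -EINVAL;
        }
        return 0; /* would continue with SA setup here */
    }

    int main(void)
    {
        printf("transport: %d\n", demo_add_sa(DEMO_MODE_TRANSPORT)); /* 0       */
        printf("tunnel:    %d\n", demo_add_sa(DEMO_MODE_TUNNEL));    /* -EINVAL */
        return 0;
    }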
@ -9565,8 +9565,10 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
ixgbe_atr_compute_perfect_hash_82599(&input->filter, mask);
err = ixgbe_fdir_write_perfect_filter_82599(hw, &input->filter,
input->sw_idx, queue);
if (!err)
ixgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
if (err)
goto err_out_w_lock;

ixgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
spin_unlock(&adapter->fdir_perfect_lock);

if ((uhtid != 0x800) && (adapter->jump_tables[uhtid]))

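Note on the clsu32 hunk: instead of only updating the ethtool entry when the hardware write succeeded and then continuing regardless, a failure now jumps to the existing unlock-and-bail label. A compact sketch of that single-exit pattern under a held lock (placeholder names, pthread used as a stand-in for the spinlock):

    #include <stdio.h>
    #include <pthread.h>

    static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Placeholder for the hardware write; flip the return value to exercise
     * the error path.
     */
    static int write_filter(void) { return 0; }

    static int configure_filter(void)
    {
        int err;

        pthread_mutex_lock(&demo_lock);

        err = write_filter();
        if (err)
            goto err_out_w_lock;    /* bail out, but still unlock below */

        /* only reached on success, mirroring the reordered update call */
        printf("filter table updated\n");

    err_out_w_lock:
        pthread_mutex_unlock(&demo_lock);
        return err;
    }

    int main(void)
    {
        return configure_filter();
    }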
@ -272,6 +272,11 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
return -EINVAL;
}

if (xs->props.mode != XFRM_MODE_TRANSPORT) {
netdev_err(dev, "Unsupported mode for ipsec offload\n");
return -EINVAL;
}

if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
struct rx_sa rsa;

@ -56,7 +56,9 @@ static bool is_dev_rpm(void *cgxd)

bool is_lmac_valid(struct cgx *cgx, int lmac_id)
{
return cgx && test_bit(lmac_id, &cgx->lmac_bmap);
if (!cgx || lmac_id < 0 || lmac_id >= MAX_LMAC_PER_CGX)
return false;
return test_bit(lmac_id, &cgx->lmac_bmap);
}

struct mac_ops *get_mac_ops(void *cgxd)

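Note on the is_lmac_valid() hunk: the index is now range-checked before it is ever used as a bit number, so a negative or oversized lmac_id can no longer reach test_bit(). A userspace analogue of the hardened check, with an illustrative bound in place of MAX_LMAC_PER_CGX:

    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_LMAC 4  /* stand-in for MAX_LMAC_PER_CGX */

    /* Reject a NULL map or an out-of-range index before testing the bit. */
    static bool lmac_valid(const unsigned long *bmap, int lmac_id)
    {
        if (!bmap || lmac_id < 0 || lmac_id >= MAX_LMAC)
            return false;
        return (*bmap >> lmac_id) & 1UL;
    }

    int main(void)
    {
        unsigned long bmap = 0x5;   /* LMACs 0 and 2 present */

        printf("%d %d %d %d\n",
               lmac_valid(&bmap, 0),    /* 1 */
               lmac_valid(&bmap, 1),    /* 0 */
               lmac_valid(&bmap, -1),   /* 0: undefined behaviour without the check */
               lmac_valid(&bmap, 99));  /* 0 */
        return 0;
    }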
@ -1225,8 +1225,6 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
goto push_new_skb;
}

desc_data.dma_addr = new_dma_addr;

/* We can't fail anymore at this point: it's safe to unmap the skb. */
mtk_star_dma_unmap_rx(priv, &desc_data);

@ -1236,6 +1234,9 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
desc_data.skb->dev = ndev;
netif_receive_skb(desc_data.skb);

/* update dma_addr for new skb */
desc_data.dma_addr = new_dma_addr;

push_new_skb:
desc_data.len = skb_tailroom(new_skb);
desc_data.skb = new_skb;

@ -47,7 +47,7 @@
#define EN_ETHTOOL_SHORT_MASK cpu_to_be16(0xffff)
#define EN_ETHTOOL_WORD_MASK cpu_to_be32(0xffffffff)

static int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
{
int i, t;
int err = 0;

@ -3554,6 +3554,8 @@ int mlx4_en_reset_config(struct net_device *dev,
en_err(priv, "Failed starting port\n");
}

if (!err)
err = mlx4_en_moderation_update(priv);
out:
mutex_unlock(&mdev->state_lock);
kfree(tmp);

@ -775,6 +775,7 @@ void mlx4_en_ptp_overflow_check(struct mlx4_en_dev *mdev);
#define DEV_FEATURE_CHANGED(dev, new_features, feature) \
((dev->features & feature) ^ (new_features & feature))

int mlx4_en_moderation_update(struct mlx4_en_priv *priv);
int mlx4_en_reset_config(struct net_device *dev,
struct hwtstamp_config ts_config,
netdev_features_t new_features);

@ -4430,6 +4430,7 @@ MLXSW_ITEM32(reg, ptys, ext_eth_proto_cap, 0x08, 0, 32);
#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4 BIT(20)
#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4 BIT(21)
#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4 BIT(22)
#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4 BIT(23)
#define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_CR BIT(27)
#define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_KR BIT(28)
#define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_SR BIT(29)

@ -1169,6 +1169,11 @@ static const struct mlxsw_sp1_port_link_mode mlxsw_sp1_port_link_mode[] = {
.mask_ethtool = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
.speed = SPEED_100000,
},
{
.mask = MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
.mask_ethtool = ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
.speed = SPEED_100000,
},
};

#define MLXSW_SP1_PORT_LINK_MODE_LEN ARRAY_SIZE(mlxsw_sp1_port_link_mode)

@ -5951,6 +5951,10 @@ mlxsw_sp_router_fib4_replace(struct mlxsw_sp *mlxsw_sp,
if (mlxsw_sp->router->aborted)
return 0;

if (fen_info->fi->nh &&
!mlxsw_sp_nexthop_obj_group_lookup(mlxsw_sp, fen_info->fi->nh->id))
return 0;

fib_node = mlxsw_sp_fib_node_get(mlxsw_sp, fen_info->tb_id,
&fen_info->dst, sizeof(fen_info->dst),
fen_info->dst_len,

@ -6601,6 +6605,9 @@ static int mlxsw_sp_router_fib6_replace(struct mlxsw_sp *mlxsw_sp,
if (mlxsw_sp_fib6_rt_should_ignore(rt))
return 0;

if (rt->nh && !mlxsw_sp_nexthop_obj_group_lookup(mlxsw_sp, rt->nh->id))
return 0;

fib_node = mlxsw_sp_fib_node_get(mlxsw_sp, rt->fib6_table->tb6_id,
&rt->fib6_dst.addr,
sizeof(rt->fib6_dst.addr),

@ -613,7 +613,8 @@ static const struct mlxsw_sx_port_link_mode mlxsw_sx_port_link_mode[] = {
{
.mask = MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4 |
MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4 |
MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4,
MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4 |
MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
.speed = 100000,
},
};

@ -2040,7 +2040,7 @@ lan743x_rx_trim_skb(struct sk_buff *skb, int frame_length)
dev_kfree_skb_irq(skb);
return NULL;
}
frame_length = max_t(int, 0, frame_length - RX_HEAD_PADDING - 2);
frame_length = max_t(int, 0, frame_length - RX_HEAD_PADDING - 4);
if (skb->len > frame_length) {
skb->tail -= skb->len - frame_length;
skb->len = frame_length;

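Note on the lan743x hunk: the trim now subtracts four trailing bytes (rather than two) in addition to the head padding, presumably because the hardware leaves that many extra bytes after the payload. A tiny worked example of the clamping arithmetic, with a made-up RX_HEAD_PADDING value:

    #include <stdio.h>

    #define RX_HEAD_PADDING 8   /* illustrative value, not the driver's */

    /* max(0, frame_length - RX_HEAD_PADDING - 4): strip the head padding and
     * four trailing bytes, never going negative.
     */
    static int trimmed_len(int frame_length)
    {
        int len = frame_length - RX_HEAD_PADDING - 4;

        return len > 0 ? len : 0;
    }

    int main(void)
    {
        printf("%d\n", trimmed_len(72)); /* 60 */
        printf("%d\n", trimmed_len(5));  /* 0, clamped */
        return 0;
    }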
@ -13,6 +13,7 @@ if NET_VENDOR_MICROSEMI

# Users should depend on NET_SWITCHDEV, HAS_IOMEM
config MSCC_OCELOT_SWITCH_LIB
select NET_DEVLINK
select REGMAP_MMIO
select PACKING
select PHYLIB

@ -540,13 +540,14 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
return -EOPNOTSUPP;
}

flow_rule_match_ipv4_addrs(rule, &match);

if (filter->block_id == VCAP_IS1 && *(u32 *)&match.mask->dst) {
NL_SET_ERR_MSG_MOD(extack,
"Key type S1_NORMAL cannot match on destination IP");
return -EOPNOTSUPP;
}

flow_rule_match_ipv4_addrs(rule, &match);
tmp = &filter->key.ipv4.sip.value.addr[0];
memcpy(tmp, &match.key->src, 4);

@ -767,7 +767,7 @@ static void r8168fp_adjust_ocp_cmd(struct rtl8169_private *tp, u32 *cmd, int typ
if (type == ERIAR_OOB &&
(tp->mac_version == RTL_GIGA_MAC_VER_52 ||
tp->mac_version == RTL_GIGA_MAC_VER_53))
*cmd |= 0x7f0 << 18;
*cmd |= 0xf70 << 18;
}

DECLARE_RTL_COND(rtl_eriar_cond)

@ -560,6 +560,8 @@ static struct sh_eth_cpu_data r7s72100_data = {
EESR_TDE,
.fdr_value = 0x0000070f,

.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,

.no_psr = 1,
.apr = 1,
.mpr = 1,

@ -780,6 +782,8 @@ static struct sh_eth_cpu_data r7s9210_data = {

.fdr_value = 0x0000070f,

.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,

.apr = 1,
.mpr = 1,
.tpauser = 1,

@ -1089,6 +1093,9 @@ static struct sh_eth_cpu_data sh771x_data = {
EESIPR_CEEFIP | EESIPR_CELFIP |
EESIPR_RRFIP | EESIPR_RTLFIP | EESIPR_RTSFIP |
EESIPR_PREIP | EESIPR_CERFIP,

.trscer_err_mask = DESC_I_RINT8,

.tsu = 1,
.dual_port = 1,
};

@ -233,6 +233,7 @@ static void common_default_data(struct plat_stmmacenet_data *plat)
static int intel_mgbe_common_data(struct pci_dev *pdev,
struct plat_stmmacenet_data *plat)
{
char clk_name[20];
int ret;
int i;

@ -301,8 +302,10 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
plat->eee_usecs_rate = plat->clk_ptp_rate;

/* Set system clock */
sprintf(clk_name, "%s-%s", "stmmac", pci_name(pdev));

plat->stmmac_clk = clk_register_fixed_rate(&pdev->dev,
"stmmac-clk", NULL, 0,
clk_name, NULL, 0,
plat->clk_ptp_rate);

if (IS_ERR(plat->stmmac_clk)) {

@ -446,8 +449,8 @@ static int tgl_common_data(struct pci_dev *pdev,
return intel_mgbe_common_data(pdev, plat);
}

static int tgl_sgmii_data(struct pci_dev *pdev,
struct plat_stmmacenet_data *plat)
static int tgl_sgmii_phy0_data(struct pci_dev *pdev,
struct plat_stmmacenet_data *plat)
{
plat->bus_id = 1;
plat->phy_interface = PHY_INTERFACE_MODE_SGMII;

@ -456,12 +459,26 @@ static int tgl_sgmii_data(struct pci_dev *pdev,
return tgl_common_data(pdev, plat);
}

static struct stmmac_pci_info tgl_sgmii1g_info = {
.setup = tgl_sgmii_data,
static struct stmmac_pci_info tgl_sgmii1g_phy0_info = {
.setup = tgl_sgmii_phy0_data,
};

static int adls_sgmii_data(struct pci_dev *pdev,
struct plat_stmmacenet_data *plat)
static int tgl_sgmii_phy1_data(struct pci_dev *pdev,
struct plat_stmmacenet_data *plat)
{
plat->bus_id = 2;
plat->phy_interface = PHY_INTERFACE_MODE_SGMII;
plat->serdes_powerup = intel_serdes_powerup;
plat->serdes_powerdown = intel_serdes_powerdown;
return tgl_common_data(pdev, plat);
}

static struct stmmac_pci_info tgl_sgmii1g_phy1_info = {
.setup = tgl_sgmii_phy1_data,
};

static int adls_sgmii_phy0_data(struct pci_dev *pdev,
struct plat_stmmacenet_data *plat)
{
plat->bus_id = 1;
plat->phy_interface = PHY_INTERFACE_MODE_SGMII;

@ -471,10 +488,24 @@ static int adls_sgmii_data(struct pci_dev *pdev,
return tgl_common_data(pdev, plat);
}

static struct stmmac_pci_info adls_sgmii1g_info = {
.setup = adls_sgmii_data,
static struct stmmac_pci_info adls_sgmii1g_phy0_info = {
.setup = adls_sgmii_phy0_data,
};

static int adls_sgmii_phy1_data(struct pci_dev *pdev,
struct plat_stmmacenet_data *plat)
{
plat->bus_id = 2;
plat->phy_interface = PHY_INTERFACE_MODE_SGMII;

/* SerDes power up and power down are done in BIOS for ADL */

return tgl_common_data(pdev, plat);
}

static struct stmmac_pci_info adls_sgmii1g_phy1_info = {
.setup = adls_sgmii_phy1_data,
};
static const struct stmmac_pci_func_data galileo_stmmac_func_data[] = {
{
.func = 6,

@ -756,11 +787,11 @@ static const struct pci_device_id intel_eth_pci_id_table[] = {
{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_RGMII1G_ID, &ehl_pse1_rgmii1g_info) },
{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII1G_ID, &ehl_pse1_sgmii1g_info) },
{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII2G5_ID, &ehl_pse1_sgmii1g_info) },
{ PCI_DEVICE_DATA(INTEL, TGL_SGMII1G_ID, &tgl_sgmii1g_info) },
{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_0_ID, &tgl_sgmii1g_info) },
{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_1_ID, &tgl_sgmii1g_info) },
{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_0_ID, &adls_sgmii1g_info) },
{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_1_ID, &adls_sgmii1g_info) },
{ PCI_DEVICE_DATA(INTEL, TGL_SGMII1G_ID, &tgl_sgmii1g_phy0_info) },
{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_0_ID, &tgl_sgmii1g_phy0_info) },
{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_1_ID, &tgl_sgmii1g_phy1_info) },
{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_0_ID, &adls_sgmii1g_phy0_info) },
{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_1_ID, &adls_sgmii1g_phy1_info) },
{}
};
MODULE_DEVICE_TABLE(pci, intel_eth_pci_id_table);

@ -402,19 +402,53 @@ static void dwmac4_rd_set_tx_ic(struct dma_desc *p)
p->des2 |= cpu_to_le32(TDES2_INTERRUPT_ON_COMPLETION);
}

static void dwmac4_display_ring(void *head, unsigned int size, bool rx)
static void dwmac4_display_ring(void *head, unsigned int size, bool rx,
dma_addr_t dma_rx_phy, unsigned int desc_size)
{
struct dma_desc *p = (struct dma_desc *)head;
dma_addr_t dma_addr;
int i;

pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");

for (i = 0; i < size; i++) {
pr_info("%03d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(p),
le32_to_cpu(p->des0), le32_to_cpu(p->des1),
le32_to_cpu(p->des2), le32_to_cpu(p->des3));
p++;
if (desc_size == sizeof(struct dma_desc)) {
struct dma_desc *p = (struct dma_desc *)head;

for (i = 0; i < size; i++) {
dma_addr = dma_rx_phy + i * sizeof(*p);
pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
i, &dma_addr,
le32_to_cpu(p->des0), le32_to_cpu(p->des1),
le32_to_cpu(p->des2), le32_to_cpu(p->des3));
p++;
}
} else if (desc_size == sizeof(struct dma_extended_desc)) {
struct dma_extended_desc *extp = (struct dma_extended_desc *)head;

for (i = 0; i < size; i++) {
dma_addr = dma_rx_phy + i * sizeof(*extp);
pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
i, &dma_addr,
le32_to_cpu(extp->basic.des0), le32_to_cpu(extp->basic.des1),
le32_to_cpu(extp->basic.des2), le32_to_cpu(extp->basic.des3),
le32_to_cpu(extp->des4), le32_to_cpu(extp->des5),
le32_to_cpu(extp->des6), le32_to_cpu(extp->des7));
extp++;
}
} else if (desc_size == sizeof(struct dma_edesc)) {
struct dma_edesc *ep = (struct dma_edesc *)head;

for (i = 0; i < size; i++) {
dma_addr = dma_rx_phy + i * sizeof(*ep);
pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
i, &dma_addr,
le32_to_cpu(ep->des4), le32_to_cpu(ep->des5),
le32_to_cpu(ep->des6), le32_to_cpu(ep->des7),
le32_to_cpu(ep->basic.des0), le32_to_cpu(ep->basic.des1),
le32_to_cpu(ep->basic.des2), le32_to_cpu(ep->basic.des3));
ep++;
}
} else {
pr_err("unsupported descriptor!");
}
}

@ -499,10 +533,15 @@ static void dwmac4_get_rx_header_len(struct dma_desc *p, unsigned int *len)
*len = le32_to_cpu(p->des2) & RDES2_HL;
}

static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool buf2_valid)
{
p->des2 = cpu_to_le32(lower_32_bits(addr));
p->des3 = cpu_to_le32(upper_32_bits(addr) | RDES3_BUFFER2_VALID_ADDR);
p->des3 = cpu_to_le32(upper_32_bits(addr));

if (buf2_valid)
p->des3 |= cpu_to_le32(RDES3_BUFFER2_VALID_ADDR);
else
p->des3 &= cpu_to_le32(~RDES3_BUFFER2_VALID_ADDR);
}

static void dwmac4_set_tbs(struct dma_edesc *p, u32 sec, u32 nsec)

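Note on the display_ring() rework above: instead of printing virt_to_phys() of the CPU pointer, each descriptor's DMA address is derived from the ring's DMA base plus the index times the descriptor size, which is what the hardware actually sees. A userspace analogue of that addressing scheme, with illustrative structure and field names:

    #include <stdio.h>
    #include <stdint.h>

    /* Minimal stand-in for a 4-word DMA descriptor. */
    struct demo_desc { uint32_t des0, des1, des2, des3; };

    /* Print each descriptor tagged with base + i * sizeof(desc), the DMA-side
     * address, rather than a CPU-derived physical address.
     */
    static void display_ring(const struct demo_desc *ring, unsigned int size,
                             uint64_t dma_base)
    {
        for (unsigned int i = 0; i < size; i++) {
            uint64_t dma_addr = dma_base + i * sizeof(*ring);

            printf("%03u [0x%llx]: 0x%x 0x%x 0x%x 0x%x\n",
                   i, (unsigned long long)dma_addr,
                   ring[i].des0, ring[i].des1, ring[i].des2, ring[i].des3);
        }
    }

    int main(void)
    {
        struct demo_desc ring[2] = { { 1, 2, 3, 4 }, { 5, 6, 7, 8 } };

        display_ring(ring, 2, 0xc0000000ULL);
        return 0;
    }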
@ -124,6 +124,23 @@ static void dwmac4_dma_init_channel(void __iomem *ioaddr,
ioaddr + DMA_CHAN_INTR_ENA(chan));
}

static void dwmac410_dma_init_channel(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg, u32 chan)
{
u32 value;

/* common channel control register config */
value = readl(ioaddr + DMA_CHAN_CONTROL(chan));
if (dma_cfg->pblx8)
value = value | DMA_BUS_MODE_PBL;

writel(value, ioaddr + DMA_CHAN_CONTROL(chan));

/* Mask interrupts by writing to CSR7 */
writel(DMA_CHAN_INTR_DEFAULT_MASK_4_10,
ioaddr + DMA_CHAN_INTR_ENA(chan));
}

static void dwmac4_dma_init(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg, int atds)
{

@ -523,7 +540,7 @@ const struct stmmac_dma_ops dwmac4_dma_ops = {
const struct stmmac_dma_ops dwmac410_dma_ops = {
.reset = dwmac4_dma_reset,
.init = dwmac4_dma_init,
.init_chan = dwmac4_dma_init_channel,
.init_chan = dwmac410_dma_init_channel,
.init_rx_chan = dwmac4_dma_init_rx_chan,
.init_tx_chan = dwmac4_dma_init_tx_chan,
.axi = dwmac4_dma_axi,

@ -53,10 +53,6 @@ void dwmac4_dma_stop_tx(void __iomem *ioaddr, u32 chan)

value &= ~DMA_CONTROL_ST;
writel(value, ioaddr + DMA_CHAN_TX_CONTROL(chan));

value = readl(ioaddr + GMAC_CONFIG);
value &= ~GMAC_CONFIG_TE;
writel(value, ioaddr + GMAC_CONFIG);
}

void dwmac4_dma_start_rx(void __iomem *ioaddr, u32 chan)

@ -292,7 +292,7 @@ static void dwxgmac2_get_rx_header_len(struct dma_desc *p, unsigned int *len)
*len = le32_to_cpu(p->des2) & XGMAC_RDES2_HL;
}

static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool is_valid)
{
p->des2 = cpu_to_le32(lower_32_bits(addr));
p->des3 = cpu_to_le32(upper_32_bits(addr));

@ -417,19 +417,22 @@ static int enh_desc_get_rx_timestamp_status(void *desc, void *next_desc,
}
}

static void enh_desc_display_ring(void *head, unsigned int size, bool rx)
static void enh_desc_display_ring(void *head, unsigned int size, bool rx,
dma_addr_t dma_rx_phy, unsigned int desc_size)
{
struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
dma_addr_t dma_addr;
int i;

pr_info("Extended %s descriptor ring:\n", rx ? "RX" : "TX");

for (i = 0; i < size; i++) {
u64 x;
dma_addr = dma_rx_phy + i * sizeof(*ep);

x = *(u64 *)ep;
pr_info("%03d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(ep),
pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
i, &dma_addr,
(unsigned int)x, (unsigned int)(x >> 32),
ep->basic.des2, ep->basic.des3);
ep++;

@ -78,7 +78,8 @@ struct stmmac_desc_ops {
/* get rx timestamp status */
int (*get_rx_timestamp_status)(void *desc, void *next_desc, u32 ats);
/* Display ring */
void (*display_ring)(void *head, unsigned int size, bool rx);
void (*display_ring)(void *head, unsigned int size, bool rx,
dma_addr_t dma_rx_phy, unsigned int desc_size);
/* set MSS via context descriptor */
void (*set_mss)(struct dma_desc *p, unsigned int mss);
/* get descriptor skbuff address */

@ -91,7 +92,7 @@ struct stmmac_desc_ops {
int (*get_rx_hash)(struct dma_desc *p, u32 *hash,
enum pkt_hash_types *type);
void (*get_rx_header_len)(struct dma_desc *p, unsigned int *len);
void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr);
void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr, bool buf2_valid);
void (*set_sarc)(struct dma_desc *p, u32 sarc_type);
void (*set_vlan_tag)(struct dma_desc *p, u16 tag, u16 inner_tag,
u32 inner_type);

@ -269,19 +269,22 @@ static int ndesc_get_rx_timestamp_status(void *desc, void *next_desc, u32 ats)
return 1;
}

static void ndesc_display_ring(void *head, unsigned int size, bool rx)
static void ndesc_display_ring(void *head, unsigned int size, bool rx,
dma_addr_t dma_rx_phy, unsigned int desc_size)
{
struct dma_desc *p = (struct dma_desc *)head;
dma_addr_t dma_addr;
int i;

pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");

for (i = 0; i < size; i++) {
u64 x;
dma_addr = dma_rx_phy + i * sizeof(*p);

x = *(u64 *)p;
pr_info("%03d [0x%x]: 0x%x 0x%x 0x%x 0x%x",
i, (unsigned int)virt_to_phys(p),
pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x",
i, &dma_addr,
(unsigned int)x, (unsigned int)(x >> 32),
p->des2, p->des3);
p++;

@ -1133,6 +1133,7 @@ static int stmmac_phy_setup(struct stmmac_priv *priv)
static void stmmac_display_rx_rings(struct stmmac_priv *priv)
{
u32 rx_cnt = priv->plat->rx_queues_to_use;
unsigned int desc_size;
void *head_rx;
u32 queue;

@ -1142,19 +1143,24 @@ static void stmmac_display_rx_rings(struct stmmac_priv *priv)

pr_info("\tRX Queue %u rings\n", queue);

if (priv->extend_desc)
if (priv->extend_desc) {
head_rx = (void *)rx_q->dma_erx;
else
desc_size = sizeof(struct dma_extended_desc);
} else {
head_rx = (void *)rx_q->dma_rx;
desc_size = sizeof(struct dma_desc);
}

/* Display RX ring */
stmmac_display_ring(priv, head_rx, priv->dma_rx_size, true);
stmmac_display_ring(priv, head_rx, priv->dma_rx_size, true,
rx_q->dma_rx_phy, desc_size);
}
}

static void stmmac_display_tx_rings(struct stmmac_priv *priv)
{
u32 tx_cnt = priv->plat->tx_queues_to_use;
unsigned int desc_size;
void *head_tx;
u32 queue;

@ -1164,14 +1170,19 @@ static void stmmac_display_tx_rings(struct stmmac_priv *priv)

pr_info("\tTX Queue %d rings\n", queue);

if (priv->extend_desc)
if (priv->extend_desc) {
head_tx = (void *)tx_q->dma_etx;
else if (tx_q->tbs & STMMAC_TBS_AVAIL)
desc_size = sizeof(struct dma_extended_desc);
} else if (tx_q->tbs & STMMAC_TBS_AVAIL) {
head_tx = (void *)tx_q->dma_entx;
else
desc_size = sizeof(struct dma_edesc);
} else {
head_tx = (void *)tx_q->dma_tx;
desc_size = sizeof(struct dma_desc);
}

stmmac_display_ring(priv, head_tx, priv->dma_tx_size, false);
stmmac_display_ring(priv, head_tx, priv->dma_tx_size, false,
tx_q->dma_tx_phy, desc_size);
}
}

@ -1303,9 +1314,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
return -ENOMEM;

buf->sec_addr = page_pool_get_dma_addr(buf->sec_page);
stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
} else {
buf->sec_page = NULL;
stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
}

buf->addr = page_pool_get_dma_addr(buf->page);

@ -1367,6 +1379,88 @@ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i)
}
}

/**
* stmmac_reinit_rx_buffers - reinit the RX descriptor buffer.
* @priv: driver private structure
* Description: this function is called to re-allocate a receive buffer, perform
* the DMA mapping and init the descriptor.
*/
static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv)
{
u32 rx_count = priv->plat->rx_queues_to_use;
u32 queue;
int i;

for (queue = 0; queue < rx_count; queue++) {
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];

for (i = 0; i < priv->dma_rx_size; i++) {
struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];

if (buf->page) {
page_pool_recycle_direct(rx_q->page_pool, buf->page);
buf->page = NULL;
}

if (priv->sph && buf->sec_page) {
page_pool_recycle_direct(rx_q->page_pool, buf->sec_page);
buf->sec_page = NULL;
}
}
}

for (queue = 0; queue < rx_count; queue++) {
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];

for (i = 0; i < priv->dma_rx_size; i++) {
struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
struct dma_desc *p;

if (priv->extend_desc)
p = &((rx_q->dma_erx + i)->basic);
else
p = rx_q->dma_rx + i;

if (!buf->page) {
buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
if (!buf->page)
goto err_reinit_rx_buffers;

buf->addr = page_pool_get_dma_addr(buf->page);
}

if (priv->sph && !buf->sec_page) {
buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);
if (!buf->sec_page)
goto err_reinit_rx_buffers;

buf->sec_addr = page_pool_get_dma_addr(buf->sec_page);
}

stmmac_set_desc_addr(priv, p, buf->addr);
if (priv->sph)
stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
else
stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
if (priv->dma_buf_sz == BUF_SIZE_16KiB)
stmmac_init_desc3(priv, p);
}
}

return;

err_reinit_rx_buffers:
do {
while (--i >= 0)
stmmac_free_rx_buffer(priv, queue, i);

if (queue == 0)
break;

i = priv->dma_rx_size;
} while (queue-- > 0);
}

/**
* init_dma_rx_desc_rings - init the RX descriptor rings
* @dev: net device structure

@ -3648,7 +3742,10 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
DMA_FROM_DEVICE);

stmmac_set_desc_addr(priv, p, buf->addr);
stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
if (priv->sph)
stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
else
stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
stmmac_refill_desc3(priv, rx_q, p);

rx_q->rx_count_frames++;

@ -3736,18 +3833,23 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
unsigned int count = 0, error = 0, len = 0;
int status = 0, coe = priv->hw->rx_csum;
unsigned int next_entry = rx_q->cur_rx;
unsigned int desc_size;
struct sk_buff *skb = NULL;

if (netif_msg_rx_status(priv)) {
void *rx_head;

netdev_dbg(priv->dev, "%s: descriptor ring:\n", __func__);
if (priv->extend_desc)
if (priv->extend_desc) {
rx_head = (void *)rx_q->dma_erx;
else
desc_size = sizeof(struct dma_extended_desc);
} else {
rx_head = (void *)rx_q->dma_rx;
desc_size = sizeof(struct dma_desc);
}

stmmac_display_ring(priv, rx_head, priv->dma_rx_size, true);
stmmac_display_ring(priv, rx_head, priv->dma_rx_size, true,
rx_q->dma_rx_phy, desc_size);
}
while (count < limit) {
unsigned int buf1_len = 0, buf2_len = 0;

@ -4315,24 +4417,27 @@ static int stmmac_set_mac_address(struct net_device *ndev, void *addr)
static struct dentry *stmmac_fs_dir;

static void sysfs_display_ring(void *head, int size, int extend_desc,
struct seq_file *seq)
struct seq_file *seq, dma_addr_t dma_phy_addr)
{
int i;
struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
struct dma_desc *p = (struct dma_desc *)head;
dma_addr_t dma_addr;

for (i = 0; i < size; i++) {
if (extend_desc) {
seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(ep),
dma_addr = dma_phy_addr + i * sizeof(*ep);
seq_printf(seq, "%d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
i, &dma_addr,
le32_to_cpu(ep->basic.des0),
le32_to_cpu(ep->basic.des1),
le32_to_cpu(ep->basic.des2),
le32_to_cpu(ep->basic.des3));
ep++;
} else {
seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(p),
dma_addr = dma_phy_addr + i * sizeof(*p);
seq_printf(seq, "%d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
i, &dma_addr,
le32_to_cpu(p->des0), le32_to_cpu(p->des1),
le32_to_cpu(p->des2), le32_to_cpu(p->des3));
p++;

@ -4360,11 +4465,11 @@ static int stmmac_rings_status_show(struct seq_file *seq, void *v)
if (priv->extend_desc) {
seq_printf(seq, "Extended descriptor ring:\n");
sysfs_display_ring((void *)rx_q->dma_erx,
priv->dma_rx_size, 1, seq);
priv->dma_rx_size, 1, seq, rx_q->dma_rx_phy);
} else {
seq_printf(seq, "Descriptor ring:\n");
sysfs_display_ring((void *)rx_q->dma_rx,
priv->dma_rx_size, 0, seq);
priv->dma_rx_size, 0, seq, rx_q->dma_rx_phy);
}
}

@ -4376,11 +4481,11 @@ static int stmmac_rings_status_show(struct seq_file *seq, void *v)
if (priv->extend_desc) {
seq_printf(seq, "Extended descriptor ring:\n");
sysfs_display_ring((void *)tx_q->dma_etx,
priv->dma_tx_size, 1, seq);
priv->dma_tx_size, 1, seq, tx_q->dma_tx_phy);
} else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) {
seq_printf(seq, "Descriptor ring:\n");
sysfs_display_ring((void *)tx_q->dma_tx,
priv->dma_tx_size, 0, seq);
priv->dma_tx_size, 0, seq, tx_q->dma_tx_phy);
}
}

@ -5144,13 +5249,16 @@ int stmmac_dvr_remove(struct device *dev)
netdev_info(priv->dev, "%s: removing driver", __func__);

stmmac_stop_all_dma(priv);

if (priv->plat->serdes_powerdown)
priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);

stmmac_mac_set(priv, priv->ioaddr, false);
netif_carrier_off(ndev);
unregister_netdev(ndev);

/* Serdes power down needs to happen after VLAN filter
* is deleted that is triggered by unregister_netdev().
*/
if (priv->plat->serdes_powerdown)
priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);

#ifdef CONFIG_DEBUG_FS
stmmac_exit_fs(ndev);
#endif

@ -5257,6 +5365,8 @@ static void stmmac_reset_queues_param(struct stmmac_priv *priv)
tx_q->cur_tx = 0;
tx_q->dirty_tx = 0;
tx_q->mss = 0;

netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue));
}
}

@ -5318,7 +5428,7 @@ int stmmac_resume(struct device *dev)
mutex_lock(&priv->lock);

stmmac_reset_queues_param(priv);

stmmac_reinit_rx_buffers(priv);
stmmac_free_tx_skbufs(priv);
stmmac_clear_descriptors(priv);

@ -3931,8 +3931,6 @@ static void niu_xmac_interrupt(struct niu *np)
mp->rx_mcasts += RXMAC_MC_FRM_CNT_COUNT;
if (val & XRXMAC_STATUS_RXBCAST_CNT_EXP)
mp->rx_bcasts += RXMAC_BC_FRM_CNT_COUNT;
if (val & XRXMAC_STATUS_RXBCAST_CNT_EXP)
mp->rx_bcasts += RXMAC_BC_FRM_CNT_COUNT;
if (val & XRXMAC_STATUS_RXHIST1_CNT_EXP)
mp->rx_hist_cnt1 += RXMAC_HIST_CNT1_COUNT;
if (val & XRXMAC_STATUS_RXHIST2_CNT_EXP)

@ -2044,6 +2044,7 @@ bdx_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/*bdx_hw_reset(priv); */
if (bdx_read_mac(priv)) {
pr_err("load MAC address failed\n");
err = -EFAULT;
goto err_out_iomap;
}
SET_NETDEV_DEV(ndev, &pdev->dev);

@ -171,11 +171,6 @@ static void sp_encaps(struct sixpack *sp, unsigned char *icp, int len)
goto out_drop;
}

if (len > sp->mtu) { /* sp->mtu = AX25_MTU = max. PACLEN = 256 */
msg = "oversized transmit packet!";
goto out_drop;
}

if (p[0] > 5) {
msg = "invalid KISS command";
goto out_drop;

@ -229,7 +229,7 @@ int netvsc_send(struct net_device *net,
bool xdp_tx);
void netvsc_linkstatus_callback(struct net_device *net,
struct rndis_message *resp,
void *data);
void *data, u32 data_buflen);
int netvsc_recv_callback(struct net_device *net,
struct netvsc_device *nvdev,
struct netvsc_channel *nvchan);

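Note on the netvsc prototype change: passing the buffer length alongside the data pointer lets the callback verify that the host-supplied message is long enough before dereferencing structures inside it. A userspace analogue of that kind of bounds check, with illustrative types and names:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    struct demo_status { uint32_t speed; };

    /* Validate that a structure at the given offset fits inside the buffer
     * before reading it; data_buflen is the piece of information the new
     * argument carries.
     */
    static int read_status(const uint8_t *data, uint32_t data_buflen,
                           uint32_t offset, struct demo_status *out)
    {
        if (offset > data_buflen || data_buflen - offset < sizeof(*out))
            return -1;  /* truncated or malformed message */

        memcpy(out, data + offset, sizeof(*out));
        return 0;
    }

    int main(void)
    {
        uint8_t buf[16] = { 0 };
        struct demo_status st;

        printf("ok: %d\n", read_status(buf, sizeof(buf), 8, &st));     /* 0  */
        printf("short: %d\n", read_status(buf, sizeof(buf), 14, &st)); /* -1 */
        return 0;
    }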
Some files were not shown because too many files have changed in this diff.