This simplifies __intel_set_mode() a little.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When using DMA, dwc2 requires buffers to be 4-byte aligned. Use
bounce buffers if they are not.
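For illustration, a minimal sketch of the bounce-buffer idea (hypothetical
helper, simplified from what the driver actually does):

  /*
   * Illustrative only -- not the actual dwc2 code. If a request buffer
   * is not 4-byte aligned, copy it into an aligned bounce buffer and
   * DMA to/from that instead (IN transfers copy back on completion).
   */
  static void *dwc2_maybe_bounce(void *buf, size_t len, gfp_t gfp)
  {
          void *bounce;

          if (IS_ALIGNED((unsigned long)buf, 4))
                  return buf;             /* already usable for DMA */

          bounce = kmalloc(len, gfp);     /* kmalloc memory is aligned */
          if (!bounce)
                  return NULL;

          memcpy(bounce, buf, len);       /* stage data for an OUT transfer */
          return bounce;
  }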
Tested-by: Robert Baldyga <r.baldyga@samsung.com>
Acked-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Mian Yousaf Kaukab <yousaf.kaukab@intel.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Remove dead code as well.
Tested-by: Robert Baldyga <r.baldyga@samsung.com>
Acked-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Mian Yousaf Kaukab <yousaf.kaukab@intel.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
The s3c_hsotg_process_req_feature() comment was not correct.
Tested-by: Robert Baldyga <r.baldyga@samsung.com>
Acked-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Gregory Herrero <gregory.herrero@intel.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Handle the SET_FEATURE TEST_MODE request sent by the host.
Slightly rework FEATURE request handling to allow parsing
request types other than Endpoint.
Also add a debugfs file to change the test mode value from user space.
Tested-by: Robert Baldyga <r.baldyga@samsung.com>
Acked-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Gregory Herrero <gregory.herrero@intel.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
When clearing HALT on an endpoint, req->complete of in-progress
requests must be called with locks released. A new request should only
be started if there is not already a pending request on the endpoint.
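The locking rule in question, sketched with illustrative names (not the
exact driver code path):

  /* req->complete may re-enter the driver and take the same lock,
   * so it has to run with the controller lock released. */
  spin_unlock(&hsotg->lock);
  usb_gadget_giveback_request(&hs_ep->ep, &hs_req->req);
  spin_lock(&hsotg->lock);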
Tested-by: Robert Baldyga <r.baldyga@samsung.com>
Acked-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Gregory Herrero <gregory.herrero@intel.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
When a remote wakeup happens during bus_suspend, the hcd needs to
resume its root hub.
Acked-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Gregory Herrero <gregory.herrero@intel.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
The check for the ack, and also any subsequent mmio access,
will serialize with setting the forcewake bit. Drop the
posting read as superfluous.
Note that on the put side we still want to keep the posting read,
as it ensures that the hw sees our forcewake release in a
timely manner and doesn't keep the hw powered up.
Comment from Chris:
On Wed, Jan 28, 2015 at 05:54:14PM +0200, Mika Kuoppala wrote:
> Ville Syrjälä <ville.syrjala@linux.intel.com> writes:
> > IIRC the posting read from same cache line actually fixed real bugs. So
> > I'm a bit worried about dropping them. But I suppose it's possible only
> > the _put side was important for those bugs.
>
> I found these:
>
> commit 6af2d180f8
> Author: Daniel Vetter <daniel.vetter@ffwll.ch>
> Date: Thu Jul 26 16:24:50 2012 +0200
>
> drm/i915: fix forcewake related hangs on snb
>
> commit 8dee3eea3c
> Author: Ben Widawsky <ben@bwidawsk.net>
> Date: Sat Sep 1 22:59:50 2012 -0700
>
> drm/i915: Never read FORCEWAKE
>
> https://bugs.freedesktop.org/show_bug.cgi?id=51738
> https://bugs.freedesktop.org/show_bug.cgi?id=52424
>
> The snb here seems to survive gem_dummy_reloc_loop and
> gem_ring_sync_loop in here with the get side posting removed.
Note that we kept the one associated with #52424, but judging by my
comments in #51738 the posting read is just a band-aid anyway, as a full
mb() itself was not adequate.
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: paste relevant review discussion in.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
intel_uncore_early_sanitize() will reset the forcewake registers. When
forcewake domains were introduced, the domain init was done after the
sanitization of the forcewake registers. And as the resetting of
registers uses the domain accessors, we tried to reset the forcewake
registers with uninitialized forcewake domains and failed.
Fix this by sanitizing after all the domains have been initialized. Do
per-domain clearing of the forcewake register on domain init so that
IVB can do early access to ECOBUS to determine the final configuration.
This regression was introduced in
commit 05a2fb157e
Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Date: Mon Jan 19 16:20:43 2015 +0200
drm/i915: Consolidate forcewake code
v2: Carve out ellc detect, fw_domain_reset for ivb/ecobus (Chris)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88805
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Darren Hart <dvhart@linux.intel.com> (v1)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Move the CHV check into vlv_set_rps_idle() to simplify the caller a bit.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If edac_mc_add_mc() fails then we should preserve the error code, but
instead the current code returns success.
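The fix amounts to propagating the return value instead of discarding it;
roughly (simplified, not the exact driver code):

  ret = edac_mc_add_mc(mci);
  if (ret)
          goto fail;      /* later: return ret to the caller, not 0 */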
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Link: http://lkml.kernel.org/r/20150128191351.GC10259@mwanda
Signed-off-by: Borislav Petkov <bp@suse.de>
rpcrdma_{de}register_internal() are used only in verbs.c now.
MAX_RPCRDMAHDR is no longer used and can be removed.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Use the new rpcrdma_alloc_regbuf() API to shrink the amount of
contiguous memory needed for a buffer pool by moving the zero
pad buffer into a regbuf.
This is for consistency with the other uses of internally
registered memory.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
The rr_base field is currently the buffer where RPC replies land.
An RPC/RDMA reply header lands in this buffer. In some cases an RPC
reply header also lands in this buffer, just after the RPC/RDMA
header.
The inline threshold is an agreed-on size limit for RDMA SEND
operations that pass from server to client. The sum of the
RPC/RDMA reply header size and the RPC reply header size must be
less than this threshold.
The largest RDMA RECV that the client should have to handle is the
size of the inline threshold. The receive buffer should thus be the
size of the inline threshold, and not related to RPCRDMA_MAX_SEGS.
RPC replies received via RDMA WRITE (long replies) are caught in
rq_rcv_buf, which is the second half of the RPC send buffer. That is,
such replies are not involved in any way with rr_base.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
The rl_base field is currently the buffer where each RPC/RDMA call
header is built.
The inline threshold is an agreed-on size limit for RDMA SEND
operations that pass between client and server. The sum of the
RPC/RDMA header size and the RPC header size must be less than or
equal to this threshold.
Increasing the r/wsize maximum will require MAX_SEGS to grow
significantly, but the inline threshold size won't change (both
sides agree on it). The server's inline threshold doesn't change.
Since an RPC/RDMA header can never be larger than the inline
threshold, make all RPC/RDMA header buffers the size of the
inline threshold.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Because internal memory registration is an expensive and synchronous
operation, xprtrdma pre-registers send and receive buffers at mount
time, and then re-uses them for each RPC.
A "hardway" allocation is a memory allocation and registration that
replaces a send buffer during the processing of an RPC. Hardway must
be done if the RPC send buffer is too small to accommodate an RPC's
call and reply headers.
For xprtrdma, each RPC send buffer is currently part of struct
rpcrdma_req so that xprt_rdma_free(), which is passed nothing but
the address of an RPC send buffer, can find its matching struct
rpcrdma_req and rpcrdma_rep quickly via container_of / offsetof.
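For reference, a simplified sketch of that container_of() trick (layout and
names are illustrative, not the real xprtrdma structures):

  struct demo_req {
          int     rl_flags;
          char    rl_sendbuf[1024];       /* RPC send buffer lives inside the req */
  };

  static struct demo_req *req_from_sendbuf(void *buffer)
  {
          /* Given only the send buffer address, recover the enclosing req. */
          return container_of(buffer, struct demo_req, rl_sendbuf);
  }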
That means that hardway currently has to replace a whole rpcrdma_req
when it replaces an RPC send buffer. This is often a fairly hefty
chunk of contiguous memory due to the size of the rl_segments array
and the fact that both the send and receive buffers are part of
struct rpcrdma_req.
Some obscure re-use of fields in rpcrdma_req is done so that
xprt_rdma_free() can detect replaced rpcrdma_req structs, and
restore the original.
This commit breaks apart the RPC send buffer and struct rpcrdma_req
so that increasing the size of the rl_segments array does not change
the alignment of each RPC send buffer. (Increasing rl_segments is
needed to bump up the maximum r/wsize for NFS/RDMA).
This change opens up some interesting possibilities for improving
the design of xprt_rdma_allocate().
xprt_rdma_allocate() is now the one place where RPC send buffers
are allocated or re-allocated, and they are now always left in place
by xprt_rdma_free().
A large re-allocation that includes both the rl_segments array and
the RPC send buffer is no longer needed. Send buffer re-allocation
becomes quite rare. Good send buffer alignment is guaranteed no
matter what the size of the rl_segments array is.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
There are several spots that allocate a buffer via kmalloc (usually
contiguously with another data structure) and then register that
buffer internally. I'd like to split the buffers out of these data
structures to allow the data structures to scale.
Start by adding functions that can kmalloc and register a buffer,
and can manage/preserve the buffer's associated ib_sge and ib_mr
fields.
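A hypothetical shape for such a registered-buffer wrapper (not the actual
xprtrdma definition) keeps the DMA bookkeeping next to the data:

  struct regbuf_sketch {
          struct ib_sge   sge;    /* addr/length/lkey used when posting */
          struct ib_mr    *mr;    /* registration handle, if one is needed */
          char            data[]; /* the kmalloc'd payload itself */
  };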
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Move the details of how to create and destroy rpcrdma_req and
rpcrdma_rep structures into helper functions.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Clean up: There is one call site for rpcrdma_buffer_create(). All of
the arguments there are fields of an rpcrdma_xprt.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Reduce stack footprint of the connection upcall handler function.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Device attributes are large, and are used in more than one place.
Stash a copy in dynamically allocated memory.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
If ib_query_qp() fails or the memory registration mode isn't
supported, don't leak the PD. An orphaned IB/core resource will
cause IB module removal to hang.
Fixes: bd7ed1d133 ("RPC/RDMA: check selected memory registration ...")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Clean up: The rep_func field always refers to rpcrdma_conn_func().
rep_func should have been removed by commit b45ccfd25d ("xprtrdma:
Remove MEMWINDOWS registration modes").
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Reduce work in the receive CQ handler, which can be run at hardware
interrupt level, by moving the RPC/RDMA credit update logic to the
RPC reply handler.
This has some additional benefits: More header sanity checking is
done before trusting the incoming credit value, and the receive CQ
handler no longer touches the RPC/RDMA header (the CPU stalls while
waiting for the header contents to be brought into the cache).
This further extends work begun by commit e7ce710a88 ("xprtrdma:
Avoid deadlock when credit window is reset").
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Clean up: Since commit 0ac531c183 ("xprtrdma: Remove REGISTER
memory registration mode"), the rl_mr pointer is no longer used
anywhere.
After removal, there's only a single member of the mr_chunk union,
so mr_chunk can be removed as well, in favor of a single pointer
field.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Clean up: This field is not used.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Clean up: Use consistent field names in struct rpcrdma_xprt.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Clean up: Replace naked integers with a documenting macro.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
xprtsock.c and the backchannel code display XIDs in host byte order.
Follow suit in xprtrdma.
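For example (a sketch; rq_xid is the on-the-wire, big-endian XID in
struct rpc_rqst):

  /* Convert from wire (big-endian) to host byte order before printing,
   * matching what xprtsock.c does. */
  dprintk("RPC: reply handler found xid 0x%08x\n", be32_to_cpu(rqst->rq_xid));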
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Clean up: Replace htonl and ntohl with the be32 equivalents.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Make it easier to grep the system log for specific error conditions.
The wc.opcode field is not included because opcode numbers are
sparse, and because wc.opcode is not necessarily valid when a
completion reports an error.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
A function pointer was not NULLed, causing kvm_vcpu_reload_apic_access_page to
go down the wrong path and OOPS when doing put_page(NULL).
This did not happen on old processors, only when setting the module option
explicitly.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
gpiolib uses a fixed string for the suffixes and defines it at 32 bytes.
Later in the code, snprintf() is used with this fixed value of 32. Using
sizeof() is safer in case the size of the suffixes is ever changed.
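The pattern being fixed, roughly (names simplified from the gpiolib code):

  static const char * const suffixes[] = { "gpios", "gpio" };
  char prop_name[32];

  /* Before: the buffer length was duplicated as a magic number. */
  snprintf(prop_name, 32, "%s-%s", con_id, suffixes[i]);

  /* After: sizeof() follows the declaration if it ever changes. */
  snprintf(prop_name, sizeof(prop_name), "%s-%s", con_id, suffixes[i]);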
Signed-off-by: Olliver Schinagl <oliver@schinagl.nl>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Also use goto labels for all failure paths in
edac_create_sysfs_mci_device and update meaningless labels.
Signed-off-by: Junjie Mao <junjie.mao@hotmail.com>
Link: http://lkml.kernel.org/r/BLU436-SMTP25291B6B612942A212AEBFE95300@phx.gbl
[ Boris: Use ! for 0 checks and add newlines for less crammed code. ]
Signed-off-by: Borislav Petkov <bp@suse.de>
The .pin_config_get/set operations are not supported in the qcom pinctrl
driver. As the pinconf core is smart enough, it doesn't complain
about that.
Signed-off-by: Stanimir Varbanov <svarbanov@mm-sol.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
On newer TLMM hardware blocks the registers are spread out, and
we need offsets wider than 16 bits to address them. Increase
the register offset variables to 32 bits.
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Stanimir Varbanov <svarbanov@mm-sol.com>
Reviewed-by: Bjorn Andersson <bjorn.andersson@sonymobile.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
The zynq and qcom-spmi pinctrl drivers both use pin_config_item arrays
to provide extra interfaces in debugfs. This structure and the
PCONFDUMP macro are not defined if CONFIG_DEBUG_FS is turned off,
so we get build errors like:
pinctrl/qcom/pinctrl-spmi-gpio.c:139:37: error: array type has incomplete element type
static const struct pin_config_item pmic_conf_items[ARRAY_SIZE(pmic_gpio_bindings)] = {
^
pinctrl/qcom/pinctrl-spmi-gpio.c:140:2: error: implicit declaration of function 'PCONFDUMP' [-Werror=implicit-function-declaration]
PCONFDUMP(PMIC_GPIO_CONF_PULL_UP, "pull up strength", NULL, true),
^
pinctrl/qcom/pinctrl-spmi-gpio.c:139:37: warning: 'pmic_conf_items' defined but not used [-Wunused-variable]
static const struct pin_config_item pmic_conf_items[ARRAY_SIZE(pmic_gpio_bindings)] = {
Lacking any better idea to solve this nicely, this patch uses #ifdef
to hide the structures, just like the pinctrl core does.
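The resulting shape, using the declarations quoted above:

  #ifdef CONFIG_DEBUG_FS
  static const struct pin_config_item pmic_conf_items[ARRAY_SIZE(pmic_gpio_bindings)] = {
          PCONFDUMP(PMIC_GPIO_CONF_PULL_UP, "pull up strength", NULL, true),
          /* ... remaining PCONFDUMP entries ... */
  };
  #endif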
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
In the startup function, enable the ssc clock, and in the shutdown
function, disable the clock. Also remove disabling the ssc in the
shutdown function, as the ssc is already disabled in the prepare function.
Signed-off-by: Bo Shen <voice.shen@atmel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
When the SSC works in DSP A mode, the data outputs/inputs are shifted
out on the falling edge, and the frame sync is sampled on the rising edge.
Reported-by: Songjun Wu <songjun.wu@atmel.com>
Signed-off-by: Bo Shen <voice.shen@atmel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Shift the I2S mode value by the necessary amount before writing the
registers. This makes Right Justified and PCM mode work in addition to
the default Left Justified mode.
Signed-off-by: Filip Brozovic <fbrozovic@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Fix the issue introduced by commit 3684940933 ("ASoC: tlv320aic3x: Add
TDM support").
The CTRLC register was not receiving the correct delay configuration,
which corrupts DSP_A audio mode.
Fixes: 3684940933 (ASoC: tlv320aic3x: Add TDM support)
Reported-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Fixes this compile warning:
drivers/iommu/omap-iommu.c: In function 'omap_iommu_map':
drivers/iommu/omap-iommu.c:1139:2: warning: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'phys_addr_t' [-Wformat=]
dev_dbg(dev, "mapping da 0x%lx to pa 0x%x size 0x%x\n", da, pa, bytes);
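One common way to print a phys_addr_t portably (not necessarily the exact
fix applied here) is the %pa specifier, which takes a pointer to the value:

  dev_dbg(dev, "mapping da 0x%lx to pa %pa size 0x%x\n", da, &pa, bytes);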
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
"Mic Bias" is DAPM_SUPPLY so it has to be connected in the route
accordingly.
Fixes audio capture on the board.
Reported-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Mark Brown <broonie@kernel.org>
When adding a new device the driver loops over all registered IOMMUs and
calls the ipmmu_find_utlbs() function to parse the DT iommus property.
The function returns an error when the IOMMU referenced in DT doesn't
match the current IOMMU. The caller incorrectly breaks from the loop
immediately when the error is reported, resulting in only the first
IOMMU being considered.
Fix this, and while at it move code that isn't specific to an IOMMU
instance out of the loop.
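The corrected loop shape, sketched (simplified; not the exact driver code):

  list_for_each_entry(mmu, &ipmmu_devices, list) {
          ret = ipmmu_find_utlbs(mmu, dev, utlbs, num_utlbs);
          if (ret < 0)
                  continue;       /* not this IOMMU -- keep scanning */
          break;                  /* found the IOMMU that serves the device */
  }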
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>