Add the BFQ-v7r8 I/O scheduler to 3.18.0.
The general structure is borrowed from CFQ, as is much of the code for
handling I/O contexts. Over time, several useful features have been
ported from CFQ as well (details in the changelog in README.BFQ). A
(bfq_)queue is associated with each task doing I/O on a device, and each
time a scheduling decision has to be made a queue is selected and served
until it expires.
- Slices are given in the service domain: tasks are assigned
budgets, measured in number of sectors. Once granted access to the
disk, a task must however consume its assigned budget within a
configurable maximum time (by default, the maximum possible value
of the budgets is automatically computed to comply with this timeout).
This allows the desired latency vs "throughput boosting" tradeoff
to be set.
- Budgets are scheduled according to a variant of WF2Q+, implemented
using an augmented rb-tree to take eligibility into account while
preserving an O(log N) overall complexity.
- A low-latency tunable is provided; if enabled, both interactive
and soft real-time applications are guaranteed a very low latency.
- Latency guarantees are preserved even in the presence of NCQ.
- A high throughput is achieved on flash-based devices as well,
while still preserving latency guarantees.
- BFQ features Early Queue Merge (EQM), a sort of fusion of the
cooperating-queue-merging and the preemption mechanisms present
in CFQ. EQM is in fact a unified mechanism that tries to get a
sequential read pattern, and hence a high throughput, with any
set of processes performing interleaved I/O over a contiguous
sequence of sectors.
- BFQ supports full hierarchical scheduling, exporting a cgroups
interface. Since each node has a full scheduler, each group can
be assigned its own weight.
- If the cgroups interface is not used, only I/O priorities can be
assigned to processes, with ioprio values mapped to weights
according to the relation weight = IOPRIO_BE_NR - ioprio (see the
sketch after this list).
- ioprio classes are served in strict priority order, i.e., lower
priority queues are not served as long as there are backlogged
higher priority queues. Among queues in the same class the
bandwidth is distributed in proportion to the weight of each
queue. A small amount of extra bandwidth is however guaranteed to
the Idle class, to prevent it from starving.
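As a concrete sketch of the ioprio-to-weight mapping mentioned above
(mirroring the stated relation; IOPRIO_BE_NR is 8 in mainline):

	/* ioprio 0 (highest) .. 7 (lowest) maps to weight 8 .. 1 */
	static inline int bfq_ioprio_to_weight(int ioprio)
	{
		return IOPRIO_BE_NR - ioprio;
	}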
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Tkkg1994 <luca.grifo@outlook.com>
Signed-off-by: djb77 <dwayne.bakewell@gmail.com>
Squashed commit of the following:
commit f49e14ccdcb6694ed27754e020057d27a8fcca07
Author: Andrei F <luxneb@gmail.com>
Date: Thu Nov 26 22:40:38 2015 +0100
elevator: Fix a race in elevator switching
commit d50235b7bc upstream.
There's a race between elevator switching and normal I/O operation,
because the allocation of struct elevator_queue and struct elevator_data
is not done as one atomic operation. So there is a window in which a
NULL ->elevator_data can be used.
For example:
Thread A:                          Thread B:
blk_queue_bio                      elevator_switch
  spin_lock_irq(q->queue_lock)       elevator_alloc
  elv_merge                          elevator_init_fn
Because elevator_alloc() is called without holding queue_lock,
->elevator_data is still NULL when, at the same time, thread A calls
elv_merge(), which needs information from elevator_data. Hence the crash.
Move elevator_alloc() into elevator_init_fn(), making the two
allocations atomic with respect to the queue lock.
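A sketch of what the fixed initialization path looks like after the
move, using the noop scheduler as the example (based on the upstream
fix; per-scheduler details vary):

	static int noop_init_queue(struct request_queue *q,
				   struct elevator_type *e)
	{
		struct noop_data *nd;
		struct elevator_queue *eq;

		eq = elevator_alloc(q, e);	/* allocated before taking the lock */
		if (!eq)
			return -ENOMEM;

		nd = kmalloc_node(sizeof(*nd), GFP_KERNEL, q->node);
		if (!nd) {
			kobject_put(&eq->kobj);
			return -ENOMEM;
		}
		eq->elevator_data = nd;
		INIT_LIST_HEAD(&nd->queue);

		spin_lock_irq(q->queue_lock);	/* publish both atomically */
		q->elevator = eq;
		spin_unlock_irq(q->queue_lock);
		return 0;
	}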
The bug is easy to reproduce with the following method:
1: dd if=/dev/sdb of=/dev/null
2: while true; do echo noop > scheduler; echo deadline > scheduler; done
The test case uses the same method.
Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Jonghwan Choi <jhbird.choi@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit daf22a727e64f1277b074442efb821366015ca72
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Jul 25 13:45:21 2013 +0300
block: row: Remove warning message from add_request
A regular priority queue is marked as "starved" if it skipped a dispatch
due to being empty. When a new request is added to a "starved" queue
it will be marked as urgent.
The removed WARN_ON warned about a supposedly impossible case, in which
a regular priority (read) queue was marked as starved but wasn't empty.
This case is in fact possible, as shown below:
If the device driver fetched a read request that is pending for
transmission and an URGENT request arrives, the fetched read is
reinserted into the scheduler. It is possible that the queue it is
reinserted into was marked as "starved" in the meanwhile, due to being
empty.
CRs-fixed: 517800
Change-Id: Iaae642ea0ed9c817c41745b0e8ae2217cc684f0c
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit dca47e75f1413d58e4f97ef638e5d4456c55bdce
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Tue Jul 2 14:43:13 2013 +0300
block: row: change hrtimer_cancel to hrtimer_try_to_cancel
Calling hrtimer_cancel with interrupts disabled can result in a livelock.
When flushing the plug list in the block layer, interrupts are disabled,
and an hrtimer is used when adding requests from that plug list to the
scheduler. In this code flow, if the hrtimer (which is used for idling)
is set, it is canceled by calling hrtimer_cancel. hrtimer_cancel performs
the following in an endless loop:
1. try to cancel the timer
2. if that fails - cpu_relax() and retry
The cancellation can fail if the timer function has already started;
since interrupts are disabled, it can never complete.
This patch reduces the number of times the hrtimer lock is taken while
interrupts are disabled by calling hrtimer_try_to_cancel instead. The
latter tries to cancel the timer just once and returns an error code if
it fails.
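The resulting call site, roughly (struct and field names are
illustrative, not the exact ROW code):

	if (hrtimer_active(&rd->idle_data.hr_timer))
		/* tries once; returns -1 if the callback is already
		 * running, in which case we simply give up instead of
		 * spinning with interrupts off */
		hrtimer_try_to_cancel(&rd->idle_data.hr_timer);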
CRs-fixed: 499887
Change-Id: I25f79c357426d72ad67c261ce7cb503ae97dc7b9
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit a6047b9d808eaa787e4df3107bea7536334856cd
Author: Lee Susman <lsusman@codeaurora.org>
Date: Sun Jun 23 16:27:40 2013 +0300
block: row-iosched idling triggered by readahead pages
In the current implementation idling is triggered only by request
insertion frequency. This heuristic is not very accurate and may be
triggered by random requests that shouldn't cause idling. This patch
uses the PG_readahead flag in struct page's flags, which indicates that
the page is part of a readahead window, to start idling upon dispatch of
a request associated with a readahead page.
The readahead flag is used together with the existing
insertion-frequency trigger. The frequency timer catches read requests
which are not part of a readahead window but are still part of a
sequential stream (and are therefore dispatched at small time intervals).
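A sketch of the dispatch-side check, assuming the request's first page
is representative (the helper name is illustrative):

	#include <linux/blkdev.h>
	#include <linux/page-flags.h>

	static bool row_rq_is_readahead(struct request *rq)
	{
		/* PG_readahead marks pages inside a readahead window */
		return rq->bio && bio_page(rq->bio) &&
		       PageReadahead(bio_page(rq->bio));
	}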
Change-Id: Icb7145199c007408de3f267645ccb842e051fd00
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
commit e70e4e8e1d1f111023dd2b2d0fc9237240cab9ab
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Wed May 1 14:35:20 2013 +0300
block: urgent: Fix dispatching of URGENT mechanism
There are cases in which blk_peek_request is called not from
blk_fetch_request; the URGENT request may then be started while the flag
q->dispatched_urgent is left un-updated.
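Conceptually, the fix moves the flag update into the peek path itself,
so every way a request leaves the scheduler is covered; a simplified
sketch:

	rq = blk_peek_request(q);
	if (rq && (rq->cmd_flags & REQ_URGENT)) {
		WARN_ON(q->dispatched_urgent);
		q->dispatched_urgent = true;
	}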
Change-Id: I4fb588823f1b2949160cbd3907f4729767932e12
CRs-fixed: 471736
CRs-fixed: 473036
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 0e36870f6a436840eed1782d0e85b4adb300b59f
Author: Maya Erez <merez@codeaurora.org>
Date: Sun Apr 14 15:19:52 2013 +0300
block: row: Fix starvation tolerance values
The current starvation tolerance values increase the boot time
since high priority SW requests are delayed by regular priority requests.
In order to overcome this, increase the starvation tolerance values.
Change-Id: I9947fca9927cbd39a1d41d4bd87069df679d3103
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
Signed-off-by: Maya Erez <merez@codeaurora.org>
commit 3cab8d28e735fdad300eda3bed703129ba05d70a
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Apr 11 14:57:15 2013 +0300
block: urgent request: Update dispatch_urgent in case of requeue/reinsert
The block layer implements a mechanism for verifying that the device
driver won't be notified of an URGENT request if there is already an
URGENT request in flight. This is due to the fact that interrupting an
URGENT request isn't efficient.
This patch fixes the above described mechanism for the case in which the
URGENT request was returned to the block layer for some reason: by
requeue or reinsert.
CRs-fixed: 473376, 473036, 471736
Change-Id: Ie8b8208230a302d4526068531616984825f1050d
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit e052e4574bb928b44e660b9679d23e14011b0b9d
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Mar 21 11:04:02 2013 +0200
block: row: Update sysfs functions
All ROW (time related) configurable parameters are stored in ms so there
is no need to convert from/to ms when reading/updating them via sysfs.
Change-Id: Ib6a1de54140b5d25696743da944c076dd6fc02ae
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
Conflicts:
block/row-iosched.c
commit 2c3203650c2109c18abb3b17a5114d54bb22e683
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Mar 21 13:02:07 2013 +0200
block: row: Prevent starvation of regular priority by high priority
At the moment, all REGULAR and LOW priority requests are starved as long
as there are HIGH priority requests to dispatch.
This patch prevents the above starvation by setting a limit on the
starvation REGULAR/LOW priority requests can tolerate.
Change-Id: Ibe24207982c2c55d75c0b0230f67e013d1106017
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit a5434f618d395a03fe19ef430a8c5747bad069f9
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Tue Mar 12 21:02:33 2013 +0200
block: urgent request: remove unnecessary urgent marking
An urgent request is marked by the scheduler in rq->cmd_flags with the
REQ_URGENT flag. There is no need to add an additional marking by
the block layer.
Change-Id: I05d5e9539d2f6c1bfa80240b0671db197a5d3b3f
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 3928fb74c2f78578c57913938644acb704b77586
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Tue Mar 12 21:17:18 2013 +0200
block: row: Re-design urgent request notification mechanism
When ROW scheduler reports to the block layer that there is an urgent
request pending, the device driver may decide to stop the transmission
of the current request in order to handle the urgent one. This is done
in order to reduce the latency of an urgent request. For example:
long WRITE may be stopped to handle an urgent READ.
This patch updates the ROW URGENT notification policy to comply with the
following:
- Don't notify URGENT if there is an un-completed URGENT request in the
driver.
- After notifying that an URGENT request is present, the next request
dispatched is the URGENT one.
- At any given moment only one request can be marked as URGENT,
independent of its location (driver or scheduler).
Other changes to the URGENT policy:
- Only READ queues are allowed to notify of a pending URGENT request.
CR fix:
If a pending urgent request (A) gets merged with another request (B),
A is removed from the scheduler queue but is not removed from
rd->pending_urgent_rq.
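A condensed sketch of the notification predicate implied by the rules
above (the field names are illustrative guesses, not the actual ROW
code):

	static bool row_urgent_pending(struct row_data *rd)
	{
		/* rule 1: an urgent request is still un-completed in
		 * the driver -- don't notify again */
		if (rd->urgent_in_flight)
			return false;
		/* rules 2+3: at most one request is marked urgent,
		 * and it is dispatched next */
		return rd->pending_urgent_rq != NULL;
	}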
CRs-Fixed: 453712
Change-Id: I321e8cf58e12a05b82edd2a03f52fcce7bc9a900
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 8912aa92e3d919ceabc72b2eddc829fc5e4bd7eb
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Jan 24 16:17:27 2013 +0200
block: row: Update initial values of ROW data structures
This patch sets the initial values of internal ROW
parameters.
Change-Id: I38132062a7fcbe2e58b9cc757e55caac64d013dc
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
[smuckle@codeaurora.org: ported from msm-3.7]
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
commit b709e1a8a56784cb83c2c31a4e7df574a6b29802
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Jan 24 15:08:40 2013 +0200
block: row: Don't notify URGENT if there are un-completed urgent req
When ROW scheduler reports to the block layer that there is an urgent
request pending, the device driver may decide to stop the transmission
of the current request in order to handle the urgent one. If the
currently transmitted request is itself urgent, we don't want it to be
stopped.
Due to the above, the ROW scheduler won't notify of an urgent request if
there are urgent requests in flight.
Change-Id: I2fa186d911b908ec7611682b378b9cdc48637ac7
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit eba966603cc8e6f8fb418bf702f5a6eca5f56f34
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Jan 24 04:01:59 2013 +0200
block: add REQ_URGENT to request flags
This patch adds a new flag to be used in the cmd_flags field of struct
request for marking a request as urgent.
An urgent request is one that should be given priority over the
currently handled (regular) request by the device driver. The decision
on a request's urgency is taken by the scheduler.
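The flag follows the usual pattern in include/linux/blk_types.h
(sketch; the exact enum position comes from the patch itself):

	enum rq_flag_bits {
		/* ... existing __REQ_* bits ... */
		__REQ_URGENT,		/* urgent request */
		__REQ_NR_BITS,		/* stops here */
	};
	#define REQ_URGENT	(1ULL << __REQ_URGENT)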
Change-Id: Ic20470987ef23410f1d0324f96f00578f7df8717
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
Conflicts:
include/linux/blk_types.h
commit 7c865ab1a9ae626d023d0b03ed7fbe5c57bcbe7c
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Jan 17 20:56:07 2013 +0200
block: row: Idling mechanism re-factoring
At the moment, idling in ROW is implemented by delayed work that uses
jiffies granularity, which is not very accurate. This patch replaces the
current idling mechanism implementation with the hrtimer API, which
gives nanosecond resolution (instead of jiffies).
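The shape of the replacement, as a sketch (timer field and callback
names are illustrative):

	hrtimer_init(&rd->idle_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	rd->idle_timer.function = row_idle_expired;
	/* nanosecond-resolution arming instead of jiffies-based
	 * delayed work */
	hrtimer_start(&rd->idle_timer, ms_to_ktime(rd->idle_time_ms),
		      HRTIMER_MODE_REL);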
Change-Id: I86c7b1776d035e1d81571894b300228c8b8f2d92
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 72ea1d39c04734bf5eb52117968704148d2da42f
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Wed Jan 23 17:15:49 2013 +0200
block: row: Dispatch requests according to their io-priority
This patch implements "application hints", a way for the issuing
application to notify the scheduler of the priority of its request.
This is done by setting the io-priority of the request.
The patch reuses the already existing mechanism of io-priorities
developed for CFQ. Please refer to Documentation/block/ioprio.txt for a
usage example and explanations.
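For reference, a minimal userspace example of setting the io-priority
the scheduler will see (there is no glibc wrapper, so the raw syscall
is used; the constants are copied from the kernel's ioprio definitions):

	#include <unistd.h>
	#include <sys/syscall.h>

	#define IOPRIO_CLASS_SHIFT	13
	#define IOPRIO_CLASS_BE		2
	#define IOPRIO_WHO_PROCESS	1

	int main(void)
	{
		/* best-effort class, priority level 0 (highest BE) */
		int prio = (IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) | 0;
		return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS,
			       0 /* current process */, prio);
	}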
Change-Id: I228ec8e52161b424242bb7bb133418dc8b73925a
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 9f8f3d2757788477656b1d25a3055ae11d97cee4
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Sat Jan 12 16:23:18 2013 +0200
block: row: Aggregate row_queue parameters to one structure
Each ROW queue has several parameters whose default values are defined
in separate arrays. This patch aggregates all the default values into
one array.
The values in question are:
- whether idling is enabled for the queue
- the queue quantum
- whether the queue may notify of an urgent request
Change-Id: I3821b0a042542295069b340406a16b1000873ec6
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit d84ad45f3077661cab5984cd2fb7d5ef2ff06e39
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Sat Jan 12 16:21:47 2013 +0200
block: row: fix sysfs functions - idle_time conversion
idle_time was updated to be stored in msec instead of jiffies,
so there is no need to convert the value when reading it from the user
or displaying it.
Change-Id: I58e074b204e90a90536d32199ac668112966e9cf
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 202b21e9daf7b8a097f97f764bb4ad4712c75fa7
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Sat Jan 12 16:21:12 2013 +0200
block: row: Insert dispatch_quantum into struct row_queue
There is really no point in keeping the dispatch quantum
of a queue outside of it. By moving it into the row_queue
structure we spare an extra level of indirection when accessing it.
Change-Id: Ic77571818b643e71f9aafbb2ca93d0a92158b199
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 58ca84f091faa6ff8c4f567b158be5d38f9a5c58
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Sun Jan 13 22:04:59 2013 +0200
block: row: Add some debug information on ROW queues
1. Add a counter for the number of requests in a queue.
2. Add a function to print queue status (the number of requests
currently in the queue and the number of requests already dispatched
in the current dispatch cycle).
Change-Id: I1e98b9ca33853e6e6a8ddc53240f6cd6981e6024
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 1bbb2c7ada5a647cab1f2306458d6cf9b821ddf7
Author: Subhash Jadavani <subhashj@codeaurora.org>
Date: Thu Jan 10 02:15:13 2013 +0530
block: blk-merge: don't merge the pages with non-contiguous descriptors
The blk_rq_map_sg() function merges physically contiguous pages into the
same scatter-gather node without checking whether their page descriptors
are contiguous.
When dma_map_sg() is later called on the scatter-gather list, it takes
the base page pointer from each node (one by one) and iterates through
all of the pages in the same sg node by incrementing the base page
pointer, on the assumption that physically contiguous pages also have
contiguous page descriptor addresses. This may not be true if the
SPARSEMEM config is enabled, in which case we may end up referring to an
invalid page descriptor.
The following table shows an example of physically contiguous pages
whose page descriptor addresses are not contiguous:
-------------------------------------------
| Page Descriptor  | Physical Address     |
-------------------------------------------
| 0xc1e43fdc       | 0xdffff000           |
| 0xc2052000       | 0xe0000000           |
-------------------------------------------
With this patch, the relevant blk-merge functions also check whether the
page descriptors of physically contiguous pages are address-contiguous;
if not, these pages are separated into different scatter-gather nodes.
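The essence of the added check, as a sketch (the helper name is
illustrative):

	#include <linux/mm.h>

	static bool pages_sg_mergeable(struct page *a, struct page *b)
	{
		/* physical contiguity alone is not enough under
		 * SPARSEMEM: the struct page descriptors must be
		 * adjacent too, or iterating from the base page
		 * pointer walks off into invalid descriptors */
		return page_to_phys(a) + PAGE_SIZE == page_to_phys(b) &&
		       a + 1 == b;
	}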
CRs-Fixed: 392141
Change-Id: I3601565e5569a69f06fb3af99061c4d4c23af241
Signed-off-by: Subhash Jadavani <subhashj@codeaurora.org>
Conflicts:
block/blk-merge.c
commit 9a9b428480c932ef8434d8b9bd3b7bafdcac3f84
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Dec 20 19:23:58 2012 +0200
row: Add support for urgent request handling
This patch adds support for handling urgent requests.
A ROW queue can be marked as "urgent", so that if it went un-served in
the last dispatch cycle and a request is added to it, it triggers
issuing an urgent-request notification to the block device driver.
The block device driver may choose to stop the transmission of the
current ongoing request to handle the urgent one. For example: a long
WRITE may be stopped to handle an urgent READ. This decreases READ
latency.
Change-Id: I84954c13f5e3b1b5caeadc9fe1f9aa21208cb35e
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 8d5ec526b7e70307d3c4ce587b714349f44c0be8
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Dec 6 13:17:19 2012 +0200
block:row: fix idling mechanism in ROW
This patch addresses the following issues found in the ROW idling
mechanism:
1. Fix the delay passed to queue_delayed_work (pass the actual delay
and not the time when to start the work)
2. Change the idle time and the idling-trigger frequency to be
HZ dependent (instead of using msecs_to_jiffies())
3. Destroy idle_workqueue() in queue_exit
Change-Id: If86513ad6b4be44fb7a860f29bd2127197d8d5bf
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
Conflicts:
block/row-iosched.c
commit c26a95811462b9ba8eca23b4ba2150e7b660ca40
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Tue Oct 30 08:33:06 2012 +0200
row: Adding support for reinsert already dispatched req
Add support for reinserting an already dispatched request back into the
scheduler's internal data structures.
The request will be reinserted back into the queue (head) it was
dispatched from, as if it had never been dispatched.
Change-Id: I70954df300774409c25b5821465fb3aa33d8feb5
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit a1a6f09cae0149d935bcea3f20d4acb6556d68f9
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Tue Dec 4 16:04:15 2012 +0200
block: Add API for urgent request handling
This patch adds support in the block & elevator layers for handling
urgent requests. The decision whether a request is urgent or not is
taken by the scheduler. The urgent-request notification is passed to the
underlying block device driver (eMMC for example). The block device
driver may decide to interrupt the currently running low-priority
request to serve the new urgent request. By doing so, READ latency is
greatly reduced in read & write collision scenarios.
Note that if the current scheduler doesn't implement the urgent request
mechanism, this code path is never activated.
Change-Id: I8aa74b9b45c0d3a2221bd4e82ea76eb4103e7cfa
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
Conflicts:
block/blk-core.c
commit 4e907d9d6079629d6ce61fbdfb1a629d3587e176
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Tue Dec 4 15:54:43 2012 +0200
block: Add support for reinsert a dispatched req
Add support for reinserting a dispatched request back to the
scheduler's internal data structures.
This capability is used by the device driver when it chooses to
interrupt the current request transmission and execute another (more
urgent) pending request. For example: interrupting a long write in order
to handle a pending read. The device driver re-inserts the
remaining write request back into the scheduler, to be rescheduled
for transmission later on.
Add an API for verifying whether the current scheduler supports the
reinsert mechanism; if it doesn't, this code path is never activated.
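A sketch of how such an API plausibly looks (the hook and helper names
are guesses, not necessarily the ones in the patch):

	int blk_reinsert_request(struct request_queue *q, struct request *rq)
	{
		struct elevator_queue *e = q->elevator;

		/* no hook means the scheduler lacks reinsert support */
		if (!e->type->ops.elevator_reinsert_req_fn)
			return -EPERM;
		return e->type->ops.elevator_reinsert_req_fn(q, rq);
	}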
Change-Id: I5c982a66b651ebf544aae60063ac8a340d79e67f
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit 0675c27faab797f7149893b84cc357aadb37c697
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Mon Oct 15 20:56:02 2012 +0200
block: ROW: Fix forced dispatch
This patch fixes forced dispatch in the ROW scheduling algorithm.
When the dispatch function is called with the forced flag on, we
can't delay the dispatch of the requests that are in scheduler queues.
Thus, when dispatch is called with the force flag turned on, we need to
cancel idling, or not idle at all.
Change-Id: I3aa0da33ad7b59c0731c696f1392b48525b52ddc
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
commit ce6acf59662d1bbe5663a64aef9fe1695b8bbe1b
Author: Tatyana Brokhman <tlinder@codeaurora.org>
Date: Thu Sep 20 10:46:10 2012 +0300
block: Adding ROW scheduling algorithm
This patch adds the implementation of a new scheduling algorithm - ROW.
The policy of this algorithm is to prioritize READ requests over WRITE
as much as possible without starving the WRITE requests.
Change-Id: I4ed52ea21d43b0e7c0769b2599779a3d3869c519
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
Signed-off-by: Tkkg1994 <luca.grifo@outlook.com>
Signed-off-by: djb77 <dwayne.bakewell@gmail.com>
cpufreq: cultivation: bring in initial release
- based off of CAF 4.4 commits
- uses per-cpu timers
- uses display_state for the screen-off timer,
with an option to set a different timer rate when the screen is off
- imported fastlane with threshold from blu_active
Signed-off-by: mydongistiny <jaysonedson@gmail.com>
cpufreq: cultivation: update with a few new tuneables
added in:
- go_lowspeed_load
- validate above_hispeed_delay
- check hispeed_freq is within the policy limits
- powersave_bias
version bump to 1.5
Signed-off-by: mydongistiny <jaysonedson@gmail.com>
cpufreq_notify_utilization - notify userspace about CPU utilization changes
This function is called every time the CPU load is evaluated by the
ondemand governor. It notifies userspace of CPU load changes via sysfs.
Some modules can benefit from the additional information cpufreq
governors use to make frequency-switch decisions.
This change lays down a basic framework that the governors can use
to report additional information (e.g. the CPU's load) to the clients
that subscribe to the cpufreq govinfo notifier chain.
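A hypothetical subscriber, to show the intended usage (the notifier id,
registration call, and govinfo struct layout are assumptions based on
the description above, not confirmed API):

	static int my_govinfo_cb(struct notifier_block *nb,
				 unsigned long event, void *data)
	{
		struct cpufreq_govinfo *info = data;

		pr_debug("cpu%u load %u\n", info->cpu, info->load);
		return NOTIFY_OK;
	}

	static struct notifier_block my_govinfo_nb = {
		.notifier_call = my_govinfo_cb,
	};

	/* at init time: */
	cpufreq_register_notifier(&my_govinfo_nb, CPUFREQ_GOVINFO_NOTIFIER);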
Change-Id: I511b4bdb7d12394a31ce5352ae47553861e49303
Signed-off-by: Rohit Gupta <rohgup@codeaurora.org>
[imaund@codeaurora.org: resolved context conflicts]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
sync_file_range(2) is documented to issue writeback only for pages that
are not currently being written. After all, the system call was created
so userspace could issue background writeout, so waiting for in-flight
IO is undesirable there. However commit
ee53a89 ("mm: do_sync_mapping_range integrity fix") switched
do_sync_mapping_range() and thus sync_file_range() to issue writeback in
WB_SYNC_ALL mode, since do_sync_mapping_range() was used by other code
relying on WB_SYNC_ALL semantics.
These days do_sync_mapping_range() is gone and we can switch
sync_file_range(2) back to issuing WB_SYNC_NONE writeback. That should
help PostgreSQL avoid large latency spikes when flushing data in the
background.
Andres measured a 20% increase in transactions per second on an SSD
disk.
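The heart of the change in sync_file_range(), roughly (matching the
description above):

	if (flags & SYNC_FILE_RANGE_WRITE) {
		/* background writeout: WB_SYNC_NONE skips pages
		 * already under IO instead of waiting on them */
		ret = __filemap_fdatawrite_range(mapping, offset,
						 endbyte, WB_SYNC_NONE);
		if (ret < 0)
			goto out_put;
	}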
Signed-off-by: Jan Kara <jack@suse.com>
Reported-by: Andres Freund <andres@anarazel.de>
Tested-By: Andres Freund <andres@anarazel.de>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dave Jones reported RCU stalls, overly long hrtimer interrupts, and
amazingly long NMI handlers from a trinity-induced workload involving
lots of concurrent sync() calls (https://lkml.org/lkml/2013/7/23/369).
There are any number of things that one might do to make sync() behave
better under high levels of contention, but it is also the case that
multiple concurrent sync() system calls can be satisfied by a single
sys_sync() invocation.
Given that this situation is reminiscent of rcu_barrier(), this commit
applies the rcu_barrier() approach to sys_sync(). This approach uses
a global mutex and a sequence counter. The mutex is held across the
sync() operation, which eliminates contention between concurrent sync()
operations. The counter is incremented at the beginning and end of
each sync() operation, so that it is odd while a sync() operation is in
progress and even otherwise, just like sequence locks.
The code that used to be in sys_sync() is now in do_sync(), and
sys_sync() now handles the concurrency. The sys_sync() function first
takes a snapshot of the counter, then acquires the mutex, and then takes
another snapshot of the counter. If the values of the two snapshots
indicate that a full do_sync() executed during the mutex acquisition,
the sys_sync() function releases the mutex and returns ("Our work is
done!"). Otherwise, sys_sync() increments the counter, invokes
do_sync(), and increments the counter again.
This approach allows a single call to do_sync() to satisfy an
arbitrarily large number of sync() system calls, which should eliminate
issues due to large numbers of concurrent invocations of the sync()
system call.
Changes since v1 (https://lkml.org/lkml/2013/7/24/683):
o Add a pair of memory barriers to keep the increments from
bleeding into the do_sync() code. (The failure probability
is insanely low, but when you have several hundred million
devices running Linux, you can expect several hundred instances
of one-in-a-million failures.)
o Actually CC some people who have experience in this area.
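A condensed sketch of the mechanism described above (not the verbatim
patch; the parity check follows from the counter being odd while a sync
is in progress):

	static DEFINE_MUTEX(sync_mutex);
	static unsigned long sync_seq;	/* odd while a sync is running */

	SYSCALL_DEFINE0(sync)
	{
		unsigned long snap = ACCESS_ONCE(sync_seq);

		mutex_lock(&sync_mutex);
		/* even snapshot: one full pass (+2) began after our
		 * request; odd snapshot: a sync was already in flight
		 * at snapshot time, so require the next full pass (+3) */
		if (ACCESS_ONCE(sync_seq) - snap >= ((snap & 1) ? 3 : 2)) {
			mutex_unlock(&sync_mutex);
			return 0;	/* our work is done */
		}
		sync_seq++;
		smp_mb();	/* keep the increment before do_sync() */
		do_sync();
		smp_mb();	/* keep do_sync() before the increment */
		sync_seq++;
		mutex_unlock(&sync_mutex);
		return 0;
	}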
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Curt Wohlgemuth <curtw@google.com>
Cc: Jens Axboe <jaxboe@fusionio.com>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Paul Reioux <reioux@gmail.com>
Enabling software CRCs on data blocks can carry a significant (about
30%) performance cost, and for other reasons may not always be desired.
CRC is a mechanism that aims to prevent data corruption, at the price of
roughly 30% of performance; if you disable it you improve performance,
but your data can be silently corrupted. Use it at your own risk.
SysFs interface:

	/sys/module/mmc_core/parameters/crc

Enable / disable CRC:

	echo N > /sys/module/mmc_core/parameters/crc   (disabled) or
	echo 0 > /sys/module/mmc_core/parameters/crc   (disabled)

	echo Y > /sys/module/mmc_core/parameters/crc   (enabled) or
	echo 1 > /sys/module/mmc_core/parameters/crc   (enabled)

(default = enabled)
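On the implementation side, this is plausibly just a module parameter
(sketch):

	static bool use_crc = true;
	module_param_named(crc, use_crc, bool, S_IRUGO | S_IWUSR);
	MODULE_PARM_DESC(crc, "Enable software CRC on MMC data blocks");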
Signed-off-by: Pafcholini <pafcholini@gmail.com>
Signed-off-by: Pafcholini <nadyaivanova14@gmail.com>
Asynchronous I/O latency to a solid-state disk greatly increased between the 2.6.32 and 3.0 kernels.
By removing the plug from do_io_submit(), we observed a 34% improvement in the I/O latency.
Unfortunately, at this level, we don't know if the request is to
a rotating disk or not.
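The change itself is essentially a deletion; schematically (the loop
body is abbreviated to the call that matters):

	/* before: requests are held back until the plug is flushed */
	blk_start_plug(&plug);
	for (i = 0; i < nr; i++)
		ret = io_submit_one(ctx, user_iocb, &tmp, compat);
	blk_finish_plug(&plug);

	/* after: each request reaches the device immediately */
	for (i = 0; i < nr; i++)
		ret = io_submit_one(ctx, user_iocb, &tmp, compat);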
Change-Id: I7101df956473ed9fd5dcff18e473dd93b688a5c1
Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: linux-aio@kvack.org
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: ahmedradaideh <ahmed.radaideh@gmail.com>
Sleeping for an entire tick adds unnecessary latency to
hotplugging a cpu (cpu_up).
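One plausible shape of the change (illustrative, not the exact patch):
replace the full-tick sleep in the cpu_up() wait loop with a bounded
short sleep:

	while (!cpu_online(cpu))
		usleep_range(100, 200);	/* was msleep(1): a whole tick */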
Change-Id: Iab323a79f4048bc9101ecfd368e0f275827ed4ab
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
[rameezmustafa@codeaurora.org]: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
add_random was implemented for spinning hard disks; it only slows SSDs
down. See http://wiki.samat.org/SSD for more info.
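The mechanics, as a sketch: clear the queue flag that feeds the entropy
pool on non-rotational devices:

	if (blk_queue_nonrot(q))
		queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);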
Signed-off-by: Chester Kener <Cl3Kener@gmail.com>
Signed-off-by: engstk <eng.stk@sapo.pt>
This commit allows enabling/disabling, via config, @jesec's Fake
Enforcing Status patch or @jcadduono's SELinux Force
Enforcing/Permissive patch.
Signed-off-by: BlackMesa123 <brother12@hotmail.it>
proc: update or insert cmdline flags required for proper SafetyNet support
proc: cmdline: set warranty bit-flags to 0
proc: cmdline: fix invalid handling of value changes
Change-Id: Idec862bcf9125a79ac9505a938495f114636b4fc