Searched refs:requests (Results 1 – 25 of 575) sorted by relevance


/linux/Documentation/block/
stat.rst
  29: read I/Os       requests      number of read I/Os processed
  32: read ticks      milliseconds  total wait time for read requests
  33: write I/Os      requests      number of write I/Os processed
  36: write ticks     milliseconds  total wait time for write requests
  37: in_flight       requests      number of I/Os currently in flight
  39: time_in_queue   milliseconds  total wait time for all requests
  40: discard I/Os    requests      number of discard I/Os processed
  43: discard ticks   milliseconds  total wait time for discard requests
  44: flush I/Os      requests      number of flush I/Os processed
  45: flush ticks     milliseconds  total wait time for flush requests
  [all …]
blk-mq.rst
  9: through queueing and submitting IO requests to block devices simultaneously,
  53: layer or if we want to try to merge requests. In both cases, requests will be
  58: to process those requests. However, if the hardware does not have enough
  59: resources to accept more requests, blk-mq will place requests on a temporary
  65: The block IO subsystem adds requests in the software staging queues
  73: The staging queue can be used to merge requests for adjacent sectors. For
  77: number of individual requests. This technique of merging requests is called
  113: added to a linked list (``hctx->dispatch``) of requests. Then,
  116: requests that were ready to be sent first. The number of hardware queues
  120: hardware queues to send requests for.
  [all …]
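The blk-mq lines above describe merging requests for adjacent sectors in the software staging queues, so the hardware sees fewer, larger requests. A toy sketch of that idea only; the struct and function names here are invented for illustration and are not blk-mq's (the real code works on struct request/struct bio):

```c
#include <stdbool.h>

/* Toy request: a run of consecutive sectors (illustrative only). */
struct toy_request {
	unsigned long long sector;	/* first sector */
	unsigned int nr_sectors;	/* length in sectors */
};

/* If @next begins exactly where @req ends, fold it into @req so one
 * larger request reaches the hardware instead of two small ones. */
static bool toy_try_merge(struct toy_request *req, const struct toy_request *next)
{
	if (req->sector + req->nr_sectors != next->sector)
		return false;
	req->nr_sectors += next->nr_sectors;
	return true;
}
```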
writeback_cache_control.rst
  17: a forced cache flush, and the Force Unit Access (FUA) flag for requests.
  26: guarantees that previously completed write requests are on non-volatile
  58: on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
  68: support required, the block layer completes empty REQ_PREFLUSH requests before
  70: requests that have a payload. For devices with volatile write caches the
  76: and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn. Note that
  77: REQ_PREFLUSH requests with a payload are automatically turned into a sequence
  84: and the driver must handle write requests that have the REQ_FUA bit set
/linux/Documentation/virt/acrn/
io-request.rst
  14: For each User VM, there is a shared 4-KByte memory region used for I/O requests
  26: An I/O client is responsible for handling User VM I/O requests whose accessed
  29: default client, that handles all I/O requests that do not fit into the range of
  33: The illustration below shows the relationship between the I/O request shared buffer,
  34: I/O requests and I/O clients.
  84: 4. Processing flow of I/O requests
  91: c. The upcall handler schedules a worker to dispatch I/O requests.
  92: d. The worker looks for the PENDING I/O requests, assigns them to different
  95: e. The notified client handles the assigned I/O requests.
  96: f. The HSM updates I/O request states to COMPLETE and notifies the hypervisor
/linux/Documentation/devicetree/bindings/dma/
lpc1850-dmamux.txt
  11: - dma-requests: Number of DMA requests for the mux
  15: - dma-requests: Number of DMA requests the controller can handle
  28: dma-requests = <16>;
  40: dma-requests = <64>;
ti-dma-crossbar.txt
  9: - dma-requests: Number of DMA requests the crossbar can receive
  13: - dma-requests: Number of DMA requests the controller can handle
  43: dma-requests = <127>;
  51: dma-requests = <205>;
mtk-uart-apdma.txt
  12: One interrupt per dma-requests, or 8 if no dma-requests property is present
  14: - dma-requests: The number of DMA channels
  51: dma-requests = <12>;
fsl-imx-dma.txt
  17: - #dma-requests : Number of DMA requests supported.
  32: Clients have to specify the DMA requests with phandles in a list.
  38: - dma-names: List of string identifiers for the DMA requests. For the correct
dma-router.yaml
  18: have more peripherals integrated with DMA requests than what the DMA
  31: dma-requests:
  47: dma-requests = <205>;
/linux/drivers/gpu/drm/i915/gt/
intel_gt_requests.c
  21: list_for_each_entry_safe(rq, rn, &tl->requests, link)  in retire_requests()
  31: return !list_empty(&engine->kernel_context->timeline->requests);  in engine_active()
  208: container_of(work, typeof(*gt), requests.retire_work.work);  in retire_work_handler()
  210: schedule_delayed_work(&gt->requests.retire_work,  in retire_work_handler()
  217: INIT_DELAYED_WORK(&gt->requests.retire_work, retire_work_handler);  in intel_gt_init_requests()
  222: cancel_delayed_work(&gt->requests.retire_work);  in intel_gt_park_requests()
  227: schedule_delayed_work(&gt->requests.retire_work,  in intel_gt_unpark_requests()
  234: cancel_delayed_work_sync(&gt->requests.retire_work);  in intel_gt_fini_requests()
/linux/Documentation/admin-guide/device-mapper/
writecache.rst
  34: throughput, but it may improve latency of read requests
  81: 5. the number of read requests
  82: 6. the number of read requests that hit the cache
  83: 7. the number of write requests
  84: 8. the number of write requests that hit uncommitted block
  85: 9. the number of write requests that hit committed block
  86: 10. the number of write requests that bypass the cache
  87: 11. the number of write requests that are allocated in the cache
  88: 12. the number of write requests that are blocked on the freelist
  89: 13. the number of flush requests
  [all …]
log-writes.rst
  10: that is in the WRITE requests is copied into the log to make the replay happen
  17: cache. This means that normal WRITE requests are not actually logged until the
  22: This works by attaching all WRITE requests to a list once the write completes.
  39: Any REQ_FUA requests bypass this flushing mechanism and are logged as soon as
  40: they complete as those requests will obviously bypass the device cache.
  42: Any REQ_OP_DISCARD requests are treated like WRITE requests. Otherwise we would
  43: have all the DISCARD requests, and then the WRITE requests and then the FLUSH
/linux/drivers/gpu/drm/i915/gt/uc/
intel_guc_ct.c
  107: spin_lock_init(&ct->requests.lock);  in intel_guc_ct_init_early()
  108: INIT_LIST_HEAD(&ct->requests.pending);  in intel_guc_ct_init_early()
  109: INIT_LIST_HEAD(&ct->requests.incoming);  in intel_guc_ct_init_early()
  383: return ++ct->requests.last_fence;  in ct_get_next_fence()
  690: spin_lock(&ct->requests.lock);  in ct_send()
  692: spin_unlock(&ct->requests.lock);  in ct_send()
  730: spin_lock_irqsave(&ct->requests.lock, flags);  in ct_send()
  909: spin_lock_irqsave(&ct->requests.lock, flags);  in ct_handle_response()
  932: ct->requests.last_fence);  in ct_handle_response()
  1002: spin_lock_irqsave(&ct->requests.lock, flags);  in ct_process_incoming_requests()
  [all …]
/linux/Documentation/virt/kvm/
vcpu-requests.rst
  12: /* Check if any requests are pending for VCPU @vcpu. */
  38: as possible after making the request. This means most requests
  94: VCPU requests are simply bit indices of the ``vcpu->requests`` bitmap.
  98: clear_bit(KVM_REQ_UNHALT & KVM_REQUEST_MASK, &vcpu->requests);
  103: dependent requests.
  152: This flag is applied to requests that only need immediate attention
  154: to be awakened for these requests. Sleeping VCPUs will handle the
  155: requests when they are awakened later for some other reason.
  165: Acknowledgements" for more information about requests with
  188: When making requests to VCPUs, we want to avoid the receiving VCPU
  [all …]
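The vcpu-requests.rst lines above note that VCPU requests are simply bit indices into the ``vcpu->requests`` bitmap. A minimal user-space sketch of that idea, with invented toy names (KVM itself uses atomic bit operations via helpers such as kvm_make_request()/kvm_check_request()):

```c
#include <stdbool.h>

/* Toy VCPU with a request bitmap: one bit per pending request kind. */
struct toy_vcpu {
	unsigned long requests;
};

/* Mark request number @req as pending. */
static void toy_make_request(struct toy_vcpu *vcpu, unsigned int req)
{
	vcpu->requests |= 1UL << req;
}

/* Return true, and clear the bit, iff request @req was pending. */
static bool toy_check_request(struct toy_vcpu *vcpu, unsigned int req)
{
	if (!(vcpu->requests & (1UL << req)))
		return false;
	vcpu->requests &= ~(1UL << req);
	return true;
}
```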
/linux/Documentation/filesystems/
virtiofs.rst
  58: Since the virtio-fs device uses the FUSE protocol for file system requests, the
  64: FUSE requests are placed into a virtqueue and processed by the host. The
  71: prioritize certain requests over others. Virtqueues have queue semantics and
  72: it is not possible to change the order of requests that have been enqueued.
  74: impossible to add high priority requests. In order to address this difference,
  75: the virtio-fs device uses a "hiprio" virtqueue specifically for requests that
  76: have priority over normal requests.
gfs2-glocks.rst
  19: The gl_holders list contains all the queued lock requests (not
  164: 1. DLM lock time (non-blocking requests)
  165: 2. DLM lock time (blocking requests)
  170: currently means any requests when (a) the current state of
  174: lock requests.
  177: how many lock requests have been made, and thus how much data
  181: of dlm lock requests issued.
  199: the average time between lock requests for a glock means we
  226: srtt  Smoothed round trip time for non-blocking dlm requests
  230: sirt  Smoothed inter request time (for dlm requests)
  [all …]
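The gfs2-glocks.rst lines above mention smoothed round-trip (srtt) and smoothed inter-request (sirt) times for DLM requests. The general technique behind such counters is an exponential moving average; a generic sketch with a 1/8 gain, not gfs2's exact arithmetic (which also tracks variance):

```c
/* Fold a new latency sample into a running smoothed value.
 * Gain 1/8 is the classic TCP-style choice; gfs2's own update differs. */
static long ewma_update(long smoothed, long sample)
{
	return smoothed + (sample - smoothed) / 8;
}
```

Each call moves the smoothed value one eighth of the way toward the new sample, so old history decays geometrically.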
/linux/arch/powerpc/kvm/
trace.h
  106: __field( __u32, requests )
  111: __entry->requests = vcpu->requests;
  115: __entry->cpu_nr, __entry->requests)
/linux/Documentation/ABI/stable/
sysfs-bus-xen-backend
  39: Number of flush requests from the frontend.
  46: Number of requests delayed because the backend was too
  47: busy processing previous requests.
  54: Number of read requests from the frontend.
  68: Number of write requests from the frontend.
/linux/Documentation/scsi/
hptiop.rst
  110: All queued requests are handled via inbound/outbound queue port.
  125: - Post the packet to IOP by writing it to inbound queue. For requests
  127: requests allocated in host memory, write (0x80000000|(bus_addr>>5))
  134: For requests allocated in IOP memory, the request offset is posted to
  137: For requests allocated in host memory, (0x80000000|(bus_addr>>5))
  144: For requests allocated in IOP memory, the host driver frees the request
  147: Non-queued requests (reset/flush etc) can be sent via inbound message
  155: All queued requests are handled via inbound/outbound list.
  169: round to 0 if the index reaches the supported count of requests.
  186: Non-queued requests (reset communication/reset/flush etc) can be sent via PCIe
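The hptiop.rst lines above describe a list index that wraps back to 0 once it reaches the supported request count. That wrap-around advance in isolation (the helper name is invented here, not the driver's):

```c
/* Advance a ring index, wrapping to 0 after @count entries. */
static unsigned int ring_advance(unsigned int idx, unsigned int count)
{
	return (idx + 1 == count) ? 0 : idx + 1;
}
```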
/linux/Documentation/ABI/testing/
sysfs-class-scsi_tape
  33: The number of I/O requests issued to the tape drive other
  34: than SCSI read/write requests.
  54: Shows the total number of read requests issued to the tape
  65: read I/O requests to complete.
  85: Shows the total number of write requests issued to the tape
  96: write I/O requests to complete.
/linux/Documentation/vm/
balance.rst
  16: allocation requests that have order-0 fallback options. In such cases,
  19: __GFP_IO allocation requests are made to prevent file system deadlocks.
  21: In the absence of non-sleepable allocation requests, it seems detrimental
  26: That being said, the kernel should try to fulfill requests for direct
  28: the dma pool, so as to keep the dma pool filled for dma requests (atomic
  31: regular memory requests by allocating one from the dma pool, instead
  76: probably because all allocation requests are coming from intr context
  90: watermark[WMARK_HIGH]. When low_on_memory is set, page allocation requests will
  99: 1. Dynamic experience should influence balancing: number of failed requests
/linux/drivers/gpu/drm/i915/gem/
i915_gem_execbuffer.c
  1875: if (eb->requests[i])  in eb_find_first_request_added()
  1876: return eb->requests[i];  in eb_find_first_request_added()
  1901: if (!eb->requests[j])  in eb_move_to_gpu()
  1952: if (!eb->requests[j])  in eb_move_to_gpu()
  1997: if (!eb->requests[j])  in eb_move_to_gpu()
  2266: if (!eb->requests[i])  in eb_submit()
  2978: if (!eb->requests[i])  in eb_requests_get()
  2990: if (!eb->requests[i])  in eb_requests_put()
  3118: eb->requests[i] = NULL;  in eb_requests_create()
  3145: eb->requests[i]);  in eb_requests_create()
  [all …]
/linux/Documentation/hid/
hid-transport.rst
  108: events or answers to host requests on this channel.
  112: SET_REPORT requests.
  123: to device and may include LED requests, rumble requests or more. Output
  131: Feature reports are never sent without requests. A host must explicitly set
  142: channel provides synchronous GET/SET_REPORT requests. Plain reports are only
  150: simultaneous GET_REPORT requests.
  159: GET_REPORT requests can be sent for any of the 3 report types and shall
  173: multiple synchronous SET_REPORT requests.
  175: Other ctrl-channel requests are supported by USB-HID but are not available
  310: it to wait for any pending requests to complete if only one request is
  [all …]
/linux/Documentation/driver-api/firmware/
request_firmware.rst
  12: Synchronous firmware requests
  15: Synchronous firmware requests will wait until the firmware is found or until
  43: Asynchronous firmware requests
  46: Asynchronous firmware requests allow driver code to not have to wait
/linux/drivers/media/v4l2-core/
v4l2-ctrls-request.c
  21: INIT_LIST_HEAD(&hdl->requests);  in v4l2_ctrl_handler_init_request()
  39: if (hdl->req_obj.ops || list_empty(&hdl->requests))  in v4l2_ctrl_handler_free_request()
  47: list_for_each_entry_safe(req, next_req, &hdl->requests, requests) {  in v4l2_ctrl_handler_free_request()
  102: list_del_init(&hdl->requests);  in v4l2_ctrl_request_unbind()
  163: list_add_tail(&hdl->requests, &from->requests);  in v4l2_ctrl_request_bind()

Completed in 36 milliseconds
