[BLOCK] limit request_fn recursion

Don't recurse back into the driver even if the unplug threshold is met,
when the driver asks for a requeue. This is both silly from a logical
point of view (requeues typically happen due to driver/hardware
shortage), and also dangerous since we could hit an endless request_fn
-> requeue -> unplug -> request_fn loop and crash on stack overrun.

Also limit blk_run_queue() to one level of recursion, similar to how
blk_start_queue() works; a standalone sketch of this guard pattern
follows the diff below.

This patch fixed a real problem with SLES10 and lpfc, and it could hit
any SCSI LLD that returns non-zero from its ->queuecommand() handler.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Author: Jens Axboe <axboe@suse.de>  2006-05-11 08:20:16 +02:00
Committer: Linus Torvalds
parent f358166a94
commit dac07ec121
2 changed files with 22 additions and 3 deletions

--- a/block/elevator.c
+++ b/block/elevator.c

@@ -333,6 +333,7 @@ void elv_insert(request_queue_t *q, struct request *rq, int where)
 {
 	struct list_head *pos;
 	unsigned ordseq;
+	int unplug_it = 1;
 
 	blk_add_trace_rq(q, rq, BLK_TA_INSERT);
 
@@ -399,6 +400,11 @@ void elv_insert(request_queue_t *q, struct request *rq, int where)
 		}
 
 		list_add_tail(&rq->queuelist, pos);
+		/*
+		 * most requeues happen because of a busy condition, don't
+		 * force unplug of the queue for that case.
+		 */
+		unplug_it = 0;
 		break;
 
 	default:
@@ -407,7 +413,7 @@ void elv_insert(request_queue_t *q, struct request *rq, int where)
 		BUG();
 	}
 
-	if (blk_queue_plugged(q)) {
+	if (unplug_it && blk_queue_plugged(q)) {
 		int nrq = q->rq.count[READ] + q->rq.count[WRITE]
 			- q->in_flight;
 

--- a/block/ll_rw_blk.c
+++ b/block/ll_rw_blk.c

@@ -1732,8 +1732,21 @@ void blk_run_queue(struct request_queue *q)
 
 	spin_lock_irqsave(q->queue_lock, flags);
 	blk_remove_plug(q);
-	if (!elv_queue_empty(q))
-		q->request_fn(q);
+
+	/*
+	 * Only recurse once to avoid overrunning the stack, let the unplug
+	 * handling reinvoke the handler shortly if we already got there.
+	 */
+	if (!elv_queue_empty(q)) {
+		if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
+			q->request_fn(q);
+			clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
+		} else {
+			blk_plug_device(q);
+			kblockd_schedule_work(&q->unplug_work);
+		}
+	}
+
 	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 EXPORT_SYMBOL(blk_run_queue);
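
For reference, here is a minimal userspace sketch of the same one-level
reentrancy guard, using C11 atomics in place of the kernel's
test_and_set_bit()/clear_bit() on QUEUE_FLAG_REENTER. The names
run_queue, request_fn, and defer_to_worker are hypothetical stand-ins
for blk_run_queue(), a driver's request_fn, and the
blk_plug_device()/kblockd_schedule_work() deferral path; they are not
real kernel APIs.

#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for the QUEUE_FLAG_REENTER bit in q->queue_flags. */
static atomic_flag reenter = ATOMIC_FLAG_INIT;

static void run_queue(int depth);

/* Stand-in for blk_plug_device() + kblockd_schedule_work(). */
static void defer_to_worker(int depth)
{
	printf("depth %d: already inside request_fn, deferring to worker\n",
	       depth);
}

/* Stand-in for a driver request_fn that requeues and re-runs the queue. */
static void request_fn(int depth)
{
	printf("depth %d: request_fn running\n", depth);
	if (depth < 3)
		run_queue(depth + 1); /* requeue -> unplug -> run again */
}

/* Mirrors the patched blk_run_queue(): allow exactly one level. */
static void run_queue(int depth)
{
	if (!atomic_flag_test_and_set(&reenter)) {
		/* First entry: invoke the handler directly. */
		request_fn(depth);
		atomic_flag_clear(&reenter);
	} else {
		/* Recursive entry: defer instead of growing the stack. */
		defer_to_worker(depth);
	}
}

int main(void)
{
	run_queue(0);
	return 0;
}

Running this prints one direct request_fn invocation followed by a
deferral, which matches the behavior the patch gives blk_run_queue():
the first entry runs the handler inline, and any recursive entry is
punted to the async worker rather than recursing until the stack
overruns.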