struct bbr_private {
	struct dm_dev *dev;			/* Device where the data lives */
	struct bbr_table *bbr_table;		/* Copy of the on-disk BBR table */
	struct bbr_runtime_remap *remap_root;	/* Tree of active remaps */
	spinlock_t remap_root_lock;		/* Protects remap_root */

	struct work_struct remap_work;		/* Work item for remap processing */
	struct bio_list remap_ios;		/* Bios awaiting remap processing */
	spinlock_t remap_ios_lock;		/* Protects remap_ios */

	u64 offset;				/* Start of the data area */
	u64 lba_table1;				/* LBA of first table copy */
	u64 lba_table2;				/* LBA of second table copy */
	u64 nr_sects_bbr_table;			/* Sectors per table copy */
	u64 start_replacement_sect;		/* First replacement sector */
	u64 nr_replacement_blks;		/* Number of replacement blocks */
	u32 blksize_in_sects;			/* Block size in sectors */
	atomic_t in_use_replacement_blks;	/* Replacement blocks consumed */
};
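/*
 * Sketch (assumption, not part of this excerpt): the runtime remap tree
 * pointed to by remap_root is presumed to be a simple binary tree of
 * bad-sector -> replacement-sector pairs, keyed on the bad sector.
 * The type and field names below are illustrative only.
 */
struct bbr_table_entry {
	u64 bad_sect;
	u64 replacement_sect;
};

struct bbr_runtime_remap {
	struct bbr_table_entry remap;
	struct bbr_runtime_remap *left;
	struct bbr_runtime_remap *right;
};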
static void bbr_remap_handler(struct work_struct *work);
static struct bbr_private *bbr_alloc_private(void)
{
	struct bbr_private *bbr_id;

	bbr_id = kzalloc(sizeof(*bbr_id), GFP_KERNEL);
	if (bbr_id == NULL)
		return NULL;

	INIT_WORK(&bbr_id->remap_work, bbr_remap_handler);
	spin_lock_init(&bbr_id->remap_root_lock);
	spin_lock_init(&bbr_id->remap_ios_lock);
	bbr_id->in_use_replacement_blks = (atomic_t) ATOMIC_INIT(0);

	return bbr_id;
}
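/*
 * Sketch of the matching teardown path (assumption: not shown in this
 * excerpt).  A plausible shape frees the table buffer and the runtime
 * remap tree before the private structure itself.
 * bbr_binary_tree_destroy() is a hypothetical helper that would walk
 * and free the remap tree.
 */
static void bbr_free_private(struct bbr_private *bbr_id)
{
	if (!bbr_id)
		return;
	kfree(bbr_id->bbr_table);			/* kfree(NULL) is a no-op */
	bbr_binary_tree_destroy(bbr_id->remap_root);	/* assumed helper */
	kfree(bbr_id);
}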
/**
 * bbr_remap_handler
 *
 * This is the handler for the bbr work-queue.
 *
 * I/O requests should only be sent to this handler if we know that:
 *
 * a) the request contains at least one remapped sector;
 *    or
 * b) the request caused an error on the normal I/O path.
 *
 * This function uses synchronous I/O, so sending a request to this
 * thread that doesn't need special processing will cause severe
 * performance degradation.
 **/
static void bbr_remap_handler(struct work_struct *work)
{
	struct bbr_private *bbr_id =
		container_of(work, struct bbr_private, remap_work);
	struct bio *bio;
	unsigned long flags;

	spin_lock_irqsave(&bbr_id->remap_ios_lock, flags);
	bio = bio_list_get(&bbr_id->remap_ios);
	spin_unlock_irqrestore(&bbr_id->remap_ios_lock, flags);

	bbr_io_process_requests(bbr_id, bio);
}
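/*
 * Sketch (assumption: bbr_io_process_requests() is not shown in this
 * excerpt).  The handler above hands it the entire detached bio list;
 * a plausible shape walks the singly linked list via bi_next, retries
 * each request synchronously, and completes it.
 * bbr_io_process_request() is the assumed per-bio retry/remap routine;
 * the three-argument bio_endio() matches kernels of this vintage.
 */
static void bbr_io_process_requests(struct bbr_private *bbr_id,
				    struct bio *bio)
{
	struct bio *next;
	int rc;

	while (bio) {
		next = bio->bi_next;
		bio->bi_next = NULL;
		rc = bbr_io_process_request(bbr_id, bio);	/* assumed helper */
		bio_endio(bio, bio->bi_size, rc);
		bio = next;
	}
}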
static int bbr_endio(struct dm_target *ti, struct bio *bio,
		     int error, union map_info *map_context)
{
	struct bbr_private *bbr_id = ti->private;
	struct dm_bio_details *bbr_io = map_context->ptr;

	if (error && bbr_io) {
		unsigned long flags;
		char b[32];

		dm_bio_restore(bbr_io, bio);
		map_context->ptr = NULL;

		DMERR("device %s: I/O failure on sector %lu. "
		      "Scheduling for retry.",
		      format_dev_t(b, bbr_id->dev->bdev->bd_dev),
		      (unsigned long)bio->bi_sector);

		spin_lock_irqsave(&bbr_id->remap_ios_lock, flags);
		bio_list_add(&bbr_id->remap_ios, bio);
		spin_unlock_irqrestore(&bbr_id->remap_ios_lock, flags);

		queue_work(dm_bbr_wq, &bbr_id->remap_work);

		error = 1;
	}

	if (bbr_io)
		mempool_free(bbr_io, bbr_io_pool);

	return error;
}
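/*
 * Sketch of the map-path counterpart (assumption: bbr_map() is not in
 * this excerpt).  It shows where map_context->ptr and bbr_io_pool are
 * populated so that bbr_endio() above can restore and retry a failed
 * bio.  bbr_remap_probe() is the assumed lookup into the runtime
 * remap tree.
 */
static int bbr_map(struct dm_target *ti, struct bio *bio,
		   union map_info *map_context)
{
	struct bbr_private *bbr_id = ti->private;
	struct dm_bio_details *bbr_io;
	unsigned long flags;

	bio->bi_sector += bbr_id->offset;

	if (atomic_read(&bbr_id->in_use_replacement_blks) == 0 ||
	    !bbr_remap_probe(bbr_id, bio->bi_sector, bio_sectors(bio))) {
		/* Fast path: no remapped sectors in this request.
		 * Record the bio so bbr_endio() can restore it on error.
		 */
		bio->bi_bdev = bbr_id->dev->bdev;
		bbr_io = mempool_alloc(bbr_io_pool, GFP_NOIO);
		if (bbr_io) {
			dm_bio_record(bbr_io, bio);
			map_context->ptr = bbr_io;
		}
		return 1;	/* remapped; let dm submit it */
	}

	/* Slow path: at least one remapped sector.  Queue the bio for
	 * the work-queue handler.
	 */
	map_context->ptr = NULL;
	spin_lock_irqsave(&bbr_id->remap_ios_lock, flags);
	bio_list_add(&bbr_id->remap_ios, bio);
	spin_unlock_irqrestore(&bbr_id->remap_ios_lock, flags);
	queue_work(dm_bbr_wq, &bbr_id->remap_work);
	return 0;	/* we will submit (or complete) it ourselves */
}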