
I am performing multiple read operations on the same resource stored on disk.

Sometimes the read operation itself takes longer than the time between requests for that same resource. In those cases it would make sense to batch the read operations into one disk read and then return the same result to the various requesters.

Initially I tried caching the result of the first fetch request - but this didn't work because the read took too long, and new requests came in before it finished - meaning they would each try to fetch the resource as well.

Is it possible to "append" the additional requests to the ones that are already in progress?

The code I have now follows this basic structure (which isn't good enough):

- (void)fileForKey:(NSString *)key completion:(void (^)(NSData *data))completion {
    NSData *data = [self.cache threadSafeObjectForKey:key];
    if (data) {
        // resource is cached - so return it - no need to read from the disk
        completion(data);
        return;
    }
    // need to read the resource from disk
    dispatch_async(self.resourceFetchQueue, ^{
        // this could happen multiple times for the same key - because it could take a long time to fetch the resource - all the completion handlers should wait for the resource that is fetched the first time
        NSData *fetchedData = [self fetchResourceForKey:key];
        [self.cache threadSafeSetObject:fetchedData forKey:key];
        dispatch_async(self.completionQueue, ^{
            completion(fetchedData);
            return;
        });
    });
}

1 Answer

I think you want to introduce a helper object:

@interface CacheHelper : NSObject
@property (nonatomic, copy) NSData *data;
@property (nonatomic, strong) dispatch_semaphore_t dataReadSemaphore;
@end

Your reader method now becomes something like:

CacheHelper *cacheHelper = [self.cache threadSafeObjectForKey:key];
if (cacheHelper && cacheHelper.data)
{
   // the resource is already cached - return it immediately
   completion(cacheHelper.data);
   return;
}
if (cacheHelper)
{
   // a fetch for this key is already in flight - wait for it to finish,
   // then re-signal so any other waiters wake up too
   dispatch_semaphore_wait(cacheHelper.dataReadSemaphore, DISPATCH_TIME_FOREVER);
   dispatch_semaphore_signal(cacheHelper.dataReadSemaphore);
   completion(cacheHelper.data);
   return;
}
// first request for this key - register the helper *before* the slow read,
// so that concurrent requests find it and wait instead of fetching again
cacheHelper = [[CacheHelper alloc] init];
cacheHelper.dataReadSemaphore = dispatch_semaphore_create(0);
[self.cache threadSafeSetObject:cacheHelper forKey:key];
cacheHelper.data = [self fetchResourceForKey:key];
dispatch_semaphore_signal(cacheHelper.dataReadSemaphore);
completion(cacheHelper.data);

This is uncompiled code, so please check the spelling and logic, but I hope it explains the idea. Have a look at this post if you want an introduction to semaphores.

This would block the cache for additional operations (on different keys), so I don't think it is a good option –  Avner Barr Aug 18 '14 at 8:47
    
You could move the wait, the part inside if (cacheHelper), to a background thread if you don't want to block the UI. –  Lev Landau Aug 18 '14 at 10:06
