
I have a serious problem: I have an array with several UIImage objects, and what I want to do now is create a movie from those images. But I don't have any idea how to do so.

I hope someone can help me or send me a code snippet that does something like what I want.

Thx!

@zoul: Tags should cover what the question is about, not possible solutions. – Georg Fritzsche Sep 18 '10 at 15:23
Why not? There’s already a post for both AVFoundation and FFmpeg. If you were looking for some AVFoundation related info, wouldn’t you like to see this thread? (Or is that a consensus from Meta?) – zoul Sep 18 '10 at 15:25
@zoul: The tags narrow the question down ("A tag is a keyword or label that categorizes your question"); by adding those two you'd be changing the context. I thought this to be obvious, but if I stumble across something on meta I'll let you know. Alternatively, start a discussion there. – Georg Fritzsche Sep 18 '10 at 15:45
Maybe this will be useful for someone: my code is on GitHub at github.com/sakrist/One-minute – SAKrisT Apr 23 at 20:45

2 Answers


Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially you’ll have to:

1) Wire the writer:

NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
    [NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
    error:&error];
NSParameterAssert(videoWriter);

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey,
    nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
    outputSettings:videoSettings] retain];

NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
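
If you intend to go the pixel-buffer route mentioned in step 3 below, this is also the place to attach an adaptor to the input (a sketch of mine, not part of the original answer; the adaptor has to be created before writing starts):

// Sketch: attach a pixel buffer adaptor so frames can be appended as
// CVPixelBuffers later on. Must happen before -startWriting.
AVAssetWriterInputPixelBufferAdaptor *adaptor = [[AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
    sourcePixelBufferAttributes:nil] retain];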

2) Start a session:

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:…];
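
The source time is up to you; for frames generated from scratch, starting at zero is a common choice (my assumption, not something the answer specifies):

[videoWriter startSessionAtSourceTime:kCMTimeZero];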

3) Write some samples:

// Or you can use AVAssetWriterInputPixelBufferAdaptor.
// That lets you feed the writer input data from a CVPixelBuffer
// that’s quite easy to create from a CGImage.
[writerInput appendSampleBuffer:sampleBuffer];
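
The adaptor route from the comment above would look roughly like this (a sketch; pixelBuffer and frameNumber are placeholder names of my own, and 10 fps is an assumed frame rate):

// Sketch: append a CVPixelBuffer through the adaptor created in step 1.
if ([writerInput isReadyForMoreMediaData]) {
    [adaptor appendPixelBuffer:pixelBuffer
          withPresentationTime:CMTimeMake(frameNumber, 10)]; // 10 fps
}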

4) Finish the session:

[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:…];
[videoWriter finishWriting];
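
As a side note (not part of the original answer): on iOS 6 and later, -finishWriting is deprecated in favour of the asynchronous variant, so on newer SDKs you would finish with something like:

// iOS 6+ replacement for the synchronous -finishWriting call above.
[videoWriter finishWritingWithCompletionHandler:^{
    // The movie file at somePath is complete at this point.
}];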

You’ll still have to fill in a lot of blanks, but I think the only really hard remaining part is getting a pixel buffer from a CGImage:

- (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
        [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
        nil];

    // Create an empty pixel buffer of the target frame size.
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
        frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
        &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    // Lock the buffer and wrap its memory in a bitmap context.
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
        frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
        kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    // Draw the image into the buffer, applying the configured transform.
    CGContextConcatCTM(context, frameTransform);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
        CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    // The “new…” prefix means the caller owns the buffer and must
    // CVPixelBufferRelease it when done.
    return pxbuffer;
}

frameSize is a CGSize describing your target frame size and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
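
Putting the pieces together, here is a rough sketch of the loop that turns the UIImage array into frames (my own code, not from the answer; it assumes images is your NSArray of UIImages, the adaptor from step 1, and 10 fps):

// Sketch (assumptions: images is the NSArray of UIImages, adaptor was
// created in step 1, the movie runs at 10 fps).
for (NSUInteger i = 0; i < [images count]; i++) {
    UIImage *img = [images objectAtIndex:i];
    CVPixelBufferRef buffer = [self newPixelBufferFromCGImage:[img CGImage]];

    // Crude back-pressure handling; a production version would use
    // requestMediaDataWhenReadyOnQueue:usingBlock: instead.
    while (![writerInput isReadyForMoreMediaData]) {
        [NSThread sleepForTimeInterval:0.05];
    }

    [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(i, 10)];
    CVPixelBufferRelease(buffer); // the “new…” helper returns a +1 buffer
}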

Wow! Great! Thank you so much! This was exactly the hint I needed. For some reason I didn't find anything on the web or in the Apple documentation pointing me in the right direction... Perhaps it's because AV Foundation is completely new as of iOS 4... Thank you! – Nuker Sep 20 '10 at 6:57
Though this does work, drawing into a CGImage only to draw that into a CGBitmapContext backed by CVPixelBuffer is wasteful. Similarly, instead of creating a CVPixelBuffer each time, AVAssetWriterInputPixelBufferAdaptor's pixelBufferPool should be used to recycle buffers. – rpetrich Oct 25 '10 at 7:44
Well what should you do then, when you have the source data as regular image files? – zoul Nov 26 '10 at 10:19
After calling appendSampleBuffer:, if I remember correctly. – zoul Apr 27 '11 at 5:00
@huesforalice: I think that background rendering is simply not supported, as the video hardware is probably needed for something else. I think you’ll have to cancel the rendering job and start it from scratch when the app returns to foreground. – zoul Oct 5 '11 at 5:57

Well, this is a bit hard to implement in pure Objective-C. If you are developing for jailbroken devices, a good option is to use the command-line tool ffmpeg from inside your app. It's quite easy to create a movie from images with a command like:

ffmpeg -r 10 -b 1800 -i %03d.jpg test1800.mp4

Note that the images have to be named sequentially and placed in the same directory. For more information, take a look at: http://electron.mit.edu/~gsteele/ffmpeg/
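
If your frames start out as UIImage objects in memory, you would first have to write them to disk with matching sequential names; a rough sketch (the directory, file names, and JPEG quality are my assumptions):

// Sketch: dump the UIImages as 001.jpg, 002.jpg, … into one directory
// so that ffmpeg's %03d.jpg pattern picks them up in order.
for (NSUInteger i = 0; i < [images count]; i++) {
    NSString *name = [NSString stringWithFormat:@"%03lu.jpg", (unsigned long)(i + 1)];
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:name];
    [UIImageJPEGRepresentation([images objectAtIndex:i], 0.9) writeToFile:path atomically:YES];
}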

Thanks for your comment. I already had an eye on ffmpeg, but I can't use it for a couple of reasons: first of all, I want to make an app that will be sold through the Apple App Store, so there's no way to use the command-line version of ffmpeg. The other reason I didn't take a further look at ffmpeg was the license: if I want to use ffmpeg within my application, I would have to release the source code of my app, which is something I don't want to do. – Nuker Sep 20 '10 at 6:59
ffmpeg would be super slow. Better to use the hardware-accelerated AVFoundation classes. – Rhythmic Fistman Feb 15 '11 at 16:31

