ios 6 photo effect codes

Beginning Core Image in iOS 6
This is the eighth iOS 6 tutorial in the iOS 6 Feast! In this tutorial, we're updating one of our older tutorials to iOS 6 so it's fully up to date with the latest features, like the new Core Image filters in iOS 6. Parts of this tutorial come from Jake Gundersen's three Core Image chapters in iOS 5 by Tutorials and iOS 6 by Tutorials. Enjoy!
This is a blog post by iOS Tutorial Team member Jacob Gundersen, an indie game developer who runs the Indie Ambitions blog. Check out his latest app – Factor Samurai!
Core Image is a powerful framework that lets you easily apply filters to images, such as modifying the vibrance, hue, or exposure. It uses the GPU (or the CPU, your choice) to process the image data and is very fast – fast enough to do real-time processing of video frames!
Core Image filters can be stacked together to apply multiple effects to an image or video frame at once. Stacking filters is efficient, because Core Image collapses them into a single modified filter that is applied to the image, instead of processing the image through each filter one at a time.
Each filter has its own parameters and can be queried in code for information about the filter, its purpose, and its input parameters. The system can also be queried to find out which filters are available. At this time, only a subset of the Core Image filters available on the Mac are available on iOS. However, as more become available, the same API can be used to discover their attributes.
In this tutorial, you will get hands-on experience playing around with Core Image. You’ll apply a few different filters, and you’ll see how easy it is to apply cool effects to images in real time!
Core Image Overview
Before you get started, let’s discuss some of the most important classes in the Core Image framework:
  • CIContext. All Core Image processing is done in a CIContext. This is somewhat similar to a Core Graphics or OpenGL context.
  • CIImage. This class holds the image data. It can be created from a UIImage, from an image file, or from pixel data.
  • CIFilter. The filter class has a dictionary that defines the attributes of the particular filter that it represents. Examples of filters are vibrance filters, color inversion filters, cropping filters, and many more.
You’ll be using each of these classes as you create your project.
Getting Started
Open up Xcode and create a new project with the iOS\Application\Single View Application template. Enter CoreImageFun for the Product Name, select iPhone for the device family, and make sure that Use Storyboards and Use Automatic Reference Counting are checked (but leave the other checkboxes unchecked).
First things first, let's add the Core Image framework. On the Mac this is part of the QuartzCore framework, but on iOS it's a standalone framework. Go to the project container in the file view on the left-hand side, choose the Build Phases tab, expand the Link Binary With Libraries group and press the +. Find the CoreImage framework and double-click it.
Second, download the resources for this tutorial and add the included image.png to your project. That's it for setup!
Next open MainStoryboard.storyboard, drag an image view into the view controller, and set its mode to Aspect Fit. The position and dimensions should roughly match the following image:
Placing an image view into the view controller
Also, open the Assistant Editor, make sure it’s displaying ViewController.h, and control-drag from the UIImageView to below the @interface. Set the Connection to Outlet, name it imageView, and click Connect.
Compile and run just to make sure everything is good so far – you should just see an empty screen. The initial setup is complete – now onto Core Image!
Basic Image Filtering
You’re going to get started by simply running your image through a CIFilter and displaying it on the screen.
Every time you want to apply a CIFilter to an image you need to do four things:
  1. Create a CIImage object. CIImage has initialization methods such as imageWithContentsOfURL:, imageWithData:, imageWithCVPixelBuffer:, and imageWithBitmapData:bytesPerRow:size:format:colorSpace:. You'll most likely be working with imageWithContentsOfURL:.
  2. Create a CIContext. A CIContext can be CPU or GPU based. A CIContext can be reused, so you needn't create one over and over, but you will always need one when outputting the CIImage object.
  3. Create a CIFilter. When you create the filter, you configure a number of properties on it that depend on the filter you're using.
  4. Get the filter output. The filter gives you an output image as a CIImage – you can convert this to a UIImage using the CIContext, as you'll see below.
Let’s see how this works. Add the following code to ViewController.m inside viewDidLoad:
// 1
NSString *filePath =
  [[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"];
NSURL *fileNameAndPath = [NSURL fileURLWithPath:filePath];
 
// 2
CIImage *beginImage =
  [CIImage imageWithContentsOfURL:fileNameAndPath];
 
// 3
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                              keysAndValues: kCIInputImageKey, beginImage,
                    @"inputIntensity", @0.8, nil];
CIImage *outputImage = [filter outputImage];
 
// 4
UIImage *newImage = [UIImage imageWithCIImage:outputImage];
self.imageView.image = newImage;
Let’s go over this section by section:
  1. The first two lines create an NSURL object that holds the path to your image file.
  2. Next you create your CIImage with the imageWithContentsOfURL: method.
  3. Next you create your CIFilter object. A CIFilter constructor takes the name of the filter and a dictionary that specifies the keys and values for that filter. Each filter has its own unique keys and set of valid values.
The CISepiaTone filter takes only two values: the kCIInputImageKey (a CIImage) and @"inputIntensity", a float value between 0 and 1 wrapped in an NSNumber (using the new literal syntax). Here you give that value 0.8. Most filters have default values that will be used if no values are supplied. One exception is the CIImage – this must be provided, as there is no default.
Getting a CIImage back out of a filter is easy. You just use the outputImage property.
  4. Once you have an output CIImage, you need to convert it into a UIImage. The UIImage method +imageWithCIImage: creates a UIImage from a CIImage. Once you've converted it to a UIImage, you just display it in the image view you added earlier.
Compile and run the project, and you'll see your image filtered by the sepia tone filter. Congratulations, you have successfully used CIImage and CIFilter!
Hello, Core Image!
Putting It Into Context
Before you move forward, there’s an optimization that you should know about.
I mentioned earlier that you need a CIContext in order to use a CIFilter, yet there's no mention of this object in the above example. It turns out that the UIImage method you called (imageWithCIImage:) does all the work for you. It creates a CIContext and uses it to perform the work of filtering the image. This makes using the Core Image API very easy.
There is one major drawback – it creates a new CIContext every time it’s used. CIContexts are meant to be reusable to increase performance. If you want to use a slider to update the filter value, like you’ll be doing in this tutorial, creating new CIContexts each time you change the filter would be way too slow.
Let’s do this properly. Delete the code you added to viewDidLoad and replace it with the following:
CIImage *beginImage =
  [CIImage imageWithContentsOfURL:fileNameAndPath];
 
// 1
CIContext *context = [CIContext contextWithOptions:nil];
 
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                              keysAndValues: kCIInputImageKey, beginImage,
                    @"inputIntensity", @0.8, nil];
CIImage *outputImage = [filter outputImage];
 
// 2
CGImageRef cgimg =
  [context createCGImage:outputImage fromRect:[outputImage extent]];
 
// 3
UIImage *newImage = [UIImage imageWithCGImage:cgimg];
self.imageView.image = newImage;
 
// 4
CGImageRelease(cgimg);
Again, let’s go over this section by section.
  1. Here you set up the CIContext object. The CIContext constructor takes an NSDictionary that specifies options such as the color format and whether the context should run on the CPU or GPU (see the sketch after this list). For this app, the default values are fine, so you pass in nil for that argument.
  2. Here you use a method on the context object to draw a CGImage. Calling createCGImage:fromRect: on the context with the supplied CIImage produces a CGImageRef.
  3. Next, you use UIImage +imageWithCGImage: to create a UIImage from the CGImage.
  4. Finally, you release the CGImageRef. CGImage is a C API, so you have to do your own memory management here, even with ARC.
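For reference, here's a minimal sketch of what passing explicit options (instead of nil) might look like. The two keys shown are standard Core Image options, but treat the exact combinations as illustrative rather than something this tutorial requires:
// Force CPU (software) rendering – useful when a render may need to finish
// while the app is in the background, as you'll see later in this tutorial.
CIContext *cpuContext =
  [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @YES}];
 
// A GPU-backed context with color management disabled, which can be a little
// faster when you don't need a managed working color space.
CIContext *fastContext =
  [CIContext contextWithOptions:@{kCIContextWorkingColorSpace : [NSNull null]}];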
Compile and run, and make sure it works just as before.
In this example, adding the CIContext creation and handling it yourself doesn't make much difference. But in the next section, you'll see why this is important for performance, as you implement the ability to change the filter dynamically!
Changing Filter Values
This is great, but it's just the beginning of what you can do with Core Image filters. Let's add a slider and set it up so you can adjust the image settings in real time.
Open MainStoryboard.storyboard and drag a slider in below the image view like so:
Adding a slider in the Storyboard editor
Make sure the Assistant Editor is visible and displaying ViewController.h, then control-drag from the slider down below the @interface. Set the Connection to Action, the name to amountSliderValueChanged, make sure that the Event is set to Value Changed, and click Connect.
While you’re at it let’s connect the slider to an outlet as well. Again control-drag from the slider down below the @interface, but this time set the Connection to Outlet, the name to amountSlider, and click Connect.
Every time the slider changes, you need to redo the image filter with a different value. However, you don't want to redo the whole process – that would be very inefficient and would take too long. You'll need to change a few things in your class so that you hold on to some of the objects you create in your viewDidLoad method.
The biggest thing you want to do is reuse the CIContext whenever you need it. If you recreate it each time, your program will run very slowly. The other things you can hold onto are the CIFilter and the CIImage that holds your beginning image. You'll need a new CIImage for every output, but the image you start with will stay constant.
You need to add some instance variables to accomplish this task.
Add the following three instance variables to your private @implementation in ViewController.m:
@implementation ViewController {
    CIContext *context;
    CIFilter *filter;
    CIImage *beginImage;
}
Also, change the variables in your viewDidLoad method so they use the instance variables instead of declaring new local variables:
beginImage = [CIImage imageWithContentsOfURL:fileNameAndPath];
context = [CIContext contextWithOptions:nil];
 
filter = [CIFilter filterWithName:@"CISepiaTone" 
  keysAndValues:kCIInputImageKey, beginImage, @"inputIntensity", 
  @0.8, nil];
Now you'll flesh out the amountSliderValueChanged: method. What you'll be doing in this method is altering the value of the @"inputIntensity" key in your CIFilter dictionary. Once you've altered this value, you'll need to repeat a few steps:
  • Get the output CIImage from the CIFilter.
  • Convert the CIImage to a CGImageRef.
  • Convert the CGImageRef to a UIImage, and display it in the image view.
So replace the amountSliderValueChanged: method with the following:
- (IBAction)amountSliderValueChanged:(UISlider *)slider {
    float slideValue = slider.value;
 
    [filter setValue:@(slideValue)
              forKey:@"inputIntensity"];
    CIImage *outputImage = [filter outputImage];
 
    CGImageRef cgimg = [context createCGImage:outputImage
                                     fromRect:[outputImage extent]];
 
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    self.imageView.image = newImage;
 
    CGImageRelease(cgimg);
}
You'll notice that you've changed the parameter type from (id)sender to (UISlider *)slider in the method definition. You know you'll only be using this method to retrieve values from your UISlider, so you can go ahead and make this change. If you'd left it as (id), you'd need to cast it to a UISlider or the next line would throw an error. Make sure that the action's declaration in ViewController.h matches the change you've made here.
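For comparison, here's a rough sketch of what the method would need if you kept the generic (id) parameter – just an illustration, not code to add to the project:
- (IBAction)amountSliderValueChanged:(id)sender {
    // With a generic sender, you have to cast before reading .value
    UISlider *slider = (UISlider *)sender;
    float slideValue = slider.value;
    // ... the rest is identical to the version above
}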
You retrieve the float value from the slider. Your slider is set to the default values – min 0, max 1, default 0.5. These happen to be the right values for this CIFilter. How convenient!
The CIFilter has methods that allow you to set the values for the different keys in its dictionary. Here, you're just setting the @"inputIntensity" key to an NSNumber object with whatever float value you get from your slider.
The rest of the code should look familiar, as it follows the same logic as your viewDidLoad method. You're going to be using this code over and over again – from now on, you'll use the amountSliderValueChanged: method to render the output of a CIFilter to your UIImageView.
Compile and run, and you should have a functioning live slider that will alter the sepia value for your image in real time!
Dynamically filtering images with Core Image
Getting Photos from the Photo Album
Now that you can change the values of the filter on the fly, things are starting to get really interesting! But what if you don't care for this image of flowers? Let's set up a UIImagePickerController so you can get pictures out of the photo album and into your program to play with.
You need a button that will bring up the photo album view, so open up MainStoryboard.storyboard and drag in a button to the right of the slider, labeled "Photo Album".
Adding a button in the Storyboard editor
Make sure the Assistant Editor is visible and displaying ViewController.h, then control-drag from the button down below the @interface. Set the Connection to Action, the name to loadPhoto, make sure that the Event is set to Touch Up Inside, and click Connect.
Next switch to ViewController.m, and implement the loadPhoto method as follows:
- (IBAction)loadPhoto:(id)sender {
    UIImagePickerController *pickerC = 
      [[UIImagePickerController alloc] init];
    pickerC.delegate = self;
    [self presentViewController:pickerC animated:YES completion:nil];
}
The first line of code instantiates a new UIImagePickerController. You then set the delegate of the image picker to self (your ViewController).
You'll get a warning here: you need to set up your ViewController as a UIImagePickerControllerDelegate and a UINavigationControllerDelegate, and then implement the methods in those delegate protocols.
Still in ViewController.m, change the class extension as follows:
@interface ViewController () <UIImagePickerControllerDelegate, UINavigationControllerDelegate>
@end
Now implement the following two methods:
- (void)imagePickerController:(UIImagePickerController *)picker 
  didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [self dismissViewControllerAnimated:YES completion:nil];
    NSLog(@"%@", info);
}
 
- (void)imagePickerControllerDidCancel:
  (UIImagePickerController *)picker {
    [self dismissViewControllerAnimated:YES completion:nil];
}
In both cases, you dismiss the UIImagePickerController. That's the delegate's job – if you don't do it there, you'll just stare at the image picker forever!
The first method isn't completed yet – it's just a placeholder that logs some information about the chosen image. The cancel method just gets rid of the picker controller, and is fine as-is.
Compile and run, tap the button, and it will bring up the image picker with the photos in your photo album. If you are running this in the simulator, you probably won't have any photos. On the simulator, or on a device without a camera, you can use Safari to save images to your photo album: open Safari, find an image, tap and hold on it, and you'll get a dialog to save that image. The next time you run your app, you'll have it!
Here’s what you should see in the console after you’ve selected an image (something like this):
2012-09-20 17:30:52.561 CoreImageFun[3766:c07] {
    UIImagePickerControllerMediaType = "public.image";
    UIImagePickerControllerOriginalImage = "";
    UIImagePickerControllerReferenceURL = "assets-library://asset/asset.JPG?id=253312C6-A454-45B4-A9DA-649126A76CA5&ext=JPG";
}
Note that it has an entry in the dictionary for the “original image” selected by the user. This is what you want to pull out and filter!
Now that you've got a way to select an image, how do you set your CIImage beginImage to use that image?
Simple, just change the delegate method to look like this:
- (void)imagePickerController:(UIImagePickerController *)picker
  didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [self dismissViewControllerAnimated:YES completion:nil];
    UIImage *gotImage =
      [info objectForKey:UIImagePickerControllerOriginalImage];
    beginImage = [CIImage imageWithCGImage:gotImage.CGImage];
    [filter setValue:beginImage forKey:kCIInputImageKey];
    [self amountSliderValueChanged:self.amountSlider];
}
You need to create a new CIImage from your selected photo. You can get the UIImage representation of the photo by finding it in the info dictionary, under the UIImagePickerControllerOriginalImage key constant. Note that it's better to use a constant rather than a hardcoded string, because Apple could change the name of the key in the future. For a full list of key constants, see the UIImagePickerControllerDelegate Protocol Reference.
You need to convert this into a CIImage. There's no method that converts a UIImage directly into a CIImage, but there is [CIImage imageWithCGImage:] – and you can get a CGImage from your UIImage via its CGImage property, so that's exactly what you do!
You then set the key in the filter dictionary so that the input image is your new CIImage you just created.
The last line may seem odd. Remember how I pointed out that the code in amountSliderValueChanged: runs the filter with the latest value and updates the image view with the result?
Well, you need to do that again, so you can just call amountSliderValueChanged: directly. Even though the slider value hasn't changed, you can still use that method's code to get the job done. You could break that code out into its own method – and if you were working with something more complex, you would, to avoid confusion – but in this case calling amountSliderValueChanged: serves the purpose just fine. You pass in amountSlider so that it has the correct value to use.
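If you did want to factor that rendering code out, a minimal sketch might look like the following (the helper name renderOutputImage: is my own choice, not something from this project):
- (void)renderOutputImage:(CIImage *)outputImage {
    // Render the filtered CIImage through the shared context and display it.
    CGImageRef cgimg = [context createCGImage:outputImage
                                     fromRect:[outputImage extent]];
    self.imageView.image = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
}
Both amountSliderValueChanged: and the image picker delegate method could then call this helper; for this tutorial, though, calling amountSliderValueChanged: directly is good enough.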
Compile and run, and now you’ll be able to update the image from your photo album!
Filtering a photo album image
What if you create the perfect sepia image – how do you hold on to it? You could take a screenshot, but you can do better than that. Let's learn how to save your photos back to the photo album.
Saving to Photo Album
To save to the photo album, you need to use the AssetsLibrary framework. To add it to your project, go to the project container, choose the Build Phases tab, expand the Link Binary With Libraries group and click the + button. Find the AssetsLibrary framework, and add it.
Then add the following #import statement to the top of ViewController.m:
#import <AssetsLibrary/AssetsLibrary.h>
One thing you should know is that saving a photo to the album is a process that could continue even after you leave the app.
This could be a problem, because the GPU stops whatever it's doing when you switch from one app to another. If the photo isn't finished saving, it won't be there when you go looking for it later!
The solution is to use a CPU-based CIContext for the save. The default is the GPU, and the GPU is much faster, so you can create a second, software-rendered CIContext just for the purpose of saving this file.
Let's add a new button to the app that will save the photo you are currently modifying, with all the changes you've made. Open MainStoryboard.storyboard and add a new button labeled "Save to Album":
Adding a new button for saving to the photo album
Then connect it to a new savePhoto: method, like you did last time.
Then switch to ViewController.m and implement the method as follows:
- (IBAction)savePhoto:(id)sender {
    // 1
    CIImage *saveToSave = [filter outputImage];
    // 2
    CIContext *softwareContext = [CIContext
                                  contextWithOptions:@{kCIContextUseSoftwareRenderer : @(YES)} ];
    // 3
    CGImageRef cgImg = [softwareContext createCGImage:saveToSave
                                             fromRect:[saveToSave extent]];
    // 4
    ALAssetsLibrary* library = [[ALAssetsLibrary alloc] init];
    [library writeImageToSavedPhotosAlbum:cgImg
                                 metadata:[saveToSave properties]
                          completionBlock:^(NSURL *assetURL, NSError *error) {
                              // 5
                              CGImageRelease(cgImg);
                          }];
}
In this code block you:
  1. Get the CIImage output from the filter.
  2. Create a new, software-based CIContext.
  3. Generate the CGImageRef.
  4. Save the CGImageRef to the photo library.
  5. Release the CGImage. That last step happens in a completion block so that it only fires after you're done using it (see the note below).
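One small refinement you might consider (not required for this tutorial) is to check the NSError in the completion block before assuming the save succeeded. A sketch of that version of the block:
completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error) {
        // The save failed (for example, the user denied photo library access).
        NSLog(@"Error saving image: %@", [error localizedDescription]);
    }
    CGImageRelease(cgImg);
}];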
Compile and run the app (remember, on an actual device since you’re using software rendering), and now you can save that “perfect image” to your photo library so it’s preserved forever!
What About Image Metadata?
Let's talk about image metadata for a moment. Image files taken on mobile phones have a variety of data associated with them, such as GPS coordinates, image format, and orientation. Orientation in particular is something that you need to preserve. The process of loading into a CIImage, rendering to a CGImage, and converting to a UIImage strips the metadata from the image. In order to preserve orientation, you'll need to record it and then put it back into the UIImage.
Start by adding a new private instance variable to ViewController.m:
@implementation ViewController {
    CIContext *context;
    CIFilter *filter;
    CIImage *beginImage;
    UIImageOrientation orientation; // New!
}
Next, set the value when you load the image from the photo library in the -imagePickerController: didFinishPickingMediaWithInfo: method. Add the following line before the “beginImage = [CIImage imageWithCGImage:gotImage.CGImage]” line:
orientation = gotImage.imageOrientation;
Finally, alter the line in amountSliderValueChanged: that creates the UIImage you set on the imageView object:
UIImage *newImage = [UIImage imageWithCGImage:cgimg scale:1.0 orientation:orientation];
Now, if you load a picture that was taken in something other than the default orientation, the orientation will be preserved.
What Other Filters are Available?
The CIFilter API has 130 filters on Mac OS X, plus the ability to create custom filters. On iOS 6 it has 93 or more. Currently there isn't a way to build custom filters on iOS, but it's possible that will come.
In order to find out what filters are available, you can use the [CIFilter filterNamesInCategory:kCICategoryBuiltIn] method. This method will return an array of filter names. In addition, each filter has an attributes method that will return a dictionary containing information about that filter. This information includes the filter’s name, the kinds of inputs the filter takes, the default and acceptable values for the inputs, and the filter’s category.
Let’s put together a method for your class that will print all the information for all the currently available filters to the log. Add this method right above viewDidLoad:
 
-(void)logAllFilters {
    NSArray *properties = [CIFilter filterNamesInCategory:
      kCICategoryBuiltIn];
    NSLog(@"%@", properties);
    for (NSString *filterName in properties) {
        CIFilter *fltr = [CIFilter filterWithName:filterName];
        NSLog(@"%@", [fltr attributes]);
    }
}
This method simply gets the array of filter names from the filterNamesInCategory: method and prints the list of names. Then, for each name in the list, it creates that filter and logs its attributes dictionary.
Then call this method at the end of viewDidLoad:
[self logAllFilters];
You will see the following in the log output:
Logging the Core Image filters available on iOS
Wow, that’s a lot of filters!
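To give you an idea of how you might actually use those attributes, here's a small sketch that reads the default value and slider range for CISepiaTone's inputIntensity. The kCIAttribute… constants are the standard Core Image attribute keys; the rest is just illustrative:
NSDictionary *attributes =
  [[CIFilter filterWithName:@"CISepiaTone"] attributes];
// Each input parameter has its own sub-dictionary describing it.
NSDictionary *intensityInfo = [attributes objectForKey:@"inputIntensity"];
NSLog(@"inputIntensity default: %@, slider range: %@ to %@",
      [intensityInfo objectForKey:kCIAttributeDefault],
      [intensityInfo objectForKey:kCIAttributeSliderMin],
      [intensityInfo objectForKey:kCIAttributeSliderMax]);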
More Intricate Filter Chains
Now that you've seen all the filters that are available on iOS 6, it's time to create a more intricate filter chain. In order to do this, you'll create a dedicated method to process the CIImage. It will take in a CIImage, filter it, and return a CIImage. Add the following method:
-(CIImage *)oldPhoto:(CIImage *)img withAmount:(float)intensity {
 
    // 1
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:img forKey:kCIInputImageKey];
    [sepia setValue:@(intensity) forKey:@"inputIntensity"];
 
    // 2
    CIFilter *random = [CIFilter filterWithName:@"CIRandomGenerator"];
 
    // 3
    CIFilter *lighten = [CIFilter filterWithName:@"CIColorControls"];
    [lighten setValue:random.outputImage forKey:kCIInputImageKey];
    [lighten setValue:@(1 - intensity) forKey:@"inputBrightness"];
    [lighten setValue:@0.0 forKey:@"inputSaturation"];
 
    // 4
    CIImage *croppedImage = [lighten.outputImage imageByCroppingToRect:[beginImage extent]];
 
    // 5
    CIFilter *composite = [CIFilter filterWithName:@"CIHardLightBlendMode"];
    [composite setValue:sepia.outputImage forKey:kCIInputImageKey];
    [composite setValue:croppedImage forKey:kCIInputBackgroundImageKey];
 
    // 6
    CIFilter *vignette = [CIFilter filterWithName:@"CIVignette"];
    [vignette setValue:composite.outputImage forKey:kCIInputImageKey];
    [vignette setValue:@(intensity * 2) forKey:@"inputIntensity"];
    [vignette setValue:@(intensity * 30) forKey:@"inputRadius"];
 
    // 7
    return vignette.outputImage;
}
Let’s go over this section by section:
  1. In section one you set up the sepia filter the same way you did in the simpler scenario. You pass the float from the method parameter in to set the intensity of the sepia effect. This value will be provided by the slider.
  2. In the second section you set up a filter that is new to iOS 6 (though not new on the Mac). The random filter creates a random noise pattern, which looks like this:

It doesn't take any parameters. You'll use this noise pattern to add texture to your final old photo look.
  3. In section three, you alter the output of the random noise generator. You want to change it to gray and lighten it up a little bit so the effect is less dramatic. You'll notice that the input image key is set to the .outputImage property of the random filter. This is a convenient way to chain the output of one filter into the input of the next.
  4. In the fourth section you make use of a convenient method on CIImage. imageByCroppingToRect: takes an output CIImage and crops it to the provided rect. In this case, you need to crop the output of the CIRandomGenerator filter because, as a generated CIImage, it goes on infinitely. If you don't crop it at some point, you'll get an error saying that the filters have 'an infinite extent'. CIImages don't actually contain image data – they describe how to produce it. It's not until you call a method on the CIContext that the data is actually processed.
  5. In section five you combine the output of the sepia filter with the output of the altered CIRandomGenerator filter. This filter does the exact same operation as the 'Hard Light' setting does for a Photoshop layer. Most (if not all – I'm not sure) of the blend options in Photoshop are available in Core Image.
  6. In the sixth section, you run a vignette filter on this composited output, which darkens the edges of the photo. You use the value from the slider to set the radius and intensity of this effect.
  7. Finally, you return the output of the last filter.
That’s all for this filter. You can get an idea of how complex these filter chains may become. By combining Core Image filters into these kinds of chains, you can achieve endless different effects.
The next thing to do is call this method from amountSliderValueChanged:. Change these two lines:
[filter setValue:@(slideValue) forKey:@"inputIntensity"];
CIImage *outputImage = [filter outputImage];
To this one line:
CIImage *outputImage = [self oldPhoto:beginImage withAmount:slideValue];
This just replaces the previous way of setting the outputImage variable with a call to your new method. You pass in the slider value for the intensity, and you use beginImage – which you set in the viewDidLoad method – as the input CIImage. Build and run now and you should get a more refined old photo effect.
An example of chaining filters with Core Image
That noise could probably be more subtle, but I’ll leave that experiment to you, dear reader. Now you have the power of Core Image. Go crazy!

Photo Editor Pro - Fotolr v2.0.2 for Android

Photo Editor Pro - Fotolr 2.0.2 - Use this app to make some amazing picture effects in less than one minute. Fotolr Photo Studio (Fotolr PS) is a photo processing app with many powerful and useful functions.


Photo Editor Pro - Fotolr

This software includes 22 functions that are often used in image processing, and has almost all of the common photo editing functions and photo effects.
No matter whether you are a professional or a novice, you can use this app to make some amazing picture effects in less than one minute.
This app also has a photo album, so you can sort through your photos and transfer them.

5 major functions:
Picture editing, Portrait processing, Photo effects, Photo album, Photo sharing via mail or other SNS

1)Picture editing
* Rotation
* Cut
* Resize image
* Draw
* Adjust image color and brightness

2)Image Effects
* Photo effects, more than 80
* Color splash
* Picture Frame
* Picture Sense
* Add text to the pictures

3)Makeover
* Face trimming
* Acne Removing
* Whitening Effect
* Blusher
* Lipstick
* Wig
* Hair Dyeing

4)Album function
* Store, photo description, clone, preview photos, export photos, ordering

5)Photo sharing
* Twitter, Facebook, Tumblr and Sina Weibo.





Photo Editor Pro - Fotolr v2.0.2

Version: 2.0.2
Size: 7.0 MB
Required: Android 2.1 and up

Download Photo Editor Pro - Fotolr v2.0.2 for Android

Sendspace

photo effects with writing


During a Q&A session after the Canadian Video Game Awards, the Mass Effect series' lead writer, Mac Walters, spoke about Mass Effect 3's controversial ending.
It's been months since the controversy initially blew up, and it seems like Walters has had some time to think about the situation. Despite the fan backlash, Mass Effect 3 won best RPG at the Video Game Awards and console game of the year at the Canadian Video Game Awards.
“I would say it's vindication because, I was at Fan Expo in Vancouver today and I've been to several expos since the ending, by and large, the fans that talked to me are people that either enjoyed the ending or are not necessarily that unhappy with the ending at all,” said Walters during the Q&A.
Mass Effect 3 live action trailer
Mass Effect 3′s live-action launch trailer was very impressive.
Even though fans are still upset months later over the ending of Mass Effect's initial trilogy, Walters still seems to be under the impression that only a vocal minority was upset with Mass Effect 3's controversial ending.
“There is a vocal minority and we did want to see what we could do to help that but at the same time I think we also did what we thought was best for the series,” said Walters.
He also feels Mass Effect 3's Citadel downloadable content (DLC) and extended ending DLC helped appease hardcore fans of the series who were still upset over how the franchise ended.
“When you take it as a whole now and you look at the Citadel (DLC), in there as well you have those fond farewells and those moments people want. They fit much better into the game than they would have if we tried to put them into the end of Mass Effect 3.”
Mac Walters accepts an award on behalf of the Mass Effect 3 development team at the Canadian Video Game Awards.
Towards the end of the Q&A he back-pedaled a little bit, stating that winning the award for best console game wasn’t really vindication after all.
“I wouldn’t call it vindication, I guess, but it’s great that people are recognizing it despite that (all of the controversy).”

photo effects with writing


Create A Portrait From Text In Photoshop
In this tutorial, we'll learn how to create a text portrait effect. In other words, we'll create the illusion that the image seen in the photo is actually being created by multiple lines of type. I've seen this effect used with many celebrity photos, from Andy Warhol and Marilyn Monroe to Michael Jackson, David Beckham, even Barack Obama. Of course, you don't need a photo of someone famous to create this effect. In fact, the more you know about the person in the photo, the more interesting the effect can become, because you can add more personalized text. You may want to write about what the person in the photo means to you, or share a funny story, or describe something they've accomplished. Or, you can just grab some random text from somewhere and paste it in. It's completely up to you. I'll be using Photoshop CS4 for this tutorial, but any version of Photoshop should work.
Here’s the image I’ll be starting with:
The original photo. Image licensed from iStockphoto by Photoshop Essentials.com.
The original image
Here’s how it will look after we’ve cropped it and then converted it to text:
Photoshop text portrait effect. Image © 2009 Photoshop Essentials.com.
The final “text portrait” effect.
Let’s get started!

Step 1: Crop The Image Around The Person’s Face

Before we begin, I should mention that you’ll probably want to work on a copy of your photo for this effect rather than on the original image, since the first thing we’ll be doing is cropping some of it away. To save a copy of the image, go up to the File menu at the top of the screen and choose Save As. Give the document a different name, such as “text-portrait-effect” or whatever makes sense to you, and save it as a Photoshop .PSD file. This way, you can do whatever you like to the image and not worry about damaging the original.
Let’s begin by cropping the image so we get a nice close-up view of the person’s face. Photoshop’s official tool for cropping images is the Crop Tool, but for simple crops like this, you’ll often find that the Rectangular Marquee Tool is all you really need. I’m going to grab the Rectangular Marquee Tool from the top of the Tools panel (panels are called "palettes" in earlier versions of Photoshop). I could also press the letter M on my keyboard to select it with the shortcut:
The Rectangular Marquee Tool in Photoshop. Image © 2009 Photoshop Essentials.com.
The Rectangular Marquee Tool works great for simple crops.
Then, with the Rectangular Marquee Tool selected, I'll click and drag out a selection around the man's face, beginning in the top left and dragging towards the bottom right. If you need to reposition your selection as you're dragging it, hold down your spacebar, drag the selection to a new location with your mouse, then release your spacebar and continue dragging out the selection. I want my selection to be a perfect square, so I'll hold down my Shift key as I'm dragging, which will force the shape of the selection into a square. When you're done, you should have a selection that looks something like this:
Dragging a selection with the Rectangular Marquee Tool in Photoshop. Image © 2009 Photoshop Essentials.com.
Everything outside of the selection will be cropped away in a moment.
With the selection in place, go up to the Image menu in the Menu Bar at the top of the screen and select the Crop command:
Selecting the Crop command in Photoshop. Image © 2009 Photoshop Essentials.com.
Go to Image > Crop.
As soon as you select the Crop command, Photoshop crops away everything outside of the selection outline, leaving us with our close-up portrait:
The image is now cropped. Image © 2009 Photoshop Essentials.com.
Only the area inside the selection remains.

Step 2: Add A New Blank Layer

If we look in our Layers panel (palette), we see that we currently have just one layer in our Photoshop document. This layer, named Background, is the layer that contains our image. We need to add a new blank layer above the Background layer, and we can do that by clicking on the New Layer icon at the bottom of the Layers panel:
Clicking the New Layer icon in the Layers palette in Photoshop. Image © 2009 Photoshop Essentials.com.
Click on the New Layer icon in the Layers panel (palette).
Nothing will seem to have happened in the document window, but the Layers panel is now showing a new layer sitting above the Background layer. Photoshop automatically names the new layer "Layer 1". If we look in the layer's preview thumbnail to the left of the layer's name, we see a gray and white checkerboard pattern. This is how Photoshop represents transparency, and since the preview window is filled with nothing but this checkerboard pattern, we know the layer is currently blank (transparent):
The layer preview thumbnail in the Layers palette. Image © 2009 Photoshop Essentials.com.
The preview thumbnail for each layer shows us what’s currently on the layer.

Step 3: Fill The New Layer With Black

Next, we need to fill our new layer with black. Go up to the Edit menu at the top of the screen and select the Fill command:
Selecting the Fill command in Photoshop. Image © 2009 Photoshop Essentials.com.
Select the Fill command from the Edit menu.
This brings up Photoshop’s Fill dialog box, giving us an easy way to fill a layer or a selection with either a solid color or a pattern. Since we no longer have a selection active on the layer, the entire layer will be filled with whatever color we choose. Select Black from the list to the right of the word Use in the Contents section at the top of the dialog box:
Choosing Black for the fill color in the Fill dialog box. Image © 2009 Photoshop Essentials.com.
Choose Black for the fill color.
Click OK to exit out of the dialog box and Photoshop fills "Layer 1" with black. Since "Layer 1" is sitting above the Background layer, our image is now blocked from view in the document window by the fill color:
The Photoshop document is now filled with black. Image © 2009 Photoshop Essentials.com.
The photo temporarily disappears behind the solid black color.

Step 4: Select The Type Tool

We’re ready to add our text. We’ll need Photoshop’s Type Tool for that, so select it from the Tools panel, or press the letter T on your keyboard to quickly select it with the shortcut:
The Type Tool in Photoshop. Image © 2009 Photoshop Essentials.com.
Any time you want to add text to a Photoshop document, you’ll need the Type Tool.
Photoshop gives us the option to add either point type or area type to our documents. Point type is your basic single line of text, usually either a heading or a short caption. Adding point type is as easy as clicking with the Type Tool at the point in the document where you want the line of text to appear and then adding your text. As long as the text you’re adding is short enough that you’re not worried about it extending out beyond the edge of the document, point type is usually the way to go.
Area type, on the other hand, is used when you have large amounts of text, say one or more paragraphs, and you need to make sure that all of the text stays within the boundaries of the document or within a certain area of the document. Since we need to fill our entire document with text, we’ll need to use area type.
To add area type, we first need to define the boundaries for the text, and we do that by dragging out a text frame, which looks very similar to the same sort of basic selection we dragged out earlier with the Rectangular Marquee Tool. Once we have the text frame in place, any text we add will be confined within the frame.
With the Type Tool selected, click in the very top left corner of the document, then drag down to the very bottom right corner of the document so that the text frame covers the entire document area when you're done. As you drag, you'll see the outline of your text frame appearing. Just as when dragging out a selection with the Rectangular Marquee Tool, you can reposition the text frame as you're dragging it out if needed by holding down your spacebar, dragging the frame to a new location, then releasing your spacebar and continuing to drag. When you're done, release your mouse button and you should see your text frame surrounding the entire document, although it may be a little difficult to see in the small screenshot:
An area type frame added to the Photoshop document. Image © 2009 Photoshop Essentials.com.
Any text we add will now be confined within the boundaries of the document thanks to the text frame.

Step 5: Select Your Font Options In The Options Bar

Now that we have our text frame in place, we can add our text. Before we do though, we’ll need to choose which font we want to use. Any time the Type Tool is selected, the Options Bar at the top of the screen will show various options for working with text in Photoshop, including options for choosing a font, font style, font size, text color, and so on. The exact fonts you have to choose from will depend on whichever ones you currently have installed on your computer. You’ll probably need to experiment a few times with this since the font you choose, especially the font size, will have a large impact on the overall look of the effect. To preserve as much detail in the portrait as possible, you’ll want to use a small font size. Of course, the smaller the font, the more text you’ll need to add to fill up the entire document area.
I’m going to stick with something simple, like Arial Black, and I’ll choose 12 pt for my font size to keep it small enough to maintain lots of detail in the portrait:
The type options in the Options Bar in Photoshop. Image © 2009 Photoshop Essentials.com.
Select your font, style and size from the Options Bar.
We’ll need our text color to be white, so if yours is currently set to some other color, click on the color swatch in the Options Bar, which will bring up Photoshop’s Color Picker, and choose white. Click OK when you’re done to exit out of the Color Picker. The color swatch in the Options Bar should now be filled with white:
The type color swatch in the Options Bar in Photoshop. Image © 2009 Photoshop Essentials.com.
Click on the color swatch in the Options Bar and select white from the Color Picker if your text color is not already set to white.

Step 6: Add Your Text To The Document

All we need to do now is add the text. As I mentioned at the beginning of the tutorial, you can personalize the text portrait effect by writing something specific about the person in the photo, or you can simply copy and paste enough text from somewhere to fill up the document. Since I'm using a stock photo for this tutorial and I don't actually know the person in the image (although I'm sure he's a nice guy with lots of good stories to share), I'll simply add some standard "lorem ipsum" page filler text. When you're done, your entire document should be filled with white text:
Filling the Photoshop document with lorem ipsum text. Image © 2009 Photoshop Essentials.com.
Add enough text to fill the entire document from top to bottom.
To accept the text and exit out of text editing mode, click on the small checkmark in the Options Bar:
Clicking the checkmark to accept the text in Photoshop. Image © 2009 Photoshop Essentials.com.
Click on the checkmark in the Options Bar to accept the text.

Step 7: Add A Layer Mask To The Type Layer

To turn our Photoshop document full of text into our text portrait effect, we’ll need to add a layer mask to the text layer. If we look in the Layers panel, we see that we now have three layers, with our text layer sitting above the other two layers. We know that it’s a text layer because the layer’s preview thumbnail shows a capital letter T in the center of it. To add a layer mask to the layer, click on the Layer Mask icon at the bottom of the Layers panel:
Clicking the Layer Mask icon in the Layers panel in Photoshop. Image © 2009 Photoshop Essentials.com.
Make sure the text layer is selected (highlighted in blue) in the Layers panel, then click on the Layer Mask icon.
Nothing will happen yet in the document window, but a layer mask thumbnail will appear to the right of the layer’s preview thumbnail:
A layer mask thumbnail appears in the Layers panel. Image © 2009 Photoshop Essentials.com.
Layer masks are filled with white by default, which means everything on the layer is fully visible in the document.

Step 8: Copy The Original Photo On The Background Layer

We’re now going to create our effect by copying and pasting the portrait photo directly into the layer mask we just added. Click on the Background layer in the Layers panel to select it. You’ll see it become highlighted in blue, telling us that it’s now the currently selected layer:
Selecting the Background layer in Photoshop. Image © 2009 Photoshop Essentials.com.
Click on the Background layer to select it.
Press Ctrl+A (Win) / Command+A (Mac) to quickly select the entire layer. You'll see a selection outline appear around the edges of the document, indicating that the entire layer is now selected. Even though we can still see our white text against the solid black fill color in the document window, we're actually selecting the contents of the Background layer because that's the layer we currently have selected in the Layers panel. Then, press Ctrl+C (Win) / Command+C (Mac) to copy the contents of the layer (the portrait photo) temporarily into your computer's memory.

Step 9: Paste The Photo Directly Into The Layer Mask

Hold down your Alt (Win) / Option (Mac) key and click on the layer mask thumbnail on the text layer in the Layers panel:
Selecting the layer mask in the Layers panel. Image © 2009 Photoshop Essentials.com.
Click on the layer mask thumbnail while holding down Alt (Win) / Option (Mac).
By holding down Alt / Option as we click on the layer mask thumbnail, not only do we select the layer mask, we make it visible inside the document window, allowing us to paste our image directly into it. Since the mask is currently filled with white, your document window will appear filled with white. Press Ctrl+V (Win) / Command+V (Mac) to paste the portrait photo directly into the layer mask. Since layer masks deal only with black, white and shades of gray, the image will appear as a black and white image in the document window:
Pasting the photo directly into the layer mask in Photoshop. Image © 2009 Photoshop Essentials.com.
The image has now been pasted directly into the layer mask on the text layer.
To exit out of the layer mask and switch our view back to normal in the document window, simply hold down Alt (Win) / Option (Mac) once again and click on the layer mask thumbnail, just as we did a moment ago. Notice that the portrait photo is now visible inside the layer mask thumbnail:
Switching out of the layer mask view mode. Image © 2009 Photoshop Essentials.com.
Hold down Alt (Win) / Option (Mac) and click again on the layer mask thumbnail to exit out of the layer mask.
Press Ctrl+D (Win) / Command+D (Mac) to remove the selection outline from around the edges of the document window. We’re now back to our normal view mode inside the document, and the text is now being masked by the photo that we pasted directly into the layer mask, creating our “text portrait” effect:
The text is now masked by the portrait. Image © 2009 Photoshop Essentials.com.
The text is now being masked by the photo.

Step 10: Duplicate The Type Layer

If you're happy with the results at this point, you can skip these last couple of steps, but if you find that the effect looks a little too dark, make sure the text layer is selected in the Layers panel, then press Ctrl+J (Win) / Command+J (Mac) to quickly duplicate the layer. A copy of the text layer will appear above the original:
Creating a copy of the type layer in Photoshop. Image © 2009 Photoshop Essentials.com.
You can also copy layers by going up to the Layer menu, choosing New, then choosing Layer via Copy, but the keyboard shortcut is much faster.
The image will now appear brighter:
The effect now appears brighter. Image © 2009 Photoshop Essentials.com.
The effect appears brighter after duplicating the text layer.

Step 11: Adjust The Layer Opacity To Fine Tune The Brightness

If you find that the effect is still too dark, simply duplicate the text layer a second time. Or, if you find that it’s now a bit too bright, you can fine tune the results by lowering the layer’s opacity. You’ll find the Opacity option at the top of the Layers panel. The lower you set the opacity of the top layer, the more you allow the layers below it to show through, which in this case will have the effect of darkening the image. I’m going to lower the opacity of my copied text layer down to around 65% just to darken the effect slightly:
The Opacity option in the Layers panel in Photoshop. Image © 2009 Photoshop Essentials.com.
Reduce the top layer’s opacity to fine tune the brightness of the effect.
And with that, we’re done! Here, after adjusting the brightness with the Opacity option, is my final “text portrait” Photoshop effect:
Photoshop text portrait photo effect. Image © 2009 Photoshop Essentials.com.