
Source Code

Download source code.

Documentation

Setup and Compilation
Un-tar the distribution in a local directory, cd into it, start MATLAB, and call init to set up the paths. Call compileFF and compilePI to compile the libSVM executables for the camera calibration code and the MEX functions for the probabilistic inverse code, respectively. You only need to call the compile scripts once, but you must call init every time you start MATLAB.

$ tar xzvf derender.v1.0.tgz
$ cd derender
$ matlab
>> init
>> compileFF
>> compilePI
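
If you prefer not to call init by hand in every session, one option (not part of the distribution) is to add it to a startup.m file in your MATLAB startup folder; the path below is a placeholder for wherever you un-tarred the code:

% startup.m -- run automatically when MATLAB starts
cd('/path/to/derender');   % placeholder: your local derender directory
init;                      % set up the toolbox paths for this session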


Calibrating Camera Tone-maps
Use the function CameraFit to estimate a camera's tone-map using a training set of RAW/JPEG pairs. This function is called with three parameters:

>> CameraFit(x,y,'traindata/mycamera');
Here, x and y are both 3xN matrices (of type double) that contain the corresponding RAW and JPEG values, respectively, and the third parameter is the basename under which CameraFit will store files containing the learned parameters.
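
For example, if you have a registered RAW/JPEG pair for a training scene, the input matrices might be assembled as in the sketch below. The file names and the variable raw are placeholders for however you load your own data; the RAW values are assumed to be demosaicked, aligned with the JPEG, and scaled to [0,1]:

>> load('mycamera_raw.mat');              % placeholder: provides an HxWx3 double array raw in [0,1]
>> jpg = double(imread('mycamera.jpg'));  % corresponding camera JPEG, HxWx3, values in [0,255]
>> x = reshape(raw, [], 3)';              % 3xN matrix of RAW values
>> y = reshape(jpg, [], 3)';              % 3xN matrix of JPEG values
>> % x and y can now be passed to CameraFit as above

After calling CameraFit, the files stored under this basename are: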
>> ls traindata/mycamera*
traindata/mycamera.c1    traindata/mycamera_T.mat
traindata/mycamera.c2    traindata/mycamera_stats.mat
traindata/mycamera.c3 
Here, the basename_T.mat file contains the parameters of the linear transform and the per-channel polynomial map, the basename.c* files contain the parameters of the gamut correction estimated through regression, and basename_stats.mat contains the in-training error.

Note that CameraFit assumes that JPEG values are integers in the [0,255] range and that RAW values lie in [0,1], which is normally the case for values extracted from a camera's RAW files. However, when calibrating JPEG-only cameras with a RAW proxy, it is sometimes necessary to scale the proxy's original RAW values; in these cases, the camera data files contain a variable nfact which should be passed as an optional fourth parameter to CameraFit. We also provide a utility function selSubset (see help selSubset) to select subsets of illuminants and exposures, as described in the paper, from the captured data set for each camera.
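
For instance, for a JPEG-only camera whose downloaded data file provides x, y, and the scaling variable nfact, the call with the optional fourth parameter would look as follows (the .mat file name below is a placeholder):

>> load('traindata/mycamera_data.mat');          % placeholder: provides x, y and nfact
>> CameraFit(x, y, 'traindata/mycamera', nfact); % pass nfact as the optional fourth parameter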

Use the function Render to apply the learned camera tone-map to a novel set of RAW values and predict the camera's JPEG outputs:

>> ynew = Render(xnew,'traindata/mycamera');
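
As a quick sanity check (not part of the distribution), the predicted values can be compared against the camera's actual JPEG output for held-out data; here ytrue is a placeholder for the measured 3xN JPEG matrix corresponding to xnew:

>> ynew = Render(xnew,'traindata/mycamera');
>> rmse = sqrt(mean((ynew(:) - ytrue(:)).^2));   % RMSE in JPEG levels, over all channels and samples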

Computing Probabilistic Inverse
Before computing the probabilistic inverse distributions for tone-mapped JPEG values from a given camera, you need to call the functions InvertBase and InvertBaseQ. These functions pre-compute and store data structures that will be used for the numerical integration required in the inverse computation step.

>> InvertBase('traindata/mycamera');
>> InvertBaseQ('traindata/mycamera');
Here, the argument to both functions is the basename with which CameraFit was called (or the basename of the data files if you downloaded them from this site). These functions need to be called only once; they read the files generated by CameraFit and create additional files with the same basename.

Now, given a set of JPEG values output by a calibrated camera, call the function invJ to compute the inverse distributions over the corresponding RAW values:

>> [mu,cv] = invJ(y,'traindata/mycamera');
Here, y is a 3xN matrix of JPEG values (of type double) and the second parameter is the training basename. The returned arrays mu (3xN) and cv (3x3xN) contain the mean vector and covariance matrix, respectively, of the inverse distribution for each JPEG vector in y.
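
For example, a minimal sketch (using only the outputs described above) for summarizing the per-channel uncertainty of each inverse estimate is to take the square roots of the diagonal entries of the covariances:

>> [mu,cv] = invJ(y,'traindata/mycamera');
>> sig = sqrt([squeeze(cv(1,1,:)) squeeze(cv(2,2,:)) squeeze(cv(3,3,:))])';   % 3xN per-channel std. deviations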