Compiling ‘LIBLINEAR MKL: A Fast Multiple Kernel Learning L1/L2-loss SVM solver in MATLAB’

Link to library: liblinear-mkl
The provided Makefile in the matlab directory is outdated. I have updated the file and am posting it below:

# This Makefile is used under Linux

MATLABDIR ?= /home/guo/MATLAB/R2015b
CXX ?= g++
#CXX = g++-3.3
CC ?= gcc

CFLAGS = -Wall -Wconversion -O3 -fPIC -I$(MATLABDIR)/extern/include -I..

MEX = $(MATLABDIR)/bin/mex
#MEX_OPTION = CC\#$(CXX) CXX\#$(CXX) CFLAGS\#"$(CFLAGS)" CXXFLAGS\#"$(CFLAGS)"
MEX_OPTION = CXX=$(CXX) CXXFLAGS="$(CFLAGS)"
# comment the following line if you use MATLAB on a 32-bit computer
MEX_OPTION += -largeArrayDims
MEX_EXT = $(shell $(MATLABDIR)/bin/mexext)

OCTAVEDIR ?= /usr/include/octave
OCTAVE_MEX = env CC=$(CXX) mkoctfile
OCTAVE_MEX_OPTION = --mex
OCTAVE_MEX_EXT = mex
OCTAVE_CFLAGS = -Wall -O3 -fPIC -I$(OCTAVEDIR) -I..

all:	matlab

matlab:	binary

octave:
	@make MEX="$(OCTAVE_MEX)" MEX_OPTION="$(OCTAVE_MEX_OPTION)" \
	MEX_EXT="$(OCTAVE_MEX_EXT)" CFLAGS="$(OCTAVE_CFLAGS)" \
	binary

binary: train.$(MEX_EXT) predict.$(MEX_EXT) libsvmread.$(MEX_EXT) libsvmwrite.$(MEX_EXT)

train.$(MEX_EXT): train.c ../linear.h ../tron.o ../linear.o linear_model_matlab.o ../blas/blas.a
	$(MEX) $(MEX_OPTION) train.c linear_model_matlab.c ../tron.o ../linear.o linear_model_matlab.o ../blas/blas.a

predict.$(MEX_EXT): predict.c ../linear.h ../tron.o ../linear.o linear_model_matlab.o ../blas/blas.a
	$(MEX) $(MEX_OPTION) predict.c linear_model_matlab.c ../tron.o ../linear.o linear_model_matlab.o ../blas/blas.a

libsvmread.$(MEX_EXT):	libsvmread.c
	$(MEX) $(MEX_OPTION) libsvmread.c

libsvmwrite.$(MEX_EXT):	libsvmwrite.c
	$(MEX) $(MEX_OPTION) libsvmwrite.c

linear_model_matlab.o: linear_model_matlab.c ../linear.h
	$(CXX) $(CFLAGS) -c linear_model_matlab.c

../linear.o:
	cd ..; make linear.o

../tron.o:
	cd ..; make tron.o

../blas/blas.a:
	cd ../blas; make OPTFLAGS='$(CFLAGS)' CC='$(CC)';

clean:
	cd ../blas;	make clean
	rm -f *~ *.o *.mex* *.obj ../linear.o ../tron.o

Things to notice:

1) Update MATLABDIR to point to your MATLAB installation directory.
2) Change all C++-style inline comments (//) in the C files to C-style comments (/* */). I tried using the ‘-std=c99’ switch, but make seems to ignore it; moreover, gcc might not need it at all. Strange.

How to choose the number of hidden layers and nodes in a feedforward neural network?

By following a small set of clear rules, one can programmatically set a competent network architecture (i.e., the number and type of neuronal layers and the number of neurons comprising each layer). Following this schema will give you a competent architecture but probably not an optimal one.

But once this network is initialized, you can iteratively tune the configuration during training using a number of ancillary algorithms; one family of these works by pruning nodes based on (small) values of the weight vector after a certain number of training epochs–in other words, eliminating unnecessary/redundant nodes (more on this below).

So every NN has three types of layers: input, hidden, and output.


Creating the NN architecture therefore means coming up with values for the number of layers of each type and the number of nodes in each of these layers.

The Input Layer

Simple–every NN has exactly one of them–no exceptions that I’m aware of.

With respect to the number of neurons comprising this layer, this parameter is completely and uniquely determined once you know the shape of your training data. Specifically, the number of neurons comprising that layer is equal to the number of features (columns) in your data. Some NN configurations add one additional node for a bias term.


The Output Layer

Like the Input layer, every NN has exactly one output layer. Determining its size (number of neurons) is simple; it is completely determined by the chosen model configuration.

Is your NN going to run in Machine Mode or Regression Mode (the ML convention of using a term that is also used in statistics but assigning a different meaning to it is very confusing)? Machine Mode returns a class label (e.g., “Premium Account”/”Basic Account”); Regression Mode returns a value (e.g., price).

If the NN is a regressor, then the output layer has a single node.

If the NN is a classifier, then it also has a single node unless softmax is used in which case the output layer has one node per class label in your model.

The Hidden Layers

So those few rules set the number of layers and size (neurons/layer) for both the input and output layers. That leaves the hidden layers.

How many hidden layers? Well if your data is linearly separable (which you often know by the time you begin coding a NN) then you don’t need any hidden layers at all. Of course, you don’t need an NN to resolve your data either, but it will still do the job.

Beyond that, as you probably know, there’s a mountain of commentary on the question of hidden layer configuration in NNs (see the insanely thorough and insightful NN FAQ for an excellent summary of that commentary). One issue within this subject on which there is a consensus is the performance difference from adding additional hidden layers: the situations in which performance improves with a second (or third, etc.) hidden layer are very few. One hidden layer is sufficient for the large majority of problems.

So what about the size of the hidden layer(s)–how many neurons? There are some empirically derived rules of thumb; of these, the most commonly relied on is ‘the optimal size of the hidden layer is usually between the size of the input and the size of the output layers’. Jeff Heaton, author of Introduction to Neural Networks in Java, offers a few more. [MY NOTE: here size means the dimension of the input feature vector and the number of classes.]

In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers.
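As a minimal sketch of these two rules in Python (the function and its parameter names are mine, not from the original answer; rounding the mean to the nearest integer is an arbitrary choice):

def suggest_layer_sizes(n_features, n_classes=None, mode="classification", softmax=True):
    """Suggest [input, hidden, output] layer sizes using the rules of thumb above."""
    input_size = n_features                         # one input neuron per feature (column)
    if mode == "regression":
        output_size = 1                             # a regressor has a single output node
    else:
        output_size = n_classes if softmax else 1   # softmax: one output node per class
    hidden_size = round((input_size + output_size) / 2)  # rule (ii): mean of input and output sizes
    return [input_size, hidden_size, output_size]

# Example: 20 features, 3-class softmax classifier
print(suggest_layer_sizes(20, 3))   # [20, 12, 3]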


Optimization of the Network Configuration

Pruning describes a set of techniques to trim network size (by nodes, not layers) to improve computational performance and sometimes resolution performance. The gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance (i.e., resolution of the data). (Even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training; look for weights very close to zero–it’s the nodes on either end of those weights that are often removed during pruning.) Obviously, if you use a pruning algorithm during training, then begin with a network configuration that is more likely to have excess (i.e., ‘prunable’) nodes–in other words, when deciding on a network architecture, err on the side of more neurons if you add a pruning step.

Put another way, by applying a pruning algorithm to your network during training, you can approach optimal network configuration; whether you can do that in a single ‘up-front’ step (such as with a genetic-algorithm-based algorithm) I don’t know, though I do know that for now, this two-step optimization is more common.
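A rough NumPy sketch of the ‘inspect the weight matrix’ idea mentioned above (the weight matrix, shapes, and threshold here are made up purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical trained weights: rows = hidden units, columns = inputs feeding them.
W = rng.normal(size=(8, 20))
W[7, :] *= 1e-4   # pretend the last hidden unit learned only near-zero weights

threshold = 1e-3
# A hidden unit whose incoming weights are all near zero barely influences the
# network's output, so it is a natural candidate for pruning.
prunable = np.where(np.all(np.abs(W) < threshold, axis=1))[0]
print("Hidden units that look prunable:", prunable)   # -> [7]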

[This note was originally posted by ‘doug’ on Stack Exchange.]

More:

  1. How many hidden layers should I use?
  2. How many hidden units should I use?

Some useful MATLAB code snippets

[ More to be added soon …]

1. Find all subfolders under a directory


d = dir(pathFolder);
isub = [d(:).isdir]; % returns a logical vector
nameFolds = {d(isub).name}';
% You can then remove . and ..
nameFolds(ismember(nameFolds,{'.','..'})) = [];

2. Find index of cells containing my string


IndexC = strfind(C, 'bla');
 Index = find(not(cellfun('isempty', IndexC)));

3. Read gif image correctly

You need to get and use the colormap from the file:


[X,map] = imread('im0004.gif');
 imshow(X,map)

4. To convert ‘gif’ to other format

        if strcmp(ext, 'gif') == 1
            [im,map] = imread(fullfile(src, dir_list{j}, files(i).name));
            im = ind2rgb(im,map); % convert gif to rgb format using colormap
        else
            im  = imread(fullfile(src, dir_list{j}, files(i).name));
        end
        [~,name,~] = fileparts(files(i).name);
        imwrite(im, fullfile(src, dir_list{j}, [name '.jpg']));

5. Counting frequency of occurrence in matrix

x = [
22 23 24 23
24 23 24 22
22 23 23 23];
a = unique(x);              % unique values: [22; 23; 24]
out = [a, histc(x(:), a)];  % [value, count] pairs: 22 -> 3, 23 -> 6, 24 -> 3

6. Count occurrences of a string in a cell array (how many times a string appears)

xx = {'computer', 'car', 'computer', 'bus', 'tree', 'car'}
a=unique(xx,'stable')
b=cellfun(@(x) sum(ismember(xx,x)),a,'un',0)

7. Add a vector as a row or column to a cell array

 C = {'a' 'b' 'c' 'd'
      'e' 'f' 'g' 'h'
      'i' 'j' 'k' 'l'
      'm' 'n' 'o' 'p'}
 a=[1 2 3 4]

% add the vector as a new first row
 out=[num2cell(a); C]

% or add it as a new first column
 out=[num2cell(a') C]

% to flatten the cell array into a single row
 out=C(:)'

%or
 out=reshape(C,1,[])

8. Global figure title for a group of subplots

suptitle(strrep(fnames{f}, '_', '-'));

9. Save maximized figure

set(fig, 'Position', get(0,'Screensize')); % Maximize figure
set(fig, 'PaperPositionMode', 'auto');
saveas(fig, [dst fnames{f} '_subrange_plot.png']);

10. Plot points on image with serial numbers.

% apply data labels to each point in a scatter plot
x = 1:10; y = 1:10; scatter(x,y);
a = [1:10]'; b = num2str(a); c = cellstr(b);
dx = 0.1; dy = 0.1; % displacement so the text does not overlay the data points
text(x+dx, y+dy, c);

LaTeX Problem: How to solve the appearance of a question mark instead of a citation number

What does a question mark mean

It means that somewhere along the line the combination of LaTeX and BibTeX has failed to find and format the citation data you need for the citation: LaTeX can see you want to cite something, but doesn’t know how to do so.

Missing citations show up differently in biblatex

If you are using biblatex you will not see a question mark, but instead you will see your citation key in bold. For example, if you have an item in your .bib file with the key Jones1999 you will see Jones1999 in your PDF.

How does this all work

To work out what’s happening, you need to understand how the process is (supposed to) work. Imagine LaTeX and BibTeX as two separate people. LaTeX is a typesetter. BibTeX is an archivist. Roughly the process is supposed to run as follows:

  1. LaTeX (the typesetter) reads the manuscript through and gives three pieces of information to BibTeX (the archivist): a list of the references that need to be cited, extracted from the \cite commands; a note of a file where those references can be found, extracted from the \bibliography command; a note of the sort of formatting required, extracted from the \bibliographystyle command.
  2. BibTeX then goes off, looks up the data in the file it has been told to read, consults a file that tells it how to format the data, and generates a new file containing that data in a form that has been organised so that LaTeX can use it (the .bbl file).
  3. LaTeX then has to take that data and typeset the document – and may indeed need more than one ‘run’ to do so properly (because there may be internal relationships within the data, or with the rest of the manuscript, which BibTeX neither knows nor cares about, but which matter for typesetting).

Your question-mark tells you that something has gone wrong with this process.

More biblatex and biber notes:

  • If you are using biblatex, the style information is located in the options passed to the biblatex package, and the raw data is in the \addbibresource command.
  • If you are using biblatex, the stage described as BibTeX in this answer is generally replaced with a different, and more cunning, archivist, Biber.

What to do

The first thing to do is to make sure that you have actually gone through the whole process at least once: that is why, to deal with any new citation, you will always need at least a LaTeX run (to prepare the information that needs to be handed to BibTeX), one BibTeX run, and one or more subsequent LaTeX runs. So first, make sure you have done that. Please note that latex and bibtex/biber need to be run on your main file without the file extension – in other words, on the basename of your main file.

latex MainFile
bibtex MainFile
latex MainFile
latex MainFile

If you still have problems, then something has gone wrong somewhere. And it’s nearly always something about the flow of information.

Your first port of call is the BibTeX log (.blg) file. That will usually give you the information you need to diagnose the problem. So open that file (which will be called blah.blg where ‘blah’ is the name of your source file).

In a roughly logical order:

  1. BibTeX did not find the style file. That’s the file that tells it how to format references. In this case you will have an error, and BibTeX will complain I couldn't open the style file badstyle.bst. If you are trying to use a standard style, that’s almost certainly because you have not spelled the style correctly in your \bibliographystyle command – so go and check that. If you are trying to use a non-standard style, it’s probably because you’ve put it somewhere TeX can’t find it. (For testing purposes, I find, it’s wise to remember that it will always be found if it’s in the same directory as your source file; but if you are installing using the facilities of your TeX system — as an inexperienced person should be – you are unlikely to get that problem.)
  2. BibTeX did not find the database file. That’s the .bib file containing the data. In that case the log file will say I couldn't open database file badfile.bib, and will then warn you that it didn’t find database files. The cure is the same: go back and check you have spelled the filename correctly, and that it is somewhere TeX can find it (if in doubt, put it in the folder with your source file).
  3. BibTeX found the file, but it doesn’t contain citation data for the thing you are trying to cite. Now you will just get, in the log-file: Warning--I didn't find a database entry for "yourcitation". That’s what happened to you. You might think that you should have got a type 2 error: but you didn’t because as it happens there is a file called mybib.bib hanging around on the system (as kpsewhich mybib.bib will reveal) — so BibTeX found where it was supposed to look, but couldn’t find the data it needed there. But essentially the order of diagnosis is the same: check you have the right file name in your \bibliography command. If that’s all right, then there is something wrong with that file, or with your citation command. The most likely error here is that you’ve either forgotten to include the data in your .bib file, or you have more than one .bib file that you use and you’ve sent BibTeX to the wrong one, or you’ve mis-spelled the citation label (e.g. you’ve done \cite{nobdoy06} for \cite{nobody06}).
  4. There’s something wrong with the formatting of your entry in the .bib file. That’s not uncommon: it’s easy (for instance) to forget a comma. In that case you should have errors from BibTeX, and in particular something like I was expecting a ',' or a '}' and you will be told that it was skipping whatever remains of this entry. Whether that actually stops any citation being produced may depend on the error; I think BibTeX usually manages to produce something — but biblatex can get totally stumped. Anyway, check and correct the particular entry.

biblatex and biber notes

If you are using biblatex, then generally you will also be using the Biber program instead of the BibTeX program to process your bibliography, but the same general principles apply. Hence the compilation sequence becomes

latex MainFile
biber MainFile
latex MainFile

Summary

The order of diagnosis is as follows:

  1. Have I run LaTeX, BibTeX (or Biber), LaTeX, LaTeX?
  2. Look at the .blg file, which will help mightily in answering the following questions.
  3. Has BibTeX/Biber found my style file? (Check you have a valid \bibliographystyle command and that there is a .bst file with the same name where it can be found.)
  4. Has BibTeX/Biber found my database? (Check that the \bibliography command names it correctly and that the file can be found.)
  5. Has it found the right database?
  6. Does the database contain an entry which matches the citation I have actually typed?
  7. Is that entry valid?
  8. Finally: When you have changed something, don’t forget that you will need to go through the same LaTeX — BibTeX (or Biber) — LaTeX — LaTeX run all over again to get it straight. (That’s not actually quite true: but until you have more of a feel for the process it’s a safe assumption to make.)

Classifying Grayscale Images using Pycaffe

If you have trained a model with 1-dimensional (grayscale) images and want to classify another grayscale image, the following is the hack that worked for me:

  1. make a copy of the official classify.py in $CAFFE_ROOT/python/classify.py
  2. specify input_dim as 1, 1, x, x in deploy.prototxt
  3. change all calls to caffe.io.load_image(fname) in classify.py to caffe.io.load_image(fname, False). If you do not pass the second parameter as False, True is used by default. The second parameter tells load_image whether the image is color or grayscale: if it is color, the returned image has shape (width, height, 3) or (width, height, 4), depending on whether an alpha channel exists; if you pass False, the shape will be (width, height, 1), as you want.
  4. specify --channel_swap '0' when running python classify.py. This value reorders RGB to BGR: say we have an image im in NumPy array format with im.shape = (10, 10, 3); caffe then does im = im[:, :, channel_swap] to swap the channels. If you do not specify --channel_swap, it defaults to "2,1,0", so caffe runs im = im[:, :, [2, 1, 0]]. But the gray image’s shape is really (10, 10, 1) (if you followed step 2), so an index-out-of-bounds exception is raised. Just specify '0' for --channel_swap; then caffe runs im = im[:, :, [0]], which is fine (see the sketch after this list).

Then just use the modified classify.py as you would the official one.
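A minimal NumPy sketch of why the default channel swap fails on a single-channel array (plain NumPy; no Caffe needed to see the effect):

import numpy as np

gray = np.zeros((10, 10, 1))    # shape of an image loaded with the second parameter set to False
color = np.zeros((10, 10, 3))   # shape of a color image

# The default --channel_swap "2,1,0" is fine for a 3-channel image:
bgr = color[:, :, [2, 1, 0]]

# ...but on the (10, 10, 1) grayscale array the index 2 is out of bounds:
try:
    gray[:, :, [2, 1, 0]]
except IndexError as err:
    print("default swap fails:", err)

# With --channel_swap '0' the indexing is valid and the shape is preserved:
print(gray[:, :, [0]].shape)    # (10, 10, 1)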

Here is the gist of the classify.py and test.sh that worked for me.
classify.py: https://gist.github.com/uronce-cc/869afe1bd85e79dda111
test.sh: https://gist.github.com/uronce-cc/e834e9cd2a0a62ceb5d5

Hope it will work for you too.

Installing frontalization 0.1.3: Face Frontalization in Unconstrained Images using MATLAB R2015b on Ubuntu 16.04

Library source: http://www.openu.ac.il/home/hassner/projects/frontalize/

Dependencies:
The code uses the following dependencies. You MUST have these installed and available on the MATLAB path:

1. calib-1.0.1 function, available from: http://www.openu.ac.il/home/hassner/projects/poses/calib.1.0.1.zip
Installation: unzip calib.1.0.1 and rename the folder to calib under the frontalization home directory. Then rename calib_cv2.4.mexa64 to calib.mexa64.

2. Facial feature detection functions. The code provides examples of frontalization using different facial landmark detection methods. Currently supported are:
– SDM (default, used in the paper; we don’t use this at the moment),
– the facial feature detector of Zhu and Ramanan (we don’t use this at the moment),
– the DLIB detector (our chosen method),
– any sparse (five-point) facial landmark detector (we don’t use this at the moment).

3. OpenCV, required by calib for calibration routines and by some of the detectors for cascade classifiers. (We have already discussed OpenCV installation in other blog posts; check those.)

Frontalization set up:

1. Setup Dlib: Download from http://dlib.net/files/dlib-19.1.tar.bz2

tar jxvf dlib-19.1.tar.bz2
cd dlib-19.1/
cd examples/
mkdir build
cd build/
cmake ..
cmake --build . --config Release

2. Install dlib dependency (if required):

sudo apt-get install libboost-python1.58.0

3. Open demo.m

change line 86 from:
detector = 'SDM';
to:
detector = 'dlib';

4. Open facial_feature_detection.m

Go to case ‘dlib’

change line 106 to the following:
Model3D = load('model3Ddlib'); % reference 3D points corresponding to dlib detections
Model3D = Model3D.model_dlib;
and change line 111 to the following:
fidu_XY = load('dlib_xy.mat'); % load detections performed by the Python script on the current image
fidu_XY = reshape(fidu_XY.lmarks,68,2);

5. Now open dlib_detect_script.py

Comment out line 7:
 #from Utils import HOME
Add the following two lines at the end (change the image list as you like; a sketch of the script’s tail follows below):
 lmarks, bboxes = get_landmarks(['test.jpg'])
 savemat('dlib_xy.mat', {'lmarks':lmarks})
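For reference, the tail of the modified dlib_detect_script.py might look like the sketch below (get_landmarks is the function already defined in the project’s script; savemat comes from scipy.io if the script does not already import it; 'test.jpg' is just a placeholder image name):

from scipy.io import savemat  # only needed if the script does not already import it

# ... rest of dlib_detect_script.py, with "from Utils import HOME" commented out ...

# get_landmarks() is defined earlier in the project's script; it returns the
# dlib landmarks (lmarks) and bounding boxes for each image in the list.
lmarks, bboxes = get_landmarks(['test.jpg'])

# Save the landmarks so facial_feature_detection.m can load dlib_xy.mat (step 4 above).
savemat('dlib_xy.mat', {'lmarks': lmarks})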

6. Run the Python file; this will create the dlib_xy.mat file.

7. Now run demo.m to see the frontalization demo result.

Installing OpenFace: an open source facial behavior analysis toolkit

Installation System: Ubuntu 16.04

1. Installing dependencies:

sudo apt-get update
sudo apt-get install build-essential
sudo apt-get install llvm
sudo gedit /etc/apt/sources.list
change
deb http://us.archive.ubuntu.com/ubuntu/ xenial main restricted
to
deb http://us.archive.ubuntu.com/ubuntu/ xenial main universe
sudo apt-get update
sudo apt-get install clang-3.7 libc++-dev libc++abi-dev
sudo apt-get install cmake
sudo apt-get install libopenblas-dev liblapack-dev
sudo apt-get install git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev checkinstall
wget https://github.com/Itseez/opencv/archive/3.1.0.zip
sudo unzip 3.1.0.zip
cd opencv-3.1.0
mkdir build
cd build/
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D BUILD_SHARED_LIBS=OFF ..
make -j4
sudo make install
sudo apt-get install libboost-all-dev

2. Installing OpenFace

git clone https://github.com/TadasBaltrusaitis/OpenFace.git
cd OpenFace/
mkdir build

cd build/
cmake -D CMAKE_BUILD_TYPE=RELEASE ..
make

Ref: https://github.com/TadasBaltrusaitis/OpenFace/wiki/Unix-Installation

3. Using from MATLAB (Error Resolution: libstdc++.so.6: version `CXXABI_1.3.8′ not found)

To solve this issue, I found that setting LD_LIBRARY_PATH from the script works for me. Just add the following at the beginning of the script (change the libstdc++.so.6 path according to your system):

setenv('LD_LIBRARY_PATH', '/usr/lib/x86_64-linux-gnu/libstdc++.so.6');

Sample script for running FaceLandmarkImg

from subprocess import call


# FaceLandmarkImg
#
# Single image analysis
#
# -f <filename> the image file being input, can have multiple -f flags
# -of <filename> location of output file for landmark points, gaze and action units
# -op <filename> location of output file for 3D landmark points and head pose
# -gaze indicate that gaze estimation should be performed
# -oi <filename> location of output image with landmarks
# -root <dir> the root directory so -f, -of, -op, and -oi can be specified relative to it
# -inroot <dir> the input root directory so -f can be specified relative to it
# -outroot <dir> the root directory so -of, -op, and -oi can be specified relative to it
#
# Batch image analysis
#
# -fdir <directory> - runs landmark detection on all images (.jpg and .png) in a directory, if the directory contains
# .txt files (image_name.txt) with bounding box (min_x min_y max_x max_y), it will use those for initialisation
# -ofdir <directory> directory where detected landmarks, gaze, and action units should be written
# -oidir <directory> directory where images with detected landmarks should be stored
# -opdir <directory> directory where pose files are output (3D landmarks in images together with head pose and gaze)

exe = "../build/bin/FaceLandmarkImg"

# f_param = './OpenFace/image_sequence/001.jpg'
# of_param = './OpenFace/python/img_output/001.txt'
# op_param = './OpenFace/python/img_output/001_3d.txt'
# oi_param = './OpenFace/python/img_output/001.jpg'
# call([exe, "-f", f_param, "-of", of_param, "-op", op_param, "-oi", oi_param])

fdir_param = './OpenFace/image_sequence/'
ofdir_param = './OpenFace/python/imgseq_output'
oidir_param = ofdir_param
opdir_param = ofdir_param

call([exe, "-fdir", fdir_param, "-ofdir", ofdir_param, "-oidir", oidir_param, "-opdir", opdir_param, "-wild"])


Installing Nvidia DIGITS on Ubuntu 16.04.1

(I am assuming caffe and pycaffe are already successfully installed. If not, check my previous post on that.)

1. Install dependencies

sudo apt-get install --no-install-recommends git graphviz gunicorn python-dev python-flask python-flaskext.wtf python-gevent python-h5py python-numpy python-pil python-protobuf python-scipy

2. Download and install source

DIGITS_HOME=~/digits
git clone https://github.com/NVIDIA/DIGITS.git $DIGITS_HOME

3. Install python packages [Upgrade pip if needed]

sudo pip install -r $DIGITS_HOME/requirements.txt

4. Set an environment variable in ~/.bashrc so DIGITS knows where Caffe is installed:

export CAFFE_HOME=${HOME}/caffe

Remember, DIGITS will look for the caffe binaries in the ${HOME}/caffe/build/tools/ directory, so make sure the binaries are installed there.

5. Start DIGITS server

 ./digits-server 

6. The default location of the web app is

 http://localhost:34448/

Installing Caffe with Cuda 7.5 on Ubuntu 16.04.1 (Nvidia GeForce GTX 960M GPU)

There are three main steps:

Step 1: Installing the Nvidia graphics driver

Step 2: Installing Cuda 7.5

Step 3: Installing Caffe, pycaffe

Step 1: Installing the Nvidia graphics driver

Ref: http://askubuntu.com/questions/658040/ubuntu-14-04-nvidia-drivers-for-geforce-gtx-960m


sudo apt-get purge nvidia*
sudo apt-get purge bumblebee* primus
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-352 nvidia-prime
sudo add-apt-repository -r ppa:bumblebee/stable

Step 2: Installing Cuda 7.5

Ref: http://askubuntu.com/questions/799184/how-can-i-install-cuda-on-ubuntu-16-04

1. Download cuda_7.5.18_linux.run from https://developer.nvidia.com/cuda-downloads

2.

md5sum cuda_7.5.18_linux.run
sudo apt-get purge nvidia-cuda*
sudo sh cuda_7.5.18_linux.run --override
sudo reboot

Step 3: Installing Caffe, pycaffe
Ref: https://github.com/BVLC/caffe/wiki/Ubuntu-16.04-or-15.10-Installation-Guide

1. Enter the following commands


sudo apt-get update

sudo apt-get upgrade

sudo apt-get install -y build-essential cmake git pkg-config

sudo apt-get install -y libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler

sudo apt-get install -y libatlas-base-dev

sudo apt-get install -y --no-install-recommends libboost-all-dev

sudo apt-get install -y libgflags-dev libgoogle-glog-dev liblmdb-dev

# (Python general)
sudo apt-get install -y python-pip

# (Python 2.7 development files)
sudo apt-get install -y python-dev
sudo apt-get install -y python-numpy python-scipy

# (OpenCV 2.4)
sudo apt-get install -y libopencv-dev

2. clone caffe repository

 git clone https://github.com/BVLC/caffe.git 

3. enter the caffe folder and copy the example file to Makefile.config

 cp Makefile.config.example Makefile.config 

4. Edit Makefile.config as follows


PYTHON_INCLUDE := /usr/include/python2.7 /usr/lib/python2.7/dist-packages/numpy/core/include

WITH_PYTHON_LAYER := 1

INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial

LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial

5. Apply the following commands (for Ubuntu 16.04)

find . -type f -exec sed -i -e 's^"hdf5.h"^"hdf5/serial/hdf5.h"^g' -e 's^"hdf5_hl.h"^"hdf5/serial/hdf5_hl.h"^g' '{}' \;

cd /usr/lib/x86_64-linux-gnu

sudo ln -s libhdf5_serial.so.10.1.0 libhdf5.so

sudo ln -s libhdf5_serial_hl.so.10.0.2 libhdf5_hl.so

6. Installing required python packages

cd python

for req in $(cat requirements.txt); do pip install $req; done

7. Go back to caffe installation folder (~/caffe e.g.), and run the installation commands:

make all

make test

make runtest

make pycaffe

8. In order to make Python work with Caffe, open the file ~/.bashrc for editing in your favorite text editor. There, add the following line at the end of the file:

export PYTHONPATH=/path/to/caffe-master/python:$PYTHONPATH

9. Your binaries will reside in the .build_release folder:

~/caffe/.build_release/tools/ 

ERROR Resolution: “error -- unsupported GNU version! gcc versions later than 5.3 are not supported!”

Comment out the #error line in the file /usr/local/cuda/include/host_config.h:

#if __GNUC__ > 5 || (__GNUC__ == 5 && __GNUC_MINOR__ > 3)

//#error -- unsupported GNU version! gcc versions later than 5.3 are not supported!

#endif /* __GNUC__ > 5 || (__GNUC__ == 5 && __GNUC_MINOR__ > 1) */

ERROR Resolution: “/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
return (char *) memcpy (__dest, __src, __n) + __n;
^
Makefile:588: recipe for target ‘.build_release/cuda/src/caffe/util/im2col.o’ failed
make: *** [.build_release/cuda/src/caffe/util/im2col.o] Error 1

If you are compiling with make then edit Makefile and replace the line
NVCCFLAGS += -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
with
NVCCFLAGS += -D_FORCE_INLINES -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)

Compiling congealReal: Unsupervised joint alignment of complex images

The C++ code provided for the project ‘Unsupervised joint alignment of complex images’ at https://bitbucket.org/gbhuang/congealreal is a very useful application for unsupervised image alignment. The Makefile, however, is now outdated; we can fix it by modifying it as follows:


IFLAGS = `pkg-config --cflags opencv` -O3
LFLAGS = `pkg-config opencv --libs`

all: congealReal funnelReal

clean:
	rm congealReal funnelReal

congealReal: congealReal.cpp
	g++ $(IFLAGS) -o congealReal congealReal.cpp $(LFLAGS)

funnelReal: funnelReal.cpp
	g++ $(IFLAGS) -o funnelReal funnelReal.cpp $(LFLAGS)