Installing OpenPose on Ubuntu 16.04 with CUDA 8.0

1. Clone the repository to the desired location; we'll call it OPENPOSE_ROOT.

git clone

2. If running OpenCV 3.0, modify OPENPOSE_ROOT/3rdparty/caffe/Makefile.config.Ubuntu16_cuda8.example: uncomment the OpenCV 3 flag, and check the CUDA path, changing it if necessary, e.g.
CUDA_DIR := /usr/local/cuda-8.0

3. Open the Makefile in the OPENPOSE_ROOT directory, and modify the following line:

LIBRARIES += opencv_core opencv_highgui opencv_imgproc opencv_objdetect opencv_imgcodecs opencv_videoio

4. Run the bash install script from OPENPOSE_ROOT

bash ./ubuntu/


Some useful Bash scripts

1. List the number of files in subdirectories (cd to the desired parent directory, and run the following script)

find . -maxdepth 1 -type d -print0 | while read -d '' -r dir; do num=$(find "$dir" -ls | wc -l); printf "%5d files in directory %s\n" "$num" "$dir"; done

2. List the size of all subdirectories (cd to the desired parent directory, and run the following script)

du -k --max-depth=1 | sort -nr | awk '
     BEGIN {
        split("KB,MB,GB,TB", Units, ",");
     }
     {
        u = 1;
        while ($1 >= 1024) {
           $1 = $1 / 1024;
           u += 1;
        }
        $1 = sprintf("%.1f %s", $1, Units[u]);
        print $0;
     }
'

Compiling ‘LIBLINEAR-MKL: A Fast Multiple Kernel Learning L1/L2-loss SVM solver in MATLAB’

Link to library: liblinear-mkl
The provided Makefile in the matlab directory is outdated; I have updated the file and am posting it below:

# This Makefile is used under Linux

MATLABDIR ?= /home/guo/MATLAB/R2015b
CXX ?= g++
#CXX = g++-3.3
CC ?= gcc

CFLAGS = -Wall -Wconversion -O3 -fPIC -I$(MATLABDIR)/extern/include -I..

MEX = $(MATLABDIR)/bin/mex
# comment the following line if you use MATLAB on a 32-bit computer
MEX_OPTION += -largeArrayDims
MEX_EXT = $(shell $(MATLABDIR)/bin/mexext)

OCTAVEDIR ?= /usr/include/octave
OCTAVE_MEX = env CC=$(CXX) mkoctfile

all:	matlab

matlab:	binary


binary: train.$(MEX_EXT) predict.$(MEX_EXT) libsvmread.$(MEX_EXT) libsvmwrite.$(MEX_EXT)

train.$(MEX_EXT): train.c ../linear.h ../tron.o ../linear.o linear_model_matlab.o ../blas/blas.a
	$(MEX) $(MEX_OPTION) train.c linear_model_matlab.c ../tron.o ../linear.o linear_model_matlab.o ../blas/blas.a

predict.$(MEX_EXT): predict.c ../linear.h ../tron.o ../linear.o linear_model_matlab.o ../blas/blas.a
	$(MEX) $(MEX_OPTION) predict.c linear_model_matlab.c ../tron.o ../linear.o linear_model_matlab.o ../blas/blas.a

libsvmread.$(MEX_EXT):	libsvmread.c
	$(MEX) $(MEX_OPTION) libsvmread.c

libsvmwrite.$(MEX_EXT):	libsvmwrite.c
	$(MEX) $(MEX_OPTION) libsvmwrite.c

linear_model_matlab.o: linear_model_matlab.c ../linear.h
	$(CXX) $(CFLAGS) -c linear_model_matlab.c

../linear.o: ../linear.cpp ../linear.h
	cd ..; make linear.o

../tron.o: ../tron.cpp ../tron.h
	cd ..; make tron.o

../blas/blas.a: ../blas/*.c ../blas/*.h
	cd ../blas; make OPTFLAGS='$(CFLAGS)' CC='$(CC)';

clean:
	cd ../blas;	make clean
	rm -f *~ *.o *.mex* *.obj ../linear.o ../tron.o

Things to notice:

1) Update MATLABDIR to your MATLAB installation directory.
2) Change all the C++-style inline comments (//) in the .c files to C-style comments (/* */). I tried using the ‘-std=c99’ switch, but make seems to ignore it; moreover, gcc might not need it at all. Strange.

How to choose the number of hidden layers and nodes in a feedforward neural network?

By following a small set of clear rules, one can programmatically set a competent network architecture (i.e., the number and type of neuronal layers and the number of neurons comprising each layer). Following this schema will give you a competent architecture, but probably not an optimal one.

But once this network is initialized, you can iteratively tune the configuration during training using a number of ancillary algorithms; one family of these works by pruning nodes based on (small) values of the weight vector after a certain number of training epochs–in other words, eliminating unnecessary/redundant nodes (more on this below).

So every NN has three types of layers: input, hidden, and output.

Creating the NN architecture therefore means coming up with values for the number of layers of each type and the number of nodes in each of these layers.

The Input Layer

Simple–every NN has exactly one of them–no exceptions that I’m aware of.

With respect to the number of neurons comprising this layer, this parameter is completely and uniquely determined once you know the shape of your training data. Specifically, the number of neurons comprising that layer is equal to the number of features (columns) in your data. Some NN configurations add one additional node for a bias term.

The Output Layer

Like the Input layer, every NN has exactly one output layer. Determining its size (number of neurons) is simple; it is completely determined by the chosen model configuration.

Is your NN going to run in Machine Mode or Regression Mode (the ML convention of using a term that is also used in statistics but assigning a different meaning to it is very confusing)? Machine mode returns a class label (e.g., “Premium Account”/”Basic Account”); Regression Mode returns a value (e.g., price).

If the NN is a regressor, then the output layer has a single node.

If the NN is a classifier, then it also has a single node unless softmax is used in which case the output layer has one node per class label in your model.

The Hidden Layers

So those few rules set the number of layers and size (neurons/layer) for both the input and output layers. That leaves the hidden layers.

How many hidden layers? Well if your data is linearly separable (which you often know by the time you begin coding a NN) then you don’t need any hidden layers at all. Of course, you don’t need an NN to resolve your data either, but it will still do the job.

Beyond that, as you probably know, there’s a mountain of commentary on the question of hidden layer configuration in NNs (see the insanely thorough and insightful NN FAQ for an excellent summary of that commentary). One issue within this subject on which there is a consensus is the performance difference from adding additional hidden layers: the situations in which performance improves with a second (or third, etc.) hidden layer are very few. One hidden layer is sufficient for the large majority of problems.

So what about size of the hidden layer(s)–how many neurons? There are some empirically derived rules of thumb; of these, the most commonly relied on is ‘the optimal size of the hidden layer is usually between the size of the input and size of the output layers’. Jeff Heaton, author of Introduction to Neural Networks in Java, offers a few more. [MY NOTE: here size means the dimension of the input feature and number of classes. ]

In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers.
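Those two rules boil down to a few lines; here is a minimal sketch (the function name and the softmax flag are mine, not from the original post):

```python
def suggest_architecture(n_features, n_classes, softmax=True):
    """Rule-of-thumb feedforward sizing: one hidden layer whose width
    is the mean of the input and output layer widths."""
    n_in = n_features                      # one input neuron per feature
    n_out = n_classes if softmax else 1    # regressor / non-softmax: one node
    n_hidden = round((n_in + n_out) / 2)   # mean of input and output sizes
    return [n_in, n_hidden, n_out]
```

For example, 10 features and 4 softmax classes would give the layer widths [10, 7, 4].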

Optimization of the Network Configuration

Pruning describes a set of techniques to trim network size (by nodes not layers) to improve computational performance and sometimes resolution performance. The gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance (i.e., resolution of the data). (Even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training; look for weights very close to zero–it’s the nodes on either end of those weights that are often removed during pruning.) Obviously, if you use a pruning algorithm during training then begin with a network configuration that is more likely to have excess (i.e., ‘prunable’) nodes–in other words, when deciding on a network architecture, err on the side of more neurons, if you add a pruning step.
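The weight-matrix inspection just described can be sketched as follows (the threshold and names are mine; real pruning algorithms are considerably more involved):

```python
def prunable_nodes(outgoing_weights, eps=1e-3):
    """Flag hidden nodes whose entire outgoing weight vector is near zero.
    outgoing_weights[i] holds the outgoing weights of hidden node i."""
    return [i for i, row in enumerate(outgoing_weights)
            if all(abs(w) < eps for w in row)]
```

A node is flagged only when every one of its outgoing weights is tiny; a single large weight is enough to keep it.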

Put another way, by applying a pruning algorithm to your network during training, you can approach optimal network configuration; whether you can do that in a single “up-front” step (such as a genetic-algorithm-based algorithm) I don’t know, though I do know that for now, this two-step optimization is more common.

[This note was originally posted by ‘doug’ in stackexchange.]


  1. How many hidden layers should I use?
  2. How many hidden units should I use?

Some useful MATLAB code snippets

[ More to be added soon …]

1. Find all subfolders under a directory

d = dir(pathFolder);
 isub = [d(:).isdir]; %# returns logical vector
 nameFolds = {d(isub).name}';
 %You can then remove . and ..
 nameFolds(ismember(nameFolds,{'.','..'})) = [];

2. Find index of cells containing my string

IndexC = strfind(C, 'bla');
 Index = find(not(cellfun('isempty', IndexC)));

3. Read gif image correctly

You need to get and use the colormap from the file:

[X,map] = imread('im0004.gif');

4. To convert ‘gif’ to other format

        if strcmp(ext, 'gif') == 1
            [im,map] = imread(fullfile(src, dir_list{j}, files(i).name));
            im = ind2rgb(im,map); % convert gif to rgb format using colormap
        else
            im = imread(fullfile(src, dir_list{j}, files(i).name));
        end
        [~,name,~] = fileparts(files(i).name);
        imwrite(im, fullfile(src, dir_list{j}, [name '.jpg']));

5. Counting frequency of occurrence in matrix

x =[
22 23 24 23
24 23 24 22
22 23 23 23];
a = unique(x);
out = [a,histc(x(:),a)];
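The same frequency count in Python, for comparison, using the standard library's Counter (the variable names mirror the MATLAB snippet):

```python
from collections import Counter

x = [22, 23, 24, 23,
     24, 23, 24, 22,
     22, 23, 23, 23]
freq = sorted(Counter(x).items())   # list of (value, count) pairs
```

This yields the same value/count pairs as the unique/histc pair above.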

6. Count occurrences of each string in a cell array (how many times a string appears)

xx = {'computer', 'car', 'computer', 'bus', 'tree', 'car'};
a = unique(xx);
b = cellfun(@(x) sum(ismember(xx, x)), a, 'un', 0);

7. Add a vector as column to cell array

 C = {'a' 'b' 'c' 'd'
      'e' 'f' 'g' 'h'
      'i' 'j' 'k' 'l'
      'm' 'n' 'o' 'p'}
 a=[1 2 3 4]
 out = [num2cell(a); C]   % prepend the vector as a new row

 out = [num2cell(a') C]   % prepend the vector as a new column

8. Global figure title for a group of subplots

suptitle(strrep(fnames{f}, '_', '-'));

9. Save maximized figure

set(fig, 'Position', get(0,'Screensize')); % Maximize figure
set(fig, 'PaperPositionMode', 'auto');
saveas(fig, [dst fnames{f} '_subrange_plot.png']);

10. Plot points on image with serial numbers.

% apply data labels to each point in a scatter plot
% x = 1:10; y = 1:10; scatter(x,y);
% a = [1:10]'; b = num2str(a); c = cellstr(b);
% dx = 0.1; dy = 0.1; % displacement so the text does not overlay the data points
% text(x+dx, y+dy, c);

11. Calculate the execution time of a program

tic
% ... code to time ...
timeElapsed = toc

12. normalize a vector of points belonging to the interval [a,b]

n_data = a+(b-a)*(v-min(v))/( max(v)-min(v))
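The same min-max mapping in Python, guarding the degenerate constant-vector case that the one-liner above would divide by zero on (the function name is mine):

```python
def rescale(v, a, b):
    """Linearly map the values of v onto the interval [a, b]."""
    lo, hi = min(v), max(v)
    if hi == lo:
        return [a for _ in v]   # constant vector: avoid division by zero
    return [a + (b - a) * (x - lo) / (hi - lo) for x in v]
```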

13. Correlation between two vectors


A_1 = [10 200 7 150]';
A_2 = [0.001 0.450 0.007 0.200]';

There are tools to simply compute correlation, most obviously corr:

corr(A_1, A_2);  %Returns 0.956766573975184  (Requires stats toolbox)

You can also use base Matlab’s corrcoef function, like this:

M = corrcoef([A_1 A_2]);  %Returns [1 0.956766573975185; 0.956766573975185 1];
M(2,1);  %Returns 0.956766573975184 

Which is closely related to the cov function:

cov([condition(A_1) condition(A_2)]);
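For reference, the Pearson coefficient these MATLAB tools compute can be written out by hand; a plain-Python sketch, checked against the vectors above:

```python
from math import sqrt

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

A_1 = [10, 200, 7, 150]
A_2 = [0.001, 0.450, 0.007, 0.200]
r = pearson(A_1, A_2)   # matches the corr/corrcoef value above
```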

14. Convert java.util.ArrayList to MATLAB array

% create arraylist

import java.util.ArrayList

val_list = java.util.ArrayList();

for i = 1:length(data)
    val_list.add(data(i)); % append each element (assumes numeric data)
end

% convert to matlab array
matlab_array = cell2mat(cell(val_list.toArray()));

Latex Problem: How to solve the appearance of a question mark instead of a citation number

What does a question mark mean?

It means that somewhere along the line the combination of LaTeX and BibTeX has failed to find and format the citation data you need for the citation: LaTeX can see you want to cite something, but doesn’t know how to do so.

Missing citations show up differently in biblatex

If you are using biblatex you will not see a question mark, but instead you will see your citation key in bold. For example, if you have an item in your .bib file with the key Jones1999 you will see Jones1999 in your PDF.

How does this all work

To work out what’s happening, you need to understand how the process is (supposed to) work. Imagine LaTeX and BibTeX as two separate people. LaTeX is a typesetter. BibTeX is an archivist. Roughly the process is supposed to run as follows:

  1. LaTeX (the typesetter) reads the manuscript through and gives three pieces of information to BibTeX (the archivist): a list of the references that need to be cited, extracted from the \cite commands; a note of a file where those references can be found, extracted from the \bibliography command; a note of the sort of formatting required, extracted from the \bibliographystyle command.
  2. BibTeX then goes off, looks up the data in the file it has been told to read, consults a file that tells it how to format the data, and generates a new file containing that data in a form that has been organised so that LaTeX can use it (the .bbl file).
  3. LaTeX then has to take that data and typeset the document – and may indeed need more than one ‘run’ to do so properly (because there may be internal relationships within the data, or with the rest of the manuscript, which BibTeX neither knows nor cares about, but which matter for typesetting).

Your question-mark tells you that something has gone wrong with this process.

More biblatex and biber notes:

  • If you are using biblatex, the style information is located in the options passed to the biblatex package, and the raw data is in the \addbibresource command.
  • If you are using biblatex, the stage described as BibTeX in this answer is generally replaced with a different, and more cunning, archivist, Biber.

What to do

The first thing to do is to make sure that you have actually gone through the whole process at least once: that is why, to deal with any new citation, you will always need at least a LaTeX run (to prepare the information that needs to be handed to BibTeX), one BibTeX run, and one or more subsequent LaTeX runs. So first, make sure you have done that. Please note that latex and bibtex/biber need to be run on your main file (without the file extension) – in other words, the basename of your main file.

latex MainFile
bibtex MainFile
latex MainFile
latex MainFile

If you still have problems, then something has gone wrong somewhere. And it’s nearly always something about the flow of information.

Your first port of call is the BibTeX log (.blg) file. That will usually give you the information you need to diagnose the problem. So open that file (which will be called blah.blg where ‘blah’ is the name of your source file).

In a roughly logical order:

  1. BibTeX did not find the style file. That’s the file that tells it how to format references. In this case you will have an error, and BibTeX will complain I couldn't open the style file badstyle.bst. If you are trying to use a standard style, that’s almost certainly because you have not spelled the style correctly in your \bibliographystyle command – so go and check that. If you are trying to use a non-standard style, it’s probably because you’ve put it somewhere TeX can’t find it. (For testing purposes, I find, it’s wise to remember that it will always be found if it’s in the same directory as your source file; but if you are installing using the facilities of your TeX system — as an inexperienced person should be – you are unlikely to get that problem.)
  2. BibTeX did not find the database file. That’s the .bib file containing the data. In that case the log file will say I couldn't open database file badfile.bib, and will then warn you that it didn’t find database files. The cure is the same: go back and check you have spelled the filename correctly, and that it is somewhere TeX can find it (if in doubt, put it in the folder with your source file).
  3. BibTeX found the file, but it doesn’t contain citation data for the thing you are trying to cite. Now you will just get, in the log-file: Warning--I didn't find a database entry for "yourcitation". That’s what happened to you. You might think that you should have got a type 2 error: but you didn’t because as it happens there is a file called mybib.bib hanging around on the system (as kpsewhich mybib.bib will reveal) — so BibTeX found where it was supposed to look, but couldn’t find the data it needed there. But essentially the order of diagnosis is the same: check you have the right file name in your \bibliography command. If that’s all right, then there is something wrong with that file, or with your citation command. The most likely error here is that you’ve either forgotten to include the data in your .bib file, or you have more than one .bib file that you use and you’ve sent BibTeX to the wrong one, or you’ve mis-spelled the citation label (e.g. you’ve done \cite{nobdoy06} for \cite{nobody06}).
  4. There’s something wrong with the formatting of your entry in the .bib file. That’s not uncommon: it’s easy (for instance) to forget a comma. In that case you should have errors from BibTeX, and in particular something like I was expecting a ',' or a '}' and you will be told that it was skipping whatever remains of this entry. Whether that actually stops any citation being produced may depend on the error; I think BibTeX usually manages to produce something — but biblatex can get totally stumped. Anyway, check and correct the particular entry.

biblatex and biber notes

If you are using biblatex, then generally you will also be using the Biber program instead of BiBTeX program to process your bibliography, but the same general principles apply. Hence the compilation sequence becomes

latex MainFile
biber MainFile
latex MainFile


The order of diagnosis is as follows:

  1. Have I run LaTeX, BibTeX (or Biber), LaTeX, LaTeX?
  2. Look at the .blg file, which will help mightily in answering the following questions.
  3. Has BibTeX/Biber found my style file? (Check you have a valid \bibliographystyle command and that there is a .bst with the same name where it can be found.)
  4. Has Bibtex/Biber found my database? (Check the \bibliography names it correctly and it is able to be found.)
  5. Has it found the right database?
  6. Does the database contain an entry which matches the citation I have actually typed?
  7. Is that entry valid?
  8. Finally: When you have changed something, don’t forget that you will need to go through the same LaTeX — BibTeX (or Biber) — LaTeX — LaTeX run all over again to get it straight. (That’s not actually quite true: but until you have more of a feel for the process it’s a safe assumption to make.)

Classifying Grayscale Images using Pycaffe

If you have trained a model with 1-dimensional gray images and want to classify another gray image, the following hack worked for me:

  1. copy the official script in $CAFFE_ROOT/python/
  2. specify input_dim as 1, 1, x, x in deploy.prototxt
  3. change every call to load_image to pass False as the second parameter, because if you do not specify the second parameter as False, True will be used by default. The second parameter tells load_image whether the image is color or gray; if it’s in color, the returned image will have shape (width, height, 3) or (width, height, 4) depending on whether the alpha channel exists. If you specify False, the shape will be (width, height, 1), as you want.
  4. specify --channel_swap ‘0’, because this value is used to reorder RGB to BGR. Say we have an image im in numpy array format with im.shape = (10, 10, 3); caffe will do im = im[:, :, channel_swap] to swap channels. If you do not specify --channel_swap, it will be "2,1,0" by default, so caffe runs im = im[:, :, [2, 1, 0]]; but the gray image’s shape is really (10, 10, 1) (if you followed step 2), so an index-out-of-bounds exception will be raised. So just specify ‘0’ for --channel_swap; then caffe will run im = im[:, :, [0]], and that’s fine.
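The channel-swap step above is just index reordering on the last axis; here is a numpy-free sketch of what caffe effectively does (the function name is mine, not caffe's):

```python
def channel_swap(im, order):
    """Reorder the channel axis of an H x W x C image (nested lists),
    mimicking caffe's im = im[:, :, order]."""
    return [[[px[c] for c in order] for px in row] for row in im]

rgb = [[[1, 2, 3], [4, 5, 6]]]       # a 1 x 2 x 3 image
bgr = channel_swap(rgb, [2, 1, 0])   # the default "2,1,0" swap
gray = [[[7], [8]]]                  # a 1 x 2 x 1 gray image
ok = channel_swap(gray, [0])         # '0' keeps the single channel
# channel_swap(gray, [2, 1, 0]) would raise IndexError, as described above
```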

then just use the official script.

Here is the gist that worked for me.

Hope it will work for you too.

Installing frontalization 0.1.3: Face Frontalization in Unconstrained Images using MATLAB R2015b on Ubuntu 16.04

Library source:

The code uses the following dependencies. You MUST have these installed and available on the MATLAB path:

1. calib-1.0.1 function available from:
Installation: unzip calib-1.0.1, and rename it to calib under the frontalization home directory. Then rename calib_cv2.4.mexa64 to calib.mexa64.

2. Facial feature detection functions. The code provides examples
of frontalization using different facial landmark detection methods. Currently supported are:
– SDM (default, used in the paper; we don’t use this at the moment),
– The facial feature detector of Zhu and Ramanan (we don’t use this at the moment),
– The DLIB detector (our chosen method),
– Any sparse (five-point) facial landmark detector (we don’t use this at the moment).

3. OpenCV, required by calib for calibration routines and by some of the detectors for cascade classifiers. (We have already discussed OpenCV installation in other blog posts; check those.)

Frontalization set up:

1. Setup Dlib: Download from

tar jxvf dlib-19.1.tar.bz2
 cd dlib-19.1/examples/
 mkdir build
 cd build/
 cmake ..
 cmake --build . --config Release

2. Install dlib dependency (if required):

sudo apt-get install libboost-python1.58.0

3. Open demo.m

change line 86 from:
 detector = 'SDM';
to:
 detector = 'dlib';

4. Open facial_feature_detection.m

Go to case ‘dlib’

change line 106 to following:
 Model3D = load('model3Ddlib'); % reference 3D points corresponding to dlib detections
 Model3D = Model3D.model_dlib;
and change line 111 to following:
 fidu_XY = load('dlib_xy.mat'); % load detections performed by Python script on current image
 fidu_XY = reshape(fidu_XY.lmarks,68,2);

5. Now open

Comment out line 7:
 #from Utils import HOME
Add the following two lines at the end: (Change image list as you like)
 lmarks, bboxes = get_landmarks(['test.jpg'])
 savemat('dlib_xy.mat', {'lmarks':lmarks})

6. Run the python file; this will create the dlib_xy.mat file.

7. Now run demo.m to see the frontalization demo result.

Installing OpenFace: an open source facial behavior analysis toolkit (Ubuntu 16.04)

EDIT: 06/15/2017 Additional compile time troubleshooting tips added.

Installation System: Ubuntu 16.04

1. Installing dependencies:

sudo apt-get update
sudo apt-get install build-essential
sudo apt-get install llvm
sudo gedit /etc/apt/sources.list
deb xenial main restricted
deb xenial main universe
sudo apt-get update
sudo apt-get install clang-3.7 libc++-dev libc++abi-dev
sudo apt-get install cmake
sudo apt-get install libopenblas-dev liblapack-dev
sudo apt-get install git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev checkinstall
sudo unzip
cd opencv-3.1.0
mkdir build
cd build/
cmake ..
make -j4
sudo make install
sudo apt-get install libboost-all-dev

2. Installing OpenFace

git clone
cd OpenFace/
mkdir build

cd build/


3. Using it from MATLAB  (Error Resolution: version `CXXABI_1.3.8′ not found)

To solve this issue, I found that setting LD_LIBRARY_PATH from the script works for me. Just add the following at the beginning of the script (change the path according to your system):

setenv('LD_LIBRARY_PATH', '/usr/lib/x86_64-linux-gnu/');

Sample script for running FaceLandmarkImg

from subprocess import call

# FaceLandmarkImg
# Single image analysis
# -f  the image file being input, can have multiple -f flags
# -of  location of output file for landmark points, gaze and action units
# -op  location of output file for 3D landmark points and head pose
# -gaze indicate that gaze estimation should be performed
# -oi  location of output image with landmarks
# -root  the root directory so -f, -of, -op, and -oi can be specified relative to it
# -inroot  the input root directory so -f can be specified relative to it
# -outroot  the root directory so -of, -op, and -oi can be specified relative to it
# Batch image analysis
# -fdir  - runs landmark detection on all images (.jpg and .png) in a directory, if the directory contains
# .txt files (image_name.txt) with bounding box (min_x min_y max_x max_y), it will use those for initialisation
# -ofdir  directory where detected landmarks, gaze, and action units should be written
# -oidir  directory where images with detected landmarks should be stored
# -opdir  directory where pose files are output (3D landmarks in images together with head pose and gaze)

exe = "../build/bin/FaceLandmarkImg"

# f_param = './OpenFace/image_sequence/001.jpg'
# of_param = './OpenFace/python/img_output/001.txt'
# op_param = './OpenFace/python/img_output/001_3d.txt'
# oi_param = './OpenFace/python/img_output/001.jpg'
# call([exe, "-f", f_param, "-of", of_param, "-op", op_param, "-oi", oi_param])

fdir_param = './OpenFace/image_sequence/'
ofdir_param = './OpenFace/python/imgseq_output'
oidir_param = ofdir_param
opdir_param = ofdir_param

call([exe, "-fdir", fdir_param, "-ofdir", ofdir_param, "-oidir", oidir_param, "-opdir", opdir_param, "-wild"])

UPDATE: 06/15/17

The following problems were faced when trying to install OpenFace using their installation script.

COMPILE TIME ERROR 1: OpenCV 3.1.0 installation issue: cudalegacy does not compile

You will face this problem if you are trying to compile with CUDA 8.0:

try this: in graphcuts.cpp (where your error is thrown) change this:

#include "precomp.hpp"

#if !defined (HAVE_CUDA) || defined (CUDA_DISABLER)
to this:

#include "precomp.hpp"

#if !defined (HAVE_CUDA) || defined (CUDA_DISABLER) || (CUDART_VERSION >= 8000)
because graphcuts is not supported directly with CUDA8 anymore.

COMPILE TIME ERROR 2: /usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/ undefined reference to `pixman_glyph_cache_remove'
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/ undefined reference to `pixman_composite_glyphs_no_mask'
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/ undefined reference to `pixman_glyph_get_mask_format'
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/ undefined reference to `pixman_glyph_cache_insert'
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/ undefined reference to `pixman_glyph_cache_freeze'
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/ undefined reference to `pixman_glyph_cache_thaw'
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/ undefined reference to `pixman_glyph_cache_lookup'
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/ undefined reference to `pixman_composite_glyphs'
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/ undefined reference to `pixman_glyph_cache_create'

Solution: 1) Install libpixman-1-dev
sudo apt-get install libpixman-1-dev
2) Modify link.txt files to add "-lpixman-1" switch. Path to link.txt -

Installing Nvidia DIGITS on Ubuntu 16.04.1

(I am assuming caffe and pycaffe are already successfully installed. If not, check my previous post on that.)

1. Install dependencies

sudo apt-get install --no-install-recommends git graphviz gunicorn python-dev python-flask python-gevent python-h5py python-numpy python-pil python-protobuf python-scipy

2. Download and install source

git clone $DIGITS_HOME

3. Install python packages [Upgrade pip if needed]

sudo pip install -r $DIGITS_HOME/requirements.txt

4. Set an environment variable in ~/.bashrc so DIGITS knows where Caffe is installed:

export CAFFE_HOME=${HOME}/caffe

Remember, DIGITS will look for the caffe binaries in the ${HOME}/caffe/build/tools/ directory, so make sure the binaries are installed in that manner.

5. Start DIGITS server


6. The default location of the web app is