Caffe is freely available to all users of HPC2N.
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and community contributors.
Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag, so you can train on a GPU machine and then deploy to commodity clusters or mobile devices.
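As an illustration of that single flag: in Caffe the compute device is selected by the `solver_mode` setting in the solver configuration file. A minimal sketch (the file names and hyperparameter values are hypothetical, not from this page):

```protobuf
# solver.prototxt (sketch) -- net path and learning rate are placeholder values
net: "train_val.prototxt"
base_lr: 0.01
max_iter: 10000
# The single flag: change GPU to CPU to run the same model without a GPU
solver_mode: GPU
```

The rest of the model and solver definition stays identical; only this one line changes between CPU and GPU runs.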
Extensible code fosters active development. In Caffe’s first year, it has been forked by over 1,000 developers and had many significant changes contributed back. Thanks to these contributors the framework tracks the state-of-the-art in both code and models.
Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU. That's 1 ms/image for inference and 4 ms/image for learning; more recent library versions and hardware are faster still. We believe that Caffe is among the fastest convnet implementations available.
Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. Join the Caffe community of brewers on the caffe-users group and GitHub.
On HPC2N we have Caffe available as a module on Kebnekaise.
To use the Caffe module, first add it to your environment. Use:
module spider caffe
to see which versions are available, as well as how to load the module and the needed prerequisites.
Note that all names of modules are case-sensitive when loading the modules.
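A typical session might look like the following sketch. The version and prerequisite module names are hypothetical; use the exact names reported by `module spider`, since they vary between installations:

```bash
# List available Caffe versions and how to load them
module spider caffe

# Show the prerequisites for a specific version (version string is an example)
module spider Caffe/1.0

# Load the prerequisites first, then Caffe itself
# (module names below are placeholders -- copy the ones 'module spider' reports)
module load GCC/6.4.0-2.28 CUDA/9.0.176 OpenMPI/2.1.1
module load Caffe/1.0
```

Remember that the load commands are case-sensitive, so `module load caffe` will fail if the module is named `Caffe`.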
You can read more about loading modules on our Accessing software with Lmod page and our Using modules (Lmod) page.
Giving the command 'caffe' with no flags will print a short list of commands and options for the program.
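For example, after loading the module you can run the binary directly. The subcommands below are standard Caffe CLI actions; the model and solver file names are hypothetical placeholders:

```bash
# Print the list of available commands and flags
caffe

# Train a network from a solver definition (file name is a placeholder)
caffe train -solver solver.prototxt

# Score a trained model on the test set defined in the model file
caffe test -model train_val.prototxt -weights model.caffemodel -iterations 50

# Benchmark per-layer forward/backward timings on GPU 0
caffe time -model train_val.prototxt -gpu 0
```

Run these inside a batch job or interactive allocation rather than on the login node, as is usual on Kebnekaise.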
More information about using Caffe can be found on the Caffe homepage.