Caffe is a deep learning framework, popular on Linux, with Python and MATLAB interfaces. I just managed to compile Caffe on Windows, and I think it's worth sharing. niuzhiheng's GitHub was of great help. This repository is organized so that merging future updates from Caffe's GitHub will be straightforward. For quick setup and usage, go to my GitHub.
For a quick-and-dirty start, go to Caffe + vs2013 + OpenCV in Windows Tutorial (I) – Setup, where I provide 1) the modified files that compile on Windows right away; and 2) the vs2013 project that I'm currently using.
Below is my step-by-step record of compiling Caffe from source on Windows 8.1 + vs2013 + OpenCV 2.4.9 + CUDA 6.5.
- Download source from Caffe’s GitHub and unzip.
- Create a new project in Visual Studio 2013.
- File -> New -> Project
- Choose Win32 Console Application
- Set location to the root of Caffe
- Change Name to caffe (the generated exe file is later named after the project, so please use this lower-case word)
- Click OK
- Check Empty project, and then Finish
- Change the platform from Win32 to x64:
- Build -> Configuration Manager -> Active solution platform -> new -> x64 -> OK
- An empty project called caffe is now generated in the root of Caffe. To compile a big project like Caffe, good practice is to compile a few *.cpp files first and figure out the dependencies one by one.
- Drag files in caffe/src/caffe to Source Files in VS.
- Let’s set some directories of the project.
- In Property Manager, both Debug|x64 and Release|x64 need to be set:
- In Configuration Properties -> General, set Output Directory to '../bin', so the generated exe files are easy to find later. Change this for both Debug and Release mode.
- In Configuration Properties -> C/C++ -> General, edit Additional Include Directories (Both Debug and Release) to include:
../include; ../src;
- Make sure to check Inherit from parent or project defaults.
- Now let’s fix dependencies one by one: (My pre-built 3rdparty folder can be downloaded: 3rdparty.zip, if you are using Windows 64bit + vs2013)
- CUDA 6.5
- Download and install (You have to have a GPU on your PC lol)
- OpenCV 2.4.9 + CUDA 6.5
- Boost
- Pre-built binaries work fine. I used "boost_1_56_0-msvc-12.0-64.exe".
- OpenBLAS
- Pre-built binaries work fine. I used "OpenBLAS-v0.2.12-Win64-int64.zip".
- Update: the required dll files (libgcc_s_seh-1.dll, libgcc_s_sjlj-1.dll, libgfortran-3.dll, libquadmath-0.dll) can be found here: OpenBlas_Required_DLL.zip (x64)
- Add OpenCV + CUDA + Boost into the project:
- I put these 3 libraries outside the Caffe folder because they are so generally useful.
- Add include path to Additional Include Directories: (Both Debug and Release)
$(CUDA_PATH_V6_5)\include
$(OPENCV_X64_VS2013_2_4_9)\include
$(OPENCV_X64_VS2013_2_4_9)\include\opencv
$(BOOST_1_56_0)
- CUDA_PATH_V6_5 is added by the CUDA installer. OPENCV_X64_VS2013_2_4_9 and BOOST_1_56_0 need to be added to the Environment Variables. Also, the bin folder of OpenCV (e.g. /to/opencv-2.4.9/x64-vs2013/bin) needs to be added to Path. You may need to log off for these to take effect.
- Add lib path to Additional Library Directories in Configuration Properties -> Linker -> General: (Both Debug and Release)
$(CUDA_PATH_V6_5)\lib\$(PlatformName)
$(OPENCV_X64_VS2013_2_4_9)\lib
$(BOOST_1_56_0)\lib64-msvc-12.0
- Add libraries to Additional Dependencies in Configuration Properties -> Linker -> Input
- Debug:
opencv_core249d.lib;opencv_calib3d249d.lib;opencv_contrib249d.lib;opencv_flann249d.lib;opencv_highgui249d.lib;opencv_imgproc249d.lib;opencv_legacy249d.lib;opencv_ml249d.lib;opencv_gpu249d.lib;opencv_objdetect249d.lib;opencv_photo249d.lib;opencv_features2d249d.lib;opencv_nonfree249d.lib;opencv_stitching249d.lib;opencv_video249d.lib;opencv_videostab249d.lib;cudart.lib;cuda.lib;nppi.lib;cufft.lib;cublas.lib;curand.lib;%(AdditionalDependencies)
- Release:
opencv_core249.lib;opencv_flann249.lib;opencv_imgproc249.lib;opencv_highgui249.lib;opencv_legacy249.lib;opencv_video249.lib;opencv_ml249.lib;opencv_calib3d249.lib;opencv_objdetect249.lib;opencv_stitching249.lib;opencv_gpu249.lib;opencv_nonfree249.lib;opencv_features2d249.lib;cudart.lib;cuda.lib;nppi.lib;cufft.lib;cublas.lib;curand.lib;%(AdditionalDependencies)
- GFlags + GLog + ProtoBuf + LevelDB
- Download source code from the internet.
- Use CMake to generate a .sln for vs2013. Remember to set "CMAKE_INSTALL_PREFIX", which is where the output files will be placed when you build "INSTALL".
- Build in vs2013. Usually build "ALL_BUILD" first, then "INSTALL", in both Debug and Release mode.
- Copy the compiled files to caffe/3rdparty. Debug versions should be renamed with a "d" suffix before copying, e.g. "gflags.lib" -> "gflagsd.lib".
- Google's code is very well maintained. Nothing much to say.
- HDF5
- Download source code from the internet.
- In CMake, enable HDF5_BUILD_HL_LIB.
- Copy compiled files to caffe/3rdparty.
- LMDB
- Download source code from the internet.
- In Visual Studio 2013, File -> New -> Project From Existing Code…
- In C, a header file called unistd.h is needed. A workaround is found here. Or one can download it here: unistd.h, getopt.h and getopt.c.
- The compiled LMDB has some problems, which I'll mention later.
- Copy some .dll files so that caffe can run
- Copy libglog.dll from GLog to caffe/bin.
- Copy libopenblas.dll from OpenBLAS to caffe/bin.
- Copy msvcp120.dll and msvcr120.dll from HDF5 to caffe/bin.
- More paths added to Additional Include Directories (Both Debug and Release):
../3rdparty/include; ../3rdparty/include/openblas; ../3rdparty/include/hdf5; ../3rdparty/include/lmdb;
- More paths added to Additional Library Directories (Both Debug and Release):
../3rdparty/lib
- More files added to Additional Dependencies:
- Debug:
gflagsd.lib;libglog.lib;libopenblas.dll.a;libprotobufd.lib;libprotoc.lib;leveldbd.lib;lmdbd.lib;libhdf5_D.lib;libhdf5_hl_D.lib;Shlwapi.lib;
- Release:
gflags.lib;libglog.lib;libopenblas.dll.a;libprotobuf.lib;libprotoc.lib;leveldb.lib;lmdb.lib;libhdf5.lib;libhdf5_hl.lib;Shlwapi.lib;
- My pre-built 3rdparty folder can be downloaded: 3rdparty.zip, if you are using Windows 64bit + vs2013.
- Now compile “common.cpp” to fix errors.
- Add "#include <process.h>" to common.cpp to fix the "getpid" error.
- Add "_CRT_SECURE_NO_WARNINGS" to Configuration Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions to fix the "fopen_s" error.
- Change the line that uses "getpid" in common.cpp to the code below to fix the POSIX error:
// port for Win32
#ifndef _MSC_VER
  pid = getpid();
#else
  pid = _getpid();
#endif
- common.cpp should be compiled without error now.
- Now compile “blob.cpp”
- To enable #include "caffe/proto/caffe.pb.h", we need to generate it from caffe.proto.
- Put protoc.exe in the caffe/3rdparty/bin folder.
- Put GeneratePB.bat in caffe/scripts folder.
- Add the line below (with the quote mark) to Configuration Properties -> Build Events -> Pre-Build Event -> Command Line:
“../scripts/GeneratePB.bat”
- Right click caffe to build the project; you will see "caffe.pb.h is being generated" and "caffe_pretty_print.pb.h is being generated".
- Now blob.cpp can be compiled without errors.
- Now compile “net.cpp”
- “unistd.h” missing error can be fixed like above. (also included in 3rdparty.zip already)
- The "mkstemp" missing error can be fixed according to this (download here: mkstemp.h and mkstemp.cpp, or find them in 3rdparty.zip)
- Add "#include "mkstemp.h"" to io.hpp.
- Change the line that uses "close" in io.hpp to the code below to fix the close error. (Don't do "#define close _close" in io.hpp; such defines are dangerous in .hpp files, though acceptable in .cpp files.)
#ifndef _MSC_VER
  close(fd);
#else
  _close(fd);
#endif
- Change the line that uses the "mkdtemp" function in io.hpp to the code below (using _mktemp_s as a workaround for mkdtemp):
#ifndef _MSC_VER
  char* mkdtemp_result = mkdtemp(temp_dirname_cstr);
#else
  errno_t mkdtemp_result = _mktemp_s(temp_dirname_cstr, sizeof(temp_dirname_cstr));
#endif
- Now compile “solver.cpp”
- Add these lines to solver.cpp to fix the "snprintf" error:
// port for Win32
#ifdef _MSC_VER
#define snprintf sprintf_s
#endif
- Now compile files in caffe/src/layers
- Create a folder in Source Files and drag ONE .cu file from the layers folder into it.
- Enable CUDA 6.5 (.targets, .props) in PROJECT -> Build Customizations.
- In the properties of that .cu file, change Item Type to CUDA C/C++.
- Drag the rest of the files in the layers folder to the project. All newly added .cu files will have their Item Type set to CUDA C/C++ automatically; otherwise you would have to change them one by one.
- In bnll_layer.cu, change "const float kBNLL_THRESHOLD = 50.;" to "#define kBNLL_THRESHOLD 50.0" to avoid an error.
- Now every file in the layers folder compiles fine.
- Now compile files in caffe/src/util folder
- Go to the ReadProtoFromBinaryFile function and change "O_RDONLY" to "O_RDONLY | O_BINARY". When loading a binary file on Windows, you have to specify that it's binary; otherwise you might get an error loading a *_mean.binaryproto file later.
- Add these lines to io.cpp to avoid the POSIX error:
// port for Win32
#ifdef _MSC_VER
#define open _open
#define close _close
#endif
- Then change "close()" to "_close()" manually, except for "file.close()".
- Add these lines to math_functions.cpp to fix the __builtin_popcount and __builtin_popcountl errors:
#define __builtin_popcount __popcnt
#define __builtin_popcountl __popcnt
- Now compile files in caffe/src/proto
- Previously 2 .cc files were generated by GeneratePB.bat.
- Create a folder in Source Files and drag 2 .cc files (“caffe.pb.cc” and “caffe_pretty_print.pb.cc”) into it.
- Add “-D_SCL_SECURE_NO_WARNINGS” to Configuration Properties -> C/C++ -> Command Line (Both Debug and Release mode) to fix ‘std::_Copy_impl’ unsafe error.
- Now we have every file compilable in Visual Studio.
- Finally compile caffe.cpp in caffe/tools folder
- Drag caffe/tools/caffe.cpp file into Visual Studio’s Source Files
- Right click project name caffe, and BUILD!
- Now everything should work! If you get a linking error, check the missing function's name to work out which library is missing.
- Change to Release mode and BUILD again to get a caffe/bin/caffe.exe of Release mode.
- Copy caffe.exe to make a backup, e.g. caffe-backup.exe, because if you build the project again, caffe.exe will be overwritten.
- Let’s do a QUICK TEST on MNIST!
- Get MNIST Dataset
- Put 7za.exe and wget.exe into caffe/3rdparty/bin.
- Put get_mnist.bat in caffe/data/mnist and run.
- You will get 4 files ending with “-ubyte”.
- Generate convert_mnist_data.exe
- Check the project in Visual Studio is in Release|x64.
- Exclude “caffe.cpp” from the project.
- Drag “caffe/examples/mnist/convert_mnist_data.cpp” to the project.
- Add these lines in convert_mnist_data.cpp:
// port for Win32
#ifdef _MSC_VER
#include <direct.h>
#define snprintf sprintf_s
#endif
- Change "mkdir" to "_mkdir" in convert_mnist_data.cpp:
// port for Win32
#ifndef _MSC_VER
CHECK_EQ(mkdir(db_path, 0744), 0) << "mkdir " << db_path << "failed";
#else
CHECK_EQ(_mkdir(db_path), 0) << "mkdir " << db_path << "failed";
#endif
- The "potentially uninitialized local pointer variable" error can be fixed by initializing db, mdb_env and mdb_txn to NULL in convert_mnist_data.cpp. Change "MDB_env *mdb_env;" to "MDB_env *mdb_env = NULL;", and similarly for "mdb_txn" and "db".
- BUILD the project to get caffe.exe in caffe/bin. Rename it to convert_mnist_data.exe.
- Create LevelDB data for training
- Put create_mnist.bat in caffe/examples/mnist and run it. Here we generate mnist_train_leveldb and mnist_test_leveldb. If lmdb is used as the backend, an error occurs; fortunately, leveldb works fine. It would be great if someone could fix the lmdb error.
- Restore caffe.exe from the caffe-backup.exe copy if you made one before.
- Now modify caffe/examples/mnist/lenet_train_test.prototxt, substituting "lmdb" with "leveldb", because we generated "leveldb" data instead of "lmdb".
- Do training
- Put train_lenet.bat in caffe/examples/mnist and run it. Cool! The training is on! The accuracy is around 0.98 after hundreds of iterations.
Update – 2015.1.26
- The above steps didn't include cuDNN acceleration.
- One can add "USE_CUDNN" to the project's Preprocessor Definitions to enable cuDNN. (Currently Caffe works with cuDNN v1. The set-up is pretty straightforward. Note that "Code Generation" for CUDA should be at least "compute_30,sm_30".)
- My tests showed that for the LeNet layout, testing 800 images of size 32*32 (excluding the time for reading input images):
- CPU takes around 400~500 ms
- GPU takes around 50 ms
- cuDNN takes around 15 ms
I have some problems when building your code in x64 debug mode. Can you help me solve the two problems below?
Thanks!
1>common.obj : error LNK2019: unresolved external symbol “__declspec(dllimport) void __cdecl google::InstallFailureSignalHandler(void)” (__imp_?InstallFailureSignalHandler@google@@YAXXZ) referenced in function “void __cdecl caffe::GlobalInit(int *,char * * *)” (?GlobalInit@caffe@@YAXPEAHPEAPEAPEAD@Z)
1>data_layer.obj : error LNK2019: unresolved external symbol “class boost::shared_ptr<class caffe::Dataset<class std::basic_string<char,struct std::char_traits,class std::allocator >,class caffe::Datum,struct caffe::dataset_internal::DefaultCoder<class std::basic_string<char,struct std::char_traits,class std::allocator > >,struct caffe::dataset_internal::DefaultCoder > > __cdecl caffe::DatasetFactory<class std::basic_string<char,struct std::char_traits,class std::allocator >,class caffe::Datum>(enum caffe::DataParameter_DB const &)”
1) `InstallFailureSignalHandler` can be commented out in `common.cpp`.
2) The linking problem is probably caused by not having the boost library, or by not compiling all the files in the `caffe/src/caffe` folder.
It seems that you are using 9c39b93, which is my attempt to merge from caffe::dev. Some static linking problems happened afterwards. Please try later commits, e.g. 2a9acc2.
The code has worked. Thanks!
Hello, I just met the same problem as yours. Can you describe in detail how you solved it? Thank you.
Hi, how did you solve this problem? I met the same problems. Thank you.
Can I build and test it on Windows 8, Core-i7 x64, without an NVIDIA card?
Yes, I think so. Just don't add any .cu files, and set Caffe's mode to CPU before you create a Net in your code.
Thanks for your reply. How can I set the Caffe mode to CPU?
An example is roughly like this:
// Set CPU
Caffe::set_mode(Caffe::CPU);
// Set to TEST Phase
Caffe::set_phase(Caffe::TEST);
// Load net
Net<float> net("deploy.prototxt");
// Load pre-trained net (binary proto)
net.CopyTrainedLayersFrom("trained_model.caffemodel");
Thank you very much for this! I was able to build and run caffe with VS2013 following your detailed instructions! Everything else that I tried previously has failed.
have you managed to build the python and Matlab wrappers?
I want to use them too. If you build them, please tell me how. Thank you very much.
@Sepehr, for the Matlab wrapper do the following steps:
1) Drag matlab/caffe/matcaffe.cpp into Visual Studio's Source Files.
2) Add Matlab's include path to Additional Include Directories (C/C++), in my case "C:\Program Files\MATLAB\R2011b\extern\include".
3) Add the Matlab lib path to Additional Library Directories in Configuration Properties -> Linker -> General, in my case "C:\Program Files\MATLAB\R2011b\extern\lib\win64\microsoft".
4) In Configuration Properties -> General, set Target Extension to ".mexw64" and Configuration Type to "Dynamic Library (.dll)".
5) Under "Additional Dependencies" add libmx.lib; libmex.lib; libmat.lib;
Hi Neil,
I have almost successfully installed caffe following your instructions, only failing at step 14. When building caffe, I got "fatal error LNK1104: cannot open file 'libboost_date_time-vc120-mt-gd-1_56.lib'". But I used boost_1_57_0 throughout my project and everything else worked well; it's strange that the project asks for a library from boost_1_56_0. Could you help me?
It's not weird, because the requests for specific boost libs come from boost's own header files. To use boost 1.57, you have to make sure the boost headers used in your project really are the 1.57 ones.
I have been using the header files of boost 1.57 in my project, which means I am using the path of boost 1.57 in Additional Include Directories.
Could you explain this in more detail? It is the header files of boost 1.57 that I have been using in my project.
“boost/version.hpp” specifies the version of boost. e.g. #define BOOST_LIB_VERSION “1_56”
I had the same problem and solved by replacing boost 1.57 with 1.56 in my project.
I think the pre-built libraries use the boost 1.56.
You may install boost 1.56 to solve it.
Thanks for your reply, I have solved the problem via the same way.
I had the same problem, but I used boost_1_58_0 in my project. This problem is caused by leveldb.lib: that lib file needs to be prebuilt with boost_1_57_0 if you want to use boost_1_57_0. I guess that Neil Shao's leveldb.lib was probably prebuilt with boost_1_56_0.
I have built and tested it on Windows 8, Core-i7 x64, without CUDA.
Thanks for your help.
I have a new problem.
I tested the net in debug and release mode.
In debug mode, I got valid Test net output in the first iteration.
But in release mode, I got invalid test net loss output.
I0302 17:09:29.914144 9944 solver.cpp:320] Test net output #0: accuracy = 0.0128
I0302 17:09:29.926195 9944 solver.cpp:320] Test net output #1: loss = -1.#QNAN (* 1 = -1.#QNAN loss)
What's different between debug and release mode?
I built lmdb as you said,
but there is an error:
>mdb.c(8767): error C2440: "function": cannot convert from "DWORD (__cdecl *)(void *)" to "LPTHREAD_START_ROUTINE"
I used your 3rdparty.
I tested a simple leveldb C++ program:
#include "leveldb/db.h"
#include "leveldb/write_batch.h"
#include
int main()
{
leveldb::DB* db;
leveldb::Options options;
return 0;
}
but I get an error like this:
error LNK2019: unresolved external symbol "public: __thiscall leveldb::Options::Options(void)" (??0Options@leveldb@@QAE@XZ) referenced in function _main
What's wrong?
i am using windows8.1 + vs2013 +64bit
Thank you very much!
A linking error is usually caused by a missing implementation of declared functions. Did you add leveldb's lib to the linker input?
Yes, I have added leveldb's lib to the linker input.
I find the same problem in stackoverflow
http://stackoverflow.com/questions/9244670/leveldb-example-not-working-on-windows-error-lnk2029
but the link in the answer is broken.
I have no idea what to do…
I'm not sure, but it looks like the missing part is "Options". You may try adding your cpp file into the leveldb project and compiling them together.
I’m getting the following, when running the batch file:
C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: The command ““../scripts/GeneratePB.bat”
C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: :VCEnd” exited with code 9009.
It seems it can’t find this:
../src/caffe/proto/caffe_pretty_print.proto: No such file or directory
You can modify the path yourself of course.
What I mean is that in /src/caffe/proto/ there is only a caffe.proto file,
and the caffe_pretty_print.proto is missing.
Is the latter necessary?
You may find it here https://github.com/initialneil/caffe/tree/windows/src/caffe/proto.
The latest version of Caffe has got rid of caffe_pretty_print.proto, so just delete the line compiling caffe_pretty_print.proto in GeneratePB.bat.
Is there any way to solve this?
Do you know where we can find the logs that are produced during training?
Found it in
C:\Users\\AppData\Local\Temp
in Windows 7
That is
C:\Users\userName\AppData\Local\Temp
Have any of you encountered memory-hog problems? As the iterations increase, RAM usage goes up as if there were a memory leak.
I am currently at iteration 6320, training bvlc's googlenet on a smaller dataset in leveldb format, and I am using 15 of my 16 GB of RAM. However, caffe.exe in Task Manager is shown using ~170 MB.
I am using batch sizes of 8 and 5 for train and test respectively, running in GPU mode on a GPU with 1 GB of memory.
I found the same problem.
There is a similar problem for the LMDB dataset (helped by Kazukuni Hosoi's comment). I solved it by adding the following lines at the top of the LMDBCursor::Seek method. It releases memory-mapped pages which are no longer used after the current seek.
if (op != MDB_FIRST)
VirtualUnlock(mdb_value_.mv_data, mdb_value_.mv_size);
I think the LevelDB case could be solved similarly.
hi,
I have completed all the steps successfully, except that in the final step, running "train_lenet.bat", I got the error "The program can't start because opencv_core249.dll is missing from your computer.", although opencv_core249.dll is included and the build succeeded.
Only .h and .lib files are needed for compilation; the .dll files are needed at run time. You can either add the folder of OpenCV's dlls to your system path and reboot, or just copy the needed dll files next to the compiled exe.
Hi, Neil. I got strange problems. I followed your steps exactly when building the OpenCV gpu module with VS2010 and CUDA 6.5. I successfully compiled 2.4.9 in Debug mode and 2.4.10 in Release mode, but failed with 2.4.9 Release / 2.4.10 Debug. The errors look like:
CMake Error at modules/highgui/cmake_install.cmake:42 (file):
48> file INSTALL cannot find
48> “D:/MyProgram/opencv-2.4.9/gpu_build/bin/Release/opencv_highgui249.dll”.
48> Call Stack (most recent call first):
48> modules/cmake_install.cmake:60 (include)
48> cmake_install.cmake:105 (include)
48>
48>
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: The command “setlocal
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: “C:\Program Files (x86)\CMake\bin\cmake.exe” -DBUILD_TYPE=Release -P cmake_install.cmake
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: if %errorlevel% neq 0 goto :cmEnd
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: :cmEnd
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: endlocal & call :cmErrorLevel %errorlevel% & goto :cmDone
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: :cmErrorLevel
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: exit /b %1
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: :cmDone
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: if %errorlevel% neq 0 goto :VCEnd
48>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppCommon.targets(113,5): error MSB3073: :VCEnd” exited with code 1.
48>
48>Build FAILED.
Very strange..
Thanks.
Could you help with the compilation of the python and matlab wrappers of caffe? I don't know what to do with them on the Windows platform. Expecting your reply.
Firstly, thanks for this very detailed and useful tutorial. I managed (or I think I did) to get through all the steps of compilation, but I finally got an error when running create_mnist.bat (modified for the leveldb format). I got only the following message: "The application was unable to start correctly (0xc000007b)". I downloaded the leveldb-formatted mnist dataset from niuzhiheng's GitHub to try launching the training, but the same error occurs when running train_lenet.bat. I have no idea where it can come from.
Hoping someone can help. Thanks in advance!
Regards,
Stephane
I think good practice is to drag the caffe.cpp file from the 'tools' folder into your Visual Studio project, so that you can run caffe inside VS. Debugging will be easier compared to running an exe.
Thanks for your answer, but it still does not work. Sorry, I don't know how to run caffe with VS debugging; it is always the exe that runs, even though the solution is in debug mode. When the exe crashes, it does not enter the code, it just stops. Sorry for the inconvenience, it seems to be a basic Visual Studio problem and not a problem coming from the tutorial. So frustrating when so close to the end of the tutorial ;-).
To run caffe with VS debugging, just see here http://stackoverflow.com/questions/3697299/passing-command-line-arguments-in-visual-studio-2010 and here https://msdn.microsoft.com/en-us/library/cs8hbt1w(v=vs.90).aspx
But when I begin debugging in VS, the error 'The application was unable to start correctly (0xc000007b)' still comes up. How can I fix this? Thank you very much.
I have the same error. Did you find out how to fix this? Thank you very much.
At least I am not the only one with this error ;-). Sorry, but unfortunately I didn't fix it. I am still working on it but have no idea at all for the moment. I don't understand what the entry point of the application is, because when I put breakpoints (in Debug mode) in the main() function of caffe.cpp or convert_mnist_data.cpp, they are never hit.
If you fix it, can you tell me please? Thank you.
Hello, I finally got it working!
My problem was very specific to my configuration, I think: I had conflicts with dlls in 32-bit format, whereas 64-bit is required. The dll in my case seemed to be libgfortran-3.dll. I reinstalled the 64-bit version of the application that uses this dll, and that fixed the problem.
I used Dependency Walker to find this; maybe you can also use it to check for conflicts between the 32-bit and 64-bit dlls used by caffe.exe.
Hope it helps.
Regards,
Stephane
Hello Stephane,
I have same problem with libgfortran-3.dll.
Did you reinstall openblas to resolve the problem?
Hello,
I used the openblas version provided in the tutorial and just copied the dlls into the bin directory as indicated. I think it worked correctly, as I did not have to reinstall it. libgfortran-3.dll is also present in MinGW, so be sure to have the correct (64-bit) version.
For information: in my case, the problem came from the Anaconda distribution that I use for Python development. It was the 32-bit version, and its libgfortran-3.dll was the one picked up by caffe.exe.
Hope it helps.
Regards,
Stephane
I guess you are a Chinese student; so am I, and I have just solved this problem. Download these three dlls, "libquadmath-0.dll", "libgcc_s_sjlj-1.dll" and "libgfortran-3.dll", and put them in the directory of the generated executable. For details, see this blog post: http://blog.csdn.net/lien0906/article/details/44221355
Best wishes!
Dear Stephane Zieba
Thank you very, very much. You are my hero. I have fixed it! I put the three dlls "libquadmath-0.dll", "libgcc_s_sjlj-1.dll" and "libgfortran-3.dll" in caffe/bin and it worked.
I have the same problem, missing "libquadmath-0.dll", "libgcc_s_sjlj-1.dll" and "libgfortran-3.dll".
Can you send them to me? isr.nadav@gmail.com
>> You have to have a GPU on your PC lol
I do not have a GPU on my system. CUDA 6.5 installed fine and Visual Studio recognizes it. And, at least at the start, I am happy to deal with 100x running times without GPU acceleration. Do you think I stand a chance without a GPU? 🙂 Or is upgrading my desktop with a suitable GPU card the only option?
Shouldn't CUDA be needed only if you have an NVIDIA GPU in the first place? What if you have an integrated graphics card (such as on a laptop)? Will caffe not run on such a system? I have a laptop with Intel HD Graphics 3000. Is this machine not capable of running caffe?
I got this message while trying to install CUDA – The graphics driver could not find compatible graphics hardware. You may continue installation, but you will not be able to run CUDA applications.
The GPU version of Caffe requires an NVIDIA graphics card and CUDA. For your computer, you can use the CPU version. If you don't have an NVIDIA graphics card on your PC, you of course don't need to install CUDA, which would not work anyway.
hi
Thanks for your help. I am stuck at training. I have generated mnist_train_leveldb and mnist_test_leveldb, and modified caffe/examples/mnist/lenet_train_test.prototxt, substituting "lmdb" with "leveldb". When I train, I get this error:
snapshot_prefix: “examples/mnist/lenet”
solver_mode: CPU
net: “examples/mnist/lenet_train_test.prototxt”
I0322 17:47:03.442919 4880 solver.cpp:70] Creating training net from net file:
examples/mnist/lenet_train_test.prototxt
[libprotobuf ERROR ..\src\google\protobuf\text_format.cc:274] Error parsing text-format caffe.NetParameter: 17:3: Unknown enumeration value of "leveldb" for field "backend".
F0322 17:47:03.445919 4880 upgrade_proto.cpp:928] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: examples/mnist/lenet_train_test.prototxt
*** Check failure stack trace: ***
I cannot test it right now; I'm just guessing that maybe you should write LEVELDB in upper case. You can find this in caffe.proto:
message DataParameter {
  enum DB {
    LEVELDB = 0;
    LMDB = 1;
  }
  ...
}
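If so, the data layer in lenet_train_test.prototxt would read something like the fragment below (a guess assembled from the enum above and the leveldb directory generated earlier; the exact field values may differ in your file):

```
data_param {
  source: "examples/mnist/mnist_train_leveldb"
  backend: LEVELDB
  batch_size: 64
}
```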
Thanks for all of this. But could you please help with the compilation of the python and matlab wrappers? I don't know what to do with them. Expecting your detailed help. Thanks.
Sorry, but I don't have Matlab at hand and I don't use Python on Windows. I think if you want to use Python and Matlab instead of Visual Studio, you should do that in Ubuntu.
Because the workstations in our lab don't connect to the internet, it's very inconvenient to use Linux. So do you have any further development to replace the visualization in the python wrapper?
Sure. I'll try to figure it out and come up with some tutorials. But please keep in mind that Caffe's source code is actually not difficult to understand if you debug it step by step in Visual Studio.
Thanks @Neil Shao. Training done! Accuracy 0.9908 after 10000 iterations.
Hi,
I have trained LeNet MNIST & CIFAR-10, but I am unable to train "VOC2012" with caffe.
Hoping someone can help. Thanks in advance!
In io.cpp, I added
#ifdef _MSC_VER
#define open _open
#define close _close
#endif
at the beginning of the file,
but I still get an error in the ReadFileToDatum function at "file.close();".
How can I fix this? Thank you.
It's because "file.close()" is changed to "file._close()" by the macro. The easy solution is to delete the macro for "close" and modify "close()" to "_close()" manually, except for "file.close()".
How do I fix the "file.close()" error? Comment it out? Thank you very much!
"file.close()" itself is not an error; it is "file._close()" that causes the error. So if you delete "#define close _close", "file.close()" will be just fine.
Hi, nice work!
But why don't you use NuGet in VS?
It makes it quite easy to install OpenCV and Boost.
Hi,
I followed your great tutorial and successfully created the caffe.exe file in the /caffe/bin folder, but when I run "C:\caffe\bin>caffe.exe" in the Windows cmd, the program crashes: 'The application was unable to start correctly (0xc000007b)'.
How can I fix this? Thank you very much
Hello,
I have this exact same problem and have been struggling with it for days. Have you been able to get around this error? I also get the 0xc000007b error.
Thanks,
Bonzi
According to some comments below, maybe you are using the 32-bit dll files of libgcc_s_seh-1.dll, libgcc_s_sjlj-1.dll, libgfortran-3.dll, libquadmath-0.dll. Please try replacing them with the 64-bit ones.
hi,
Thanks for this tutorial, it helped me a lot. I have run some examples and they were successful. Now I am training ImageNet, but I am getting an error when running "create_imagenet.sh":
I0327 22:49:42.529912 5664 convert_imageset.cpp:86] A total of 0 images.
F0327 22:49:42.529912 5664 db.cpp:30] Check failed: _mkdir(source.c_str()) == 0 (-1 vs. 0) mkdir data/ilsvrc12/val.txtfailed
How can I fix this? Thanks in advance!
I had that error and figured that something was wrong with the paths I specified.
Here's a working (at least for me) example of a .bat file:
set ROOTFOLDER=E:\Data\MNIST\
set FILES=E:\Data\MNIST\test_dr.txt
set DATA=E:\Data\MNIST\MNIST_TEST_
set BINARIES=G:\Ilya\Projects\caffe-master\bin\
cd %ROOTFOLDER%
set BACKEND=leveldb
rd /s /q "%DATA%%BACKEND%"
echo "Creating %BACKEND%..."
"%BINARIES%convert_images.exe" %ROOTFOLDER% %FILES% %DATA%%BACKEND% --backend=%BACKEND%
echo "Done."
pause
Hi,
I solved my problem, Problem was my output folder name contain space i used tab sequence to avoid but some how unable to create dir so i renamed folder and removed space, it worked.
Hi, I followed your detailed steps and finally made it work. But the accuracy on MNIST I got was only 0.13. Do you know why? Thanks a lot.
Hello! Thank you for the excellent guide. I was able to build ‘convert_images’ and ‘caffe’. But I didn’t find any documentation about using a trained net, so I decided to build the Python module. Working through it, after days I still wasn’t able to solve this error:
caffe.obj : error LNK2001: unresolved external symbol “__declspec(dllimport) struct _object * __cdecl boost::python::detail::init_module(struct PyModuleDef &,void (__cdecl*)(void))” (__imp_?init_module@detail@python@boost@@YAPEAU_object@@AEAUPyModuleDef@@P6AXXZ@Z)
1>G:\Ilya\Projects\caffe-master\bin\caffe.exe : fatal error LNK1120: 1 unresolved externals
I think I added everything that I could about boost in Additional Include Directories and Additional Library Directories…
Could you advise something?
Hello. I have not used a trained network for the moment, but did you try running caffe.exe with “test” instead of “train” as the command-line argument? According to line 142 of caffe.cpp, two other files about the network must be provided: model and weights. Maybe they are the files generated during training (with extensions .caffemodel and .solverstate). If someone can confirm this, that would be great.
Thanks for the hint, I tried it today.
Yes, it works.
For running it you need two files:
caffe.exe test -model E:\Data\MNIST\lenet_train_test.prototxt -weights E:\Data\MNIST\lenet_iter_10000.caffemodel -iterations 1
*.prototxt – config of your trained net, *.caffemodel – weights (this file is obtained during training)
I can’t understand, though, the connection between the number of specified iterations and the batch size in the net config.
Managed to build _caffe for Python. But after copying the .dll/.lib I’m still getting an error in __init__.py / pycaffe.py:
from ._caffe import Net, SGDSolver
ImportError: No module named ‘caffe._caffe’
Trying to solve it..
Hello,
I’m trying to build the Python wrapper to test a network on my custom dataset following this method: “http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/classification.ipynb”. How did you build _caffe for Python? I don’t know how to use “make pycaffe” under Windows.
Thanks in advance for your help.
Stephane
I created a new project in Visual Studio (dll type) and added everything as mentioned above, just copied from the previous project where we built caffe.cpp and the other tools. The .cpp and .h files should already be changed according to this blog, so you just need to add them to the project (for the layers folder and others, do not forget to change “Build Customization”).
I have Anaconda with Python 3.4, so I added the essential dependencies for Python and numpy (I can write out explicitly which dependencies when I get to my work PC, but you can figure it out by looking at the errors when building the project). You also need to build Boost 1.56 (the prebuilt one didn’t work in this case) and separately Boost.Python.
I ran into this same problem, and it turns out that this error occurs when Python can’t find the _caffe extension itself (which is the obvious interpretation), but it can also occur when Windows can’t find some of the dependencies of the _caffe extension. The first thing to try is to rename _caffe.dll to _caffe.pyd (since Python only imports extensions with the pyd extension) and then to make sure that _caffe.pyd is in your PYTHONPATH (for example, add it to your site_packages directory). You can find the directories in your PYTHONPATH from within Python with:
import sys
print sys.path
If that doesn’t work, then use Dependency Walker to find out if the module is having trouble loading any of its dependencies (http://www.dependencywalker.com/). Just use Dependency Walker to open the _caffe.pyd file that you just moved. In my case, I had forgotten to add libglog.dll and libopenblas.dll to my Windows PATH.
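That PATH check can also be scripted before firing up Dependency Walker. A small sketch using only the standard library (the library names here are examples, not an exhaustive list of _caffe’s dependencies):

```python
from ctypes.util import find_library

def missing_libraries(names):
    """Return the names the system loader cannot resolve from its search path
    (on Windows, find_library walks PATH looking for name + '.dll')."""
    return [n for n in names if find_library(n) is None]

# Hypothetical pre-flight check before importing _caffe
print(missing_libraries(["libglog", "libopenblas"]))
```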
After that, you will probably also run into a problem with protobuf. Namely, you won’t have the caffe.proto submodule. In order to make this, you have to put protoc.exe in your path and then cd to the caffe directory and use these commands:
mkdir -p python/caffe/proto && \
touch python/caffe/proto/__init__.py && \
protoc --proto_path=src/caffe/proto --python_out=python/caffe/proto src/caffe/proto/caffe.proto && \
protoc --proto_path=src/caffe/proto --python_out=python/caffe/proto src/caffe/proto/caffe_pretty_print.proto
This will give you caffe.proto, which you won’t get when building _caffe.dll in VS. Finally, if you get an “unexpected keyword argument” error when loading protobuf, this is likely from using protobuf 3.0. Use protobuf 2.6 instead (recompile _caffe.dll with the protobuf 2.6 libs as well). Hope that helps!
Dave
WOW, how did u manage to build _caffe for python? By putting _caffe.cpp into the project? I have no idea what i should do using windows…
Hi Ilya, do you remember, how did you solve the problem with the unresolved external symbol “__declspec(dllimport) struct _object * __cdecl boost::python::detail::init_module
…”
?
Everything works fine except the following error. Please help:
error MSB3191: Unable to create directory “C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\lib\x64 $C:\Users\dan\Documents\Utilities\opencv\build\x64\vc12\lib $C:\local\boost_1_56_0”. The given path’s format is not supported.
How did this happen? Maybe the $ is not recognized, and you need a “;” between the different paths.
Thanks for your reply, Neil. I deleted the “$”, added “;” in the linker, and found the following error:
Error 109 error LNK1104: cannot open file ‘(CUDA_PATH_V6_5)\lib\,x64 ;C:\Users\dan\Documents\Utilities\opencv\build\x64\vc12\lib ;C:\local\boost_1_56_0\lib64-msvc-12.0$’ C:\Users\dan\Documents\caffe-master\caffe\caffe\LINK caffe
$(CUDA_PATH_V6_5) is the path of CUDA; don’t delete the $. For the rest, you just have to make the paths reasonable.
Yes
I am doing the one mentioned in step 6.
I am configuring the linker in following way: $(CUDA_PATH_V6_5)\lib\$(PlatformName) $(OPENCV_X64_VS2013_2_4_9)\lib $(BOOST_1_56_0)\lib64-msvc-12.0
But still the error is there
error MSB3191: Unable to create directory “C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\lib\x64 C:\Users\dan\Documents\Utilities\opencv\build\lib C:\local\boost_1_56_0”. The given path’s format is not supported.
The ; is needed.
$(CUDA_PATH_V6_5)\lib\$(PlatformName);$(OPENCV_X64_VS2013_2_4_9)\lib; $(BOOST_1_56_0)\lib64-msvc-12.0;
After fixing that I have another error
error MSB4018: The “VCMessage” task failed unexpectedly.
System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list.
at System.Text.StringBuilder.AppendFormat(IFormatProvider provider, String format, Object[] args)
at System.String.Format(IFormatProvider provider, String format, Object[] args)
at Microsoft.Build.Shared.ResourceUtilities.FormatString(String unformatted, Object[] args)
at Microsoft.Build.Utilities.TaskLoggingHelper.FormatString(String unformatted, Object[] args)
at Microsoft.Build.Utilities.TaskLoggingHelper.FormatResourceString(String resourceName, Object[] args)
at Microsoft.Build.Utilities.TaskLoggingHelper.LogWarningWithCodeFromResources(String messageResourceName, Object[] messageArgs)
at Microsoft.Build.CPPTasks.VCMessage.Execute()
at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
at Microsoft.Build.BackEnd.TaskBuilder.d__20.MoveNext()
Hi, Neil! I succeeded in making LMDB work in your project.
In case you are still interested in it, here is what I did.
1. Get rid of lmdb.lib from the project.
2. Change the character set from “Unicode” to “Multi-byte” in “Property” of the project.
It was necessary in my environment of Windows-7(Language is Japanese) to make the file path appropriate.
3. Add mdb.c and midl.c into your project.
4. Change the following lines in “mdb.c”.
(1) Add the two lines at the top of the file.
#pragma warning( disable : 4996 )
#pragma warning( disable : 4146 )
(2) Cast is necessary for the 2nd argument in THREAD_CREATE().
#if defined(_MSC_VER)
THREAD_CREATE(thr, (LPTHREAD_START_ROUTINE)mdb_env_copythr, &my);
#else
THREAD_CREATE(thr, mdb_env_copythr, &my);
#endif
(3) In order to make the file path appropriate on Windows, you should use double backslashes instead of a slash.
#if defined(_MSC_VER)
/** The name of the lock file in the DB environment */
#define LOCKNAME "\\lock.mdb"
/** The name of the data file in the DB environment */
#define DATANAME "\\data.mdb"
/** The suffix of the lock file when no subdir is used */
#else
/** The name of the lock file in the DB environment */
#define LOCKNAME "/lock.mdb"
/** The name of the data file in the DB environment */
#define DATANAME "/data.mdb"
/** The suffix of the lock file when no subdir is used */
#endif
(4) In order to truncate the file size at the end of the process, you should add a line in “mdb_env_close0(MDB_env *env, int excl)”.
if (env->me_fd != INVALID_HANDLE_VALUE)
{
#if defined(_MSC_VER)
// Truncate the file size.
SetEndOfFile(env->me_fd);
#endif
(void)close(env->me_fd);
}
5. Change the following lines in “convert_mnist_data.cpp”.
(1) This is only for convenience, to force overwriting the directory if it already exists.
#if defined(_MSC_VER)
if (0 != _mkdir(db_path))
{
std::string command("rd /s /q "); // Enforce to remove the directory.
command.append(db_path);
system(command.c_str());
CHECK_EQ(_mkdir(db_path), 0) << "mkdir " << db_path << "failed";
}
#else
CHECK_EQ(mkdir(db_path), 0) << "mkdir " << db_path << "failed";
#endif
(2) Since LMDB temporarily makes a file whose size equals “mapSize”, 1TB was too large for my PC.
If I made it 1GB or 40GB, mdb_env_open(mdb_env, db_path, 0, 0664) was successful.
In “lmdb.h”, there is a comment which says “The size should be a multiple of the OS page size”.
Therefore, I made the change as follows:
#if defined(_MSC_VER)
SYSTEM_INFO si;
GetSystemInfo(&si);
const size_t mapSize = static_cast<size_t>(si.dwPageSize) * 300000; // 1GB : OK
//const size_t mapSize = static_cast<size_t>(si.dwPageSize) * 10000000; // 40GB : OK
CHECK_EQ(mdb_env_set_mapsize(mdb_env, mapSize), MDB_SUCCESS) // 1GB
<< "mdb_env_set_mapsize failed";
#else
CHECK_EQ(mdb_env_set_mapsize(mdb_env, 1099511627776), MDB_SUCCESS) // 1TB
<< "mdb_env_set_mapsize failed";
#endif
(3) Add the following lines prior to “delete pixels”. This keeps the file size as small as possible.
Otherwise, the file size might be “mapSize”.
#if defined(_MSC_VER)
else
{
if (db_backend == "lmdb")
{ // Truncate and close the file.
mdb_close(mdb_env, mdb_dbi);
mdb_env_close(mdb_env);
}
}
#endif
delete pixels;
That is all I remember now, although I could be missing something.
If you still have any problems, do not hesitate to ask me.
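The page-size remark in step 5(2) (“The size should be a multiple of the OS page size”) amounts to rounding the desired map size up to a page boundary. A minimal, language-agnostic illustration (the 1 GB target is just an example):

```python
import mmap

def aligned_map_size(target_bytes: int, page_size: int = mmap.PAGESIZE) -> int:
    """Round target_bytes up to the next multiple of the OS page size,
    as lmdb.h asks for mdb_env_set_mapsize."""
    pages = -(-target_bytes // page_size)  # ceiling division
    return pages * page_size

one_gb = aligned_map_size(1 << 30)
assert one_gb % mmap.PAGESIZE == 0 and one_gb >= (1 << 30)
```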
Thank you Kazukuni! It’s very helpful!
Any suggestions?
error LNK1104: cannot open file ‘libopenblas.dll.a’
Brothers, I have some trouble. After compiling Caffe, I have 631 link errors like this:
1>solver.obj : error LNK2019: unresolved external symbol “public: virtual __cdecl caffe::SolverParameter::~SolverParameter(void)” (??1SolverParameter@caffe@@UEAA@XZ), referenced in function “public: __cdecl caffe::Solver::Solver(class std::basic_string<char,struct std::char_traits,class std::allocator > const &)” (??0?$Solver@M@caffe@@QEAA@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z)
How can I solve them?
Best Wishes
Thank you very much~
Hi, Neil Shao,
now I have 9 link errors like this: Error 15 error LNK2019: unresolved external symbol “void __cdecl caffe::WriteProtoToBinaryFile(class google::protobuf::Message const &,char const *)” (?WriteProtoToBinaryFile@caffe@@YAXAEBVMessage@protobuf@google@@PEBD@Z), referenced in function “protected: void __cdecl caffe::Solver::Snapshot(void)” (?Snapshot@?$Solver@M@caffe@@IEAAXXZ) H:\CAFFEnew\caffe\caffe\solver.obj. I have added all the include directories and dependencies.
I have solved it by myself, thanks.
Best wishes for you.
Many thanks~
By how?
Hello Neil, Thank you very much for your useful guide.
Two errors occurred when I tried to compile the code in CPU_ONLY mode. I am using Visual Studio 12 and I entered CPU_ONLY in the preprocessor definitions to run in CPU_ONLY mode. I didn’t add the .cu files for this mode.
\caffe\src\caffe\syncedmem.cpp(93): error C4716: ‘caffe::SyncedMemory::gpu_data’ : must return a value
\caffe\src\caffe\syncedmem.cpp(109): error C4716: ‘caffe::SyncedMemory::mutable_gpu_data’ : must return a value
I could compile the codes for GPU but as soon as I go to the CPU_ONLY mode I encounter these errors. I would appreciate your help.
Also would you please elaborate on your following reply? where do you enter these commands?
An example is roughly like this:
// Set CPU
Caffe::set_mode(Caffe::CPU);
// Set to TEST Phase
Caffe::set_phase(Caffe::TEST);
// Load net
Net<float> net("deploy.prototxt");
// Load pre-trained net (binary proto)
net.CopyTrainedLayersFrom("trained_model.caffemodel");
It seems you need to look into lines 93 and 109 of `syncedmem.cpp` and make small changes to hack it.
I put a “return 0;” at the end of the #else sections of the two offending methods and the code compiled.
Neil: Would you please tell me where I put these:
// Set CPU
Caffe::set_mode(Caffe::CPU);
// Set to TEST Phase
Caffe::set_phase(Caffe::TEST);
// Load net
Net<float> net("deploy.prototxt");
// Load pre-trained net (binary proto)
net.CopyTrainedLayersFrom("trained_model.caffemodel");
Sorry for the possibly obvious question.
Thanks for sharing! There are some additional cpp files in the tools folder to build, such as extract_features.cpp, finetune_net.cpp, and so on. If you can share how to modify and build these cpp files on Windows, that would be great!
Thanks for your contribution. I found an error when I ran your shared code. When I changed the pooling size of the second pooling layer from 3 to 1 with the cifar10_quick net, an error like “loss = -1.#QNAN” happened, and when I change the square convolution map to 1 by n, similar errors happen too. Can you help to check the problem?
The “loss = -1” problem happened to me before. I fixed it by making the learning rate smaller.
Yes. I tried setting the momentum smaller; that also works. However, when the net is deep enough, changing only these two parameters cannot fix this problem. I find that the values of some neurons are Inf during iteration, but I have not found out why. The old version released by niuzhiheng doesn’t have this problem. So, if you have time, you could have a look at it. Thanks.
Hi, great work getting caffe to run on windows.
Is it possible to distribute the pre-compiled caffe binary built from your instructions? It’d definitely save me the trouble of building it from scratch. If not, then I’d have to give this a go and try my luck.
Thanks.
Thanks for this, you saved us a bunch of time.
Hi! To compile the current db.cpp you also need to add the workaround:
#include <direct.h>
#ifndef _MSC_VER
CHECK_EQ(mkdir(source.c_str(), 0744), 0) << "mkdir " << source << "failed";
#else
CHECK_EQ(_mkdir(source.c_str()), 0) << "mkdir " << source << "failed";
#endif
I meant:
#include “direct.h”
I have two GPUs; how can I choose which GPU to use?
cd ../../
"bin/caffe.exe" train --solver=examples/mnist/lenet_solver.prototxt -gpu 1
pause
Can it work?
Could you please look into the following issue? I compiled Caffe on Win7, VS 2013 successfully with your guide, and I did train LeNet and my own model with it as well. But when it comes to using trained models for prediction, I’m in trouble:
Training goes as expected with such prototxt:
name: "FACES"
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
data_param {
source: "examples/_faces/trainldb"
batch_size: 155
backend: LEVELDB
}
}
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
data_param {
source: "examples/_faces/testldb"
batch_size: 45
backend: LEVELDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 12
kernel_size: 13
stride: 2
weight_filler {
type: "gaussian" # initialize the filters from a Gaussian
std: 0.01 # distribution with stdev 0.01 (default mean: 0)
}
bias_filler {
type: "constant"
value: 0
}
}
}
….
layer {
name: "accuracy"
type: "Accuracy"
bottom: "ip2"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
}
but when I’m trying to load the model for making predictions with this prototxt:
name: "FACES"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 150
input_dim: 150
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 12
kernel_size: 13
stride: 2
}
}
….
layer {
name: "prob"
type: "Softmax"
bottom: "ip2"
top: "prob"
}
It breaks with “target_blobs.size() == source_layer_blobs.size() (2 vs. 0) – Incompatible number of blobs for layer 1”,
but I can’t see a mistake on my side, and it also matches a similar setup for caffenet.
My initialization code is pretty simple:
#include
#include
#include
#include
#include
#include "boost/algorithm/string.hpp"
#include "caffe/caffe.hpp"
using caffe::Blob;
using caffe::Caffe;
using caffe::Net;
using caffe::Layer;
using caffe::shared_ptr;
using caffe::Timer;
using caffe::vector;
int main(int argc, char** argv) {
Caffe::set_mode(Caffe::CPU);
Net<float>* net;
net = new Net<float>("azoft_faces.prototxt", TEST);
net->CopyTrainedLayersFrom("azf3_iter_100.caffemodel");
return 1;
}
The same error also occurs with the mnist example. It trained successfully, but when I try to load the resulting model with lenet.prototxt for recognition, it throws an exception “target_blobs.size() == source_layer_blobs.size() (2 vs. 0) – Incompatible number of blobs for layer conv1”.
I think it’s because your second prototxt doesn’t have params for the data layer.
The source of the data layer can be a text file that contains the list of testing images (full paths), or an LMDB file.
The type of the data layer can also be memory_data.
Thank you for such quick reply, Neil. I did already try replacing
input: "data"
input_dim: 1
input_dim: 3
input_dim: 150
input_dim: 150
with
layer {
name: "data"
type: "MemoryData"
top: "data"
top: "label" // doesn't make sense here for me, but if I omit it, I get a CHECK_EQ(ExactNumTopBlobs(), top.size()) assert
memory_data_param {
batch_size: 1
channels: 1
height: 150
width: 150
}
}
but I get exactly the same error. I’m not sure it is correct, though; there isn’t much info on MemoryData layers.
Hi. You need to make sure data layer has data before forward propagation.
If the type is data, the net will read from file when created.
If the type is memory_data, you need to pass data in your program.
“Hi. You need to make sure data layer has data before forward propagation.
If the type is memory_data, you need to pass data in your program.”
I don’t do propagation yet. It gives me the error when I’m just trying to load the trained model.
layer {
name: "data"
type: "ImageData"
top: "data"
top: "label"
image_data_param {
batch_size: 1
source: "C:/caffe/examples/_faces/test_image.txt"
}
}
this also results in “target_blobs.size() == source_layer_blobs.size() (2 vs. 0) – Incompatible number of blobs for layer conv1”. My version of caffe (the latest one) did not compile it correctly. Do you use proto v1 (“layers”, “IMAGE_DATA”) or proto v2 (“layer”, “ImageData”)?
I had to correct it for it to be parsed correctly:
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
data_param {
source: "examples/_faces/testldb"
batch_size: 45
backend: LEVELDB
}
}
and… it still gives the same error. Furthermore, I have to use either basic input or memory data for my application. I think something went wrong in my project.
If you have some time, please take a look at the minimal example (solution files + source):
dropbox. com/s/0jyld3lpiaky2we/caffe_wtf.7z?dl=0
Thank you.
It was the O_RDONLY | O_BINARY flag in io.cpp… Now every combination I tried doesn’t crash anymore.
Hello,
thank you for the tutorial. The project can be built without error. But I have a problem:
each time a function from glog is involved, the program crashes.
For example, when I go into the device_query() function:
check failed: FLAGS_gpu > -1 (-1 vs. -1) Need a device ID to query.
*** Check failure stack trace: ***
it fails comparing -1 with -1 ??
I don’t understand what GFlags + GLog + ProtoBuf + LevelDB are. I think i did something wrong during this step of your tutorial.
Can you help me?
It seems to me that you need to either set the mode to GPU and set a device (e.g. id set to 0), or set the mode to CPU.
I don’t think that the value of FLAGS_gpu is the problem. I have a problem with every CHECK; FLAGS_gpu was just an example.
When i use your quick test on mnist, i have a similar problem
Flags_model.size() > 0 (0 vs. 0). Need a model definition to score.
I think that something is wrong with my installation.
OK, my question is not relevant. You are right, the problem comes from the values of the flags.
I don’t understand how it works, though. Do I have to set all the flags manually? Is there any initialisation?
I got caffe.exe
but
The DependencyWalker reports that the caffe.exe is missing six Windows 8.1 system DLLs:
API-MS-WIN-CORE-KERNEL32-PRIVATE-L1-1-1.DLL
API-MS-WIN-CORE-PRIVATEPROFILE-L1-1-1.DLL
API-MS-WIN-SERVICE-PRIVATE-L1-1-1.DLL
API-MS-WIN-CORE-SHUTDOWN-L1-1-1.DLL
EXT-MS-WIN-NTUSER-UICONTEXT-EXT-L1-1-0.DLL
IESHIMS.DLL
why?
i am using windows8.1+vs2013
Have you fixed this problem yet? I have the exact same problem as you, and the last 2 .dll files cannot even be found on Google…
Hi;
First, thank you for the tutorial here.
I face a problem when compiling Caffe inside Visual Studio 2013 on Windows 7. VS keeps telling me that I have a syntax error inside “hdf5_output_layer.cpp”. I didn’t change any code inside this one and didn’t find anyone else with this problem. This is the list of errors:
Error 18 error C2143: syntax error : missing ';' before '<' \src\caffe\layers\hdf5_output_layer.cpp 18 1 Caffe
Error 19 error C2988: unrecognizable template declaration/definition \src\caffe\layers\hdf5_output_layer.cpp 18 1 Caffe
Error 20 error C2059: syntax error : '<' \src\caffe\layers\hdf5_output_layer.cpp 18 1 Caffe
Error 21 error C2039: 'HDF5OutputLayer' : is not a member of '`global namespace'' \src\caffe\layers\hdf5_output_layer.cpp 18 1 Caffe
Error 22 error C2588: '::~HDF5OutputLayer' : illegal global destructor \src\caffe\layers\hdf5_output_layer.cpp 28 1 Caffe
Error 23 error C1903: unable to recover from previous error(s); stopping compilation \src\caffe\layers\hdf5_output_layer.cpp 28 1 Caffe
24 IntelliSense: HDF5OutputLayer is not a template \src\caffe\layers\hdf5_output_layer.cpp 82 1 Caffe
Do you know what causes this problem and how I can solve it?
Best
Mohammad
Hi Mohammad,
it seems you have the same source files as me. You need to remove the comment tags from the header declarations of the hdf5_load_nd_dataset_helper and hdf5_load_nd_dataset functions in the io.hpp file, as well as from the definition of the HDF5DataLayer template class in vision_layers.hpp.
For some reason these lines are commented out in this source version.
Best Regards,
Jan
Hi,
I got stuck in contrastive_loss_layer.cpp on this line:
Dtype dist = std::max(margin - sqrt(dist_sq_.cpu_data()[i]), 0.0);
Error 28 error C2780: '_Ty std::max(std::initializer_list<_Ty>)' : expects 1 arguments - 2 provided C:\work\Caffe\caffe\src\layers\contrastive_loss_layer.cpp 56 1 caffe
Error 27 error C2780: 'const _Ty &std::max(const _Ty &,const _Ty &,_Pr)' : expects 3 arguments - 2 provided C:\work\Caffe\caffe\src\layers\contrastive_loss_layer.cpp 56 1 caffe
Error 29 error C2782: 'const _Ty &std::max(const _Ty &,const _Ty &)' : template parameter '_Ty' is ambiguous C:\work\Caffe\caffe\src\layers\contrastive_loss_layer.cpp 56 1 caffe
Error 26 error C2784: '_Ty std::max(std::initializer_list<_Ty>,_Pr)' : could not deduce template argument for 'std::initializer_list<_Ty>' from 'float' C:\work\Caffe\caffe\src\layers\contrastive_loss_layer.cpp 56 1 caffe
Can someone help me out?
Thanks!
OK: 0.0 needs to be cast as Dtype(0.0)
Hi,
I did the mnist training and it ran through without a problem. Thank you Neil for this post!
I am now looking for C++ code that does the testing on mnist with one’s own data. It seems to be a common problem for beginners. I saw a few postings on similar topics using python. Can someone provide a pointer for C++?
Thanks!
Paul
Since the input of “__builtin_popcountl” is a 64-bit integer, I think it’s better to make the modification “#define __builtin_popcountl __popcnt64”.
As a matter of fact, some of the tests fail for Visual Studio 2013 / 64-bit if __builtin_popcountl is replaced by __popcnt. With __popcnt64, these tests succeed.
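The difference between the two intrinsics is just the operand width: __popcnt counts bits in a 32-bit value, so the high word of a 64-bit input is silently lost. A quick illustration of why the tests fail (Python stands in for the intrinsics here):

```python
def popcount64(x: int) -> int:
    """Count set bits in a 64-bit value, like __builtin_popcountl / __popcnt64."""
    return bin(x & 0xFFFFFFFFFFFFFFFF).count("1")

def popcount32(x: int) -> int:
    """A 32-bit popcount like __popcnt: the high 32 bits are dropped."""
    return bin(x & 0xFFFFFFFF).count("1")

x = 0xFFFFFFFF00000000  # only the 32 high bits set
print(popcount64(x))  # -> 32
print(popcount32(x))  # -> 0: substituting __popcnt loses these bits entirely
```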
Hi,
I am getting compile error on
this->forward_gpu_gemm(bottom_data + bottom[i]->offset(n), weight, top_data + top[i]->offset(n));
I cannot find the definition of forward_gpu_gemm.
forward_gpu_gemm needs a fourth parameter, bool skip_im2col. I am not sure whether it should be true or false. By adding true/false, the compile error goes away.
The fourth parameter for forward_gpu_gemm should be “false”.
this->forward_gpu_gemm(bottom_data + bottom[i]->offset(n), weight,
top_data + top[i]->offset(n), false);
When using false, I get
Test net output #0: accuracy = 0.9907
which is expected. I want to thank Neil for his excellent step by step instructions. This saved me a lot of time.
Hi,
thanks for this tutorial, but I have met some problems.
1. Error 18 error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\bin\nvcc.exe" --use-local-env --cl-version 2013 -ccbin "X:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_amd64" -IC:\Users\blackpepperX\Desktop\CAFFE_ROOT\include -IC:\Users\blackpepperX\Desktop\CAFFE_ROOT\3rdparty\include -IC:\Users\blackpepperX\Desktop\CAFFE_ROOT\3rdparty\include\openblas -IC:\Users\blackpepperX\Desktop\CAFFE_ROOT\3rdparty\include\hdf5 -IC:\Users\blackpepperX\Desktop\CAFFE_ROOT\3rdparty\include\lmdb -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -IX:\opencv\build\include -IC:\local\boost_1_56_0 -IC:\Users\blackpepperX\Desktop\CAFFE_ROOT\src\caffe -IC:\Users\blackpepperX\Desktop\CAFFE_ROOT\src -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -G --keep-dir x64\Debug -maxrregcount=0 --machine 64 --compile -cudart static -g -DWIN32 -D_DEBUG -D_CONSOLE -D_LIB -D_CRT_SECURE_NO_WARNINGS -D_SCL_SECURE_NO_WARNINGS -D_UNICODE -DUNICODE -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o x64\Debug\bnll_layer.cu.obj "C:\Users\blackpepperX\Desktop\CAFFE_ROOT\src\caffe\layers\bnll_layer.cu"" exited with code 2. C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\BuildCustomizations\CUDA 7.0.targets 593 9 caffe.
This happens when compiling bnll_layer.cu.
2. Error 22 error C2784: '_Ty std::max(std::initializer_list<_Ty>,_Pr)': could not deduce template argument for 'std::initializer_list<_Ty>' from 'float'
This happens when compiling contrastive_loss_layer.
3. Error 17 error C3861: '_mkdir': identifier not found C:\Users\blackpepperX\Desktop\CAFFE_ROOT\src\caffe\util\db.cpp 32 1 caffe
This happens when compiling db.cpp.
Could u help out? thx!
Please do not forget to include direct.h…
Thanks for the useful tutorial. I haven finished till the end. I just want to point out that the part starting from GFlags + GLog + ProtoBuf + LevelDB is very confusing. It would be useful if you add a few lines just before it to explain what is going on. Thank you
Hi.
I have a problem, so I need your help.
I would like a more detailed explanation of the settings.
Thank you.
Thanks for the great tutorial!
I have been using Caffe on linux for a while now, but since I’m new to linux I was always struggling to get things working.
This makes life a lot easier!
I compiled it on windows 7, VS2013, CUDA7.0
Everything works, including my own previous “linux caffe” experiments.
Only problem: it’s quite a lot slower, in the order of 3 times slower.
This is probably due to CUDNN, which I couldn’t get to work.
I have used the latest master branch by BVLC (08 July 2015) and tried the following things to get CUDNN working:
first attempt with latest CUDNN (cudnn-6.5-win-v2-rc3)
– Add path to CUDNN folder to “additional include dirs”
– Add path to CUDNN folder to “additional library dirs”
– Add cudnn.lib, cudnn64_65.lib to “additional dependencies”
– add “USE_CUDNN” to the preprocessor definitions
– set CUDA C/C++ -> common-> target machine type” to “64 bit”
If I now try to compile any of the cudnn layers, for instance cudnn_conv_layer.cu, I get the following errors:
IntelliSense: declaration is incompatible with “const char *__stdcall cudnnGetErrorString(cudnnStatus_t status)” (declared at line 98 of “D:\toolkits\cudnn_v2\cudnn.h”) d:\caffe\caffe-master\include\caffe\util\cudnn.hpp 17 20 caffe
Error
error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\bin\nvcc.exe" -gencode=arch=compute_30,code=\"sm_30,compute_30\" --use-local-env --cl-version 2013 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_amd64" -I../3rdparty/include -I../3rdparty/include/openblas -I../3rdparty/include/hdf5 -I../3rdparty/include/lmdb -I../include -I../src -ID:\toolkits\boost_1_56_0 -I"D:\toolkits\opencv-2.4.9\build\include" -I"D:\toolkits\opencv-2.4.9\build\include\opencv" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -ID:\toolkits\cudnn_v2 -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" --keep-dir x64\Release -maxrregcount=0 --machine 64 --compile -cudart static -DWIN32 -DNDEBUG -D_CONSOLE -D_LIB -D_CRT_SECURE_NO_WARNINGS -DUSE_CUDNN -D_UNICODE -DUNICODE -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MD " -o x64\Release\cudnn_conv_layer.cu.obj "D:\caffe\caffe-master\src\caffe\layers\cudnn_conv_layer.cu"" exited with code 2.
error : declaration is incompatible with “const char *cudnnGetErrorString(cudnnStatus_t)” D:\caffe\caffe-master\include\caffe\util\cudnn.hpp 17 1 caffe
It seems that there are some incompatibilities between CUDNN V2 and caffe CUDNN layers.
If I instead use CUDNN V1 I get some other errors:
IntelliSense: expected a ‘;’ d:\caffe\caffe-master\include\caffe\util\cudnn.hpp 127 1
error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\bin\nvcc.exe" -gencode=arch=compute_30,code=\"sm_30,compute_30\" --use-local-env --cl-version 2013 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_amd64" -I../3rdparty/include -I../3rdparty/include/openblas -I../3rdparty/include/hdf5 -I../3rdparty/include/lmdb -I../include -I../src -ID:\toolkits\boost_1_56_0 -I"D:\toolkits\opencv-2.4.9\build\include" -I"D:\toolkits\opencv-2.4.9\build\include\opencv" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -ID:\toolkits\cudnn_v1 -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" --keep-dir x64\Release -maxrregcount=0 --machine 64 --compile -cudart static -DWIN32 -DNDEBUG -D_CONSOLE -D_LIB -D_CRT_SECURE_NO_WARNINGS -DUSE_CUDNN -D_UNICODE -DUNICODE -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MD " -o x64\Release\conv_layer.cu.obj "D:\caffe\caffe-master\src\caffe\layers\conv_layer.cu"" exited with code 2.
error : identifier "cudnnTensorDescriptor_t" is undefined D:\caffe\caffe-master\include\caffe\util\cudnn.hpp 64 1 caffe
error : identifier "cudnnTensorDescriptor_t" is undefined D:\caffe\caffe-master\include\caffe\util\cudnn.hpp 69 1 caffe
error : identifier "cudnnTensorDescriptor_t" is undefined D:\caffe\caffe-master\include\caffe\util\cudnn.hpp 77 1 caffe
error : identifier "cudnnTensorDescriptor_t" is undefined D:\caffe\caffe-master\include\caffe\util\cudnn.hpp 102 1 caffe
It now seems that "cudnnTensorDescriptor_t" cannot be found at all, as opposed to having an incompatible declaration.
Now of course the question is: what am I doing wrong? Did I forget something, or should I use a different version of cuDNN (maybe one of the release candidates)?
I would be really grateful if you, or anyone else, could help me out 🙂
Solved it!
Using cuDNN v2, I had to change the following in cudnn.hpp:
"inline const char* cudnnGetErrorString(cudnnStatus_t status)"
to
"inline const char* CUDNNWINAPI cudnnGetErrorString(cudnnStatus_t status)"
Now it works great with a speedup of 3x, similar to Linux performance!
Thank you!!!!!
I am able to compile common.cpp, but when I try to compile blob.cpp, net.cpp, or solver.cpp I get the same error:
Error 24 error C4996: 'std::_Copy_impl': Function call with parameters that may be unsafe - this call relies on the caller to check that the passed values are correct. To disable this warning, use -D_SCL_SECURE_NO_WARNINGS. See documentation on how to use Visual C++ 'Checked Iterators'
Any inputs would be appreciated. Thanks in advance.
Got it fixed by adding "_SCL_SECURE_NO_WARNINGS" to the preprocessor definitions (Configuration Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions).
Now all the files in the layers folder compile, but when I try to compile contrastive_loss_layer.cpp, the following three errors appear:
Error 23 error C2784: '_Ty std::max(std::initializer_list,_Pr)' : could not deduce template argument for 'std::initializer_list' from 'float'
Error 24 error C2780: 'const _Ty &std::max(const _Ty &,const _Ty &,_Pr)' : expects 3 arguments - 2 provided
Error 26 error C2782: 'const _Ty &std::max(const _Ty &,const _Ty &)' : template parameter '_Ty' is ambiguous
Thanks in advance.
Solved this by replacing
Dtype dist = std::max(margin - sqrt(dist_sq_.cpu_data()[i]), 0.0);
with
Dtype dist = std::max(margin - sqrt(dist_sq_.cpu_data()[i]), Dtype(0.0));
Now I am able to compile every file, but when I build the whole caffe project I get 242 errors like:
Error 2 error LNK2019: unresolved external symbol "void __cdecl caffe::caffe_gpu_axpy(int,float,float const *,float *)" (??$caffe_gpu_axpy@M@caffe@@YAXHMPEBMPEAM@Z) referenced in function "public: void __cdecl caffe::Blob::Update(void)" (?Update@?$Blob@M@caffe@@QEAAXXZ) D:\z_caffe\caffe-master\caffe\caffe\blob.obj caffe
Error 3 error LNK2001: unresolved external symbol "void __cdecl caffe::caffe_gpu_axpy(int,float,float const *,float *)" (??$caffe_gpu_axpy@M@caffe@@YAXHMPEBMPEAM@Z) D:\z_caffe\caffe-master\caffe\caffe\solver.obj caffe
Hi,
I followed your guidelines to install caffe on Windows. I used CMake to create the VS solution and compiled caffe statically. When I try to run the LeNet training example, the db conversion works fine, but the training fails complaining about the layer type: layer_factory.hpp line 77, check failed, registry count type = 1; unknown layer type. Am I missing something? Is there a workaround to resolve this?
thanks!
phani
Folks, after compiling the files, I am getting a huge bunch of errors of type LNK2001. Indeed, there are about 165 unresolved externals like the one shown below:
error LNK2001: unresolved external symbol "__declspec(dllimport) public: __thiscall google::base::CheckOpMessageBuilder::~CheckOpMessageBuilder(void)" (__imp_??1CheckOpMessageBuilder@base@google@@QAE@XZ) C:\Users\mhsc\Documents\Visual Studio 2012\Projects\caffe-master\caffe-master\caffe\caffe\solver.obj
I don't know what I am missing and I'm going crazy…
Please, any suggestion is very welcomed and I thank you so much.
I think that's related to Google glog. Please check whether the right library is specified in the linker options of caffe.
Thanks for this helpful guide! One comment I would make is that it's important to get the runtime library consistent across all of the packages. As an example, I had downloaded OpenCV 3.0.0 and found that the only location for the OpenCV libs mentioned in the guide was the staticlib dir, which doesn't work well with the other precompiled 3rd-party libs provided. Fortunately, OpenCV 2.4.11 has the proper libs.
Also, is there an alternative location to get the OpenBLAS package? SourceForge seems to be having troubles at the moment. I’ve found some other OpenBLAS distributions (such as openblas.net) but it’s not clear they are related to the one from SourceForge.
Thanks!
Jeff
Hi, I've successfully finished your tutorial, and it works fine. However, when I try to run the cifar10 example, it always shows "Check Failed: ReadProtoFromBinaryFile". Have you ever bumped into such a situation? Can you provide me with some suggestions? Thanks in advance :D
Hi. I'm not sure what happened here. Maybe the function "ReadProtoFromBinaryFile" in the io.cpp file failed. In this line:
int fd = open(filename, O_RDONLY | O_BINARY);
the "O_BINARY" flag is necessary in a Windows environment.
If you still cannot find what went wrong, you may try to follow my next tutorial to start the training process from Visual Studio. Maybe debug mode will show you the line that generated the error.
Hi, I followed your guidelines to install caffe on Windows. Now all the files in the util folder compile, but when I try to compile signal_handler.cpp the following errors appear:
SIGHUP, sigaction, SA_RESTART, sigfillset not declared / not defined
Have you ever bumped into such a situation? Can you provide me with some suggestions? Thanks in advance :D
Hi,
Can someone please explain in detail how can I use one of the famous pretrained nets (e.g AlexNet)?
That includes:
1) How can I download the pretrained net? (Is it already included in the windows library?) I didn’t find any suitable script in the caffe-windows folder.
2) How can I give an input to the net? (like RGB of pixels in an image)- I want to give the input myself
3) How can I extract features from “middle” layers of the net? (when I test the net after I give my input)
This information will be very helpful to me, thanks in advance!
Hi,
Can somebody explain why SIGHUP, sigaction, SA_RESTART, and sigfillset are reported as not declared/not defined in signal_handler.cpp?
Thanks in advance.
Hi, I have been hitting this issue these days too: SIGHUP, sigaction, SA_RESTART, sigfillset not declared. Have you solved it? Thanks in advance
Maybe you can look at this: https://msdn.microsoft.com/en-us/library/61af7cx3.aspx , and just enclose the declaration in a block.
Most of the caffe Windows repositories seem to be out of date and unmaintained, so I ported the latest version of caffe, thanks to Neil Z. Shao and this blog post: http://github.com/woozzu/caffe
Hi,
Has anyone tried HDF5 file input on Windows? I got a lot of errors using Neil's 3rd-party library. I tried to fix it by rebuilding the HDF5 libraries from a more recent release, but then I got a lot of unresolved externals.
Detailed messages are attached here:
………….. Caffe error at runtime ……………………………
I0728 15:03:48.645072 8684 net.cpp:368] data -> label
I0728 15:03:48.647073 8684 net.cpp:120] Setting up data
I0728 15:03:48.647073 8684 hdf5_data_layer.cpp:80] Loading list of HDF5 filenames from: hdf5_classification/data/test.txt
I0728 15:03:48.648072 8684 hdf5_data_layer.cpp:94] Number of HDF5 files: 1
HDF5-DIAG: Error detected in HDF5 (1.8.14) thread 0:
#000: ..\..\src\H5Dio.c line 173 in H5Dread(): can't read data
major: Dataset
minor: Read failed
#001: ..\..\src\H5Dio.c line 550 in H5D__read(): can't read data
major: Dataset
minor: Read failed
#002: ..\..\src\H5Dchunk.c line 1872 in H5D__chunk_read(): unable to read raw data chunk
major: Low-level I/O
minor: Read failed
#003: ..\..\src\H5Dchunk.c line 2902 in H5D__chunk_lock(): data pipeline read failed
major: Data filters
minor: Filter operation failed
#004: ..\..\src\H5Z.c line 1357 in H5Z_pipeline(): required filter 'deflate' is not registered
major: Data filters
minor: Read failed
#005: ..\..\src\H5PL.c line 298 in H5PL_load(): search in paths failed
major: Plugin for dynamically loaded library
minor: Can't get value
#006: ..\..\src\H5PL.c line 466 in H5PL__find(): can't open directory
major: Plugin for dynamically loaded library
minor: Can't open directory or file
F0728 15:03:48.665073 8684 io.cpp:273] Check failed: status >= 0 (-1 vs. 0) Failed to read float dataset data
*** Check failure stack trace: ***
…………. Error linking with new HDF5 library ………………………
1>—— Build started: Project: caffe, Configuration: Release x64 ——
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppBuild.targets(364,5): warning MSB8004: Output Directory does not end with a trailing slash. This build instance will add the slash as it is required to allow proper evaluation of the Output Directory.
1> Creating library ../bin\caffe.lib and object ../bin\caffe.exp
1>libhdf5.lib(H5Z.obj) : error LNK2001: unresolved external symbol SZ_encoder_enabled
1>libhdf5.lib(H5Zdeflate.obj) : error LNK2001: unresolved external symbol inflate
1>libhdf5.lib(H5Zdeflate.obj) : error LNK2001: unresolved external symbol inflateEnd
1>libhdf5.lib(H5Zdeflate.obj) : error LNK2001: unresolved external symbol compress2
1>libhdf5.lib(H5Zdeflate.obj) : error LNK2001: unresolved external symbol inflateInit_
1>libhdf5.lib(H5Zszip.obj) : error LNK2001: unresolved external symbol SZ_BufftoBuffCompress
1>libhdf5.lib(H5Zszip.obj) : error LNK2001: unresolved external symbol SZ_BufftoBuffDecompress
1>../bin\caffe.exe : fatal error LNK1120: 7 unresolved externals
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
In your description, one of the earlier steps is to set an include directory to ../include. However, there isn’t an include folder outside of the root directory. In fact, I can’t find an include folder anywhere inside the directory either, at least not until the 3rdparty is added as well.
Also, when compiling common.cpp, I get an error: std::Fill_n function parameters may be unsafe; to disable the warning, use -D_SCL_SECURE_NO_WARNINGS. You don't mention this as an error when compiling this part. It is also reported as an invalid macro definition, so I'm not even sure how to simply ignore the error.
I'd like to ask if anyone has information on the following topic. I have DIGITS from NVIDIA; I trained my network and DIGITS generated a *.caffemodel file. I want to use this model with my Windows Caffe to run tests, and I can run them. But I am not sure whether the Google protocol buffer can handle different byte orders (the big-endian/little-endian issue), since DIGITS runs on Linux and I don't know whether the byte order differs between the platforms.
Thank you so much!
Bingcai
I have confirmed that the Caffe model file is platform independent. A model generated by my Windows Caffe can be used by NVIDIA DIGITS, which is Linux-based.
Hi Neil,
great instructions. However, I got stuck while compiling blob.cpp.
When I right-click caffe to run the pre-build command line, I receive a notice that the files have been generated, followed by a warning that a path has not been found. Here is my report:
1>—— Build started: Project: caffe, Configuration: Debug x64 ——
1> caffe.pb.h is being generated
1> The system cannot find the path specified.
1> caffe_pretty_print.pb.h is being generated
1> The system cannot find the path specified.
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: The command ""../../scripts/GeneratePB.bat"
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: :VCEnd" exited with code 1.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
Can you help with this? Thanks in advance!
Did you solve the problem?
I have a problem.
I got to step 8 (blob.cpp).
When I compile, the following error comes up:
>> fatal error C1189: #error : This file was generated by an older version of protoc which is
i need your help.
thank you
I am running the mnist example: create_mnist.sh. It reports: error while loading shared library: ?: cannot open shared object file: No such file or directory
Thanks a lot for sharing. This has saved so much work.
Did you solve the problem?
Hi Neil,
the link for LMDB is not available anymore. Do you have another link to download it?
Thanks
I found that : https://github.com/LMDB/lmdb
Hi again
I have some issues with HDF5 library.
I compiled it, but I get errors in the caffe project:
Error 248 error LNK2001: unresolved external symbol "int __cdecl caffe::hdf5_get_num_links(int)" (?hdf5_get_num_links@caffe@@YAHH@Z) E:\cpp-librairies\caffe-git\caffe\solver.obj caffe
In caffe\util\hdf5.hpp, hdf5_get_num_links is declared, but I have no definition for it in the hdf5 project.
Any idea ?
I have a problem with my caffemodel binary file. I can't read this file properly with the caffe library compiled on Windows.
I did the training under Ubuntu, and when I want to use xxxx.caffemodel on Windows, I find that the number of layers (layers_size()) is zero.
Here is the content of the binary file obtained with the caffe library compiled under Ubuntu:
name: "CaffeNet"
100 {
1: "data"
2: "Data"
4: "data"
4: "label"
8 {
1: 0
}
10: 0
100 {
2: 1
3: 227
4: "/home/deepworld/CAFFE_ROOT/data/imagenet/image_mean.binaryproto"
}
107 {
1: "/home/deepworld/CAFFE_ROOT/examples/imagenet/image_train_lmdb"
4: 256
8: 1
}
}
100 {
1: "conv1"
2: "Convolution"
3: "data"
4: "conv1"
6 {
3: 0x3f800000
4: 0x3f800000
}
6 {
3: 0x40000000
4: 0x00000000
}
7 {
5: "\024\234\013=\341~\010= …" (long dump of raw binary weight bytes trimmed)
}
I have a look on bvlc_reference_caffenet.caffemodel, the content it's more logical:
name: "CaffeNet"
layers {
top: "data"
top: "label"
name: "data"
type: DATA
data_param {
source: "/home/jiayq/caffe-train-leveldb"
mean_file: "/home/jiayq/ilsvrc2012_mean.binaryproto"
batch_size: 256
crop_size: 227
mirror: true
}
}
layers {
bottom: "data"
top: "conv1"
name: "conv1"
type: CONVOLUTION
blobs {
num: 96
channels: 3
height: 11
width: 11
data: -0.0012135869
data: 0.0032365269
data: 0.0067056264
data: 0.0013365269
data: 0.00015098628
data: -0.016105399
data: -0.027094858
data: -0.041366898
data: -0.0404647
data: -0.033410437
data: -0.020661572
data: -0.0015275782
data: 0.0023613083
data: 0.0127269
data: 0.027090358
data: 0.057741854
data: 0.071556665
Can anyone help me please?
the same caffemodel work fine for a classification task under ubuntu.
I have a problem with my caffemodel binary file. I can't read this file properly with the Caffe library compiled in Windows.
I ran the training process under Ubuntu, and when I want to use xxxx.caffemodel in Windows, I find that the number of layers (layers_size()) is zero.
Here is the content of the binary file obtained with the Caffe library compiled under Ubuntu:
name: "CaffeNet"
100 {
  1: "data"
  2: "Data"
  4: "data"
  4: "label"
  8 {
    1: 0
  }
  10: 0
  100 {
    2: 1
    3: 227
    4: "/home/deepworld/CAFFE_ROOT/data/imagenet/image_mean.binaryproto"
  }
  107 {
    1: "/home/deepworld/CAFFE_ROOT/examples/imagenet/image_train_lmdb"
    4: 256
    8: 1
  }
}
100 {
  1: "conv1"
  2: "Convolution"
  3: "data"
  4: "conv1"
  6 {
    3: 0x3f800000
    4: 0x3f800000
  }
  6 {
    3: 0x40000000
    4: 0x00000000
  }
  7 {
    5: "\024\234\013=\341~\010=\332\0250=..." [several kilobytes of raw weight bytes omitted]
  }
}
//////
I had a look at bvlc_reference_caffenet.caffemodel, and its content looks more logical:
name: "CaffeNet"
layers {
  top: "data"
  top: "label"
  name: "data"
  type: DATA
  data_param {
    source: "/home/jiayq/caffe-train-leveldb"
    mean_file: "/home/jiayq/ilsvrc2012_mean.binaryproto"
    batch_size: 256
    crop_size: 227
    mirror: true
  }
}
layers {
  bottom: "data"
  top: "conv1"
  name: "conv1"
  type: CONVOLUTION
  blobs {
    num: 96
    channels: 3
    height: 11
    width: 11
    data: -0.0012135869
    data: 0.0032365269
    data: 0.0067056264
    data: 0.0013365269
    data: 0.00015098628
    data: -0.016105399
    data: -0.027094858
    data: -0.041366898
    data: -0.0404647
    data: -0.033410437
    data: -0.020661572
    data: -0.0015275782
    data: 0.0023613083
    data: 0.0127269
    data: 0.027090358
    data: 0.057741854
    data: 0.071556665
Can anyone help me, please?
The same caffemodel works fine for a classification task under Ubuntu.
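A hedged observation on the raw dump above: protobuf prints unknown fields by number, and the top-level messages appear as field 100, which in newer versions of caffe.proto is the `layer` field of NetParameter, while the older `layers` field is field 2. If the Windows build was compiled against an older caffe.proto, `layers_size()` would legitimately return zero even though the file parsed. As a quick check, the top-level field numbers of a .caffemodel can be counted with a small pure-Python scanner of the protobuf wire format (a sketch independent of Caffe; `read_varint` and `top_level_fields` are helper names I made up):

```python
def read_varint(buf, i):
    """Decode a base-128 varint starting at buf[i]; return (value, next_index)."""
    shift = value = 0
    while True:
        b = buf[i]
        i += 1
        value |= (b & 0x7F) << shift
        if not b & 0x80:
            return value, i
        shift += 7

def top_level_fields(buf):
    """Count occurrences of each top-level protobuf field number in buf."""
    counts = {}
    i = 0
    while i < len(buf):
        tag, i = read_varint(buf, i)
        field, wire = tag >> 3, tag & 7
        counts[field] = counts.get(field, 0) + 1
        if wire == 0:        # varint payload
            _, i = read_varint(buf, i)
        elif wire == 1:      # fixed 64-bit payload
            i += 8
        elif wire == 2:      # length-delimited (strings, sub-messages)
            n, i = read_varint(buf, i)
            i += n
        elif wire == 5:      # fixed 32-bit payload
            i += 4
        else:
            raise ValueError("unsupported wire type %d" % wire)
    return counts
```

Open the model with `open(path, "rb")` — binary mode matters on Windows, since text mode mangles the bytes. If the counts show field 100 but no field 2, the fix is to regenerate caffe.pb.h/caffe.pb.cc (and rebuild) from the same caffe.proto that was used for training.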
Thank you very much. Here I want to ask for help with an error I get when trying to train using train_lenet.bat. As I have no GPU, I set Caffe to CPU mode, but I still get this error:

  inner_product_param {
    num_output: 10
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}

I1105 02:08:03.815011 12212 layer_factory.hpp:76] Creating layer mnist
F1105 02:08:03.816010 12212 internal_thread.cpp:26] Check failed: error == cudaSuccess (35 vs. 0) CUDA driver version is insufficient for CUDA runtime version
*** Check failure stack trace: ***

Can you help me solve it? Thank you.
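A hedged note on the failure above (my reading, not confirmed in this thread): the crash is in internal_thread.cpp, which in this era of Caffe touches the CUDA runtime even when the solver is in CPU mode, so `solver_mode: CPU` alone is not enough on a machine without a working CUDA driver. Rebuilding with the `CPU_ONLY` preprocessor definition compiles those code paths out:

```
Project Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions:
    add CPU_ONLY (for both Debug and Release)
```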
Thanks for your help. I am stuck in training. I have generated mnist_train_leveldb and mnist_test_leveldb, and modified caffe/examples/mnist/lenet_train_test.prototxt, substituting "lmdb" with "leveldb". When I train I get an error. My solver settings:
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: CPU
net: "examples/mnist/lenet_train_test-leveldb.prototxt"
I1213 10:30:15.756268 9672 solver.cpp:72] Creating training net from net file: examples/mnist/lenet_train_test-leveldb.prototxt
I1213 10:30:15.758268 9672 net.cpp:274] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I1213 10:30:15.759268 9672 net.cpp:274] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I1213 10:30:15.760268 9672 net.cpp:67] Creating Layer mnist
I1213 10:30:15.760268 9672 net.cpp:355] mnist -> data
I1213 10:30:15.760268 9672 net.cpp:355] mnist -> label
I1213 10:30:15.761268 9672 net.cpp:96] Setting up mnist
F1213 10:30:15.774269 9672 data_layer.cpp:84] Unknown database backend
*** Check failure stack trace: ***
Can you help me? Thank you!
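One possible cause, offered as a guess from the error text rather than a confirmed fix: "Unknown database backend" is what Caffe's db.cpp raises when the data layer's backend is one the binary was not built with. Substituting "lmdb" with "leveldb" in the paths leaves `backend: LMDB` in place (the enum value is uppercase, so the substitution misses it), so the DATA layers should also be changed to something like:

```
data_param {
  source: "examples/mnist/mnist_train_leveldb"
  backend: LEVELDB
  batch_size: 64
}
```

(The source path and batch size here just mirror the stock LeNet example.)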
For the MATLAB wrapper, do the following steps:
1) Drag the matlab/caffe/matcaffe.cpp file into Visual Studio's Source Files.
2) In C/C++ -> General, add the MATLAB include path to Additional Include Directories; in my case "C:\Program Files\MATLAB\R2011b\extern\include".
3) Add the MATLAB lib path to Additional Library Directories in Configuration Properties -> Linker -> General; in my case "C:\Program Files\MATLAB\R2011b\extern\lib\win64\microsoft".
4) In Configuration Properties -> General, set Target Extension to ".mexw64" and Configuration Type to "Dynamic Library (.dll)".
5) Under Additional Dependencies, add libmx.lib; libmex.lib; libmat.lib;
I'm missing libopenblas.dll for Win32; can you add that to the 3rdparty zip?
Thanks for the great tutorial.
I managed to set everything up as mentioned here, but I am getting weird errors, which are:
Error 103 error MSB3073: The command ""G:\Caffe\caffe-master\scripts\GeneratePB.bat" :VCEnd" exited with code 1. C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets 122 5 Caffe
104 IntelliSense: identifier "_open" is undefined g:\Caffe\caffe-master\src\caffe\util\io.cpp 40 12 Caffe
105 IntelliSense: identifier "_close" is undefined g:\Caffe\caffe-master\src\caffe\util\io.cpp 45 3 Caffe
106 IntelliSense: identifier "_open" is undefined g:\Caffe\caffe-master\src\caffe\util\io.cpp 51 12 Caffe
107 IntelliSense: identifier "_close" is undefined g:\Caffe\caffe-master\src\caffe\util\io.cpp 55 3 Caffe
108 IntelliSense: identifier "_open" is undefined g:\Caffe\caffe-master\src\caffe\util\io.cpp 59 11 Caffe
109 IntelliSense: identifier "_close" is undefined g:\Caffe\caffe-master\src\caffe\util\io.cpp 69 3 Caffe
Every single file in the project can be compiled individually just fine except io.cpp, which, as you can see, complains about _open and _close (by the way, the files were generated using GeneratePB.bat as well).
What's wrong here?
Thanks in advance.
On Windows 10 you should #include the header file that has the _open and _close definitions; it took me so long to find it in the "C:\Program Files (x86)\Windows Kits\10\Include\10.0.10240.0\ucrt" folder.
Sorry, to be precise: you should #include the corecrt_io.h header file.
I built it successfully on Windows 10, but when I use it in MATLAB it gives me this error, and it seems nobody has a solution for it on Windows.
Mex file entry point is missing. Please check the (case-sensitive)
spelling of mexFunction (for C MEX-files), or the (case-insensitive)
spelling of MEXFUNCTION (for FORTRAN MEX-files).
Invalid MEX-file ‘…..\caffe.mexw64’
Can anyone help me?
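A hedged suggestion (my assumption; not confirmed by a reply in this thread): "Mex file entry point is missing" usually means the DLL builds fine but does not export mexFunction. When building the .mexw64 as a plain Visual Studio DLL, the export can be forced through the linker:

```
Linker -> Command Line -> Additional Options:   /EXPORT:mexFunction
```

Alternatively, a module-definition (.def) file listing mexFunction under EXPORTS achieves the same thing.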
Can I build Caffe without using CUDA? If so, please let me know the steps.
Those of you who have problems compiling Caffe, use this Windows branch:
It uses the latest version of Caffe and has everything configured; just download and compile 🙂
https://github.com/happynear/caffe-windows
Yes, you can. Just remove the CUDNN preprocessor definition from the project properties (the C/C++ preprocessor section, I guess).
You may also use this: https://github.com/happynear/caffe-windows
Please reply to the question I posted regarding caffe-windows master.
I have done it with caffe-windows, which gives an error after building:
ImportError Traceback (most recent call last)
in ()
—-> 1 import caffe
F:\python\lib\site-packages\caffe\__init__.py in <module>()
—-> 1 from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver
2 from ._caffe import set_mode_cpu, set_mode_gpu, set_device, Layer, get_solver, layer_type_list
3 from .proto.caffe_pb2 import TRAIN, TEST
4 from .classifier import Classifier
5 from .detector import Detector
F:\python\lib\site-packages\caffe\pycaffe.py in <module>()
11 import numpy as np
12
—> 13 from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
14 RMSPropSolver, AdaDeltaSolver, AdamSolver
15 import caffe.io
ImportError: DLL load failed: The specified module could not be found.
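For what it's worth, a common cause of "DLL load failed" on Windows is that _caffe.pyd only imports when its dependent DLLs (OpenCV, CUDA, OpenBLAS, glog, etc.) can be found on PATH. A minimal sketch, assuming the DLLs live next to your build output (the F:\caffe\bin path is a placeholder, not a location from this thread):

```python
import os

def prepend_to_path(directory):
    """Put `directory` first on PATH so Windows resolves its DLLs first."""
    os.environ["PATH"] = directory + os.pathsep + os.environ.get("PATH", "")

# Placeholder: wherever the DLLs built alongside _caffe.pyd ended up.
prepend_to_path(r"F:\caffe\bin")
# import caffe  # with PATH fixed, the dependent DLLs can now be resolved
```

Running Dependency Walker (depends.exe) on _caffe.pyd shows exactly which DLL is missing.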
Thanks for the instructions. Everything compiles for me successfully and I am able to generate the leveldb train and test folders. However, when I try to run "train_lenet.bat", I get the following errors:
I0112 18:55:23.433831 8052 caffe.cpp:184] Using GPUs 0
I0112 18:55:23.759487 8052 common.cpp:32] System entropy source not available, using fallback algorithm to generate seed instead.
F0112 18:55:23.761483 8052 solver_factory.hpp:76] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown solver type: SGD (known types: )
*** Check failure stack trace: ***
This is a linking problem. Drag and drop the sources from caffe-master/src/caffe/solvers (adadelta_solver.cpp, adagrad_solver.cpp, adam_solver.cpp, nesterov_solver.cpp, rmsprop_solver.cpp, sgd_solver.cpp) into your VS project and compile them one by one, then rebuild. Now when you run "train_lenet.bat" the problem should be gone 🙂
I have a hopefully simple question re: installing cuDNN with Visual Studio 2013 on a Windows 7 system that already has CUDA 7.0 installed.
1. Downloaded cudnn Library for Windows and extracted it into directory C:\cuDNNExtracted
2. That generates a new folder C:\cuDNNExtracted\cuda which contains 3 more sub-folders: bin, include, lib.
NV installation instructions say to
A) Add to the PATH environment variable.
I add: C:\cuDNNExtracted\cuda
is this correct?
B) In VS project properties NV says add to Include Directories
C) In VS add to Library Directories
In both cases B,C, I added C:\cuDNNExtracted\cuda without any $ or ()
Is this correct?
D) NV says add cudnn.lib to Linker-> Input->Additional Dependencies which I do.
Is this correct?
When I try to run the program always get linker error that can’t link cudnn.lib
What am I doing wrong?
Thanks in advance,
Windows Nvidia experts.
For correction from my post above:
A) C:\cuDNNExtracted, since that was the directory extracted into
B) C:\cuDNNExtracted\cuda\include
C) C:\cuDNNExtracted\cuda\lib
For correction from my post above:
A) PATH = "C:\cuDNNExtracted", since that was the directory extracted into
B) C:\cuDNNExtracted\cuda\include
C) C:\cuDNNExtracted\cuda\lib
D) cudnn.lib
The error that it can't link cudnn seems to be related to the fact that CUDA 7.5, not CUDA 7.0, is being executed?
How can I change that, and how is it related to the above?
fatal error LNK1104: cannot open file ‘cudnn.lib’
If I do just steps A, B, C above, then the sample program from the CUDA samples runs OK (not trying the cuDNN samples yet), but when I add step D after steps A, B, C, it won't run because it cannot open the file cudnn.lib. I added cudnn.lib to Linker -> Input -> Additional Dependencies in step D, as Nvidia said to do for the Windows installation. I'm stuck.
Any suggestions?
Is this right? For part D, Linker -> Input -> Additional Dependencies, I changed cudnn.lib to C:\cuDNNExtracted\cuda\lib\x64\cudnn.lib and it ran the CUDA sample, but when I tried to run the mnistCUDNN_vs2010 project I get the error again that it can't find cudnn.lib.
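Pulling together what finally worked in the thread above (the extraction directory is this poster's own choice, not a required location): the Library Directories entry has to point at the x64 subfolder, and the Additional Dependencies entry stays a bare file name:

```
Include Directories:                          C:\cuDNNExtracted\cuda\include
Library Directories:                          C:\cuDNNExtracted\cuda\lib\x64
Linker -> Input -> Additional Dependencies:   cudnn.lib
PATH (so the cuDNN DLL is found at run time): C:\cuDNNExtracted\cuda\bin
```

These settings need to be applied per project, which would explain why the CUDA sample ran while the separate mnistCUDNN_vs2010 project still failed to find cudnn.lib.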
Hi everybody,
I used the convert_imagenet code under Windows to build the training database. I have a base of images of about 20 GB, but the program outputs a folder whose Data.mdb file is 1 TB. convert_imagenet.cpp initially reserves a space of 1 TB (the LMDB map size), but it is not able to re-adjust this space to the actual size of the training set.
Can anyone help me?
Hi, I tried to compile the code but got this error:
error : inherited member is not allowed data_layer.cu
It points to this line in the file:
Dtype DataLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,
vector<Blob<Dtype>*>* top) {
Any help would be most appreciated, thank you.
I have a problem, so I need your help.
1>------ Build started: Project: caffelib, Configuration: Release x64 ------
1> caffelib.vcxproj -> D:\caffe\build_cpu_only\caffelib\../../bin\caffelib.lib
2>------ Build started: Project: caffe, Configuration: Release x64 ------
2> Creating library ../../bin\caffe.lib and object ../../bin\caffe.exp
2>caffe.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) void __cdecl google::ShowUsageWithFlagsRestrict(char const *,char const *)" (__imp_?ShowUsageWithFlagsRestrict@google@@YAXPEBD0@Z)
2>caffe.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) void __cdecl google::SetUsageMessage(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (__imp_?SetUsageMessage@google@@YAXAEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z)
2>caffe.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) public: __cdecl google::FlagRegisterer::FlagRegisterer(char const *,char const *,char const *,char const *,void *,void *)" (__imp_??0FlagRegisterer@google@@QEAA@PEBD000PEAX1@Z)
2>caffe.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) void __cdecl google::SetVersionString(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (__imp_?SetVersionString@google@@YAXAEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z)
2>common.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) unsigned int __cdecl google::ParseCommandLineFlags(int *,char * * *,bool)" (__imp_?ParseCommandLineFlags@google@@YAIPEAHPEAPEAPEAD_N@Z)
2>../../bin\caffe.exe : fatal error LNK1120: 5 unresolved externals
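A hedged guess at the cause (my assumption, not verified against this poster's setup): unresolved `__declspec(dllimport)` gflags symbols typically mean the gflags/glog headers were compiled expecting DLL builds while static .lib files are being linked. If the 3rdparty libraries are static, adding these preprocessor definitions (real gflags/glog macros) makes the headers drop dllimport:

```
GFLAGS_DLL_DECL=
GFLAGS_DLL_DECLARE_FLAG=
GFLAGS_DLL_DEFINE_FLAG=
GOOGLE_GLOG_DLL_DECL=
```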
Hi,
Now I am able to compile all the files, but when I try to build the caffe project I run into some problems.
The errors look like this:
1>caffe.obj : error LNK2001: unresolved external symbol "private: void __cdecl caffe::Layer<float>::Lock(void)" (?Lock@?$Layer@M@caffe@@AEAAXXZ)
1>sigmoid_cross_entropy_loss_layer.obj : error LNK2001: unresolved external symbol "private: void __cdecl caffe::Layer<float>::Lock(void)" (?Lock@?$Layer@M@caffe@@AEAAXXZ)
1>softmax_loss_layer.obj : error LNK2001: unresolved external symbol "private: void __cdecl caffe::Layer<float>::Lock(void)" (?Lock@?$Layer@M@caffe@@AEAAXXZ)
I have no idea how to solve this. Any suggestion is welcome, and thank you very much.
How did you solve this problem?
https://github.com/happynear/caffe-windows
Can you please help me? I'm stuck on this part:
GFlags + GLog + ProtoBuf + LevelDB
Download source code from the internet.
Use CMake to generate a .sln for vs2013. Remember to set "CMAKE_INSTALL_PREFIX", which is where the output files will be placed when you build "INSTALL".
Build in vs2013. Usually build "BUILD_ALL" first, then "INSTALL", in both Debug and Release mode.
Copy the compiled files to caffe/3rdparty. Debug versions should be renamed with a "d" suffix before copying, e.g. "gflags.lib" -> "gflagsd.lib".
I've built the GFlags + GLog + ProtoBuf + LevelDB libs using CMake, but I didn't understand this line: "Copy the compiled files to caffe/3rdparty. Debug versions should be renamed with a "d" suffix before copying, e.g. "gflags.lib" -> "gflagsd.lib"."
Please explain a bit if possible.
Do I need to copy the whole CMake build output to 3rdparty?
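To illustrate the copy-and-rename step: only the compiled .lib files go into caffe/3rdparty/lib (and the headers into 3rdparty/include), not the whole CMake build tree. Release libs keep their names; Debug libs get a "d" appended so both can coexist. A sketch of the resulting layout (filenames assumed from the default gflags/glog/protobuf/leveldb outputs; check what your builds actually produced):

```
caffe/3rdparty/lib/
    gflags.lib         <- Release build of gflags
    gflagsd.lib        <- Debug build of gflags (renamed, extra "d")
    glog.lib           glogd.lib
    libprotobuf.lib    libprotobufd.lib
    leveldb.lib        leveldbd.lib
```

The project's Linker -> Input settings then reference the plain names in the Release configuration and the "d" names in Debug.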
Hi, I'm a beginner and I use Visual Studio 2012. Can I use your 3rdparty?
site-packages\sklearn\cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
“This module will be removed in 0.20.”, DeprecationWarning)
What is the problem, please?
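That message is a DeprecationWarning, not an error: the script still runs, but `sklearn.cross_validation` will be removed in scikit-learn 0.20. The durable fix is to move the imports to `sklearn.model_selection` (a sketch, assuming the code uses `train_test_split`; substitute whatever it actually imports). To just hide the warning in the meantime, a stdlib filter works:

```python
import warnings

# Durable fix: port the imports, e.g.
#   from sklearn.cross_validation import train_test_split   # old, removed in 0.20
#   from sklearn.model_selection import train_test_split    # new home
# Note the new module's CV iterators have a slightly different interface,
# so ported code may need small adjustments beyond the import line.

# Stopgap: silence DeprecationWarning while the port is pending.
warnings.filterwarnings("ignore", category=DeprecationWarning)
```

The filter only suppresses the noise; the underlying code still needs porting before upgrading to scikit-learn 0.20.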
I'm not quite clear on step 5: how do I set the Output Directory to '../bin'? I'm not finding a bin folder anywhere. Please tell me what exactly I have to write.