
Reading videos and cameras
This section introduces you to reading a video and camera with this simple example:
#include <iostream>
#include <string>
#include <sstream>
using namespace std;

// OpenCV includes
#include "opencv2/core.hpp"
#include "opencv2/highgui.hpp"
using namespace cv;

// OpenCV command line parser functions
// Keys accepted by command line parser
const char* keys = {
    "{help h usage ? | | print this message}"
    "{@video | | Video file, if not defined try to use webcamera}"
};

int main(int argc, const char** argv)
{
    CommandLineParser parser(argc, argv, keys);
    parser.about("Chapter 2. v1.0.0");
    // If help is required, show it
    if (parser.has("help"))
    {
        parser.printMessage();
        return 0;
    }
    String videoFile = parser.get<String>(0);
    // Check if params are correctly parsed in their variables
    if (!parser.check())
    {
        parser.printErrors();
        return 0;
    }
    VideoCapture cap; // open the default camera
    if (videoFile != "")
        cap.open(videoFile);
    else
        cap.open(0);
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    namedWindow("Video", 1);
    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        if (!frame.empty())
            imshow("Video", frame);
        if (waitKey(30) >= 0) break;
    }
    // Release the camera or video cap
    cap.release();
    return 0;
}
Before we explain how to read video or camera inputs, we need to introduce a useful new class that will help us manage the input command-line parameters; this new class was introduced in OpenCV version 3.0 and is called the CommandLineParser class:
// OpenCV command line parser functions
// Keys accepted by command line parser
const char* keys = {
    "{help h usage ? | | print this message}"
    "{@video | | Video file, if not defined try to use webcamera}"
};
The first thing that we have to do for a command-line parser is define the parameters that we need or allow in a constant char string; each entry follows this pattern:
{ name_param | default_value | description}
The name_param can be preceded by @, which defines the parameter as a positional (default) input, and we can use more than one name_param for the same parameter, as in help h usage ?.
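As a sketch of the syntax only (the @image and N parameters, the default value 10, and the int type below are made up for illustration and are not part of the chapter's code), a keys definition mixing positional and named parameters could look like this:

// Illustrative keys definition: hypothetical parameters, not the chapter's
const char* exampleKeys = {
    "{help h usage ? |    | print this message}"      // several names for the same parameter
    "{@image         |    | input image path (positional parameter)}"
    "{N              | 10 | number of iterations, later read with get<int>}"
};

In the chapter's program, only the help aliases and the positional @video parameter are defined, and the keys constant is then passed to the parser constructor: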
CommandLineParser parser(argc, argv, keys);
The constructor takes the inputs of the main function and the keys constant defined previously:
// If help is required, show it
if (parser.has("help"))
{
    parser.printMessage();
    return 0;
}
The has class method checks whether a parameter exists. In this sample, we check whether the user has added the help (or h, usage, or ?) parameter and, if so, use the printMessage class function to show all the parameter descriptions:
String videoFile = parser.get<String>(0);
With the get<typename>(parameterName) function, we can access and read any of the input parameters; here, index 0 refers to the first positional (@) parameter.
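As a hedged sketch of the alternatives (the int example assumes a hypothetical N parameter like the one in the earlier illustrative keys, not anything defined in the chapter's code), a positional parameter can be read by index or by its @ name, and a named parameter is read by name with the type we expect:

// Both calls read the same positional parameter (sketch)
String videoByIndex = parser.get<String>(0);         // by position
String videoByName  = parser.get<String>("@video");  // by name
// A named parameter with a non-string type (N is hypothetical, see the earlier sketch)
// int iterations = parser.get<int>("N");

The chapter's code then validates the parsed values: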
// Check if params are correctly parsed in their variables
if (!parser.check())
{
    parser.printErrors();
    return 0;
}
After getting all the required parameters, we can check whether they were parsed correctly and show an error message if one of them could not be parsed, for example, if a string was given where a number was expected:
VideoCapture cap; // open the default camera
if (videoFile != "")
    cap.open(videoFile);
else
    cap.open(0);
if (!cap.isOpened()) // check if we succeeded
    return -1;
The class used to read a video or a camera is the same: the VideoCapture class, which now belongs to the videoio module instead of the highgui module, where it lived in former versions of OpenCV. After creating the object, we check whether the input command-line videoFile parameter contains a path to a file. If it is empty, we try to open a web camera; if it has a filename, we open the video file. To do this, we use the open function, passing as a parameter either the video filename or the index of the camera that we want to open. If we have a single camera, we can use 0 as the parameter. To check whether the video file or the camera could be opened, we use the isOpened function:
namedWindow("Video",1); for(;;) { Mat frame; cap >> frame; // get a new frame from camera if(frame) imshow("Video", frame); if(waitKey(30) >= 0) break; } // Release the camera or video cap cap.release();
Finally, we create a window to show the frames with the namedWindow function and, in an infinite loop, we grab each frame with the >> operator and show it with the imshow function if the frame was retrieved correctly. In this case, we don't want to stop the application; instead, we wait 30 milliseconds to check whether the user wants to stop the application execution by pressing any key, using waitKey(30).
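As an aside, a hedged variant of the same loop uses the read method of VideoCapture, which returns false when no frame can be grabbed (for example, at the end of a video file), so the loop can stop on its own; this is a sketch, not the chapter's code:

Mat frame;
for (;;)
{
    if (!cap.read(frame))   // read returns false when no frame could be grabbed
        break;
    imshow("Video", frame);
    if (waitKey(30) >= 0)
        break;
}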
Note
When reading from a camera, a good value to wait for the next frame can be derived from the camera's speed. For example, if a camera works at 20 FPS, a good wait value is 1000/20 = 50 milliseconds.
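A minimal sketch of that calculation, assuming the capture device reports its frame rate through the CAP_PROP_FPS property (many webcams return 0, hence the fallback value):

// Derive the waitKey delay from the reported frame rate (sketch)
double fps = cap.get(CAP_PROP_FPS);                  // may be 0 if the device does not report it
int delay = (fps > 0) ? cvRound(1000.0 / fps) : 30;  // fall back to 30 ms
// inside the display loop:
// if (waitKey(delay) >= 0) break;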
When the user wants to finish the app, they only have to press a key; then, we release all the video resources using the release function.
Note
It is very important to release all the resources that we use in a computer vision application; if we do not, we can consume all the available RAM. We can release the matrices with the release function.
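As a small sketch of that advice, using only the release methods the chapter already mentions (when the objects go out of scope, their destructors free the same resources):

frame.release(); // release the reference-counted matrix data once it is no longer needed
cap.release();   // close the camera or the video file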
The result of the code is a new window that shows the video or the webcam feed in BGR format, as shown in the following screenshot:
