Computer Vision News - July 2019

```python
tracker = cv2.TrackerKCF_create()
```

Now we are ready to preprocess the video. Essentially, we open the video and read its first frame; in this example, I'll also show you how to rotate each frame in case your video is not aligned. This can all be done with the following code:

```python
video = cv2.VideoCapture("IMG_1003.MOV")
flag, frame = video.read()
(h, w) = frame.shape[:2]
center = (w / 2, h / 2)
M = cv2.getRotationMatrix2D(center, 270, 1)
frame = cv2.warpAffine(frame, M, (h, w))
```

In the code above, the first line loads the video from a file on my computer. The second line is a standard extraction of a frame from a video; the flag indicates whether the frame was extracted successfully. The last four lines rotate the given frame by 270 degrees.

At this stage we are ready to begin. In this code, the region of interest, i.e. the initial bounding box, will be defined manually. It would not be very hard to use a pre-trained network to perform the initial detection; for example, a YOLO network can extract a good bounding box for the object. However, we leave that for future articles, in order to explain detection methods in detail. The ROI selector of cv2 opens the frame in a dialog box and lets us mark a rectangle over the object. The output is a tuple of four numbers: the x and y coordinates of the upper-left corner, followed by the width and height of the rectangle. We use the ROI selector as:

```python
box = cv2.selectROI(frame, False)
```

The last thing we still need to do is use the box we defined. We then iterate over the frames and update the tracker on each one. If the tracking succeeds, we draw the bounding box on the frame and show it; otherwise we just show the frame. Finally, we break out of the while loop when Esc is pressed or no frames are left.
