Camera platform system

FIELD: physics, video.

SUBSTANCE: invention relates to a video surveillance and camera control system capable of panning and tilting a camera. The camera platform system has a camera which captures an object image to generate a frame image, a camera platform which turns the camera about a panning axis and a tilt axis, and an image processor which generates a visual image based on the frame image. When the camera passes through a predefined angular position while turning about the tilt axis, the image processor generates a first visual image, corresponding to the image formed by turning the frame image about the panning axis by an angle greater than 0 degrees but less than 180 degrees, at the predefined angular position, before generating a second visual image corresponding to the image formed by turning the frame image 180 degrees about the panning axis.

EFFECT: reduced unnaturalness of the change in an object's direction of movement in the visual image, which reduces errors when tracking the object.

8 cl, 15 dwg

 

Technical field

The present invention relates to a camera platform system capable of panning and tilting a camera.

Background art

In a ceiling-mounted camera platform system such as a surveillance camera, when an object approaching from the front is shot while being tracked by tilting the camera, the object image obtained after the camera passes the position directly below it appears upside down if no image processing is performed. Therefore, when the tilt angle reaches a predetermined fixed angle (for example, 90 degrees, at which the camera points straight down), many systems have a function that flips the frame image by 180 degrees so that the top and bottom of the object are displayed correctly.

Japanese Laid-Open Patent Application No. 2003-289466 discloses a method of generating a visual image that has undergone flip processing so that its top and bottom are in the normal orientation, by changing the order in which the frame image is read from a memory provided to temporarily store it, when the camera angle relative to the ceiling is equal to or greater than 90 degrees.

However, in the method described in Japanese Laid-Open Patent Application No. 2003-289466, although the direction of the camera's rotation does not change, the direction of movement of the object in the visual image changes suddenly at the moment of the flip processing, so the observed object appears jumpy and the user feels a sense of incongruity.

In addition, when the object passes through or approaches the position directly below the camera, its direction of movement cannot be determined, which can cause confusion when tracking the object.

Summary of the invention

The present invention provides a camera platform system which reduces unnatural changes in the direction of movement of an object in the visual image, preventing confusion when tracking the object.

A camera platform system as one aspect of the present invention includes a camera configured to capture an object image to generate a frame image, a camera platform configured to rotate the camera about a pan axis and a tilt axis, and an image processor configured to generate a visual image based on the frame image. When the camera passes a predetermined angular position of rotation about the tilt axis, the image processor generates a first visual image, corresponding to an image formed by rotating the frame image by an angle greater than 0 degrees and smaller than 180 degrees, at the predetermined angular position, before generating a second visual image corresponding to an image formed by rotating the frame image by 180 degrees.

A camera platform system as another aspect of the present invention includes an imaging lens configured to capture an object image, and a camera platform which includes a first drive mechanism configured to rotate the imaging lens about a first axis perpendicular to the vertical direction so as to change the angle between the optical axis of the imaging lens and the vertical direction (the tilt angle), and a second drive mechanism configured to rotate the imaging lens and the first drive mechanism about a second axis perpendicular to the first axis. The system includes an image processor configured to generate, when the tilt angle reaches a first angle greater than 0 degrees and smaller than 180 degrees, a first visual image corresponding to an image formed by rotating the frame image obtained before the tilt angle reached the first angle, and to generate, when the changing tilt angle reaches a second angle different from the first angle, a second visual image corresponding to an image formed by rotating by 180 degrees the frame image obtained before the tilt angle reached the first angle.

Additional features and aspects of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.

Brief description of drawings

FIG. 1 is a diagram illustrating the schematic configuration of a camera platform system according to Embodiment 1 of the present invention.

FIGS. 2A to 2D are diagrams illustrating the relationship between the object, the camera platform system and the displayed image in Embodiment 1.

FIGS. 3A and 3B are schematic diagrams describing the control of the image memory in Embodiment 1.

FIG. 4 is a flowchart describing the switching of the displayed image in Embodiment 1.

FIG. 5 is a diagram describing the relationship between the tilt angle of the camera platform system and the rotation angle of the on-screen image in Embodiment 1.

FIG. 6 is a diagram describing the pixel interpolation method in Embodiment 1.

FIGS. 7A and 7B are diagrams illustrating the displayed image of an object in Embodiment 2.

FIGS. 8A to 8C are diagrams illustrating the displayed image of an object in Embodiment 3.

Detailed description of the embodiments

Exemplary embodiments of the present invention will be described below with reference to the attached drawings.

Embodiment 1

FIG. 1 illustrates the configuration of a camera platform system according to Embodiment 1 of the present invention. FIGS. 2A to 2D illustrate the relationship between the camera platform system and the image on the display screen depending on its tilt angle in Embodiment 1.

In FIG. 1, reference numeral 1 denotes a camera platform system configured with a camera, which captures an object image to generate a frame image, and a camera platform, by means of which the camera can perform pan rotation about a pan axis and tilt rotation about a tilt axis.

Reference symbol PA denotes the pan axis (second axis), and reference symbol TA denotes the tilt axis (first axis). The tilt axis TA is an axis which is perpendicular to the vertical direction and about which the optical axis of the imaging lens 11 can be tilted by the tilt rotation. The pan axis PA is an axis perpendicular to the tilt axis TA. In this embodiment, the term "pan rotation" means a rotating movement of the camera that shifts the frame image in the lateral direction, and the term "tilt rotation" means a rotating movement of the camera that shifts the frame image upward or downward. The terms "lateral direction" and "upward or downward direction" refer to directions in the displayed image.

The pan rotation and tilt rotation of the camera platform system 1 can be controlled via a remote controller, which is not shown, or remotely via a network such as a LAN.

Light from the object passes through the imaging lens 11 to form an object image on an image pickup element 2 such as a CCD sensor or a CMOS sensor. The image pickup element 2 performs photoelectric conversion of the object image to output an image signal. A GBR separation circuit 3 separates the image signal from the image pickup element 2 into color signals G (green), B (blue) and R (red).

An image signal processing circuit 4 converts the color signals G, B and R (analog signals) into digital signals, and then performs various kinds of signal processing on the digital signals to generate image signals (hereinafter called frame images). At least one frame image is temporarily written into an image memory 6. An arithmetic processing circuit 5 (hereinafter called CPU) controls the generation of write and read addresses for the image memory 6.

The CPU 5 performs read/write control of the frame image signals in the image memory 6. The image signal processing circuit 4 converts the frame image read from the image memory 6 into a predetermined output signal format and outputs it from an image output terminal 7 to a monitor, which is not shown. The image processor consists of the image signal processing circuit 4, the CPU 5 and the image memory 6.

Thus, frame images are sequentially displayed on the monitor as the output image. The output image is the image obtained as a result of the read control, described below, applied to the frame image captured by the camera (the frame image recorded in the image memory 6), and it may be the same as the frame image or may be a different image.

The CPU 5 controls a tilt (T) drive unit 12 (first drive mechanism) and a pan (P) drive unit 13 (second drive mechanism), as well as the zoom (Z) and focus (F) of the imaging lens 11. In particular, the CPU 5 receives P/T/Z/F control data sent by the remote controller, not shown, etc. via a connection terminal 8. The CPU 5 then outputs a P/T control signal to the P drive unit 13 and the T drive unit 12 in accordance with the control information, and outputs a Z/F control signal to the imaging lens 11. The P drive unit 13 and the T drive unit 12 perform the pan rotation and tilt rotation of the camera platform in accordance with the P/T control signal. The imaging lens 11 performs zooming and focusing in accordance with the Z/F control signal.

Reference numeral 14 denotes an angle sensor, which detects the tilt angle relative to the mounting reference of the camera platform (the ceiling surface in this embodiment) and outputs angle data corresponding to the detected angle to the CPU 5.

The control of the image memory 6 will now be described in detail. As illustrated in FIG. 3A, the number of effective pixels of the frame image is assumed to be 1920 in the horizontal direction and 1080 in the vertical direction, and the pixel data of G, B and R are assumed to be 10 bits each. Sampling is performed in a cycle in which the amount of frame data required per second is obtained, starting from the top left of the screen, and one or more frames of GBR pixel data are written into the image memory 6 at write addresses (1920×1080 × number of frames in total) generated according to the pixel count. The data width is 30 bits. This configuration can be changed in accordance with hardware limitations such as the number of effective pixels, the generated addresses, the data width and the number of CPUs.
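As a rough software sketch of the addressing just described (a plain linear raster layout is assumed; the function names and the packing order of the three 10-bit samples are illustrative choices, not taken from the patent):

```python
WIDTH, HEIGHT = 1920, 1080  # effective pixels of the frame image

def write_address(frame: int, x: int, y: int) -> int:
    """Linear raster address: frames stored back to back, one 30-bit
    word per pixel, WIDTH*HEIGHT words per frame."""
    return frame * WIDTH * HEIGHT + y * WIDTH + x

def pack_gbr(g: int, b: int, r: int) -> int:
    """Pack three 10-bit G/B/R samples into one 30-bit memory word
    (the G-high, R-low order here is an arbitrary choice)."""
    assert all(0 <= v < 1024 for v in (g, b, r))
    return (g << 20) | (b << 10) | r

# The top-left pixel of frame 0 lands at address 0 and the bottom-right
# at 1920*1080 - 1, so the memory holds "1920x1080 x number of frames"
# words in total, as stated in the text.
```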

Next, the control of the image memory 6 corresponding to the rotation of the camera platform system 1 will be described. As illustrated in FIG. 2A, the camera platform system 1 has the main body of the pan drive unit 13 attached to the ceiling so as to be suspended from it. In this embodiment, a case will be described in which the camera platform system 1 tracks a moving object 21, 23, 25 by tilt rotation, and the tilt angle changes from 0 degrees to 180 degrees, passing the position directly below the camera platform system 1 along the way.

When the tilt angle is between 0 degrees and 85 degrees, which is the first predetermined angle (first angle), reading from the image memory 6 is performed in the order of writing, that is, sequentially. Thus, an output image 26 of the object is displayed on the monitor. Hereinafter, an output image that coincides with the frame image (is not rotated), like the output image 26, is referred to as a non-rotated image.

When the tilt angle is between 95 degrees, which is the second predetermined angle (second angle), and 180 degrees, the object appears upside down, like the image 30, if sequential reading from the image memory 6 is performed. Therefore, in this case, reading is performed from the 1920th pixel of the 1080th line, written last, to the first pixel of the first line, that is, in the order opposite to the writing order. The result is an output image 35 (second visual image), which corresponds to an image formed by rotating the frame image (image 30) by 180 degrees about the pan axis, that is, an image whose top and bottom are reversed. Hereinafter, an output image that corresponds to the frame image rotated by 180 degrees, like the output image 35, is referred to as a 180-degree rotated image.
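In software, the 180-degree read-out amounts to reading the stored words in the reverse of the write order. A minimal sketch, with a small list standing in for the image memory 6:

```python
WIDTH, HEIGHT = 4, 3  # tiny stand-in for the 1920x1080 frame

# Frame image written to "memory" in raster order (write order).
memory = [y * WIDTH + x for y in range(HEIGHT) for x in range(WIDTH)]

# 180-degree rotated image: read from the last pixel of the last line
# back to the first pixel of the first line.
rotated_180 = memory[::-1]

# Reversing the raster order flips the image both top-to-bottom and
# left-to-right, which is exactly a 180-degree rotation, so no
# interpolation is needed for this case.
```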

In the range of tilt angles from 85 degrees to 95 degrees (the predetermined angular position), that is, in the immediate vicinity of the position directly below the camera platform system 1, reading from the image memory 6 is performed so as to produce an output image 33 (first visual image), which corresponds to an image formed by rotating the frame image (image 28) by 90 degrees about the pan axis.

In particular, as shown in FIG. 3B, the data of the 421st pixel of the 1080th line stored in the image memory 6 is read first, then the data of the 421st pixel of the 1079th line, and lastly the data of the 1500th pixel of the first line. Hereinafter, an output image that corresponds to the frame image rotated by 90 degrees, like the output image 33, is referred to as a 90-degree rotated image.

The image signal processing circuit 4 sets the display positions from the 1st to the 420th pixel and from the 1501st to the 1920th pixel of the 1st to 1080th lines, where no pixel data exist, to fixed data such as the black level. The actual data read from the image memory 6 are then output in the appropriate output format, such as HD-SDI, to be displayed on the monitor.
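The read pattern of FIG. 3B together with the black-level padding can be sketched as follows (a small array stands in for the 1920×1080 memory, the central square plays the role of pixels 421 to 1500, and the sense of the 90-degree rotation is simply whichever one the chosen read order produces):

```python
# Tiny stand-in for the 90-degree read-out: an 8x6 "frame image"
# instead of 1920x1080; the central 6x6 square (columns x0..x0+5,
# playing the role of pixels 421..1500) is read column by column from
# the bottom line upward, and the side bands keep the black level.
W, H = 8, 6
BLACK = -1
frame = [[y * W + x for x in range(W)] for y in range(H)]

x0 = (W - H) // 2                      # left edge of the central square
out = [[BLACK] * W for _ in range(H)]  # display buffer, pre-filled black

for i in range(H):          # display line
    for j in range(H):      # position inside the central band
        # read the (x0+i)-th source column from the bottom line upward
        out[i][x0 + j] = frame[H - 1 - j][x0 + i]
```

The band out[i][x0:x0+H] is the central square of the frame rotated by 90 degrees, while the side columns stay at the fixed black level, matching the padded columns 1 to 420 and 1501 to 1920 in the text.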

The flowchart in FIG. 4 illustrates the display processing (reading from the image memory 6) performed by the CPU 5 in accordance with the angle θ detected by the tilt angle sensor 14. The angle detection by the tilt angle sensor 14 and the display processing corresponding to the detected angle θ are performed for each frame.

When processing starts, the CPU 5 determines at step S101 whether the angle θ detected by the tilt angle sensor 14 is equal to or greater than 85 degrees, which is the first predetermined angle. The flow proceeds to step S102 when the detected angle θ is equal to or greater than 85 degrees, and to step S103 when it is less than 85 degrees.

At step S102, the CPU 5 determines whether the angle θ detected by the tilt angle sensor 14 is equal to or greater than 95 degrees, which is the second predetermined angle. The flow proceeds to step S105 when the detected angle θ is equal to or greater than 95 degrees, and to step S104 when it is less than 95 degrees.

At step S103, the CPU 5 displays the non-rotated image on the monitor. At step S104, the CPU 5 displays the 90-degree rotated image on the monitor. At step S105, the CPU 5 displays the 180-degree rotated image on the monitor.

Although the output image displayed on the monitor is switched through the processes of steps S103 to S105, a hysteresis angle θh is applied to the first and second predetermined angles in order to switch the image smoothly. The hysteresis angle θh is an angle provided for smooth switching of the displayed image, defined as an angle smaller than the difference between the first predetermined angle and the second predetermined angle.

When the detected angle θ changes from 180 degrees toward 0 degrees, the angles at which the image is switched become (first angle − θh) degrees and (second angle − θh) degrees.
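A sketch of the decision flow of steps S101 to S105 with the hysteresis added (the threshold values 85 and 95 come from the text; the hysteresis angle of 3 degrees is an invented example, chosen smaller than the 10-degree gap between the two thresholds):

```python
FIRST_ANGLE = 85.0    # first predetermined angle (degrees)
SECOND_ANGLE = 95.0   # second predetermined angle (degrees)
HYSTERESIS = 3.0      # must stay below SECOND_ANGLE - FIRST_ANGLE

def select_image(theta: float, increasing: bool) -> str:
    """Choose the output image for the detected tilt angle theta.
    Moving from 0 toward 180 degrees the thresholds are the first and
    second angles themselves (steps S101/S102); moving back from 180
    toward 0 they are lowered by the hysteresis angle, so the display
    does not flicker when theta hovers near a threshold."""
    first = FIRST_ANGLE if increasing else FIRST_ANGLE - HYSTERESIS
    second = SECOND_ANGLE if increasing else SECOND_ANGLE - HYSTERESIS
    if theta < first:
        return "non-rotated"      # step S103
    if theta < second:
        return "rotated 90"       # step S104
    return "rotated 180"          # step S105

print(select_image(90.0, True))    # rotated 90
print(select_image(83.0, False))   # still rotated 90, since 83 >= 85 - 3
```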

This embodiment describes the case in which the 90-degree rotated image is displayed between the non-rotated image and the 180-degree rotated image. However, output images corresponding to images formed by rotating the frame image about the pan axis by angles in the range from 0 to 180 degrees other than 90 degrees may also be displayed, before and after the 90-degree rotated image, between the non-rotated image and the 180-degree rotated image. As a result, the image can be switched with little feeling of incongruity.

In particular, as illustrated in FIG. 2D, a 45-degree rotated image 32, corresponding to an image formed by rotating the frame image by 45 degrees about the pan axis, is displayed between the non-rotated image 31 and the 90-degree rotated image 33 of FIG. 2C when the angle θ detected by the tilt angle sensor 14 is 85 degrees. In addition, a 135-degree rotated image 34, corresponding to an image formed by rotating the frame image by 135 degrees about the pan axis, is displayed between the 90-degree rotated image 33 and the 180-degree rotated image 35 when the detected angle θ is 95 degrees.

FIG. 5 illustrates the relationship between the tilt angle (detected angle θ) of the camera platform system 1 and the rotation angle of the frame image displayed on the monitor. In FIG. 5, the hysteresis angle θh is omitted.

Normally, the non-rotated image would switch suddenly to the 180-degree rotated image when passing the position directly below the camera platform system 1. In this embodiment, however, the rotation angle of the output image about the pan axis increases gradually in the immediate vicinity of the position below the camera platform system 1 before the 180-degree rotated image is displayed.

When the tilt angle changes in the direction from 180 degrees to 0 degrees, output images in which the rotation angle about the pan axis gradually decreases in the vicinity of the position below the camera platform system 1 are displayed, starting from the 180-degree rotated image, before the non-rotated image is shown.

The 45-degree rotated image 32 and the 135-degree rotated image 34 illustrated in FIG. 2D cannot be obtained merely by the simple changes of read addresses of the image memory 6 described above. However, the 45-degree rotated image 32 and the 135-degree rotated image 34 can be formed by calculating the pixel data at each display pixel position by interpolation, based on the values of the adjacent pixel data obtained when rotating the data recorded in the image memory and on the distances from those pixels.

For example, as illustrated in FIG. 6, to rotate the image data recorded in the image memory 6 about the pan axis, the data of a pixel P required at the output of the camera platform system 1 are obtained by interpolation at the pixel position after rotation. When the data values of the four pixels adjacent to the pixel P after rotation are denoted p11, p12, p21 and p22, the pixel data of the pixel P can be obtained by loading these values from the image memory 6 into the CPU 5 and performing the calculation using the following linear interpolation expression. The inter-pixel distance is taken as 1 (one pixel pitch).

P = p11(1 - d1)(1 - d2) + p12(1 - d1)d2 + p21d1(1 - d2) + p22d1d2

Here, d1 and d2 are the distances from the pixel P to the four rotated pixels in the two mutually orthogonal directions.
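The expression above is ordinary bilinear interpolation. A sketch of how an arbitrary-angle rotated image could be generated with it (the function names are illustrative, and per-pixel Python loops stand in for what the patent performs in the CPU 5 or in hardware):

```python
import math

def bilinear(img, xf, yf, fill=0.0):
    """Sample img (list of rows) at the fractional position (xf, yf)
    using P = p11(1-d1)(1-d2) + p12(1-d1)d2 + p21 d1(1-d2) + p22 d1 d2,
    where d1, d2 are fractional distances (pixel pitch taken as 1)."""
    h, w = len(img), len(img[0])
    x0, y0 = int(math.floor(xf)), int(math.floor(yf))
    if x0 < 0 or y0 < 0 or x0 + 1 >= w or y0 + 1 >= h:
        return fill
    d1, d2 = yf - y0, xf - x0
    p11, p12 = img[y0][x0], img[y0][x0 + 1]
    p21, p22 = img[y0 + 1][x0], img[y0 + 1][x0 + 1]
    return (p11 * (1 - d1) * (1 - d2) + p12 * (1 - d1) * d2
            + p21 * d1 * (1 - d2) + p22 * d1 * d2)

def rotate(img, degrees, cx, cy, fill=0.0):
    """Rotate img by the given angle about (cx, cy): for every output
    pixel, sample the source at the inversely rotated position."""
    h, w = len(img), len(img[0])
    a = math.radians(degrees)
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xs = cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a)
            ys = cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a)
            out[y][x] = bilinear(img, xs, ys, fill)
    return out
```

With the rotation center (cx, cy) at the image center this yields the 45-degree and 135-degree images of FIG. 2D; moving the center to the object region gives the variant of Embodiment 2.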

This embodiment does not describe the interlaced mode, but the basic idea for the interlaced mode is the same as in the method described above, except that the corresponding frame processing is inserted.

The pixel interpolation method is not limited to the four-adjacent-pixel interpolation described as an example; higher-order interpolation using a larger set of pixels, or the data of the preceding frame, may also be used.

In addition, in order to reduce the computation time, parallel processing using multiple image memories can be performed.

In addition, each angle (45 degrees, 90 degrees, 135 degrees, 180 degrees, 85 degrees and 95 degrees) and the other numerical values in the embodiments described above are merely examples, and other angles or numerical values may also be set.

This embodiment discloses a case in which the frame image is rotated about the pan axis, but this axis need not be exactly the pan axis: the frame image may be rotated about an axis inclined by an angle in the range from 0 to 5 degrees with respect to the exact pan axis, corresponding to the tilt range from 85 to 95 degrees described above. As for the central axis about which the frame image is rotated, the image may be rotated about any axis, as long as it rotates without a feeling of incongruity. However, it is desirable that the central rotation axis be an axis passing through a point in the frame image located between the center of the image and the region where the moving object appears (more preferably, between the center of the image and the position of the center of gravity of the moving object, or at the position of the center of gravity of the moving object). The same applies to the center of rotation in the embodiments described below.

Embodiment 2

Embodiment 1 describes the case in which the center of rotation of the image about the pan axis is assumed to be the center of the image; however, as in the output images 41, 43 and 45 illustrated in FIG. 7A, there is also the case in which the moving object is not at the center of the image.

In this case, a detection function which detects the region where the object is moving in the frame image (preferably the center position of that region) is provided, and the result is transmitted to the CPU 5 so that a rotated image centered on the object region can be obtained, as illustrated in FIG. 7B.

The function of detecting the region of a moving object can be achieved, for example, as follows. The amount of movement in the image is calculated by the CPU 5 based on the difference between the image data of the current frame and the previous frame. Then, the magnitude of the image motion corresponding to the change of the tilt angle, obtained from the output of the tilt angle sensor 14, is loaded into the CPU 5, and the region where the resulting displacement is small is detected as the region of the tracked moving object (optionally, the center of that region).
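A block-level sketch of this detection (block size, scoring and names are illustrative, and the comparison against the tilt-sensor-derived motion is reduced here to "smallest inter-frame difference wins", since the tracked object is the part of the scene that stays put while the camera tilts):

```python
def detect_object_block(prev, curr, block=4):
    """The camera is tilting to track the object, so the object stays
    roughly fixed in the frame while the background moves: the block
    with the smallest inter-frame difference is taken as the object
    region (its centre can then serve as the rotation centre).

    prev, curr: grayscale frames as lists of rows of equal size."""
    h, w = len(curr), len(curr[0])
    best, best_score = None, None
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            score = sum(abs(curr[y][x] - prev[y][x])
                        for y in range(by, by + block)
                        for x in range(bx, bx + block))
            if best_score is None or score < best_score:
                best, best_score = (bx, by), score
    return best  # top-left corner of the detected object block
```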

As in Embodiment 1, a 90-degree rotated image 48 (optionally, 45-degree and 135-degree rotated images as well) is displayed in the immediate vicinity below the camera platform system, centered on the previously obtained region of the moving object. In other words, a non-rotated image 46, the 90-degree rotated image 48 and a 180-degree rotated image 50 are sequentially displayed on the monitor. As a result, the feeling of incongruity caused by the output image suddenly switching from the non-rotated image 46 to the 180-degree rotated image 50, and by the person in the image suddenly jumping to the right or left, can be prevented.

Embodiment 3

Embodiments 1 and 2 describe cases in which each rotated image was generated only by read control (image processing) of the image memory 6. However, a greater effect can be achieved by adding movement of the imaging lens 11.

For example, in addition to the operation of Embodiment 1, the imaging lens 11 performs an optical zoom operation from its previous zoom state toward the wide-angle side, by an instruction from the CPU 5. Thus, since the object comes close to the center of the image on the monitor, an effect similar to that provided by the method of generating a rotated image centered on the object region, described in Embodiment 2, can be expected. In addition, because the surrounding scenery is also displayed when the optical zoom operation toward the wide-angle side is performed, the rotation processing about the pan axis becomes clearer.

FIG. 8A illustrates output images 61, 62 and 63 displayed on the monitor when the imaging lens 11 performs only the optical zoom operation toward the wide-angle side (hereinafter called wide zoom) in the immediate vicinity below the camera platform system 1.

When the zoom operation toward the wide-angle side is performed in the immediate vicinity below the camera platform system 1, an image showing a wide range of the object's surroundings is displayed, as illustrated in the output image 63, although the object itself appears small in the image.

In Embodiments 1 and 2, fixed data such as black were displayed in the peripheral ranges without image data when the 90-degree rotated image was shown. However, as illustrated in FIG. 8B, a 90-degree rotated image 68 in which the image also fills the peripheral range can be displayed by magnifying (a part of the 90-degree rotated image) in accordance with the size of the object in the non-rotated image 66 or the 180-degree rotated image 70, through an electronic zoom operation.

In the electronic zoom operation, the distances between pixels are calculated by the CPU 5 in accordance with the magnification and, as with the pixel data interpolation described in Embodiment 1, the data at the positions required for the cut-out display centered on the object are generated based on the data of the four adjacent pixels. The 90-degree rotated image 68 illustrated in FIG. 8B is an image generated by magnification processing, through the electronic zoom operation, of the part of the output image 63 of FIG. 8A centered on the object, which has been cut out and rotated by 90 degrees about the pan axis.
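A sketch of the electronic zoom step (nearest-pixel sampling is used here for brevity, whereas the text specifies the same four-adjacent-pixel interpolation as in Embodiment 1; the names and the magnification handling are illustrative):

```python
def electronic_zoom(img, cx, cy, mag):
    """Cut out a region of size (w/mag, h/mag) centered on (cx, cy)
    and enlarge it to the full output size.  For each output pixel the
    source position is derived from the magnification; nearest-pixel
    sampling stands in for the four-pixel interpolation of the text."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xs = int(cx + (x - w / 2) / mag)
            ys = int(cy + (y - h / 2) / mag)
            if 0 <= xs < w and 0 <= ys < h:
                out[y][x] = img[ys][xs]
    return out
```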

FIG. 8C illustrates the relationship between the tilt angle, the electronic zoom magnification and the zoom lens state in the camera platform system. The relationship between the zoom lens state and the angle of view can be stored as a predefined table in the CPU 5, or zoom control can be performed so as to obtain an object image of almost the same size, by keeping in memory the size of the object immediately before the zoom operation.

In addition, in order to switch the image with less feeling of incongruity, the moving speed of the object can be detected, and the angle at which the rotated image starts to be displayed, or the number of rotated images displayed between the non-rotated image and the 180-degree rotated image, can be changed in accordance with the moving speed. If the speed of the object is high (if it is a second speed, higher than a first speed), little incongruity arises even when the rotated image is displayed from a smaller angle (such as a tilt angle smaller than 85 degrees) or the number of rotated images is reduced. If the moving speed of the object is low (if it is the first speed), little incongruity arises when the number of rotated images displayed in the immediate vicinity below the camera platform system is increased.
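Following this paragraph, the speed-dependent choice could be sketched as below (the concrete angle lists and the threshold are invented for illustration; the text only requires fewer intermediate images at the higher speed and more of them at the lower speed):

```python
def intermediate_angles(speed, threshold=1.0):
    """Return the schedule of rotation angles shown between the
    non-rotated and the 180-degree image.  A slow object gets more
    intermediate steps near the position below the camera; a fast one
    gets fewer, since the switching then feels natural anyway."""
    if speed > threshold:              # the "second", higher speed
        return [90.0]                  # a single intermediate image
    return [45.0, 90.0, 135.0]         # denser schedule when slow
```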

The function of detecting the moving speed of the object can be achieved by the CPU 5 calculating the moving speed based on the magnitude of the change per unit time of the output signal of the tilt angle sensor 14.

In addition, the camera platform system 1 has a function of performing pan rotation in accordance with a pan instruction signal input from the remote controller, but a pan instruction signal input during the processing of displaying the rotated image in the immediate vicinity below the camera platform system 1 may be ignored. In other words, the pan rotation may not be performed even if the pan instruction signal is input. As a result, a processing delay when generating the rotated image, or an increase in the size of the processing circuits, can be prevented.

In addition, although each of the above embodiments describes the case in which the rotated image is generated by the CPU 5, the image memory 6 and the image signal processing circuit 4 built into the camera platform, these may also be provided outside the camera platform to generate the rotated image. In that case as well, the camera platform system is configured to include the CPU, the image memory and the image signal processing circuit provided outside the camera platform.

While the present invention has been described with reference to exemplary embodiments, it should be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

This application claims priority from Japanese Patent Application No. 2010-000890, filed January 6, 2010, which is hereby incorporated by reference herein in its entirety.

Industrial applicability

A camera platform system can be provided which reduces unnatural changes in the direction of movement of an object in the visual image.

1. A camera platform system comprising:
a camera configured to capture an object image to generate a frame image;
a camera platform configured to rotate the camera about a pan axis and a tilt axis; and
an image processor configured to generate a display image based on the frame image,
wherein, when the camera rotates about the tilt axis, the image processor generates a first display image, corresponding to an image formed by rotating the frame image by an angle greater than 0 degrees and smaller than 180 degrees, before generating a second display image corresponding to an image formed by rotating the frame image by 180 degrees, and
wherein the image processor generates the first display image before the camera reaches, in its rotation about the tilt axis, the angle at which the camera points straight down.

2. The camera platform system according to claim 1, wherein the image processor has a detection function that detects a moving object in the frame image and, when the moving object is detected, generates the first and second display images corresponding to images formed by rotating the frame image about the position at which the moving object is detected.

3. The camera platform system according to claim 1, wherein the image processor performs an optical zoom operation, or an electronic zoom operation which magnifies a portion of the frame image, toward the wide-angle side of the camera to generate the first display image at the predetermined angular position.

4. The camera platform system according to claim 1, wherein the image processor has a detection function which detects the speed of a moving object in the frame image, and is configured to change the number of images generated as the first display image when the movement speed is a second speed, higher than a first speed.

5. The camera platform system according to claim 1, wherein the camera platform has an operating function that rotates the camera about the panning axis in accordance with a pan instruction signal input to the camera platform, and
wherein, while the first display image is displayed, the camera platform does not rotate about the panning axis even if the pan instruction signal is input.

6. The camera platform system according to claim 1, wherein the panning axis and the tilt axis are perpendicular to the optical axis of an optical system provided in the camera.

7. A camera platform system, comprising:
an imaging lens configured to capture an object image;
a camera platform which includes a first drive mechanism configured to rotate the imaging lens about a first axis perpendicular to the vertical direction to change the angle between the optical axis of the imaging lens and the vertical direction, and a second drive mechanism configured to rotate the imaging lens and the first drive mechanism about a second axis perpendicular to the first axis; and
an image processor configured, as the tilt angle changes, to generate a first display image corresponding to an image formed by rotating a frame image, obtained before the tilt angle reaches a first angle greater than 0 degrees and smaller than 180 degrees, when the tilt angle reaches the first angle, the first angle being different from 0 degrees, and to generate a second display image corresponding to an image formed by rotating by 180 degrees a frame image obtained before the tilt angle reaches the first angle, when the tilt angle reaches a second angle different from the first angle.

8. The camera platform system according to claim 7, wherein the first axis and the second axis are perpendicular to the optical axis of an optical system provided in the camera.
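For illustration only (not part of the claimed subject matter): the staged rotation of claim 1 can be sketched as follows, assuming a linear ramp of the display rotation angle as the tilt angle approaches the straight-down position. The function names, the ramp profile, and the 80/90-degree thresholds are hypothetical choices; rotating a real frame by an intermediate angle would additionally require an interpolating image-rotation routine from an imaging library.

```python
import numpy as np

def display_rotation_angle(tilt_deg, ramp_start_deg=80.0, flip_deg=90.0):
    """Rotation angle (degrees) applied to the frame image before display.

    Below ramp_start_deg the frame is shown as captured (0 degrees).
    Between ramp_start_deg and flip_deg the angle ramps linearly through
    intermediate values greater than 0 and less than 180 degrees -- the
    "first display images" of claim 1.  At and beyond flip_deg (camera
    pointed directly downward) the full 180-degree rotation -- the
    "second display image" -- is applied.
    """
    if tilt_deg <= ramp_start_deg:
        return 0.0
    if tilt_deg >= flip_deg:
        return 180.0
    return 180.0 * (tilt_deg - ramp_start_deg) / (flip_deg - ramp_start_deg)

def flip_180(frame):
    """180-degree rotation of a frame image: reverse both pixel axes."""
    return frame[::-1, ::-1]
```

With this schedule the displayed image turns gradually while the camera passes through the straight-down position, instead of flipping abruptly, so the apparent direction of motion of a tracked object changes less unnaturally.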



 
