Camera, system having a camera, camera operating method, and method for deconvolution of a recorded image

FIELD: physics, optics.

SUBSTANCE: the invention relates to a camera and to a system having a camera, wherein the ratio of the distance between the lens and the sensor to the focal length is varied during exposure. The invention also relates to a method of deconvolving image data. The rate of variation is set so as to enable formation of an image that is invariant with respect to motion.

EFFECT: reduced motion blur.

17 cl, 24 dwg, 1 tbl

 

Technical field

The invention relates to a camera comprising a lens and an image sensor, in which the ratio of the distance between the sensor and the lens along the optical axis to the focal length of the lens is varied during the exposure time of an image. The invention also relates to a method of deconvolving the image data captured by the image sensor.

Background art

Traditional image sensors, such as CMOS- and CCD-based sensors, integrate all light falling on them during the exposure time. This provides sharp images of stationary objects, but leads to spatial blur for objects that move while the shutter is open. Objects that are not in focus are also blurred. The so-called motion blur is proportional to the exposure time and to the velocity of the object. The former causes much inconvenience when the camera operates in low-light conditions. In such circumstances, long exposure times are desirable to achieve sufficiently high signal-to-noise levels and adequately render the dark parts of the scene. For this reason, many cameras suffer from the classic trade-off between motion blur and dynamic range: the exposure time must be long enough to capture enough light, but must be reduced to limit motion blur. Within the framework of the invention, the term "camera" covers both still cameras and video cameras.

A camera and a method as described in the opening paragraph are known from the article "Flexible Depth of Field Photography", H. Nagahara, S. Kuthirummal, C. Zhou, and S.K. Nayar, European Conference on Computer Vision (ECCV), October 2008.

The article by Nagahara et al. shows a still camera in which the distance between the sensor and a lens of fixed focal length is varied: the sensor sweeps over a distance during the exposure time. The sweep distance is set so as to cover the range of scene depths, in order to increase the depth of field. The conventional camera disclosed in the article by Nagahara et al. thus reduces defocus blur: to this end, the sensor sweeps along the optical axis, covering a certain depth range.

The sweeping sensor produces a composite image that is, in essence, a combination of several images at different depths of focus. A point spread function (PSF) can be calculated. The point spread function is, in essence, the image of a point of the object as registered on the sensor. For an object that is completely in focus, the spread of the point is zero and the PSF is thus a Dirac function; the Fourier transform of this function gives a constant value for all frequencies. For a point that is not in focus, the PSF is a spread function; for an object moving relative to a stationary camera, the PSF is spread over a certain distance due to the motion. From the PSF, an inverse point spread function (IPSF) can be calculated. Deconvolution of the integrated image then yields a sharp image and allows a greater depth of field to be obtained. In fact, Nagahara et al. show that with a sweeping sensor the PSF for stationary objects at different distances becomes substantially the same. Thus, deconvolving the source image with a single IPSF yields an image that is sharp at all distances, or at least over a larger range of distances, providing a greater depth of field for stationary objects.

Although defocus blur and its reduction can be an important task, the main problem, as explained above, exists for moving objects, namely motion blur, especially at increased exposure times.

Nagahara et al. mention the problems associated with motion blur, but offer no solution.

A known solution for reducing motion blur involves moving the sensor perpendicular to the optical axis. This solution is known, for example, from the article "Motion-Invariant Photography", A. Levin, P. Sand, T. S. Cho, F. Durand, W. T. Freeman, SIGGRAPH, ACM Transactions on Graphics, August 2008. In essence, this solution involves moving the sensor from left to right (or vice versa) during the exposure, in order to reduce the motion blur caused by horizontal movement.

In addition to the solution proposed in the article by Levin et al., motion blur can be inverted by video processing. This is achieved by motion estimation and inverse filtering along the motion trajectory, known, for example, from US 6930676. In practice, however, this procedure suffers from inaccurate motion vectors, especially in occlusion areas. To be able to apply the inverse filter, the trajectory must be known and the motion vectors must be computed. In many standalone cameras used for professional purposes, motion vectors may be unavailable. For example, the recordings of many cameras used for surveillance or activity monitoring serve only as input to computer-vision analysis procedures (for example, automatic detection of suspicious objects, fall detection for elderly people, etc.). In these scenarios, the quality of the raw input frames determines the performance of the detection system. Sufficiently accurate motion vectors are hardly available in real time in the camera, and subsequent processing of the recorded video offers no possibility for a real-time tracking system. For a camera that takes a single shot, it is essentially impossible to determine the motion vectors accurately. In occlusion areas, motion estimation is also extremely difficult and inaccurate, if possible at all. In low light, problems arise due to lack of light.

Secondly, most traditional cameras have an adjustable shutter and aperture, which limit the light entering through the lens in the temporal and spatial dimensions. These can usually be described as filters with rectangular pass characteristics (i.e. constant sensitivity within a finite interval), corresponding to sinc modulation in the respective temporal and spatial frequency domains. As a result, some high frequencies are completely suppressed at capture and cannot be recovered by inverse FIR filtering, even with full motion information. In practice, inverse filtering must be done with great care to avoid amplifying noise and introducing artifacts. In conclusion, the prior art offers no effective and simple means of reducing motion blur.

Summary of the invention

The object of the invention is to reduce motion blur.

To this end, the camera according to the invention is characterized in that it is designed to operate such that the following holds:

Vsweep*A/b² > 0.25 s⁻¹,

where Vsweep is the speed of movement of the sensor and/or the lens along the optical axis and/or the rate of change of the focal length of the lens, A is the aperture of the lens, and b is the distance between the lens and the sensor.

Vsweep, A and b are expressed in the same unit of distance.

"Expressed in the same unit of distance" means that they are, for example, in mm/s and mm, or in cm/s and cm.
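For illustration, a minimal sketch (Python) of this condition, using the parameter values of one of the examples given in the detailed description (Vsweep = 22.4 mm/s, A = 2 mm, b = 4.002 mm):

```python
def sweep_parameter(v_sweep, aperture, b):
    """Vsweep*A/b^2 in 1/s, with v_sweep in mm/s and aperture and b in mm
    (any common length unit works, as stated above)."""
    return v_sweep * aperture / b**2

# Values of Example 4 of the detailed description:
p = sweep_parameter(22.4, 2.0, 4.002)
print(p, p > 0.25)   # ~2.8 1/s -> the motion-invariance condition holds
```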

At first glance this may seem a contradiction: motion blur is caused by movement of the object in a direction perpendicular to the optical axis, for example in a horizontal or vertical direction, whereas the sensor moves along the optical axis. The captured image shows motion blur, for example in a horizontal or vertical direction depending on the direction of movement of the object, and it therefore does not seem a logical choice to reduce motion blur by moving the sensor along the optical axis; moving the sensor perpendicular to the optical axis, as in Levin et al., seems the logical choice.

However, the inventors have realized that a movement of the sensor relative to the focal plane along the optical axis can actually be used to effectively counteract motion blur when certain conditions are met. Thanks to the sweep of the sensor along the optical axis, the motion-blur kernel becomes to a high degree the same over a range of object speeds; the PSF is highly similar over this range. This allows motion-invariant images to be produced using an IPSF that provides a sharp image for a range of object speeds.

The inventors have realized that the maximum speed of the object, i.e. the range of object speeds within which motion-invariant or almost motion-invariant image formation is possible, depends on a number of factors, including:

- the sweep speed of the sensor or of the focus sweep,

- the aperture of the main lens, and

- the average distance between the sensor and the lens.

For embodiments in which the sensor is moved, the average distance can be defined as the sensor-to-lens distance at half the travel of one sweep.

With reference to the above, from the perspective of the present invention it can be calculated that with the setup of the article by Nagahara et al. motion-invariant image formation is possible up to a certain speed, but this range is, for all practical purposes, insignificant and inadequate, being a range from 0 km/h to 0.1 km/h. In practice this means, in line with the statements in the article by Nagahara et al., that no effective motion-invariant imaging can be achieved with the camera and the method disclosed in that article. Nagahara et al. were therefore right in stating that motion blur remains a problem. Using the idea of Nagahara et al., different PSFs would have to be used for different speeds, and motion-invariant image formation is for this reason impossible.

It is noteworthy that Nagahara et al. do not disclose any feature of motion-invariant image formation.

The sensor motion proposed by Nagahara et al., and the reason for this motion as disclosed and discussed in their article, is the passage through different depths of focus. The depth of focus is a feature that extends along the same axis as the sensor motion, i.e. the motion and the reason for the motion of the sensor are parallel to, and in fact coincide with, the optical axis.

The inventors have realized that the problem related to a phenomenon caused by movement of the object perpendicular to the optical axis can in fact be reduced by a movement of the sensor along the optical axis, subject to certain conditions.

The minimum value (0.25 s⁻¹) for Vsweep*A/b², which allows motion-invariant image formation for practical object speeds and distances, is at least one order of magnitude higher than what is known from the article by Nagahara et al.

It is noteworthy that the movement of the sensor is, from the point of view of the invention, regarded as a movement of the sensor relative to the focal plane. Such a movement can be accomplished by moving the sensor, by moving the lens or, alternatively, by changing the focal length of the lens. In all of these embodiments, the sensor sweeps through a range of focal planes. Of course, a combined movement is also possible, in which the sensor is moved and the focal length is changed simultaneously. Hereinafter, for simplicity, such a displacement or movement of the sensor, whatever the implementation, will be called "sensor movement". The fact that the sensor or the lens moves along the optical axis (or the focal length changes) does not preclude a simultaneous movement of the sensor in directions perpendicular to the optical axis.

Preferably, the parameter Vsweep*A/b² is more than 0.5 s⁻¹, most preferably more than 1 s⁻¹.

Preferably, the parameter Vsweep*A/b² is less than 15 s⁻¹, most preferably less than 5 s⁻¹. Too large a parameter requires very high sweep speeds or a very small lens-to-sensor distance.

Preferably, the exposure time is between 0.005 and 0.05 seconds, most preferably between 0.01 and 0.05 seconds.

Too long exposure times would require a very substantial movement of the sensor, possibly beyond its range of movement.

In the preferred range of exposure times, the sensor movement stays within range while still providing relatively long exposure times. In addition, in this range of exposure times, video sequences can be produced.

In embodiments, the ratio is changed such that the sensor reaches positions outside the range within which any object is in focus.

From the point of view of the invention, it is advantageous in embodiments to move the sensor quickly over a large range of positions, or to quickly change the focal length, beyond the positions at which any part of the scene is in focus. The sensor sweep thus extends outside the range of all possible depths of field. Such positions provide information that makes it possible to reduce motion blur. Extending the sensor movement beyond the range required for the depth of field helps in forming motion-invariant images.

In embodiments, the sweep is made dependent on the exposure time, at least in the upper range of exposure times. The exposure time then determines the amount of the intended sensor displacement.

In the article by Nagahara et al., the sensor displacement is limited to passing through the focal range. In the invention, the sensor speed is the determining factor; therefore, in preferred embodiments, the sensor displacement varies with the exposure time for different exposure times.

Given that in the present invention the speed is the determining factor, for long exposure times the sensor displacement is relatively large, while for shorter exposure times the movement can be made relatively smaller and still effectively reduce motion blur.

In embodiments, for the lower range of exposure times, the displacement speed may be independent of the exposure time. For such exposure times, the movement speed can be chosen such that motion blur is effectively reduced.

The invention also relates to a system for recording images comprising a camera, optionally containing a deconvolution unit for deconvolving the recorded image.

The invention additionally relates to a method of operating a camera comprising a lens and an image sensor, in which the ratio of the distance between the image sensor and the lens along the optical axis to the focal length is changed during the exposure time of the image, while maintaining the following:

Vsweep*A/b² > 0.25 s⁻¹,

where Vsweep is the movement of the sensor and/or the movement of the lens along the optical axis and/or the change of the focal length per second, A is the aperture, and b is the distance between the lens and the sensor.

Brief description of the drawings

These and other objects and advantageous aspects will become apparent from the exemplary embodiments described with reference to the following figures.

Fig. 1 illustrates various embodiments of the invention.

Figures 2A-2C illustrate the effect of a shutter acting during an infinitesimal time, i.e. described by a Dirac function.

Figures 3A-3C illustrate the action of a conventional shutter.

Figures 4A-4C illustrate the action described in the article by Levin et al.

Figures 5A-5C illustrate the action described in the article by Nagahara et al.

Figures 6A-6C illustrate the present invention.

Figures 7A-7C illustrate inverse filtering of the PSF of Fig. 6C.

Fig. 8 illustrates an algorithm, a system and a method for filtering a recorded image.

Fig. 9 illustrates the movement of the sensor.

Fig. 10 is an example of an aperture optimized using additional information about the speeds of the objects occurring in the scene.

Fig. 11 shows the integration of the double cone for the overall system, not optimized for speed, when the object moves.

Fig. 12 shows an embodiment in which the sensor is orthogonally offset by a distance d_sensor.

The figures are not drawn to scale. In general, identical components are denoted by the same reference numerals in the figures.

Detailed description of preferred embodiments

The invention is based on the insight that, by changing the settings of the camera during the exposure time, the characteristics of the recorded image can be changed. This fact is used to construct a camera in which motion blur, within a practical speed range, is almost independent of the motion of the objects and in which, preferably, the recorded signal is well suited for inverse filtering. This makes it possible to generate sharp images from recordings with longer exposure times without the need for motion estimation. In other words, a camera according to the invention can provide sharp images with a high SNR even under very difficult optical imaging conditions: objects moving at different, unknown speeds at low light levels. For this purpose, the sensor and/or the lens is moved along the optical axis during the exposure time, or, alternatively or additionally, the focal length of the lens is changed during the exposure time. Changing the focal characteristics of the lens has the same effect as moving the sensor towards or away from the lens, or moving the lens towards or away from the sensor. The common feature of these various embodiments is that the ratio of the distance between the sensor and the lens along the optical axis (b) to the focal length (f) is changed during the exposure.

Fig. 1 illustrates various embodiments of the invention.

Object 1 is focused by lens 2. The focused image is shown in Fig. 1 as the point of intersection of the light rays coming from the object. In the upper part of Fig. 1, the camera has a movable sensor 3. Sensor 3 is moved along the optical axis during the exposure time; in the upper part of Fig. 1 this is shown by dashed lines. The movement of the sensor can be expressed as the distance traversed per second. This sweep rate is denoted below as Vsweep. The lower part of Fig. 1 shows another variant of the proposed relative motion of sensor 3 and the lens. In this embodiment, sensor 3' is fixed and the characteristics of lens 2' are changed, for example by changing the shape of the lens or its refractive index. The lens itself may also be moved. Due to the changing characteristics of the lens, the focal plane moves, providing the same effect as moving the sensor as shown in the upper part of Fig. 1. The intended movement of the sensor relative to the lens, according to the invention, is carried out by any of the above methods or any combination thereof. A combination provides, for example, that part of the movement is accomplished by moving the sensor and part of it, for example for fine tuning of the rate or for extending the possible range of relative movement, by changing the focal length of lens 2. For a sweeping sensor, the sweep rate Vsweep is the displacement of the sensor per second. For a sweeping lens, the sweep rate Vsweep is the displacement of the lens per second. If both move, the sweep rate equals the change of the distance between them per second.

When the sensor and the lens are fixed and the focal length f of the lens is changed, which has the same effect as moving the sensor and/or the lens, the sweep rate can be calculated as follows.

From the relation:

1/b = 1/f - 1/v

it follows that f(t) = 1/(1/v + 1/b(t)), where b(t) denotes the position of the image plane.

Changing the focal length as a function of time thus moves the focal plane as follows:

f(t) = (v/(v + b(t)))*b(t)

and, in a first-order approximation, assuming that v is almost always much greater than b:

df(t)/dt = db(t)/dt

In this case, Vsweep is defined as the change of the focal length per second.

In other words, moving the sensor towards and away from the lens at constant focal length is equivalent to keeping the sensor and the lens in fixed positions while increasing and decreasing the focal length of the lens, and the sweep speeds are, in a first-order approximation, equivalent parameters.
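This first-order equivalence can be checked numerically. A minimal sketch (Python), using illustrative values (object at 10 m, image plane swept linearly at 22.4 mm/s, as in the example given further below):

```python
import numpy as np

# Numeric check of df/dt ~ db/dt when v >> b (lengths in mm, time in s).
v = 10_000.0                      # object distance
T = 0.02                          # exposure time
t = np.linspace(0.0, T, 2001)
b = 4.0 + 22.4 * t                # linearly swept image-plane position
f = 1.0 / (1.0/v + 1.0/b)         # focal length producing the same sweep

print(np.gradient(b, t)[0])       # db/dt = 22.4 mm/s
print(np.gradient(f, t)[0])       # df/dt ~ 22.38 mm/s: first-order match
```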

For a combined action, in which the sensor and/or the lens moves and the focal length f changes, the sweep rate is given by the sum of the two effects.

In all examples, the ratio of the distance between the lens and the sensor (b) to the focal length f is changed during the sweep, either by modifying b (by moving the sensor or the lens or both), or by changing f, or by performing both operations simultaneously or even sequentially.

Let us explain the principles of the invention with reference to the figures.

For simplicity, image formation is considered below as a 2D (two-dimensional) process (in time and one spatial dimension, the latter represented in the figures by the x-axis). However, the principles discussed in this document apply to two orthogonal spatial dimensions (x, y).

To better understand the principle of motion-invariant image formation, it is helpful first to consider the spatial and temporal sampling characteristics of traditional cameras. In traditional optical imaging, such as photography, the camera is focused on the object of interest by setting the offset between the sensor and the lens to the correct distance. In this case, high sharpness of still and moving images could be achieved if the spatio-temporal action of the shutter were infinitely small in space and time (the Dirac function δ(x,t), i.e. a particular point in space and time), as shown in figures 2A-2C. Fig. 2A corresponds to the sampling of light in space-time. Fig. 2B shows the corresponding spatio-temporal amplitude spectrum, and Fig. 2C shows a cross-section of the 2D amplitude spectrum for a stationary (solid line) and a moving (dashed line) object. In space-time (Fig. 2A), a stationary object remains in its position at all times and is thus shown in Fig. 2A as a line at a fixed position x0, i.e. a vertical line. The vertical axis represents normalized time, where 0 denotes one time and 1 a different, later time. A moving object, moving from left to right, changes position during the exposure, shifting to the right. The small square in the middle represents the exposure time, which is infinitesimal in time and space, reflected by an infinitely small point in space and time. For this idealized, hypothetical shutter configuration, the sensor records all frequencies with the same intensity, which leads to identical frequency characteristics for different motions. Fig. 2B presents the Fourier transform of the received signal; this function equals 1 for all values. For the Dirac function, all frequencies are sampled equally, and the amplitudes at all wave numbers are the same. Fig. 2C shows the point spread function (PSF), with the amplitude plotted on the vertical axis in logarithmic scale against the wave number on the horizontal axis. Fig. 2B shows that the Fourier transform of the Dirac function gives a constant value. Since all frequencies are sampled equally, the amplitudes at all wave numbers are the same at all possible speeds, i.e. the PSF is constant, and the PSFs are thus equal. Equality of the PSFs means that at all object speeds the image is equally sharp and can be reconstructed by deconvolution of the image. Thus, a Dirac shutter would yield a sharp image at all speeds. However, a perfect Dirac shutter cannot be constructed, and a realistic approximation of it would not catch enough light to create a proper image: the image would be sharp for all object velocities in the image, but not enough light would be captured to make the objects visible, which is, to put it mildly, very impractical. In practice, the shutter speed (and aperture) is adjusted to the local conditions, and longer exposure times are required to capture sufficient light in dark ambient conditions while maintaining an adequate signal-to-noise ratio. One approach is to develop sensors that are more sensitive to light, which allows shorter exposure times and a closer approximation to the Dirac function. In reality, however, the sensitivity of the sensor is subject to restrictions.

Most traditional cameras have an adjustable shutter and aperture, which limit the light entering through the lens in the temporal (i.e. during the exposure time) and spatial dimensions. These can usually be characterized as filters with rectangular pass characteristics (i.e. constant sensitivity within a finite interval) in the time and space dimensions, corresponding to sinc modulation in the respective temporal and spatial frequency domains. As a result, some high frequencies are completely suppressed at capture and cannot be restored by inverse FIR filtering, even with full motion information. In practice, inverse filtering must be done with great care to avoid amplifying noise and introducing artifacts. This is illustrated in figures 3A-3C. Fig. 3A shows the light falling on the sensor. A real shutter has a finite extent, so the rectangle has a width in the x direction. The rectangle represents the light on the sensor, its width being the light spot on the sensor. The shutter is open during the shutter time t, plotted on the vertical axis from 0 at the start of the exposure to 1 at its end.

In the temporal frequency domain (Fig. 3B) and the spatial frequency domain (Fig. 3C), some high frequencies are suppressed, as shown by the dips in the figures. Because of this, some details are lost and some artifacts appear. In addition, three different PSFs are shown, typical of objects moving within the image at 0, 50 and 100 km/h at approximately 10 meters from the lens. These PSFs are represented by lines 31, 32 and 33, respectively. It can be seen that the three PSFs differ greatly. Since the PSF differs for different speeds, a single IPSF cannot be used to deconvolve the recorded image. In practice and in theory, this means that it is impossible to deconvolve the three images consistently with one inverse filter. In short, motion-invariant image formation is impossible.
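The effect of the rectangular shutter can be illustrated numerically. The following minimal sketch (Python, with an assumed magnification corresponding to a 4 mm lens and an object at 10 meters) builds the 1D motion-blur kernel for several object speeds and inspects its spectrum: for the moving objects, the sinc-shaped spectrum has deep minima (suppressed frequencies), and the kernels differ per speed, so no single inverse filter applies.

```python
import numpy as np

# Box-shutter motion blur: the kernel width grows with object speed, and the
# spectrum is sinc-like with deep minima whose positions depend on the speed.
T = 0.02                             # exposure time, s
mag = 4e-4                           # assumed magnification (4 mm lens, 10 m)
x = np.linspace(-0.5, 0.5, 8193)     # sensor coordinate, mm
dx = x[1] - x[0]

for v_kmh in (0.0, 50.0, 100.0):
    v_img = (v_kmh / 3.6) * 1000.0 * mag    # image-plane speed, mm/s
    width = max(v_img * T, dx / 2)          # blur extent during exposure
    psf = (np.abs(x) <= width / 2).astype(float)
    psf /= psf.sum()
    spectrum = np.abs(np.fft.rfft(psf))
    print(f"{v_kmh:5.0f} km/h: width {width:.3f} mm, "
          f"spectral minimum {spectrum.min():.2e}")
```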

For global motion, motion blur can be prevented by tracking the movement with the camera. Alternatively, this can be achieved by moving the sensor relative to the main lens at the desired speed along a transverse path (perpendicular to the optical axis) during the exposure time. However, in the recorded image only the objects moving at the target speed will be sharp, while all other areas of the scene remain blurred or become even more blurred than if no action had been taken.

Levin et al. proposed a solution to this problem, depicted in figures 4A-4C: they propose to move the sensor in the transverse direction along a parabolic trajectory in space-time. In space-time (Fig. 4A), the position of the light falling on the sensor follows a parabolic path during the exposure time. The parabolic trajectory creates the same motion blur for a range of speeds up to the maximum speed of the sensor. This is illustrated in Fig. 4C by the fact that the various PSFs for the three speeds are almost identical. The effective blur kernel can be found by time integration, and for the parabolic movement the blur kernel is proportional to 1/sqrt(x). Although the recorded image will be very blurry even for stationary objects, the blur can be eliminated using a fixed, motion-independent inverse filter kernel. Since the three PSFs are almost the same, a single inverse PSF, i.e. deconvolution of the image, can provide sharp images for all speeds. However, Levin et al. rightly note that their approach is only applicable to one-dimensional motion (for example, purely horizontal movement). In addition, the PSFs show that for short wavelengths the amplitude is zero due to the finite size of the aperture, which means that some details are permanently lost. Therefore, although Levin et al. offer a way of reducing the problems, their solution works only partially, only if the direction of motion is known, and only insofar as the motion is parallel to the movement of the sensor. For any other motion, or any motion component not parallel to the sensor movement, no solution is proposed, and the situation is as bad as shown in figures 3A-3C.

Nagahara et al. proposed a solution to another problem, namely the restriction on DOF, i.e. the depth of field. By moving the sensor along the optical axis, the sensor passes through the various focal planes, which allows an image with greater depth of field to be obtained by deconvolution of the recorded image. Nagahara et al. show that the PSFs for stationary objects at different distances from the lens can be made almost the same. Thus, a single IPSF can be used to deconvolve the image of different stationary objects at different distances from the lens, increasing the depth of field.

Figures 5A-5C show the case of a typical setup from the examples of Nagahara et al., not for stationary objects but for objects with different speeds. The PSFs 31, 32, 33 (Fig. 5C) for the different object speeds, in this case 0, 50 and 100 km/h as in figures 4A-4C, differ greatly. Since the PSFs differ greatly, applying a single inverse PSF will not yield a sharp image for the given speeds. Thus, although Nagahara et al. increase the depth of field (DOF), motion blur remains a problem: fast-moving objects give a blurred image.

Thus, none of the documents or approaches of the prior art makes it possible to effectively reduce motion blur when the motion or its direction is not known in advance, when the camera cannot be moved, or when motion vectors cannot be determined, which is often the case.

Figures 6A-6C illustrate the invention. The sensor is moved along the optical axis, and the following condition is maintained:

Vsweep*A/b² > 0.25 s⁻¹

In the example shown in figures 6A-6C, the following values apply:

Vsweep = 22.4 mm/s

A = 2 mm

b = 4.002 mm

distance from object to lens = 10 meters

exposure time = 0.02 s

Vsweep*A/b² = 2.8 s⁻¹

In space-time (Fig. 6A) the graph has the shape of a Diabolo (two cones joined at their vertices); the important aspect is that the PSFs for the different speeds 0, 50 and 100 km/h are almost the same (Fig. 6C).

Since the PSFs are almost the same, deconvolution of the recorded image with a single IPSF is possible, which yields a sharp image for all objects at the given speeds, i.e. motion-invariant, or almost motion-invariant, image formation is possible for any speed up to 100 km/h. A very important difference from the solution proposed by Levin et al. is that motion-invariant image formation is obtained for any speed in any direction perpendicular to the optical axis, up to 100 km/h at a distance of 10 meters. Thus, the restriction to a specific direction of the velocity inherent in the approach of Levin et al. is overcome. In addition, the PSFs do not exhibit the nulls seen in Levin et al. (and Nagahara et al.), which in principle allows more details to be reconstructed. Thus, by moving the sensor along the optical axis, it becomes possible to form motion-invariant images, without restriction on the direction of motion and with a higher degree of preservation of detail, when the conditions are met. A very important difference from the solution of Nagahara et al. is that the three PSFs are almost the same, which enables motion-invariant image formation for the given speeds, something the solution of Nagahara et al. does not allow.
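A minimal numeric sketch of this time integration (Python, 1D, geometric-optics assumptions) reproduces the effect with the example parameters above: the blur kernel is accumulated by sweeping the image plane while the object image shifts laterally, and the resulting kernels for 0, 50 and 100 km/h come out nearly identical.

```python
import numpy as np

# 1D time integration of the 'Diabolo' of Fig. 6A with the example values:
# at each instant the moving point image is blurred by the geometric defocus
# disk of the swept sensor; the integrated kernels barely depend on speed.
f, A = 4.0, 2.0            # focal length and aperture, mm
x_obj = 10_000.0           # object distance, mm (10 m)
T, V_sweep = 0.02, 22.4    # exposure time (s) and axial sweep speed (mm/s)

b_star = 1.0 / (1.0/f - 1.0/x_obj)   # in-focus sensor distance (lens law)
mag = b_star / x_obj                 # lateral magnification

xs = np.linspace(-0.3, 0.3, 4001)    # sensor coordinate, mm
dx = xs[1] - xs[0]
ts = np.linspace(-T/2, T/2, 2001)    # sweep centered on the in-focus crossing
dt = ts[1] - ts[0]

def sweep_psf(v_obj_kmh):
    """Time-integrated 1D blur kernel for an object moving at v_obj_kmh."""
    v_img = (v_obj_kmh / 3.6) * 1000.0 * mag   # image-plane speed, mm/s
    psf = np.zeros_like(xs)
    for t in ts:
        b = b_star + V_sweep * t                    # swept sensor position
        w = max(A * abs(b - b_star) / b_star, dx)   # defocus blur width
        psf += (np.abs(xs - v_img * t) <= w/2) / w * dt
    return psf / (psf.sum() * dx)

for v in (0, 50, 100):
    p = sweep_psf(v)
    print(v, "km/h -> kernel spread:", np.sqrt(np.sum(p * xs**2) * dx))
```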

It is noteworthy that in this example the object is assumed to be in focus in the middle of the exposure period. In Fig. 6A this is illustrated by the upper part of the Diabolo being as high as the lower part. It was found that even when this is not the case, motion-invariant image formation is quite possible. If the object is in focus not in the middle of the exposure period but somewhat earlier or later, most of the information is still captured, because most of the information is present in the vicinity of the intersection point of the Diabolo, and very good motion-invariant image formation remains possible. As explained below, some fine tuning is possible with this understanding.

Within the framework of the invention, according to embodiments, the sensor sweeps beyond the possible focal ranges. This ensures that most of the information is captured for any object at any distance from the lens. Such embodiments are likely to provide more robust motion-invariant image formation.

It is noteworthy that, within the framework of the invention, "motion-invariant image formation" should not be interpreted strictly in the sense that there will be no difference in image formation for any speed at any level of detail; the object of the invention is to reduce the mismatch due to motion, i.e. motion blur, within practical limits; a perfect solution is an ideal, unattainable in reality.

The inventors have realized that the maximum object speed for which the PSF of the object registered on the sensor is basically the same as for a stationary object, so that motion-invariant image formation is possible, depends on the following parameters:

A: the parameters of the object, namely the speed of the object relative to the distance from object to lens. An object that is more distant from the lens is easier to capture at a given speed than an object close to the lens. This parameter basically provides a measure of how fast the object moves in the image formed on the sensor.

B: parameters of the camera or camera system:

B1: the sweep speed of the sensor; the greater the sweep speed, the higher the maximum speed at which motion-invariant image formation is possible,

B2: the aperture; the greater the aperture, the higher the maximum speed at which motion-invariant image formation is possible,

B3: the distance between the lens and the sensor.

The corresponding camera parameter is (B1*B2)/(B3)², i.e. Vsweep*A/b².

The inventors have simulated a variety of settings, including those described in the examples of Nagahara et al., and determined the maximum object speed for which image formation is invariant.

The results are shown in Table 1 below. The first column gives a brief description of the source, the first two rows being the examples of the article by Nagahara et al. and the last five rows examples of the present invention; the other columns give the various parameters:

Table 1

Source    | Distance to object (m) | Focal length (mm) | Aperture (mm) | Sensor speed (mm/s) | Exposure time (s) | Invariant depth range (m) | Invariant speed range (km/h) | Vsweep*A/b² (s⁻¹)
Nagahara  | 2  | 9    | 6.2 | 0.23 | 0.360 | 1-∞    | 0-0.0128 | 0.0176
Nagahara  | 2  | 12.5 | 8.9 | 0.44 | 0.360 | 1-∞    | 0-0.171  | 0.025
Example 1 | 2  | 9    | 4.5 | 250  | 0.360 | 0.01-∞ | 0-100    | 13.8
Example 2 | 2  | 12.5 | 4.5 | 352  | 0.360 | 0.01-∞ | 0-100    | 10.14
Example 3 | 2  | 9    | 4.5 | 25   | 0.036 | 0.07-∞ | 0-10     | 1.38
Example 4 | 10 | 4    | 2   | 22.4 | 0.02  | 0.07-∞ | 0-100    | 2.8
Example 5 | 5  | 4    | 2   | 2.24 | 0.2   | 0.07-∞ | 0-5      | 0.28

Obviously, the examples of Nagahara et al. do not provide any appreciable motion-invariant image formation. Although, with the understanding of the present invention, motion-invariant image formation with the device of Nagahara et al. would be possible for speeds up to about 0.15 km/h for an object at a distance of 2 meters, this has no practical use and is actually irrelevant. The invariant speed ranges of examples 1 to 5 of the invention are practical ranges of speeds, from the speed of a walking or jogging person up to vehicle speeds.
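The last column of Table 1 can be checked directly from the other columns. A small sketch (Python), taking b ≈ f since the object distances are much larger than the focal lengths:

```python
# Check of the last column of Table 1 (Vsweep*A/b^2), taking b ~ f since the
# object distances are much larger than the focal lengths.
rows = [  # (source, focal length f [mm], aperture A [mm], V_sweep [mm/s])
    ("Nagahara ", 9.0, 6.2, 0.23),
    ("Nagahara ", 12.5, 8.9, 0.44),
    ("Example 1", 9.0, 4.5, 250.0),
    ("Example 2", 12.5, 4.5, 352.0),
    ("Example 3", 9.0, 4.5, 25.0),
    ("Example 4", 4.0, 2.0, 22.4),
    ("Example 5", 4.0, 2.0, 2.24),
]
for name, f, A, V in rows:
    print(f"{name}: V*A/b^2 = {V * A / f**2:.4f} 1/s")
```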

It is also noteworthy that, according to the above table, the examples of Nagahara et al. use f/1.4, often the largest possible aperture, whereas examples 1 to 5 of the invention use a more moderate value of f/2. If the same f-number were used, the difference in invariant speed range and in Vsweep*A/b² would turn out to be even 1.4 times larger. At an aperture of f/2 in the examples of Nagahara et al., for a fair comparison with the invention, the invariant speed range for Nagahara et al. is even 1.4 times smaller than indicated in the table, i.e. about 0.1 km/h.

Take, for example, a surveillance camera: an acceptable practical lower limit for the invariant speed range is a speed of 5 km/h, the speed of a walking person, at a distance of 5 meters, an acceptable distance for such a camera. This provides a lower limit for Vsweep*A/b² equal to 0.25 s⁻¹.

Preferably, the parameter Vsweep*A/b² is more than 0.5 s⁻¹, most preferably more than 1 s⁻¹.

Preferably, the parameter Vsweep*A/b² is less than 15 s⁻¹, most preferably less than 5 s⁻¹. Too large a parameter requires very high sweep speeds or a very small lens-to-sensor distance.

Preferably, the exposure time is between 0.005 and 0.05 seconds, most preferably between 0.01 and 0.05 seconds. In particular in low-light conditions, this range of exposure times allows good-quality motion-invariant images to be formed. This range also allows video to be produced.

Too long exposure times would also require a very substantial movement of the sensor, possibly beyond its range of movement.

In the preferred range of exposure times, the sensor movement stays within range while still providing relatively long exposure times. In addition, in this range of exposure times, between 0.01 and 0.05 seconds, video sequences can be produced.

Preferably, the motion of the sensor is linear. In Fig. 6A this is illustrated by the Diabolo being bounded by two straight lines. By analogy with the parabolic trajectory proposed by Levin et al., this provides sampling with equal weight over the range of motion. By time integration, the inventors have found that the point spread function (PSF) of this spatio-temporal sampling pattern is proportional to 1/log(x), which makes it easy to find the inverse function. Comparison of Fig. 3B, 4B and 5B with 6B illustrates that the radial sampling (Fig. 6B) preserves high frequencies better than the parabolic trajectory (Fig. 4B) and suffers less from sinc modulation in the spatio-temporal frequency domain. The resulting images are better suited for inverse filtering. Finally, the phase characteristics of the sweep-based sampling are better than those achieved by the parabolic transverse sampling.

Figures 7A-7C show inverse filtering of the PSF of Fig. 6C. Fig. 6C is repeated in Fig. 7A.

The other frames, Fig. 7B and 7C, correspond to the (average) inverse filter and to the convolution of the inverse and direct PSF.

Figures 7A and 7B show the spatial PSF and the corresponding inverse filter. It should be noted that the inverse filter has a short spatial signature, which means that it is computationally efficient to implement. As confirmation, Fig. 7C shows the convolution of the direct and inverse blur kernels, which, as desired, is close to a single impulse.
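As an illustration of this inverse-filtering step, the following sketch (Python) applies a regularized, Wiener-style 1D inverse filter; the 1/|x|-like stand-in kernel and the noise-to-signal ratio `nsr` are assumptions, not values from the description (in practice the sweep PSF, e.g. computed as in the earlier sketch, would be used):

```python
import numpy as np

def wiener_deconvolve(signal, psf, nsr=1e-3):
    """1D deconvolution with a regularized (Wiener-style) inverse filter."""
    n = len(signal)
    H = np.fft.rfft(np.fft.ifftshift(psf), n)   # blur transfer function
    G = np.conj(H) / (np.abs(H)**2 + nsr)       # regularized inverse filter
    return np.fft.irfft(np.fft.rfft(signal) * G, n)

# Stand-in blur kernel (use the actual sweep PSF in practice):
x = np.linspace(-1, 1, 4001)
psf = 1.0 / (np.abs(x) + 0.02)
psf /= psf.sum()

edge = np.where(x > 0, 1.0, 0.0)                # test scene: a step edge
blurred = np.convolve(edge, psf, mode="same")
restored = wiener_deconvolve(blurred, psf)      # sharp edge recovered
```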

After inverse filtering, all objects are restored at the location where they were when the sensor crossed the respective focal plane (the intersection point of the Diabolo), thus exhibiting a depth-dependent phase (zero phase shift in the middle of the exposure). The parabolic transverse sampling proposed in the article by Levin et al., by contrast, suffers from motion-dependent phase shifts, so that after inverse filtering all objects appear sharp, but at locations they passed through at slightly different points in time. This leads to slight inconsistencies, in particular at motion boundaries and in occlusion areas. Compared with the solution of Levin et al., this constitutes an important advantage of the invention.

Preferably, the effective spatial blur kernel varies linearly with time. This can be achieved by a linear change of the distance between the sensor and the main lens along the optical axis, because the amount of defocus blur is proportional to the displacement of the sensor. Such a movement is already performed in most autofocus cameras, although before the photograph is taken or while the shutter is closed. As a result, in some cameras the autofocus mechanism could readily be used to perform the desired sweep by moving during the exposure, provided that the movement and the shutter can be controlled and synchronized with sufficient accuracy.

It is noteworthy that, in case Vsweep*A/b² changes during the exposure, the relevant parameter is the time average of Vsweep*A/b² over the exposure time. As explained in connection with Fig. 1, an alternative implementation of the required spatio-temporal sampling keeps the sensor and the lens fixed and sweeps the focal length of the main lens during the exposure time (lower part of Fig. 1). Obviously, this requires switchable optics, such as a liquid converging lens or birefringent LC materials. Switchable LC lenses are applied in 3D screens, where they are used to switch between 2D (no lens effect) and 3D modes. The refractive index can be changed in one direction only (cylindrical lenses only). A variable focal length can also be achieved using a fluid-filled membrane, which can be deformed under a predetermined fluid pressure. Again, it is preferable that the image plane sweeps linearly with time: b = b0 + c0*t. The distance to the image plane is related to the object distance and the focal length of the lens by the lens equation:

1/f = 1/v + 1/b,

From this it follows that the focal length preferably changes over time according to:

f(t) = 1/(1/v + 1/(b0 + c0*t)),

where f(t) denotes the focal length of the lens as a function of time and c0 is a constant.
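A sketch of the resulting lens drive signal (Python), with b0 and the sweep rate c0 borrowed from the example values used earlier (b0 = 4 mm, c0 = 22.4 mm/s, v = 10 m, T = 0.02 s); these are illustrative, not prescribed, values:

```python
import numpy as np

# Focal-length profile for fixed sensor and lens: f(t) = 1/(1/v + 1/(b0+c0*t)).
v = 10_000.0          # object distance, mm
b0, c0 = 4.0, 22.4    # start of the image-plane sweep (mm) and its rate (mm/s)
T = 0.02              # exposure time, s

t = np.linspace(0.0, T, 201)
f_t = 1.0 / (1.0/v + 1.0/(b0 + c0 * t))   # required focal length over time

print(f"f(0) = {f_t[0]:.4f} mm, f(T) = {f_t[-1]:.4f} mm")
# ~3.9984 mm -> ~4.4460 mm: a nearly linear sweep, since v >> b.
```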

A combination of both can also be used, for example to limit the physical movement of the sensor or its sweep speed while increasing the extent of the spatio-temporal sampling (i.e. the length of the Diabolo in Fig. 6A), or to increase the sweep speed.

Fig. 8 shows a method of deconvolving a recorded image and a system for recording image data and deconvolving the recorded image data. The image is recorded by the camera comprising a lens 2 and a movable sensor 3. The sensor data are read by a reader. The corresponding image 81 is schematically shown blurred. The captured image is deconvolved in deconvolution unit 82. Thus, the system comprises a deconvolution unit for deconvolving the image data, and the method performs a deconvolution of the recorded image data.

To facilitate understanding, the algorithm and the method are shown as a series of stages. The first stage consists in calculating the point spread function. In a simple embodiment, the point spread function depends on the parameter Vsweep*A/b². In simple embodiments, the PSF is calculated for a stationary object in the middle of the sweep range. Since, as explained above, the PSF is almost independent of speed over a large range of speeds in embodiments of the invention, the IPSF for a stationary object in the middle of the sweep range is a good first-order approximation of the optimal PSF for all speeds.

In more advanced embodiments, the distance to the object is used for fine tuning of the PSF and thus of the IPSF. The distance to the object can, for example, be recorded by the camera; most cameras have some kind of autofocus that allows the distance to the object to be determined. As explained above, it was found that even if the object is not in focus in the middle of the sweep, motion-invariant image formation is quite possible. If the object is in focus not in the middle of the exposure period but a little earlier or later, most of the information is still captured, because most of the information is present in the vicinity of the intersection point of the Diabolo, and very good motion-invariant image formation remains possible. However, while using the PSF for a stationary object in the middle of the sweep range gives good results, better results can be obtained by fine tuning the PSF, making it dependent on the distance to the object and possibly on other parameters. This yields a somewhat skewed and shifted PSF. The end result is a somewhat sharper image of a moving object at a specific distance from the lens, at the price of slightly less sharp images at other distances.

It is noteworthy that the distance to the object can also be used, in embodiments of the invention, to determine the sweep speed. As explained above, the determining factor is the speed of an object divided by its distance from the lens. Thus, for objects at a distance of 2 meters at a given speed, compared with objects at a distance of 10 meters at the same speed, it is advantageous to use an increased sweep speed, i.e. to increase Vsweep. In embodiments of methods of operating cameras according to the invention, at least one or more parameters of the group consisting of the object-to-lens distance, the object speed and the direction of the object velocity are measured, and the sweep speed is set depending on the measured parameter. This, in turn, influences the parameters for the PSF.

The speed of the object can also serve as input data for deconvolution unit 82. Again, although the invention allows a single PSF to be used for all speeds, some second-order effects remain possible, for which the PSF can be fine-tuned to a specific speed. A possible field of application is speed-measurement cameras, which already provide a good estimate of the direction and speed of the object.

Calculations have shown that the PSF may have a small, second-order dependence on the direction of movement. In embodiments, the optimal PSF is computed for a range of directions, and a PSF is selected within this range depending on the input direction. The selection may be, for example, the maximum, minimum, median, a weighted average, etc.

The sweep rate can also be used as an input.

In short, the deconvolution unit can receive input data for fine tuning the PSF to be used. Fig. 8 shows various parameters, the most important of which are the sweep rate, the distance to the object, and the speed and direction of the object.

The deconvolved, sharp image 83 is shown at the right in Fig. 8.

To facilitate understanding, the method, system and algorithm are presented as distinct stages, starting with the PSF and then deriving the inverse PSF (IPSF) from the PSF. Of course, since the IPSF is the important function in the deconvolution, and the IPSF is the inverse function of the PSF and vice versa, one can also start directly from the IPSF, and the various input data can equally well serve, after proper conversion, as input for the IPSF, to determine the IPSF to be used in the deconvolution.

Preferably, during recording, the movement of the sensor takes place in one direction only, for example only towards the lens. Although in a first-order approximation the deconvolved image does not depend on the direction of the sensor sweep, i.e. on whether it moves towards the lens or away from it, in a second-order approximation this holds strictly only for objects that are in focus in the middle of the sensor sweep. For objects that are more distant from or closer to the lens, there is a small difference in the end result, in particular in the position of the object in the deconvolved image. In a single image this difference is imperceptible or nearly imperceptible, but in a video sequence the positions of objects might appear unsteady. To avoid this effect, it is preferable that the movement of the sensor during the exposure be one-sided, i.e. in one direction only. Of course, the sensor must then move back to its starting position before the next data can be recorded. In this embodiment there is an additional constraint on the exposure time, such that the sensor can move back to its starting position.

Fig. 9 shows the movement of the sensor. Line 91 represents the distance traveled by the sensor, line 92 the speed of the sensor. During the first period of time the sensor picks up speed, as illustrated by line 92, the speed increasing from 0 to a fixed value, which is then maintained for some time. Then the sensor stops and returns quickly to its original position. The return of the sensor means that data can be recorded only part of the time, in this example only about 50-70% of the time. When the sensor moves back and forth, which is possible, for example, when taking a single image, the sensor need not return to its original position. As explained above, the movement can be accomplished in two different ways, or a combination thereof: by physically moving the sensor or the lens, or by changing the characteristics of the lens.
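A sketch of such a one-directional profile (Python); the ramp and return times are illustrative assumptions, only the recording speed is taken from the earlier example:

```python
import numpy as np

# One-directional sweep of Fig. 9: ramp up (line 92 rising), constant-speed
# recording, fast return to the start. Ramp/return times are assumed values.
T_ramp, T_rec, T_ret = 0.004, 0.020, 0.008          # s (illustrative)
V = 22.4                                            # recording sweep speed, mm/s
V_ret = -(0.5 * V * T_ramp + V * T_rec) / T_ret     # return speed closing the loop

t = np.linspace(0.0, T_ramp + T_rec + T_ret, 3200)
vel = np.where(t < T_ramp, V * t / T_ramp,
               np.where(t < T_ramp + T_rec, V, V_ret))   # line 92: speed
pos = np.cumsum(vel) * (t[1] - t[0])                     # line 91: distance

print("duty cycle:", T_rec / t[-1])    # ~0.62, within the 50-70% of the text
print("end position (mm):", pos[-1])   # ~0: sensor is back at the start
```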

For short exposure times or rapid sensor movement, it is preferable to use, at least for part of the movement, a change of the lens characteristics.

In short, a motion-invariant camera according to the invention provides sharp, high-quality videos or images, particularly in low-light conditions. While advantageous for most cameras and optical imaging systems, including consumer photo and video cameras, it is especially valuable for some professional tracking and monitoring applications. For example, surveillance cameras of security systems often operate in dark conditions. Modern surveillance cameras rely on expensive solutions, such as sensor cooling systems, to achieve the desired SNR levels. The present invention can be used either to provide an economical alternative to cameras with a high dynamic range, or to further increase the dynamic range of modern sensors. It should be noted that this invention describes a new approach to registering light and can therefore be implemented using any existing photosensor technology.

The invention can be summarized as cameras and systems comprising a camera in which, during exposure, the ratio of the distance between the lens and the sensor to the focal length is changed. The rate of change is set such that motion-invariant image formation is achieved for practical speed ranges, i.e. speeds at least up to 5 km/h at a distance of 2 meters from the lens, by deconvolving the composite image. Preferably, the achievable speed range is at least twice as high. A linear movement of the sensor is preferred.

It should be noted that, although the methods and devices described above can operate blindly, without information about the speeds of the objects occurring in the scene and hence on the sensor, the availability of such information allows a higher reconstruction accuracy (i.e. a sharper, higher-quality final image). This can be done either statically (for example, knowing which speeds typically occur, for example in computer-vision applications where the speed of the conveyor belt carrying the analyzed objects is known) or dynamically, the system (e.g. periodically) measuring the speeds of objects in the scene and optimally adjusting the settings according to the alternative embodiments described below.

The first improvement consists in selecting a (coded) aperture adapted to the speeds of the objects occurring in the scene, which can be accomplished, for example, by means of a liquid-crystal element in front of the lens or between the lens and the sensor, etc. For example, given a typical object speed v_average (for example, the average movement of the dominant object of the scene) and the maximum deviation vmax of the speeds of other objects from it (for example, people in a train waving their arms), a circular subregion of the original aperture can be chosen, where α is a coefficient determined by x, f and VL; x is the distance from the object (field point) to the lens, f is the focal length of the lens, and VL is the speed of displacement of the sensor along the optical axis. dmax is an uncertainty parameter related to the position of the object image on the image plane (i.e. the central ray through the center of the lens from a point object at a position on the sensor at a distance d from the center of the sensor should have a distance d < dmax). The subregion corresponds to a circle with an offset equal to α multiplied by v_average and a radius of 1.25*α*vmax + dmax.
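A sketch of such an aperture subregion as a binary mask (Python); the coefficient alpha is treated as an input, since its defining expression in x, f and VL is not reproduced here:

```python
import numpy as np

def aperture_subregion(v_average, v_max, alpha, d_max, full_radius, n=256):
    """Binary mask of the circular aperture subregion described above.

    alpha relates object speed to image-plane offset; its defining
    expression (in x, f and VL) is treated here as a given input.
    The subregion is a circle offset by alpha*v_average with radius
    1.25*alpha*v_max + d_max, clipped to the full aperture.
    """
    ax = np.linspace(-full_radius, full_radius, n)
    X, Y = np.meshgrid(ax, ax)
    cx = alpha * v_average                   # offset along the motion axis
    r = 1.25 * alpha * v_max + d_max         # subregion radius
    return (X**2 + Y**2 <= full_radius**2) & ((X - cx)**2 + Y**2 <= r**2)

# Illustrative numbers only:
mask = aperture_subregion(v_average=1.0, v_max=0.3, alpha=0.4,
                          d_max=0.05, full_radius=1.0)
print(mask.mean())   # open fraction of the sampled aperture square
```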

Of course, it is also possible to determine optimal apertures of more complex shape, for example ring-shaped for rotating objects.

Another embodiment for improving the image, given information about typical object speeds (for example, an object moving with speed v_obj against a fixed background), provides for movement of the sensor not only along the optical axis but also perpendicular to it (creating a composite diagonal movement).

Tracking the movement of the object can be effected by moving the camera as a whole in a panning motion or, alternatively, by a rectilinear movement of the sensor perpendicular to the optical axis.

According to the present embodiment, motion tracking can be combined with the focus-sweep principle, creating a system that moves the image-formation plane at an angle to the optical axis. The method or device of this embodiment decomposes the sensor motion into two orthogonal vectors: Vsweep along the optical axis and VTR at right angles to it.

The rectilinear movement of the sensor along VTR shifts the integration of the double cone (DCI; the cones arise from the time-dependent blur created by the sensor sweep) by such an angle that the average object speed is compensated. If nothing is done, the blur kernel in the DCI is symmetric about the vertical for stationary objects, but moving objects create a shifted version of this kernel. It is possible to (partially) shift it back by a backward translational motion VTR.

Instead of being centered on zero motion, the invariant speed range is now centered on the tracking speed. For the movement along the optical axis, an embodiment can create motion invariance by setting the optimal sweep speed and sweep range depending on the average speed, taking into account the aperture a of the system; that is, the sweep speed is chosen so as to cover a predefined invariant speed range.

Note that, for generating a sharp image, the object speed need not be matched exactly. As long as the object speed lies within the invariant range (i.e. the range covering, in the example, both the object speed v_obj and zero static speed), a sharp reconstruction is possible.

I.e. one can trade more accurate tracking against a smaller sweep, or vice versa, track (at least some of the objects) inaccurately but have a wider DCI cone, i.e. a larger sensor sweep. It is preferable to jointly optimize the sweep speed and the rectilinear movement such that both static and moving objects can be correctly reconstructed.

A further embodiment improves the quality of image formation by displacing the sensor perpendicular to the optical axis. The motion of a scene object on the sensor is a function of the geometric projection of the imaging system and of the sensor motion:

[formula not reproduced in the source]

where v is the speed of the object image, y is the distance between the lens and the plane of theoretical sharpness (which is usually located at the midpoint of the sensor sweep), d is the distance of the imaged point from the centre of the sensor, and V_L is the sweep speed of the sensor.

According to this embodiment, it is desirable to keep this speed on the sensor small in order to reduce motion blur, which can be done, for example, by choosing the offset of the sensor:

[formula not reproduced in the source]

When the sensor is displaced with speed

[formula not reproduced in the source]

the typical optimum offset becomes:

[formula not reproduced in the source]
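
Because the formulas above are not reproduced in the source, the following sketch only illustrates one plausible reading of the geometry: the sweep-induced image speed grows with the distance d of the imaged point from the sweep centre, so a lateral offset that centres the region of interest reduces the worst-case blur. Both expressions are assumptions made for illustration, not the patent's formulas:

def image_speed(d, v_l, y):
    # Illustrative assumption: with the sensor swept along the axis at speed
    # v_l, an off-axis image point drifts radially, roughly in proportion to
    # its distance d from the sweep centre and inversely with the distance y
    # to the plane of theoretical sharpness.
    return d * v_l / y

def optimal_lateral_offset(d_min, d_max):
    # Illustrative assumption: shifting the sensor so that the region of
    # interest [d_min, d_max] is centred on the sweep axis minimises the
    # largest |d| and hence the largest sweep-induced image speed.
    return (d_min + d_max) / 2.0

# Example: points of interest between 1 mm and 3 mm from the sweep centre.
offset = optimal_lateral_offset(1e-3, 3e-3)            # 2 mm shift
print(image_speed(3e-3 - offset, v_l=2e-3, y=20e-3))   # worst case after shift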

Note that for the three embodiments described above it is possible to choose certain optimal parameter values (for example, several aperture shapes and sizes, several orthogonal movements and several sensor offsets), since these depend not only on the desired accuracy of reconstruction but also, for example, on the complexity of the inverse filter; the three principles can also be combined to varying degrees, for example by reducing the sweep along the optical axis and/or the orthogonal movement while choosing a more optimal (average) sensor offset.

1. Camera for recording an image, comprising a lens (2) and an image sensor (3), the ratio of the distance between the image sensor and the lens along the optical axis to the focal length being changed during the exposure time of the image, characterized in that the camera is configured to change the ratio such that the following holds:
V_sweep·A/b² > 0.25 s⁻¹,
where V_sweep is the speed of movement of the sensor and/or of the lens along the optical axis and/or of the change of the focal length, per second, A is the lens aperture, and b is the distance between the lens and the sensor.

2. The camera according to claim 1, in which V_sweep·A/b² is more than 0.5 s⁻¹.

3. The camera according to claim 1, in which V_sweep·A/b² is less than 15 s⁻¹.

4. The camera according to claim 1, in which the camera is arranged so that the exposure time is between 0.005 and 0.05 seconds.

5. The camera according to claim 1, in which V_sweep is linear.

6. The camera according to claim 1, in which the change of the ratio is such that the sensor reaches a position outside the range of all possible depths of field.

7. The camera according to claim 1, in which the displacement of the sensor relative to the focal plane along the optical axis is made dependent on the exposure time, at least in the upper range of exposure times.

8. The camera according to claim 1, in which at least one characteristic of the lens is changed.

9. The camera according to claim 1, comprising a means for measuring the distance to the object, wherein V_sweep is set depending on the measured distance.

10. The camera according to claim 1, in which the camera is a video camera, characterized in that the movement during exposure is performed in one direction only.

11. System for recording images with the camera according to any one of the preceding claims, additionally comprising a deconvolution unit for deconvolving the image recorded by the camera, using an inverse point spread function for the deconvolution.

12. The system according to claim 11, in which the deconvolution unit has an input for one or more parameters from the group consisting of: the distance from the object to the lens, the speed of the object, the direction of the object's velocity, the oscillation frequency, A, b, the sweep distance, and the exposure time.

13. The system according to claim 12, in which at least one of the group of input parameters is the distance from the object to the lens.

14. Method of recording images using a camera comprising a lens and an image sensor, in which the ratio of the distance between the image sensor and the lens along the optical axis to the focal length is changed during the exposure time of the image, wherein during the recording of the image the following holds:
V_sweep·A/b² > 0.25 s⁻¹,
where V_sweep is the speed of movement of the sensor and/or of the lens along the optical axis and/or of the change of the focal length, per second, A is the aperture, and b is the distance between the lens and the sensor.

15. The method according to claim 14, in which at least one or more parameters are measured from the group consisting of the distance from the object to the lens, the speed of the object and the direction of the object's velocity, and V_sweep is set depending on the measured parameter.

16. Method of deconvolving an image recorded using the method according to claim 14 or 15, in which the deconvolution is performed using an inverse point spread function.

17. The method of deconvolution according to claim 16, in which the inverse point spread function is set depending on input data, the input data being one or more parameters from the group consisting of: the distance from the object to the lens, the speed of the object, the direction of the object's velocity, the oscillation frequency, A, b, the sweep distance, and the exposure time.
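
As a numeric illustration of the criterion in claims 1-3 and 14 (all values below are invented for the example, not taken from the source):

def sweep_criterion(v_sweep, aperture, b):
    # Evaluate V_sweep * A / b**2 (units 1/s) against the thresholds of
    # claim 1 (> 0.25 1/s), claim 2 (preferably > 0.5 1/s) and
    # claim 3 (< 15 1/s).
    value = v_sweep * aperture / b ** 2
    return value, 0.25 < value < 15.0

# Example: a 20 mm/s sweep, a 10 mm aperture and a 20 mm lens-sensor distance.
value, ok = sweep_criterion(v_sweep=2e-2, aperture=1e-2, b=2e-2)
print(f"{value:.2f} 1/s, within the claimed range: {ok}")  # 0.50 1/s, True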



 
