Image capturing device and control method thereof

FIELD: physics, computer engineering.

SUBSTANCE: the invention relates to image capturing devices. The result is achieved in that the image capturing device comprises an image capturing unit configured to capture an image of an object through an optical system; a display unit configured to display the image captured by the image capturing unit on a screen; a determination unit configured to simultaneously determine a plurality of touch positions on the screen on which the image is displayed; and a control unit configured to smoothly adjust the focus state in accordance with a change in the distance between a first determined touch position and a second determined touch position so as to change the focus area.

EFFECT: broader technical capabilities of the image capturing device.

13 cl, 27 dwg

 

TECHNICAL FIELD

The present invention relates to an image capturing device and to a technique for moving the focus target to an arbitrary position in accordance with a user operation.

BACKGROUND ART

Some recent image capturing devices, such as digital video cameras and digital still cameras, can automatically focus on an arbitrary object when the user touches that object on the display screen on which it is shown (see, for example, Japanese Patent Laid-Open No. 2011-039457).

Besides focusing on a desired object, the user may want to present recorded video as a deliberately blurred image. Moreover, when the focus area is designated by touching a display screen equipped with an operation member such as a touch panel (see Japanese Patent Laid-Open No. 2011-039457), the user's finger or other body part may temporarily hide the object, causing the user to miss its movement or facial expression.

SUMMARY OF THE INVENTION

The present invention has been made in view of the above problems, and implements a focus target moving operation that allows the focus state of recorded video to be adjusted by a simple operation without losing sight of the captured object.

To solve the above problems, the present invention provides an image capturing device comprising: an image capturing unit configured to capture an image of an object through an optical system; a display unit configured to display the image captured by the image capturing unit on a screen; a determination unit configured to simultaneously determine a plurality of touch positions on the screen on which the image is displayed; and a control unit configured to smoothly adjust the focus state in accordance with a change in the distance between a first determined touch position and a second determined touch position so as to change the focus area.

To solve the above problems, the present invention also provides a method of controlling an image capturing device that comprises an image capturing unit configured to capture an image of an object through an optical system, and a display unit configured to display the image captured by the image capturing unit on a screen, the method including: a step of simultaneously determining a plurality of touch positions on the screen on which the image is displayed; and a step of smoothly adjusting the focus state in accordance with a change in the distance between a first determined touch position and a second determined touch position so as to change the focus area.

According to the present invention, a focus target moving operation can be performed to adjust the focus state of recorded video by a simple operation without losing sight of the captured object.

Additional features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the accompanying drawings).

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a block diagram of an image capturing device according to one embodiment of the present invention;

Fig. 2 is a perspective view showing the appearance of the image capturing device according to the aforementioned embodiment;

Figs. 3A-3D are schematic views showing an example of focus moving control operations according to the first embodiment;

Figs. 4A and 4B are schematic views showing focus moving control processing in the operation example shown in Figs. 3A-3D;

Fig. 5 is a view showing focusing lens position control in the operation example shown in Figs. 3A-3D;

Figs. 6A and 6B are flowcharts of the focus moving control sequence according to the first embodiment;

Figs. 7A-7D are schematic views showing a first operation example according to the second embodiment;

Figs. 8A and 8B are schematic views showing focus moving control processing in the first operation example;

Fig. 9 is a view showing focusing lens position control in the first operation example;

Figs. 10A-10D are schematic views showing a second operation example according to the second embodiment;

Figs. 11A and 11B are schematic views showing focus moving control processing in the second operation example;

Fig. 12 is a view showing focusing lens position control in the second operation example; and

Figs. 13A and 13B are flowcharts of the focus moving control sequence according to the second embodiment.

DESCRIPTION OF THE EMBODIMENTS

Embodiments of the present invention are described in detail below. The following embodiments are merely examples of practical implementations of the present invention, and should be appropriately modified or changed depending on the various conditions and the structure of the device to which the present invention is applied. The present invention is not limited to the following embodiments. Parts of the embodiments described below may also be combined as appropriate.

[First Embodiment]

An embodiment is described below in which the image capturing device according to the present invention is applied, for example, to a digital video camera (hereinafter, "camera") for capturing a moving image.

<Configuration>

The configuration of a camera 100 according to this embodiment is described next with reference to Figs. 1 and 2.

Referring to Fig. 1, an optical system 101 is formed from a lens group for changing the focal distance, an aperture, a neutral density (ND) filter, and the like. Reference numeral 102 denotes a focusing lens. An image capturing unit 103 includes an image sensor such as a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensor, and also comprises a sample-and-hold (S/H) circuit, a preprocessing circuit, and the like. Reference numeral 104 denotes a signal processing unit; reference numeral 106 denotes a signal recording unit; and reference numeral 107 denotes a recording medium such as a flash memory. A control unit 108 controls the operation of the whole camera. An operation element 109 is formed from a touch panel of a capacitive type or the like. A display unit 110 is formed from a liquid crystal panel or the like and displays the video signal used by the user to confirm the object.

The amount of light received from an object through the lenses of the optical system 101 is regulated by the aperture and the ND filter. In the image capturing unit 103, the light-receiving surface of the image sensor forms an image of the light from the object, and the light is accumulated as signal charges by photoelectric conversion. The video signal generated by the image capturing unit 103 undergoes sample-and-hold processing in the S/H circuit. The video signal is then supplied to the preprocessing circuit for AGC (automatic gain control) processing, black balance processing, white balance processing, gamma correction, and the like, and is then supplied to the signal processing unit 104. The signal processing unit 104 performs processing, correction, and the like on the video signal based on commands from the control unit 108, and outputs the processed and corrected video signal to the signal recording unit 106.

Based on commands from the control unit 108, the signal recording unit 106 superimposes a graphic element, such as a frame, on the received video signal and displays it on the display unit 110. In parallel with this processing, the signal recording unit 106 performs intraframe or interframe coding of the video signal and records it on the recording medium 107.

The operation element 109 sends position data designated by the user to the control unit 108. Even when the user designates many positions at the same time, all of those positions can be sent simultaneously. The control unit 108 controls the operation of the whole camera in accordance with the received designations.

Note that when the operation element 109 includes a touch panel, the control unit 108 can detect the following operations on the touch panel: touching the panel with a finger or pen (hereinafter, "touch-down"); keeping the finger or pen in contact with the touch panel (hereinafter, "touch-on"); moving the finger or pen while keeping it in contact with the touch panel (hereinafter, "move"); releasing the finger or pen from the touch panel (hereinafter, "touch-up"); and a state in which nothing is in contact with the touch panel (hereinafter, "touch-off"). Each operation and the positional coordinates of the finger or pen in contact with the touch panel are sent to the control unit 108. Based on the received information, the control unit 108 determines which operation was performed on the touch panel. For a move, the moving direction of the finger or pen on the touch panel can be determined for both the vertical and horizontal components based on the change in the positional coordinates. An operation that starts with a touch-down on the touch panel, followed by a certain move and a touch-up, is regarded as drawing a stroke. An operation of quickly drawing a stroke is called a "flick". A flick is an operation of quickly moving a finger over a certain distance while keeping it in contact with the touch panel, and then releasing it; in other words, it is a quick flipping motion across the surface of the touch panel. When a move over a predetermined distance or more at a predetermined speed or more is detected, followed by a touch-up, it is determined that a flick has been performed. When a move over a predetermined distance or more at a speed lower than the aforementioned predetermined speed is detected, it is determined that a drag has been performed.
Touch panels of various types can be used, for example a resistive type, a capacitive type, a surface acoustic wave type, an infrared type, an electromagnetic induction type, an image recognition type, or an optical sensor type.
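The flick/drag decision described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the distance and speed thresholds and the "tap" fallback are hypothetical stand-ins for the patent's "predetermined" values:

```python
import math

# Hypothetical thresholds; the patent only specifies "a predetermined
# distance" and "a predetermined speed".
FLICK_MIN_DISTANCE = 50.0   # pixels
FLICK_MIN_SPEED = 500.0     # pixels per second

def classify_stroke(start, end, duration_s):
    """Classify a completed stroke (touch-down, move, touch-up):
    a move over the predetermined distance at the predetermined speed
    or more is a flick; the same distance at a lower speed is a drag."""
    distance = math.dist(start, end)
    if distance < FLICK_MIN_DISTANCE:
        return "tap"  # too short to count as either (illustrative case)
    speed = distance / duration_s
    return "flick" if speed >= FLICK_MIN_SPEED else "drag"
```

For example, a 100-pixel stroke completed in 0.1 s (1000 px/s) would classify as a flick, while the same stroke over a full second would classify as a drag.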

The signal processing unit 104 performs signal processing for obtaining the contrast of the video signal, or signal processing for obtaining a phase difference, in order to obtain a focus evaluation value.

In contrast-based focus adjustment, the sharpness of the object image formed by the optical system 101 is obtained by evaluating the video signal with a predetermined evaluation function. The position of the focusing lens 102 on the optical axis is then adjusted so that the sharpness is maximized. As the evaluation function, the absolute values of the luminance differences between adjacent pixels within the focus determination zone are summed, or the squares of the luminance differences between adjacent pixels within the focus determination zone are summed. Since the focus determination zone can be set arbitrarily based on the video signal, focus adjustment can be performed for an arbitrary object location in the recorded image.
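The first evaluation function mentioned above (sum of absolute luminance differences between adjacent pixels) can be sketched as follows; the zone convention and the restriction to horizontal neighbors are illustrative assumptions:

```python
def contrast_evaluation(luma, zone):
    """Sum of absolute luminance differences between horizontally
    adjacent pixels inside the focus determination zone.

    luma: 2D list of luminance values.
    zone: (top, left, bottom, right), with bottom/right exclusive.
    """
    top, left, bottom, right = zone
    value = 0
    for y in range(top, bottom):
        for x in range(left, right - 1):
            value += abs(luma[y][x + 1] - luma[y][x])
    return value
```

A sharper image has larger adjacent-pixel differences, so the focusing lens position that maximizes this value is taken as the in-focus position; a defocused version of the same scene yields a smaller value.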

Generally, in the above contrast-based focus determination, the evaluation value is obtained while the focusing lens 102 is moved smoothly. It therefore takes time to adjust the focus until the in-focus position is found.

In phase-difference-based focus adjustment, on the other hand, two images of the object are formed by two light beams that have passed through different areas of the lenses of the optical system 101. The positional phase difference between the two object images is determined based on the output signal of the image sensor and converted into the defocus amount of the optical system.

For example, when a phase difference sensor is provided separately from the image sensor of the image capturing unit 103, an image sensor for phase difference detection is provided so that its light-receiving surface receives an image formed by splitting the light from the object that has passed through the optical system 101. Alternatively, an optical system different from that of the image capturing unit 103 may be provided to form the object image. In these cases, the position or range in which focus adjustment can be performed for an object in the recorded image may be limited according to the number of optical systems and phase difference sensors, and to the position or range where the optical system or phase difference sensor is provided.

On the other hand, each pixel of the image sensor of the image capturing unit 103 may be provided with a plurality of photoelectric conversion areas, which allows the image sensor itself to determine the phase difference. In this configuration, the plurality of photoelectric conversion areas provided on a per-pixel basis receive, through a microlens or the like, two light beams that have passed through different areas of the lenses of the optical system 101, so that a set of signals with different phases is obtained from the image sensor for the phase difference determination operation. In this case, since the focus determination zone can be set arbitrarily based on the video signal, focus adjustment can be performed for an arbitrary object location in the recorded image, as in contrast-based detection.

With the above phase-difference-based focus determination, the required position of the focusing lens 102 can be calculated by obtaining the defocus amount. Consequently, the time needed to achieve focus can be reduced compared with contrast detection.

To obtain the focus evaluation value, the signal processing unit 104 either sets the focus determination zone arbitrarily based on the video signal, or sets the focus determination zone using a method that minimizes the restriction on the designated focus position or range.

The result of evaluating the focus state ("in-focus state") using the above focus determination is sent to the control unit 108 as a focus evaluation value. The control unit 108 calculates the movement amount of the focusing lens 102 based on the obtained focus evaluation value and commands the optical system 101 to move the focusing lens 102.

Fig. 2 is a perspective view showing the external appearance of the camera 100 according to this embodiment. The operation element 109, such as a touch panel, is integrated with the display unit 110, such as a liquid crystal panel. The user can designate a desired area by a touch operation while confirming the displayed object or a graphic element such as a frame.

<Functional Description>

Next, the operations performed by the user when controlling focus movement are described with reference to Figs. 3A-3D.

Referring to Fig. 3A, a scene with two people, a man and a woman in a field of flowers, is shown on the display unit 110 as the captured object. A frame 301 indicates that the control unit 108 has selected the area of the man's face within frame 301 as the current target area of focus adjustment (hereinafter, "first focus target"). In this case, it is assumed that the first focus target is actually in focus. In the following explanation, it is assumed that the user performs an operation of selecting the flower field on the near side as a new focus target (second focus target) and gradually moving the focus area.

In Fig. 3B, the user's index finger and thumb simultaneously touch two positions, frame 301 and the flower field on the near side, thereby designating two points on the image displayed on the display unit 110. A frame 302 displayed in the area designated by the thumb indicates that the control unit 108 has determined that the object location surrounded by frame 302 is the second focus target desired by the user.

In Fig. 3C, the first touch position designated by the user's index finger approaches the second touch position designated by the thumb. A frame 304 is displayed at the first touch position designated by the moving index finger, indicating that the control unit 108 is tracking the user's touch position on the display. The control unit 108 changes the focus area in accordance with the positional relationship between the first touch position and the second touch position. More specifically, the focus area is gradually moved in accordance with the rate of change of the distance between the first and second touch positions. In addition, an arrow 303 is displayed as a guide from the first touch position, first designated by the index finger, to frame 302, which indicates the second focus target.

Fig. 3D shows the display after the first touch position designated by the user's index finger has reached frame 302. This indicates that the control unit 108 has selected the object location in the area surrounded by frame 302 as the new second focus target, and that the flower field on the near side is in focus. The focus moving control ends, and frames 301 and 304 and arrow 303 are erased.

As for the shape of each displayed frame, the first focus target before the focus moving control is represented by a solid-line rectangle, the first focus target partway through the focus moving control is represented by a dotted-line rectangle, and the second focus target after the move is represented by a dashed circle. The touch position during the move is indicated by a small dotted circle. By watching each frame, the user can easily confirm whether the focus moving control by the control unit 108 is proceeding as intended.

<Lens Position Control>

Next, with reference to Figs. 4A-5, the processing is described in which the control unit 108 selects the focus targets from the two points designated by the touch operation performed by the user on the operation element 109, and calculates the position of the focusing lens 102 during the moving control shown in Figs. 3A-3D.

Fig. 4A shows the processing when the user has designated two points on the display, whereby the control unit 108 determines the next focus targets based on the determined first and second touch positions. Let A be the object included in the first focus target at the time of designation, and let P0 and Q0 be the two designated points. Let p be the distance from the screen area where object A is displayed to point P0, and q the distance from that area to point Q0. Since p is less than q and P0 is within a predetermined range from object A (p < R), the control unit 108 determines that P0 designates the first focus target. Since point Q0 is separated from object A by a predetermined distance or more (q > R), the control unit 108 determines that Q0 designates the second focus target. A predetermined value (radius R) is subtracted from the distance between points P0 and Q0 to obtain a distance D, which is used for subsequent control of the position of the focusing lens 102.

Fig. 4B shows the processing when the user has moved one of the designated points closer to the other, whereby the control unit 108 determines, based on the two designated points, the parameter used to calculate the position of the focusing lens 102. Starting from the two points P0 and Q0 designated first, the two designated points are tracked over a predetermined time period. Let P1 and Q1 be the two points designated at a given time. A predetermined value (radius R) is subtracted from the distance between points P1 and Q1 to obtain a distance d, which is used together with the previously obtained distance D as a parameter for calculating the position of the focusing lens 102.

Fig. 5 shows the position of the focusing lens 102. The position on the optical axis where the focal distance is minimum is represented as min, and the position where the focal distance is infinite is represented as ∞. Let α be the focusing lens position at which the first focus target A is in focus before the focus moving control, β the focusing lens position at which the second focus target Q1 is in focus after the move, and γ the position of the focusing lens 102 during the move. The control unit 108 controls the position γ of the focusing lens 102 according to the expression (α−β)·d/D+β.
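The distance parameters and the lens position interpolation described above can be sketched as follows. The value of R and the clamping of d to the range [0, D] are illustrative assumptions, not taken from the patent:

```python
import math

R = 40.0  # predetermined value R (hypothetical, in pixels)

def lens_position_during_move(alpha, beta, p, q, p0, q0):
    """Interpolated focusing lens position gamma = (alpha - beta)*d/D + beta,
    where D = |P0 Q0| - R is computed when the two points are first
    designated and d = |P1 Q1| - R is updated while they are tracked."""
    D = math.dist(p0, q0) - R
    d = math.dist(p, q) - R
    d = max(0.0, min(d, D))  # clamp so gamma runs from alpha (d = D) to beta (d = 0)
    return (alpha - beta) * d / D + beta
```

At the start (d = D) the result equals α, and when the touch positions come within distance R of each other (d = 0) it equals β, so the lens sweeps smoothly from the first focus target to the second as the fingers approach.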

<Operation>

Next, the focus moving control according to the first embodiment is described with reference to Figs. 6A and 6B.

Referring to Fig. 6A, in step S601 the control unit 108 waits for a new designation operation from the user.

In step S602, the control unit 108 determines whether two or more touch positions have been designated by the user. If only one touch position has been designated, the focus moving control is not executed. In step S603, the focus target is set to the area of the object displayed at the designated touch position, and the process returns to step S601.

In step S604, the control unit 108 determines whether a focus target currently exists. If no focus target exists, the focus moving control is not executed. In step S605, the touch positions designated by the user are sorted in ascending order of distance to the center of the screen of the display unit 110. In step S606, the focus target is set to the position of the object displayed at the touch position closest to the center of the screen, and the process returns to step S601.

In step S607, the control unit 108 sorts the touch positions designated by the user in ascending order of distance to the on-screen position of the current first focus target A.

In step S608, if the distance p between focus target A and the touch position P nearest to focus target A is outside the predetermined range (R), the designation is not suitable for focus moving control, and the control unit 108 returns to step S601. Likewise, if the distance between focus target A and the touch position Q farthest from focus target A does not exceed the predetermined range (step S609), the control unit 108 returns to step S601.

In step S610, the control unit 108 sets the initial value of the distance D to the value obtained by subtracting the predetermined value R from the distance between touch positions P and Q, and substitutes the value of D for d, the distance during the move.

Referring to Fig. 6B, the control unit 108 sets α to the focusing lens position that brings object A into focus (step S611) and substitutes α as the initial value of the focusing lens position γ during the move (step S612).

In step S613, the control unit 108 starts the focus moving control mode. In this mode, frames are displayed for the first focus target A before the move, for the object at touch position Q as the second focus target after the move, and for touch position P during the move, and an arrow indicating the direction is displayed, in parallel with the subsequent processing.

In step S614, the control unit 108 tracks touch positions P and Q. Tracking is performed within a predetermined period, and the condition for successful tracking is that the change in distance from the previous touch position is within a predetermined range. Here it is assumed that the predetermined period is considerably shorter than the time during which a touch position moves by the predetermined distance.

In step S615, the control unit 108 determines whether tracking of touch positions P and Q was successful. If tracking failed for one touch position and that touch position is no longer present, the location of the object displayed at touch position Q immediately before the tracking failure is set as the focus target (step S620), and the focus moving control ends.

In step S616, the control unit 108 determines whether the distance between touch positions P and Q exceeds the predetermined value R. If this distance is less than or equal to the predetermined value R, then in step S621 the location of the object displayed at touch position Q is set as the focus target, and the focus moving control ends.

In step S617, the control unit 108 updates the distance d to the value obtained by subtracting the predetermined value R from the distance between touch positions P and Q.

In step S618, the control unit 108 sets β to the focusing lens position that brings into focus the location of the object displayed at touch position Q. In step S619, (α−β)·d/D+β is substituted as the focusing lens position γ during the move. The process then returns to step S614.

In step S622, the control unit 108 exits the focus moving control mode, erases the frames and arrows displayed for the focus control, and returns to step S601.
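The tracking loop of steps S610-S621 above can be condensed into the following sketch. The sampling model (a list of tracked positions, with None modeling a tracking failure), the value of R, and all names are illustrative assumptions; the real control runs continuously on touch events:

```python
import math

R = 40.0  # predetermined value R (hypothetical)

def focus_moving_control(alpha, lens_position_for, touch_samples, p0, q0):
    """Condensed sketch of steps S610-S621.

    alpha: lens position focused on target A (step S611).
    lens_position_for(q): lens position beta focusing the object at q (step S618).
    touch_samples: sequence of tracked (P, Q) positions; P == None models
        a tracking failure (step S615).
    Returns (gammas, target): lens positions gamma computed per iteration
    (steps S612/S619) and the final focus target (steps S620/S621)."""
    D = math.dist(p0, q0) - R                # step S610
    gammas = [alpha]                         # step S612: initial gamma = alpha
    target = q0
    for p, q in touch_samples:               # step S614: track P and Q
        if p is None:                        # steps S615/S620: tracking failed
            break
        target = q
        if math.dist(p, q) <= R:             # steps S616/S621: targets merged
            break
        d = math.dist(p, q) - R              # step S617
        beta = lens_position_for(q)          # step S618
        gammas.append((alpha - beta) * d / D + beta)  # step S619
    return gammas, target                    # step S622 then exits the mode
```

With α = 2.0, a constant β of 5.0, and the P touch closing in on Q, the returned γ sequence moves monotonically from α toward β until the positions come within R of each other.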

As described above, according to this embodiment, the user can adjust the focus area of recorded video by a simple operation, so that the change occurs gradually while the recorded object is monitored. This makes it possible to capture images with intentional defocusing, which is an advantage of this embodiment.

[Second Embodiment]

A second embodiment of the invention is described below. The description of the camera configuration according to this embodiment is omitted because it is the same as in the first embodiment, except for the user operation for focus control and the control of the focusing lens position in accordance with that operation.

Two operation examples are described below, focusing on the differences from the first embodiment.

<First Operation Example>

A first example of the focus moving control operation of the camera 100 according to this embodiment is described with reference to Figs. 7A-7D. In the first operation example, the user performs an operation of gradually moving the focus area to a new focus target, as in the first embodiment.

Referring to Fig. 7A, a scene with two people, a man and a woman in a field of flowers, is shown as the captured object, as in Fig. 3A. A frame 701 indicates that the control unit 108 has selected the region of the flower field on the near side as the current first focus target. In this case, it is assumed that this region is actually in focus.

In Fig. 7B, the user touches two areas around the woman's face with the index finger and thumb, sufficiently far apart from each other, thereby designating two points on the image displayed on the display unit 110. Frames 703 and 704 are displayed at these touch positions. A frame 702, shown in an intermediate region between them so as to include the woman's face, indicates that the control unit 108 has determined that this region is the second focus target desired by the user.

Although frame 702 is shown at an intermediate position between frames 703 and 704 in the above description, the present invention is not limited to this. For example, face detection processing may be performed on the image displayed on the display unit 110. If a face is located between frames 703 and 704, frame 702 can be made to surround the face. If several faces are detected, the face nearest to the intermediate point between frames 703 and 704 may be surrounded.

In Fig. 7C, the user moves the touch positions designated by the index finger and thumb toward the woman's face. Frames 703 and 704 are displayed at the touch positions during the move, indicating that the control unit 108 is tracking the touch positions. Control is performed so that the focus moves smoothly in accordance with the rate of change of the distance between the touch positions. In addition, arrows 705 and 706 are displayed as guides from the touch positions designated by the index finger and thumb to frame 702, which indicates the second focus target.

Fig. 7D shows a display example when the distance between the touch positions designated by the user's index finger and thumb has become less than a predetermined value. This indicates that the control unit 108 has selected frame 702 as the new focus target and that the woman's face is in focus. The focus moving control is completed, and frames 701, 703 and 704 and arrows 705 and 706 are erased.

<Lens Position Control>

Next, with reference to Figs. 8A-9, the processing is described in which the focus targets are determined from the two points designated by the user, and the position of the focusing lens 102 is calculated during the moving control shown in Figs. 7A-7D.

Fig. 8A shows the processing when the user has designated two points on the screen with fingers sufficiently far apart, whereby the control unit 108 determines the next focus targets from these two points. Let A be the object included in the first focus target at the time of designation, and let P0 and Q0 be the two designated points. The control unit 108 determines that the location of the object displayed at the intermediate point B0 between points P0 and Q0 designates the next focus target. A predetermined value (radius R) is subtracted from the distance between points P0 and B0 to obtain a distance D, which is used for subsequent control of the position of the focusing lens 102.

Fig. 8B shows the processing when the user has moved the two designated points so as to reduce the distance between them, whereby the control unit 108 determines, based on the two designated points, the parameter used to calculate the position of the focusing lens 102. Starting from the two points P0 and Q0 designated first, the two designated points are tracked over a predetermined time period. Let P1 and Q1 be the two points designated at a given time. When the intermediate point B1 between points P1 and Q1 has moved relative to the first intermediate point B0, the control unit 108 sets the location of the object displayed at the intermediate point B1 as the next focus target. A predetermined value (radius R) is subtracted from the distance between points P1 and B1 to obtain a distance d, which is used together with the previously obtained distance D as a parameter for calculating the position of the focusing lens 102.
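The computation of B0, B1, D, and d described above can be sketched as follows. The value of R and the coordinate-tuple conventions are illustrative assumptions; note that, unlike the first embodiment, the distances are measured from a touch position to the intermediate point B rather than between the two touches:

```python
import math

R = 40.0  # predetermined value R (hypothetical)

def midpoint(p, q):
    """Intermediate point B between two touch positions."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def pinch_focus_parameters(p0, q0, p1, q1):
    """Second-embodiment parameters: the next focus target is the object
    at B1, and the interpolation distances are D = |P0 B0| - R and
    d = |P1 B1| - R."""
    b0 = midpoint(p0, q0)
    b1 = midpoint(p1, q1)
    D = math.dist(p0, b0) - R
    d = math.dist(p1, b1) - R
    return b1, D, d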

Fig. 9 shows the position of the focusing lens 102. The position on the optical axis where the focal distance is minimum is represented as min, and the position where the focal distance is infinite is represented as ∞. Let α be the focusing lens position at which the first focus target A is in focus before the focus moving control, β the focusing lens position at which the focus target after the move is in focus, and γ the position of the focusing lens 102 during the move. The control unit 108 controls the position γ of the focusing lens 102 according to the expression (α−β)·d/D+β.

<Second Operation Example>

Next, a second example of the focus moving control operation performed by the camera 100 according to this embodiment is described with reference to Figs. 10A-10D. In the second operation example, unlike the first example, the user performs an operation of gradually moving the focal point in order to intentionally view the video in a defocused state in which no object is in focus.

Fig. 10A shows, as the captured object, a scene with two people, a man and a woman, in a flowering field, as in the first operation example. Frame 1001 indicates the state in which the control unit 108 has selected the near portion of the flowering field, displayed in the frame, as the current first focus target.

In Fig. 10B, the user touches two areas around the woman's face with the index finger and thumb, thereby designating two points on the image displayed on the display unit 110. Frames 1003 and 1004 are displayed at these touch positions. Frame 1002, displayed in the intermediate region between them so that it includes the woman's face, indicates that the control unit 108 has determined that this region is the second focus target desired by the user. In this case, focus movement control is not performed to focus on the second focus target, because the user then performs a blurring operation.

In Fig. 10C, the user spreads the fingers apart to move the touch positions from the state in which the distance between the touch positions designated by the index finger and thumb is less than a predetermined value (see Fig. 10B), thereby increasing the distance between the touch positions. Frames 1003 and 1004 are displayed at the touch positions designated by the moving index finger and thumb, indicating that the control unit 108 is tracking the touch positions. Control is performed to smoothly set the focal point toward the defocus state as the distance between the touch positions increases. In addition, arrows 1005 and 1006 are displayed as guides from the touch positions designated by the index finger and thumb to the touch positions at which the video becomes fully blurred.

Fig. 10D shows a display example when the distance between the touch positions designated by the user's index finger and thumb is greater than or equal to a predetermined value. The frames disappear, indicating that the focus target of the control unit 108 no longer exists. At the same time, the focus movement control ends in the defocus state, in which the focusing lens 102 is moved to the position with the minimum focal length, and all frames and arrows are erased.

<Lens position control>

Next, with reference to Figs. 11A to 12, a description is given of the processing in which the control unit 108 selects the focus target from the two points designated in accordance with the user's operation, and calculates the position of the focusing lens 102 during the movement control shown in Figs. 10A to 10D.

Fig. 11A shows the processing in which the control unit 108 determines the next focus target from two points that the user has designated on the screen with the fingers (when the two points are close to each other). Let A be the focus target at the time of designation, and P0 and Q0 the two designated points. The control unit 108 determines that the location of the object displayed at the intermediate point B0 between points P0 and Q0 indicates the next focus target. A predetermined value (radius R) is subtracted from the distance between points P0 and B0 to obtain the distance D, which is used as a parameter for subsequent calculations of the position of the focusing lens 102. In this example, since D is smaller than the predetermined value M-R, the control unit 108 moves the focal point toward the defocus state when the distance between the two designated points becomes larger than in the initial state.

Fig. 11B shows the processing in which, after the user has moved the two designated points to increase the distance between them, the control unit 108 determines, on the basis of these two points, the parameter used for calculating the position of the focusing lens 102. Starting from the two points P0 and Q0 designated first, the two designated points are tracked over a predetermined time period. Let P1 and Q1 be the two points designated at a given time, and B1 the intermediate point between them. The predetermined value (radius R) is subtracted from the distance between points P1 and B1 to obtain the distance d, which is used as a parameter for calculating the position of the focusing lens 102 together with the previously obtained distance D. When the distance d reaches the predetermined value M, the defocus state is obtained, in which the focusing lens 102 is moved to the minimum focal length position.

Fig. 12 shows the position of the focusing lens 102. Let α be the position of the focusing lens focused on the first focus target A before the movement caused by the focus movement control, β the position of the focusing lens focused on the second focus target B after the movement, and γ the position of the focusing lens 102 during the movement. The control unit 108 calculates the position γ of the focusing lens 102 according to the expression (min-α)·(d-D)/(M-D)+α.
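The defocus expression (min-α)·(d-D)/(M-D)+α interpolates in the opposite direction: at d equal to D the lens stays at α, and as d grows toward M it reaches the minimum-focal-length position min. A minimal sketch, with illustrative position values:

```python
def lens_position_defocus(alpha, pos_min, d, D, M):
    """Lens position gamma while smoothly moving toward the defocus state.
    alpha   = lens position focused on the original target A,
    pos_min = lens position with the minimum focal length (min),
    d grows from D toward M as the user spreads the touch points."""
    return (pos_min - alpha) * (d - D) / (M - D) + alpha

assert lens_position_defocus(10.0, 0.0, 2.0, 2.0, 12.0) == 10.0   # d == D: stay at alpha
assert lens_position_defocus(10.0, 0.0, 7.0, 2.0, 12.0) == 5.0    # halfway toward defocus
assert lens_position_defocus(10.0, 0.0, 12.0, 2.0, 12.0) == 0.0   # d == M: fully defocused
```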

<Operation>

Next, with reference to Figs. 13A and 13B, the focus movement control according to the first and second operation examples is described.

Referring to Fig. 13A, steps S1301 to S1303 perform the same processing as steps S601 to S603 in Fig. 6A.

At step S1304, the control unit 108 sets, as the initial value of the distance D, the value obtained by subtracting the predetermined value R from the distance between the touch position P during movement and the intermediate position B between the touch position P and the other touch position Q, and substitutes D as the value d of this distance during the movement.

At step S1305, the control unit 108 determines whether the distance D is smaller than the predetermined value M. If the distance D is smaller than this predetermined value, the control unit 108 substitutes 1 as the value of the defocus flag F at step S1306. If the distance D is greater than or equal to the predetermined value, the control unit 108 substitutes 0 as the value of the defocus flag F at step S1307. This flag means that, when the distance between the two touch positions is smaller than the predetermined value, control is performed in subsequent processing to shift to the defocus state by moving the touch positions in the direction that increases the distance between them.

At step S1308, the control unit 108 sets α equal to the position of the focusing lens focused on the current first focus target A, and substitutes α as the initial value of the position γ of the focusing lens during the movement (step S1309).

At step S1310, the control unit 108 starts the focus movement control mode. In this mode, the display of frames for the first focus target A before the movement, the focus target B after the movement, and the touch positions P and Q during the movement, as well as the display of the guide arrows, is performed in parallel with the subsequent processing.

Referring to Fig. 13B, at step S1311 the control unit 108 tracks the positions P and Q designated by the user. Tracking is performed within a predetermined period, and the condition for successful tracking is that the change in distance from the previous touch position does not exceed a predetermined range. During tracking, the intermediate position B between the touch positions P and Q is also updated.

At step S1312, the control unit 108 determines whether tracking of the touch positions P and Q was successful. If tracking failed for one of the points and the touch position no longer exists, the location of the object displayed at the intermediate position B immediately before the tracking failure is set as the focus target (step S1313), and the focus movement control ends.

At step S1314, the control unit 108 determines whether the distance between the touch position P and the position B is within the predetermined value R. If the distance is within the predetermined range R, the location of the object displayed at the position B is set as the focus target (step S1315), and the focus movement control ends.

At step S1316, the control unit 108 determines whether the distance between the touch position P and the position B is smaller than the predetermined value R+M. If this distance falls outside the predetermined range R+M, the position γ of the focusing lens is set to min, the focusing lens position with the minimum focal length (step S1317), and the focus movement control ends. At step S1318, no focus target is set.
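Steps S1314 to S1317 bound the gesture: once the P-to-B distance falls within R the focus target is fixed, and once it reaches R+M the defocus end state is reached. A minimal sketch of these two termination checks, with illustrative names and threshold values:

```python
def movement_state(dist_p_b, R, M):
    """Classify the gesture from the distance between touch position P
    and intermediate position B (steps S1314 and S1316)."""
    if dist_p_b <= R:
        return "focused"    # S1315: object at B set as focus target, control ends
    if dist_p_b >= R + M:
        return "defocused"  # S1317: lens moved to minimum focal length, control ends
    return "moving"         # otherwise keep interpolating gamma

assert movement_state(15.0, R=20.0, M=100.0) == "focused"
assert movement_state(60.0, R=20.0, M=100.0) == "moving"
assert movement_state(130.0, R=20.0, M=100.0) == "defocused"
```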

At step S1319, the control unit 108 updates the distance d, obtained by subtracting the predetermined value R from the distance between the touch position P and the position B.

At step S1320, the control unit 108 determines whether the conditions d&lt;D and F=0 are satisfied. If these conditions are satisfied, processing associated with smoothly moving the focal point to the next focus target is executed. If these conditions are not satisfied, processing associated with smoothly moving the focal point toward the defocus state is executed.

If the conditions at step S1320 are satisfied, the control unit 108 sets β equal to the position of the focusing lens focused at the position B at step S1321. At step S1322, the value calculated according to the expression (α-β)·d/D+β is substituted as the value of the position γ of the focusing lens during the movement. The process then returns to step S1311.

On the other hand, if the conditions at step S1320 are not satisfied, the control unit 108 substitutes (min-α)·(d-D)/(M-D)+α as the value of the position γ of the focusing lens during the movement (step S1323). The process then returns to step S1311.
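The branch at step S1320 selects between the two interpolation expressions. A sketch of one update of γ, under the assumption that α, β, min, D, M, and the flag F have been set as in steps S1304 to S1309 (names are illustrative):

```python
def update_lens_position(alpha, beta, pos_min, d, D, M, flag_f):
    """One update of the lens position gamma (steps S1320 to S1323).
    flag_f == 1 means the touch points started closer than the
    predetermined value, so spreading them drives toward defocus."""
    if d < D and flag_f == 0:
        # Focal point moves smoothly toward the next target B (S1322).
        return (alpha - beta) * d / D + beta
    # Otherwise move smoothly toward the defocus state (S1323).
    return (pos_min - alpha) * (d - D) / (M - D) + alpha

# Closing the fingers (d shrinking below D) with F == 0 pulls focus toward beta:
assert update_lens_position(10.0, 2.0, 0.0, d=4.0, D=8.0, M=20.0, flag_f=0) == 6.0
# Spreading the fingers (d growing past D) with F == 1 drives gamma toward pos_min:
assert update_lens_position(10.0, 2.0, 0.0, d=14.0, D=8.0, M=20.0, flag_f=1) == 5.0
```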

At step S1324, the control unit 108 exits the focus movement control mode, erases the frames and arrows displayed for the focus movement control, and returns to step S1301.

As described above, according to this embodiment, video can be deliberately presented in the defocus state in which no object is in focus. In addition, when the focus area is designated on the display screen using an operating element such as a touch panel, the user can designate a desired focus area without temporarily shielding the target object with a finger or the like, and thus without missing the object's movement or facial expression.

Other embodiments

Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method whose steps are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer, for example, via a network or from a recording medium of various types serving as the memory device (for example, a computer-readable medium). In this case, the system or apparatus, and the recording medium storing the program, are included within the scope of the present invention.

Although the present invention has been described with reference to exemplary embodiments, it should be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

1. An image capturing device comprising:
an image capturing unit configured to capture an image of an object through an optical system;
a display unit configured to display the image captured by said image capturing unit on a screen;
a determination unit configured to simultaneously determine a plurality of touch positions on the screen on which the image is displayed; and
a control unit configured to control smooth adjustment of the focusing state in accordance with a change in the distance between a first determined touch position and a second determined touch position in order to change the focus area.

2. The device according to claim 1, wherein, in a case where said determination unit determines the first touch position and the second touch position and then determines that the first touch position has moved closer to the second touch position, said control unit changes the focus area so as to move a first focus area corresponding to the first touch position closer to a second focus area corresponding to the second touch position.

3. The device according to claim 2, wherein said control unit immediately changes the first focus area to the second focus area if the distance is less than a predetermined value.

4. The device according to claim 1, wherein, in a case where said determination unit determines the first touch position and the second touch position and then determines that the first touch position and the second touch position have moved closer to each other, said control unit changes the focus area to a focus area corresponding to the location of the object displayed between the first touch position and the second touch position.

5. The device according to claim 1, wherein, when said determination unit determines that the first touch position and the second touch position have moved farther apart from each other, said control unit changes the focus area so as not to focus on the focus area corresponding to the location of the object displayed between the first touch position and the second touch position.

6. The device according to claim 1, wherein said control unit is configured to control the optical system so as to move the focus area in accordance with the speed at which the distance between the first touch position and the second touch position changes.

7. A method of controlling an image capturing device that comprises an image capturing unit configured to capture an image of an object through an optical system, and a display unit configured to display the image captured by the image capturing unit on a screen, the method comprising:
a determination step of simultaneously determining a plurality of touch positions on the screen on which the image is displayed; and
a control step of controlling smooth adjustment of the focusing state in accordance with a change in the distance between a first determined touch position and a second determined touch position in order to change the focus area.

8. The method according to claim 7, wherein, in the determination step, the first touch position and the second touch position are determined, and it is then determined that the first touch position has moved closer to the second touch position; and, in the control step, the focus area is changed so as to move a first focus area corresponding to the first touch position closer to a second focus area corresponding to the second touch position.

9. The method according to claim 8, wherein, in the control step, the first focus area is changed to the second focus area if the distance is less than a predetermined value.

10. The method according to claim 7, wherein, in the determination step, the first touch position and the second touch position are determined, and it is then determined that the first touch position and the second touch position have moved closer to each other; and, in the control step, the focus area is changed to a focus area corresponding to the location of the object displayed between the first touch position and the second touch position.

11. The method according to claim 7, wherein, in the determination step, it is determined that the first touch position and the second touch position have moved farther apart from each other; and, in the control step, the focus area is changed so as not to focus on the focus area corresponding to the location of the object displayed between the first touch position and the second touch position.

12. The method according to claim 7, wherein, in the control step, the optical system is controlled so as to move the focus area in accordance with the speed at which the distance between the first touch position and the second touch position changes.

13. A machine-readable storage medium storing a program that causes a computer to execute the control method according to claim 7.



 

Same patents:

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to computer engineering. An image processing device for detecting, from image data generated by an image sensor formed by a plurality of pixels, a signal from a defective pixel of the image sensor comprises a first determination unit for obtaining a first determination value indicating the value of the difference in signal strength between a pixel of interest and a plurality of pixels located near the pixel of interest; a second determination unit for obtaining a second determination value indicating the distribution width of the difference in signal strength between the pixel of interest and the plurality of pixels located near the pixel of interest; and a detection unit for detecting if the signal from the pixel of interest is a signal from a detective pixel using the first determination value and the second determination value, wherein the first determination unit obtains the first determination value by obtaining the difference in signal strength between the pixel of interest and each of the plurality of pixels located near the pixel of interest, obtaining from each difference value indicating the probability that the signal from the pixel of interest is a signal from a defective pixel, and multiplying the obtained values.

EFFECT: high accuracy of detecting a defective pixel.

11 cl, 22 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to an image forming apparatus. The result is achieved due to that the image forming apparatus includes a control unit and a detector which includes a plurality of pixels and which performs an image capturing operation for outputting image data corresponding to emitted radiation or light. The image capturing operation includes a first image capturing operation in which the detector is scanned in a first scanning region which corresponds to part of the plurality of pixels to output image data in the first scanning region, and a second image capturing operation in which the detector is canned in a second scanning region larger than the first scanning region to output image data in the second scanning region. The control unit prompts the detector to perform an initialisation operation for initialising a conversion element during a period between the first image capturing operation and the second image capturing operation in accordance with the switch from the first scanning region to the second scanning region.

EFFECT: design of a device capable of reducing the difference in level which might arise in a captured image and which depends on the scanning region to prevent considerable deterioration of image quality.

9 cl, 8 dwg

FIELD: physics, computer engineering.

SUBSTANCE: group of inventions relates to image processing technologies. An image processing device for reconstruction processing for correcting image quality deterioration due to aberration in an optical image-forming system. The image processing device comprises a dividing means for dividing image data of colours of colour filters into image data of corresponding colours of colour filters. The device also includes a plurality of image processing means, each designed to perform reconstruction processing by processing using an image data filter of one of the corresponding colours divided by said dividing means.

EFFECT: fewer false colours through image reconstruction processing in a RAW image, as well as reduced load on image reconstruction processing.

10 cl, 33 dwg

FIELD: physics.

SUBSTANCE: apparatus for adjusting a magnetooptical system for forming a beam of protons consists of a pulsed electromagnet which is formed by a pair or a system of pairs of thin conductors directed along the axis of a proton graphic channel spread in a transverse plane. A scaling array of metal plates mounted in a frame is placed at the output of the electromagnet. The method of adjusting a magnetic system for forming a beam of protons and a method of matching magnetic induction of an imaging system involve generating a magnetic field, through which the beam of protons is passed, the direction of said beam through the imaging system to a recording system by which the image of the scaling array is formed. Upon obtaining a distorted image, the magnetic beam forming system is adjusted and magnetic induction of the magnetooptical imaging system is adjusted by varying current of lenses of said systems and retransmitting the beam of protons until the required images are formed.

EFFECT: high quality of adjustment.

4 cl, 14 dwg

FIELD: radio engineering, communication.

SUBSTANCE: user sets, in a photograph display device 370B, the fact that a physical address 2000 represents a recording device which controls 370B display of photographs in place of the physical address 2000. According to that setting, the photograph display device 370B defines a logic address as a recording device controlled by consumer electronics control (CEC) devices. When the user performs operations with the recording device 210B on a disc, which is a CEC-incompatible device, using a remote control transmitter 277, a television receiver 250B generates a CEC control command addressed to the disc recording device 210B. The photograph display device 370B detects a CEC control command, converts the CEC control command to an infrared remote control command and transmits the infrared remote control command from the infrared transmission module 384 to the disc recording device 210B.

EFFECT: controlling operations of a controlled device, which processes only a control signal in a second format based on a control signal in a first format.

11 cl, 31 dwg

FIELD: physics.

SUBSTANCE: disclosed apparatus includes a means (100) for providing an aerosol designed to generate an aerosol stream (108) with average particle diameter of the disperse phase of less than 10 mcm in a screen formation area, a means (200) of providing a protective air stream designed to generate a protective air stream (210, 211) on two sides of the aerosol stream (108), wherein the aerosol stream (108) and the protective air stream (210, 211) have a non-laminar, locally turbulent flow near an obstacle on the flow path, wherein the Reynolds number for said streams near outlet openings (134, 215, 216) is in the range from 1300 to 3900.

EFFECT: improved method.

17 cl, 9 dwg

FIELD: physics.

SUBSTANCE: image forming process includes a first image forming process for outputting image data in accordance with illumination of a detector with radiation or light in an illumination field A, which corresponds to part of a plurality of pixels, and a second image forming process for outputting image data in accordance with illumination of a detector 104 with radiation or light in an illumination field B which is wider than the illumination field A. In accordance with transfer from illumination in the illumination field A to illumination in the illumination field B, operation of the detector is controlled such that the detector performs an initiation process for initiating conversion elements during the period between the first and second image forming processes.

EFFECT: weaker ghost image effect which can appear in an image resulting from FPD operation, and which is caused by the illumination region, and preventing considerable drop in image quality without complex image processing.

7 cl, 21 dwg

FIELD: physics.

SUBSTANCE: computer has a video card; in the prototype television camera, the first television signal sensor is based on a charge-coupled device (CCD) matrix with "row-frame transfer" and the use of an additional pulse former for clock power supply of the photodetector provides summation in the memory section of charge signals accumulated in its photodetector section. As a result, sensitivity is levelled on the entire field of the composite image.

EFFECT: high signal-to-noise ratio at the output of the CCD matrix of the first television signal sensor owing to summation, in its memory section, of charge packets formed in the photodetector section.

4 cl, 12 dwg, 3 tbl

FIELD: radio engineering, communication.

SUBSTANCE: invention provides an optical-electronic system which enables to measure density of fluorescence radiance in the UV spectral range, arising from ionisation of atmospheric nitrogen, and converting the obtained information to a visual image of distribution of levels of radioactive contamination on the underlying surface.

EFFECT: faster aerial radiological survey of an area due to shorter flight time of the aircraft and high reliability of instrument measurement data.

3 cl, 2 dwg

FIELD: radio engineering, communication.

SUBSTANCE: apparatus for obtaining location information is configured to wirelessly transmit location information obtained by a means of obtaining location information in response to a location information request from an image obtaining device, wherein a determining means is configured to determine whether the means of obtaining location information transmits location information, and a signal transmission means is configured to wirelessly transmit a signal to the image obtaining device if the determining means determines that the means of obtaining location information does not transmit location information, wherein the signal prohibits the location information request.

EFFECT: facilitating wireless communication with an image obtaining device.

14 cl, 5 dwg

FIELD: radio engineering.

SUBSTANCE: for authentication of data exchange, identifier of decoder receiver is used, based on number of smart-card and gate is used, made with possible receipt of data from receiver-decoder, which are sent via non-network protocol and transformation of these data to inter-network protocol for their following transfer to internet service provider.

EFFECT: higher efficiency.

2 cl, 10 dwg

FIELD: communication systems.

SUBSTANCE: transfer system for transfer of transport flow of MPEG-2 standard from transmitter 10 to receiver 14, having check channel 16, along which receiver 14 can transfer selection criterion for selection of information blocks of MPEG-2 standard to transmitter 10, transmitter 10 has selector 38, which receivers required criteria and then filters information blocks in accordance to these criteria prior to transfer.

EFFECT: higher efficiency.

4 cl, 3 dwg

FIELD: technology for broadcast transmissions of digital television, relayed together with multimedia applications.

SUBSTANCE: method includes transmission of digital signal, having additional data flow, appropriate for compressed video images and data flow, appropriate for at least multimedia application, and also service signals, meant for controlling aforementioned data flows, service signals is determined, appropriate for series of synchronization signal, including series, meant for assignment of multimedia signal, meant for recording execution parameters of aforementioned assigned multimedia application, after that multimedia application is loaded and multimedia application is initialized with aforementioned execution parameters.

EFFECT: possible interactive operation of multimedia application with user.

2 cl, 2 dwg

FIELD: engineering of receivers-decoders used in broadcasting systems such as television broadcasting system, radio broadcasting system, cell phone communication system or other similar systems.

SUBSTANCE: method includes performing transmission to receiver-decoders through broadcasting system of a command, ordering receiver-decoders to perform an action; when command is receiver, identifier of command stored in current command is compared to identifiers of commands stored in memory of current decoder-receiver; aforementioned action is only executed in case when command identifier is not stored in memory.

EFFECT: transmitted commands are only executed once and information related to each receiver-decoder malfunction and may be useful for detecting and repairing malfunction is extracted remotely.

2 cl, 10 dwg

Personal computer // 2279708

FIELD: engineering of computer hardware.

SUBSTANCE: personal computer contains system block including system board, processor, random-access memory, hard drive with controller, video card, sound card, disk drive, monitor, keyboard and mouse. Personal computer additionally contains digital monitor, television card and compression block.

EFFECT: expanded functional capabilities, possible receipt by personal computer of digital television broadcast signal, reproduction of aforementioned signal without loss of quality on full screen of display with recording in compressed form onto hard drive of computer.

22 dwg, 1 tbl

FIELD: medicine; medical diagnostic technique.

SUBSTANCE: topographometric system with topography function contains X-ray emitter, profondometer, rotational console, mechanism for moving of the profondometer along console, device for detection of X-ray radiation as a detectors scale, moving mechanism for detection device, table for the patient with radiotransparent deck, stand with rotation drive for console, data transmitter to the electronic computer, analog-digital converters and management system. Profondometer scanning contains flat-field collimator graving from the opening of operating beam narrow fan-shaped beam. It also has drive which provides moving synchrony of the beam and device for X-ray radiation detection containing crate with detectors scale. It is mounted on the movable carriage of the mechanism for moving of the detecting device with the ability of rocking about the axis and fitted with driver, providing the orientation of the scale on the focus point in any position of the carriage. Detectors scale radius in the central position is equal to the distance from the focus point to the central scale in the tomography mode. Mechanism for the moving of detectors device is switched in the synchronization device, which is made with the capability of crate orientation according the position of flat-field collimator so that the detectors scale is located in the flat of fan-shaped beam and crate aperture is oriented on the focus point of the X-ray tube.

EFFECT: development of the sphere of potential focal disease detecting among the oncology patients at one testing.

3 cl, 6 dwg

FIELD: physics, radio.

SUBSTANCE: stated invention is related to the field of radio engineering, in particular, to elements of television systems, and may be used in observing systems used under low temperature conditions. Container of video camera contains external and internal casings, holder that interacts with external surface of internal casing and internal surface of external casing in closed circuit. In cross section shapes of external casing internal surface and internal casing external surface are similar.

EFFECT: provision of normal temperature conditions for operation of video camera with its invariable position in relation to container external casing.

8 cl, 3 dwg

FIELD: medical equipment.

SUBSTANCE: medical diagnostic X-ray system contains X-ray emitter with flat-field collimator connected to high-frequency X-ray generator and programmable control unit with PC, control panel and video monitor, multielement linear or matrix X-ray detector connected to digital electronic system of image translation, recording and pattern generation connected to programmable control unit, the mechanical scanner with irradiation scanner input system and protective cabin with platform for patient's feet. Irradiation scanner input system contains coordinate measuring apparatus connected to scanner. The platform for patient's feet is steadily rotates by mechanical drive round vertical axis lying through central ray of X-ray beam, perpendicularly to X-ray beam plane. Coordinate measuring apparatus and a mechanical drive of platform are connected to programmable control unit to generate transverse tomographic section. Rotation axis of platform for patient's feet is displaced relatively to central X-ray along the line, perpendicular to central ray direction by value S≤R/2, where R=1sinα, L is detector length; 1 is distance from focus of X-ray tube to cross point of displacement line of rotation axis of platform for patient's feet and central ray projection to plane of this platform, f is focal length.

EFFECT: higher diagnostic accuracy and broader operational capabilities of the device owing to an improved scanning system and more accurate orientation of the patient relative to it.

2 dwg
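The displacement bound above can be checked numerically. The sketch below assumes the repaired reading R = l·sinα (the original text garbles the formula, so both the function name and this reading are illustrative, not the patented relation):

```python
import math

def max_axis_displacement(l_dist, alpha_deg):
    """Upper bound S <= R/2 on the rotation-axis offset, assuming
    R = l * sin(alpha): l_dist is the focus-to-platform distance l,
    alpha_deg the beam angle in degrees. Illustrative reading only."""
    r = l_dist * math.sin(math.radians(alpha_deg))
    return r / 2

# e.g. for l = 1000 mm and alpha = 30 degrees, S must not exceed 250 mm
print(max_axis_displacement(1000.0, 30.0))
```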

FIELD: medical equipment.

SUBSTANCE: the scanning low-dose radiographic unit comprises a mechanical scanning system on which the following components are installed in series: an X-ray radiator, slit collimators and an X-ray receiver comprising multiline solid-state linear detectors installed across the width of the scanning zone. The multiline solid-state linear detectors of the X-ray receiver are arranged in n parallel rows, shifted relative to one another within each row. The device comprises at least two slit collimators spaced along the direction of the X-ray beam and arranged as n series; their slits are oriented toward the focus of the X-ray radiator, while their projections are aligned with the receiving apertures of the multiline solid-state linear detectors.

EFFECT: improved quality of X-ray images.

5 cl, 5 dwg
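The staggered n-row detector arrangement described above can be sketched as a simple coordinate layout. This is a generic illustration of a staggered array (row count, pitch and shift rule are assumed parameters, not taken from the patent claims):

```python
def staggered_positions(n_rows, per_row, det_len, row_pitch):
    """Return (row_offset, along_scan) coordinates for detectors laid out
    in n parallel rows, each row shifted by det_len / n_rows relative to
    the previous one so that inter-detector gaps in one row are covered
    by the next row. Illustrative staggered-array scheme only."""
    positions = []
    for r in range(n_rows):
        shift = r * det_len / n_rows  # per-row stagger along the scan width
        for d in range(per_row):
            positions.append((r * row_pitch, shift + d * det_len))
    return positions

# Two rows of three detectors, 10 mm long, rows 5 mm apart:
layout = staggered_positions(2, 3, 10.0, 5.0)
```

With these assumed numbers the second row is shifted by 5 mm, so each gap boundary in the first row falls mid-detector in the second.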

FIELD: medicine.

SUBSTANCE: the equipment contains an X-ray emitter with a slit collimator connected to a high-frequency X-ray generator and a programmable control unit equipped with a computer, a control panel and a video monitor; a multielement linear or matrix X-ray detector connected to a digital electronic system for image conversion, recording and formation, connected to the programmable control unit; a mechanical scanner with a system for bringing the scanner to the irradiation level; and a protective cabin with a platform for the patient's feet. The system for bringing the scanner to the irradiation level contains a coordinate scale connected to the scanner's electric motor. The platform for the patient's feet rotates uniformly, by means of a mechanical drive, about a vertical axis passing through the central ray of the X-ray beam, perpendicular to the plane of the X-ray beam. The coordinate scale and the mechanical drive of the platform are connected to the programmable control unit, which is capable of forming a transverse tomographic section. The platform for the patient's feet is provided with fixators of the patient's position.

EFFECT: higher diagnostic accuracy and broader operational capabilities of the apparatus owing to an improved scanning system and more accurate orientation of the patient relative to it.

2 cl, 2 dwg