Environment monitoring method and system

FIELD: information technology.

SUBSTANCE: method includes a setup step where an operator creates a model of the environment by determining a number of cells corresponding to regions of said environment, and then creates "cell/sensor" connections by defining for each sensor at least one possible position associated with at least one cell. For each position, the operator assigns the sensor a control estimate for the associated cell. The method also includes a working step where a control system, in order to execute a surveillance function, finds those sensors that can be used to execute the requested surveillance function and controls said sensors based on the control estimates and the "cell/sensor" connections.

EFFECT: increased reliability of surveillance systems, particularly video surveillance systems, by providing dynamic adaptation each time a sensor cannot be used by the system.

9 cl, 6 dwg

 

The present invention relates to a method for monitoring an environment according to the preamble of claim 1.

The invention also relates to a system adapted to implement the aforementioned monitoring method.

In many places, from public buildings to private companies, there is a need to monitor the environment in order to detect any abnormal events, such as theft, vandalism, aggression, robbery and, in general, any event that may endanger people and property.

For this purpose, surveillance systems are commonly used in which multiple sensors, such as video capture devices (e.g. cameras) or motion sensors (e.g. infrared or volumetric sensors), control different areas of the environment under consideration.

The information signals sent by the sensors (for example, video footage from a camera or an alarm from a sensor) are received by a control station attended by an observer.

Known monitoring systems require a setup stage which is focused on the sensors.

During this setup stage, a system model is created which determines, for each sensor, the area it controls and the related sensors, i.e. the sensors that control the border of said area.

Thus, during a tracking procedure the system will follow the tracked subject by switching from one sensor to the adjacent sensor designated during setup.

In advanced video surveillance systems, such as the systems disclosed in patent applications US 2003/0085992 and WO 2005/120071, this kind of configuration provides that, once a certain area of the environment has been selected by the observer, the system automatically activates the video camera associated with that area.

However, all known monitoring systems suffer from the disadvantage that, if a sensor is defective, the system cannot automatically compensate for the absence of that sensor.

For example, during the execution of a tracking algorithm, the subject will be lost and the algorithm will stop as soon as the system switches to the faulty camera.

Another problem of known monitoring systems occurs when two users of the system (for example, two observers using two different control stations) want to observe the same region of the controlled environment. Although the signal can be sent to both control stations, control of the camera can be granted to only one user (who, as a rule, is selected according to priority criteria). In this case, the second user can only choose another camera and operate it manually.

In large systems with multiple sensors and video cameras, finding a suitable video camera can be a difficult task.

Another disadvantage of known surveillance systems is the complexity of the setup stage, during which the installer of the equipment must specify in the system which sensor controls a certain area of the environment and which related sensors cover the border of the controlled area. An example of this procedure is known from patent application US 2003/0085992, according to which, for each scene recorded by the video equipment, the installer must enter into the system the name of the camera shooting it and build a table indicating which camera covers the northern border, which camera covers the southern border, and so on.

Such a configuration is disadvantageous and difficult to implement in large systems containing many sensors.

To overcome this drawback, patent US 6,437,819 discloses a setup procedure that does not require the names of the sensors and of the related sensors to be entered manually (which could lead to errors).

However, the setup procedure described in patent US 6,437,819 has the disadvantage that it requires the presence of the equipment installer, who must move along an appropriate path within the controlled area. This is impracticable wherever the area to be monitored is very wide (e.g. parking lots, airport terminals). In addition, the above procedure is not applicable in cases where some areas are not accessible to the equipment installers, for example when upgrading old surveillance systems in prisons or in facilities that handle hazardous substances.

The object of the present invention is to overcome the problems of known surveillance systems.

In particular, the object of the present invention is to improve the reliability of surveillance systems, in particular video surveillance systems, by providing dynamic adaptation of the system whenever a sensor cannot be used by the system.

This problem is solved by a monitoring method and a surveillance system comprising the features set forth in the appended claims, which are intended to be an integral part of the present description.

The present invention is based on the idea of performing a setup stage focused on the environment rather than on the sensors.

During the setup stage according to the invention, the installer of the equipment first creates a model of the environment by determining a number of areas of said environment, e.g. parking lot, main entrance, etc.

The installer of the equipment then associates with each area at least one sensor in one or more of its possible positions (preset positions), and assigns a control estimate for the area to each position.

Preferably, the control estimate corresponds to an estimate of the probability of detecting events in the associated area by means of the sensor in question; at the working stage, this allows an intelligent selection of the sensors for monitoring the environment. For video cameras, the control estimate essentially corresponds to an estimate of the view of the area in question from the given preset position.

During the working stage, the control system of the monitoring system finds, among all the installed sensors, those sensors that can be used (for example, those sensors that are not defective and not in use by other users) in order to perform the monitoring function.

Once the usable sensors have been determined, the control system controls one or more of them by referring to the model created during the system setup stage and to the control estimates assigned by the operator to the different preset positions of the sensors.

This makes possible a dynamic association between the areas to be controlled and the sensors: if a sensor watching the area of interest is not ready for use (because it is defective or already in use by another user), then that sensor will not be taken into account, and the control system will select a second sensor in the configuration that is most suitable for monitoring the area of interest.

In addition, if a sensor is defective, the system reorganizes the remaining sensors, positioning them in such a way as to optimize the ability to detect events within the environment by means of the ready-to-use sensors. More specifically, this is made possible by the fact that the choice of a sensor depends not only on its association with the area of interest, but also on the control estimates, which, as mentioned above, represent the probability of event detection by that sensor.

In particular, the control estimates can be used to select a sensor according to a heuristic method which, for example, assigns the task of controlling a cell to the ready-to-use sensor with the highest control estimate.

These solutions reduce the computational complexity of the algorithms used to control the sensors.
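The heuristic just described can be sketched as follows. This is a minimal illustration, not the patented implementation: the data layout (a dictionary mapping each sensor/preset pair to the cells it watches with their control estimates) and all identifiers are assumptions made for the example.

```python
def pick_best_preset(cell, estimates, ready_sensors):
    """Return the (sensor, preset) watching `cell` with the highest
    control estimate among the ready-to-use sensors."""
    best, best_score = None, 0.0
    for (sensor, preset), cells in estimates.items():
        if sensor in ready_sensors and cells.get(cell, 0.0) > best_score:
            best, best_score = (sensor, preset), cells[cell]
    return best, best_score

# Hypothetical control estimates: (sensor, preset) -> {cell: estimate}
estimates = {
    ("S1", "p1"): {"C1": 0.9},
    ("S2", "p1"): {"C1": 0.7, "C2": 0.6},
}

# With every sensor ready, S1 gives the best view of cell C1 ...
assert pick_best_preset("C1", estimates, {"S1", "S2"}) == (("S1", "p1"), 0.9)
# ... but if S1 is faulty or busy, the task falls back to S2.
assert pick_best_preset("C1", estimates, {"S2"}) == (("S2", "p1"), 0.7)
```

The fallback in the second assertion mirrors the dynamic adaptation described above: an unavailable sensor is simply excluded from the candidate set before the best estimate is chosen.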

Alternatively, the method according to the invention may provide for adjusting the sensors in such a way as to maximize the probability of detecting events, given the area to be monitored and the constraints imposed by the ready-to-use sensors.

This allows the control efficiency to be optimized.

In addition, the monitoring system preferably has a distributed architecture with multiple computers that communicate with each other over a network, each computer being associated with a set of sensors.

This (statistically) reduces the number of algorithms that each computer must execute simultaneously, for example in order to manage multiple tasks requested through the various sensors controlled by one computer.

In a preferred solution, the computers are built into the sensors, which are therefore intelligent and able to communicate with each other in order to automatically choose the optimal configuration for executing the tasks requested by the operator.

Further objects and advantages of the invention will become more apparent from the following description and from the annexed drawings, which are given by way of non-limiting example, wherein:

Fig. 1 shows a surveillance system according to the present invention;

Figs. 2a, 2b and 2c show a first model of the environment used by the system to activate the cameras;

Figs. 3a and 3b show a second model of the environment used by the system to activate the cameras.

The architecture of the surveillance system

In Fig. 1, reference numeral (1) designates, as a whole, the environment being monitored. Environment (1) consists of a building (100) and an open-air courtyard (200).

Within environment (1), the following areas of special interest have been identified:

- the building entrance: area (C1);

- parking place (P1): area (C2);

- parking place (P2): area (C3);

- the entrance gate: area (C4).

Areas (C1)-(C4) are controlled by four video cameras (S1)-(S4) and one volumetric sensor (S5), located within the environment to be monitored.

The number of areas into which the environment is divided and the number of sensors (video cameras and other sensors) are not limiting factors of the invention, but depend on the choices of the equipment installer.

Video cameras (S1)-(S4) may be fixed or movable, in particular of the PTZ (pan/tilt/zoom) type.

In the example shown in Fig. 1, cameras (S1) and (S3) are fixed video cameras and shoot only the respective areas (C1) and (C3), whereas cameras (S2) and (S4) are PTZ cameras capable of shooting areas (C1)-(C2) and (C3)-(C4), respectively.

The volumetric sensor (S5) is a fixed sensor and sends an alarm when it detects movement inside the building (100), for example because the entrance door of area (C1) has been opened.

The sensors are connected to a data transmission network (2), through which they exchange signals with the control station (3); for example, video cameras (S1)-(S4) send the images they have taken.

The data network (2) may be a wired LAN (local area network), but of course said data network may also contain wireless elements, which can facilitate the installation of the cameras.

The control station (3) contains a computer (31), which receives the images from the video cameras and displays them on suitable visualization means (32) adapted to display a set of images (33) simultaneously.

Preferably, said visualization means contain multiple screens or a single screen displaying multiple images next to one another (a solution known as multiplexing).

The control station (3) also contains other components which are known per se, such as a keyboard (34), a mouse (35) and a joystick (36) used by the observer to control the PTZ cameras by changing their pan, tilt and zoom angles.

Like the visualization means, these components are connected to the computer (31), which is equipped with appropriate interfaces, such as an interface for the joystick (36), interfaces for sending the images taken by cameras (S1)-(S4) to the visualization means (32), and a network interface through which control data is sent to the cameras.

The user interface, along with the keyboard, joystick and mouse, allows the user to select and control the cameras, thus activating the transfer of images from one or more cameras to the visualization means (32).

When one of the areas (C1)-(C4) has been selected, e.g. parking place (P2), the image is transmitted by the video camera, e.g. (S2), which is automatically associated with the selected area according to preset criteria, described below, which maximize the probability of detecting an abnormal event within the area of interest.

Details about the interface that allows a video camera to be selected will be provided later in a dedicated section.

In the example shown in Fig. 1, the system architecture is of the centralized type, in which a single computer (31) executes the calculation algorithms required to control the sensors and to perform the monitoring functions, such as tracking subjects moving within the environment, video recording, and automatic event detection. Other solutions are also possible; for example, the monitoring system may be provided in the form of a distributed system, such as the one described later in the section titled "Variant with a control system having a distributed architecture".

System configuration

According to the invention, during the setup stage the monitoring system creates a model of the environment by identifying the areas of interest (C1)-(C4) of the environment and their adjacencies.

In the following description, the areas of this model will be referred to as "cells" in order to avoid any confusion with the physical areas of the environment.

Next, each cell (C1)-(C4) is associated with one or more sensors capable of monitoring at least the area corresponding to the cell.

In particular, since a PTZ camera can assume many positions, each cell is associated not simply with a camera, but with a camera in a certain given position (referred to as a "preset").

Of course, for fixed cameras or sensors there is only one preset position, which is specified when the sensor is installed.

During the setup stage, the sensors are activated sequentially.

For each movable camera or sensor, the operator determines the preset positions of the movable sensor that best meet the operator's surveillance needs.

For each preset position, the operator associates the sensor with an area of the environment by selecting it, and assigns a control estimate (e.g. expressed as a value between 0 and 1) representing the quality of the image.

The control estimates correspond to the probability of detecting, by means of said sensor, events within the cell with which the sensor has been associated in a given preset position.

At the working stage, this allows the sensors to be controlled by placing them in those preset positions that maximize the ability to control the environment.

The examples shown in Figs. 2a-2c and 3a-3b illustrate two different models of the environment, which are used by the user interface to display the environment to be monitored to the equipment installer and to allow him to select the areas.

According to the example shown in Figs. 2a, 2b and 2c, the program provides a user interface that allows the list of cells and their connections to be defined through a graphical interface, by building a graph in which the cells are represented as nodes connected by arcs.

Preferably, the spatial locations of the nodes are free, so that during the setup stage the operator can position them in such a way as to reflect the geographic locations of the areas to be controlled; this enables the observer to immediately find the area to be monitored during the working stage.

According to an alternative solution, which is less complex from a computational point of view but slightly less convenient for the user, the areas and their connections can be identified in text mode.

Referring again to the example shown in Figs. 2a-2c, at the setup stage the operator creates the model of the environment and then sequentially activates the sensors (e.g. the video cameras). For each sensor, the operator checks the controlled area (for example, he views the image received from the camera on the screen) and creates a connection with the areas of the model, assigning the control estimate values.

The sensor/area connection can be made by drawing a link between the icon representing the sensor and the block representing the bordered area.

Thus, a "cells/sensors" graph is obtained, similar to the graph shown in Fig. 2c.

The preset positions of the sensors and the control estimates are stored by the software in a database of the control system.

In the example shown in Figs. 3a-3b, the environment model consists of a map of the area to be controlled.

The map, shown in Fig. 3a, can be drawn by the operator electronically using conventional graphics programs, or it may be an imported graphic file.

In the solution shown in Figs. 3a-3b, during the setup stage the operator selects a sensor and, for each preset position, marks the controlled area on the map, as shown in Fig. 3b, and associates a control estimate with it, for example by typing the value on the keyboard (34) or by selecting the displayed value with the mouse (35).

Once the set of pixels corresponding to the area controlled by the sensor has been selected, the software will automatically create a cell of the model and the corresponding adjacencies.

In one implementation variant of the invention, each selected set of pixels corresponds to one cell of the model.

Neighboring sets of pixels become adjacent cells.

In a preferred embodiment of the invention, sets of pixels that overlap each other sufficiently are grouped into a single cell.

The criterion used to decide whether or not two sets of pixels are to be grouped into a single cell is determined at the programming stage: if two sets of pixels merely touch, it may in fact be preferable to keep the two cells separate.

The geometric information associated with each cell (area, shape) and the information about the transition directions between cells are extracted automatically.

Barriers can be included in the map and will be taken into account when calculating the adjacencies; for example, two non-communicating areas are not adjacent to each other.

After the model has been built, the user can add further information (cell labels, mobility of an area, time-dependent barriers, orientation of the arcs, and so on).

When setup is finished, the control system builds the "cells/sensors" graph, similar to the graph shown in Fig. 2c.
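A minimal sketch of how the result of the setup stage might be stored follows: cell adjacencies on one side, and arcs from each sensor/preset pair to the cells it watches, carrying the control estimates. The data layout and all identifiers are illustrative assumptions, not the patent's actual storage format.

```python
# Cell adjacencies of the environment model (hypothetical layout).
adjacency = {
    "C1": {"C2"},
    "C2": {"C1", "C3"},
    "C3": {"C2", "C4"},
    "C4": {"C3"},
}

# Arcs of the "cells/sensors" graph: (sensor, preset) -> {cell: estimate}.
arcs = {
    ("S1", "x11"): {"C1": 1.0},   # fixed camera: a single preset
    ("S2", "x21"): {"C1": 0.7},   # PTZ camera: one arc per preset
    ("S2", "x22"): {"C2": 0.8},
}

def sensors_watching(cell):
    """All (sensor, preset) pairs whose arc reaches `cell`."""
    return {sp for sp, cells in arcs.items() if cell in cells}

assert sensors_watching("C1") == {("S1", "x11"), ("S2", "x21")}
```

Keeping one arc per preset, rather than per sensor, reflects the point made above: a PTZ camera is associated with a cell only in a given position.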

Algorithm for optimal coverage

After the setup stage, the control system is ready to begin the working stage of monitoring the environment.

Given an area of interest and the set of sensors, the aim is to control said area by choosing the most appropriate sensors and controlling them in the most appropriate manner.

This translates into maximizing the probability of detecting an abnormal event occurring within a given area of concern.

In particular, suppose that the area of interest is a set consisting of N cells of the environment model built during the setup stage.

Ci indicates an abnormal event occurring in the cell i.

The surveillance system has M sensors, where xj indicates the position of sensor j.

In particular, for PTZ (pan/tilt/zoom) sensors the position can take values in a finite discrete set of values, referred to as "presets".

A fixed sensor can be considered as a special PTZ sensor with only one preset.

In the general case, the position of a sensor can take values within a continuous range and may contain the geographic coordinates of sensors mounted on mobile means (patrol cars, robots, etc.).

For event detection, denoted by D, the expression

p(D|C1, C2, ..., CN, x1, ..., xM)   (1)

specifies the probability of detecting an abnormal event, under the condition that the event occurs in one of the N cells associated with the area of the environment to be controlled, and that the M sensors have given presets xj.

The problem of optimal coverage of the area thus translates into finding the optimal sensor configuration that maximizes the above probability. Said configuration can be expressed as follows:

(x̂1, ..., x̂M) = arg max over (x1, ..., xM) of p(D|C1, C2, ..., CN, x1, ..., xM)   (2)

The detection probability can be expressed as follows:

p(D|C1, C2, ..., CN, x1, ..., xM) = [Σi=1..N p(D|Ci, x1, ..., xM)·p(Ci)] / [Σi=1..N p(Ci)]   (3)

The above expression is obtained by directly observing that the presence of an event in cell i is independent of the M sensors, that is, p(Ci|x1, ..., xM) = p(Ci).

The probability p(Ci) that an event occurs in a given cell i may be proportional to the size of the corresponding area and to the importance of the cell from the surveillance point of view.

For simplicity, it will be assumed hereafter that all cells have the same probability; thus, the expression for the detection probability becomes:

p(D|C1, C2, ..., CN, x1, ..., xM) = (1/N)·Σi=1..N p(D|Ci, x1, ..., xM)   (4)

where p(D|Ci, x1, ..., xM) represents the probability of detecting an event, given that the event occurs in cell i and the sensors have presets xj.

Now consider the case in which a cell is watched by only one sensor of the surveillance system, e.g. sensor 1.

It then turns out that p(D|Ci, x1, ..., xM) = p(D|Ci, x1), i.e. detection in cell i does not depend on the positions of the sensors that do not watch cell i.

It is assumed that p(D|Ci, x1) is the control estimate assigned by the equipment installer when setting up the surveillance system. More precisely, the control estimate is the estimate assigned to sensor 1 in position x1 when it was associated with cell i.

It is quite reasonable to assume that, if the control estimate is high, the probability of detecting events in that cell of the model will also be high. Conversely, if the estimate were zero, it would be impossible (zero probability) to achieve detection in that cell by means of sensor 1 with preset x1.

Now consider the case in which cell i is watched by multiple sensors with appropriate presets.

In order to simplify the installer's task, the above setup procedure does not provide for combined control estimates.

For this reason, in the absence of such information, the worst case is considered: the sensor with the best view is identified, and it is assumed that the other sensors cannot add any information that could improve the view of that cell of the model. One thus obtains:

p(D|Ci, x1, ..., xM) = max j=1..M p(D|Ci, xj)   (5)

Now consider the following example. Two sensors (s1) and (s2), located at x1 and x2, observe cell Ci with control estimates of 0.8 and 0.6, respectively.

It is assumed that the other sensors cannot see this cell under any preset, or that they are in presets in which they cannot see this cell.

It then turns out that p(D|Ci, x1) = 0.8, p(D|Ci, x2) = 0.6, p(D|Ci, xj) = 0 for j > 2, and, from the previous expression, p(D|Ci, x1, x2, ..., xM) = 0.8.

By transforming (4) on the basis of this assumption, one thus obtains:

p(D|C1, C2, ..., CN, x1, ..., xM) = (1/N)·Σi=1..N max j=1..M p(D|Ci, xj)   (6)

where p(D|Ci, xj) is the control estimate with which sensor j at preset xj controls cell Ci. The above expression strictly quantifies the quality of the coverage of the area, thus allowing different sensor configurations used for this purpose to be compared.
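Expression (6) can be computed directly from the control estimates. The following is a minimal sketch under assumed data structures (a configuration is a set of sensor/preset pairs; estimates map each pair to the cells it watches); the function name and layout are illustrative.

```python
def coverage_quality(cells, config, estimates):
    """Detection probability per expression (6): the average, over the N
    cells, of the best control estimate among the chosen presets."""
    total = 0.0
    for cell in cells:
        # Worst-case assumption of expression (5): only the best view counts.
        total += max((estimates[sp].get(cell, 0.0) for sp in config),
                     default=0.0)
    return total / len(cells)

# Hypothetical estimates reproducing the two-sensor example above.
estimates = {("s1", "x1"): {"Ci": 0.8}, ("s2", "x2"): {"Ci": 0.6}}

# Adding the weaker sensor does not raise the score above 0.8:
assert coverage_quality(["Ci"], {("s1", "x1"), ("s2", "x2")}, estimates) == 0.8
```

A cell seen by no sensor in the configuration contributes zero, which is what makes different configurations directly comparable.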

The optimal configuration (x̂1, ..., x̂M) can then be searched for among all possible configurations, as explained below.

The algorithm that finds the optimal configuration starts from the information contained in the "cells/sensors" graph (Fig. 2c) built when setting up the surveillance system.

Each cell has one or more arcs connecting the cell to the sensors that control it.

Each arc carries a piece of information indicating the sensor preset. By taking into account all the cells of the area to be covered, it is possible to build the set of sensors, with the appropriate presets, to be used for obtaining said coverage.

The optimal configuration (x̂1, ..., x̂M) of the sensors is the one that maximizes the detection probability; therefore, in a preferred solution, the algorithm continues by evaluating (6) for each combination of sensors and then selects the combination with the highest detection probability.

This will become clear from the following example: given three cells C1, C2, C3, cell C1 is visible to sensor 1 at preset x11, cell C2 is visible to sensor 2 at preset x21 and at preset x22, and cell C3 is visible to sensor 2 at preset x22.

The range of possible configurations consists of the pairs (x11, x21) and (x11, x22).

Of course, a configuration like (x11, x21, x22) is not valid, because at any given time a sensor can only be in one position.

Having identified the two possible configurations, the algorithm calculates the detection probabilities corresponding to said two configurations.

Suppose now that all control estimates are equal to 1, except the estimate for preset x22, which is 0.8 (this preset sees two cells, but with lower quality).

Under these conditions, the first configuration (x11, x21) has a probability of detecting abnormal events occurring within the environment equal to

p(D|C1, C2, C3, x11, x21) = 2/3

while the second configuration (x11, x22) has a probability of detecting abnormal events occurring within the environment equal to

p(D|C1, C2, C3, x11, x22) = 2.6/3

The algorithm then compares the two calculated detection probabilities and selects configuration (x11, x22), i.e. the configuration that maximizes the probability of detecting events in the area, which in this case corresponds to the whole environment.
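The three-cell example above can be reproduced numerically. This sketch enumerates one preset per sensor, scores each configuration with expression (6), and keeps the best; the identifiers follow the example, while the enumeration code itself is an illustrative assumption.

```python
from itertools import product

# Control estimates of the example: all 1, except preset x22 (0.8).
estimates = {
    ("s1", "x11"): {"C1": 1.0},
    ("s2", "x21"): {"C2": 1.0},
    ("s2", "x22"): {"C2": 0.8, "C3": 0.8},
}
cells = ["C1", "C2", "C3"]
presets = {"s1": ["x11"], "s2": ["x21", "x22"]}

def quality(config):
    """Expression (6): mean over the cells of the best estimate in config."""
    return sum(max(estimates[(s, p)].get(c, 0.0) for s, p in config)
               for c in cells) / len(cells)

# A valid configuration assigns exactly one preset to each sensor.
configs = [tuple(zip(presets, choice))
           for choice in product(*presets.values())]
best = max(configs, key=quality)

assert abs(quality((("s1", "x11"), ("s2", "x21"))) - 2 / 3) < 1e-9
assert abs(quality((("s1", "x11"), ("s2", "x22"))) - 2.6 / 3) < 1e-9
assert best == (("s1", "x11"), ("s2", "x22"))
```

The assertions match the values computed in the text: 2/3 for (x11, x21) and 2.6/3 for (x11, x22), so the second configuration is selected.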

From the above description it is apparent that, if the number of possible configurations is large, the search for the optimal configuration can be a computationally heavy task.

In this case, operations research methods may be applied in order to obtain a suboptimal solution.

These methods, while still based on the control estimates and the "cells/sensors" graph, minimize an appropriate cost function derived from the detection probability.

Alternatively, the computational complexity can also be reduced by introducing heuristic methods; for example, one may decide to set each sensor to the position that has the highest control estimate for controlling one or more cells visible only to that sensor.

Once established, these heuristic methods are independent of the calculation of the detection probability, even though this probability can be used during algorithm development to compare the effectiveness of different heuristic methods: given a number of heuristic methods, calculating the detection probability makes it possible to establish which of the configurations they propose is the most suitable for event detection.

The best coverage with the lowest number of sensors

The optimal coverage problem discussed in the previous section can be supplemented with the constraint that the smallest possible number of sensors must be used.

The algorithm that calculates the optimal coverage with the lowest number of sensors is also based on the assumption that the probability of detecting events in a cell does not increase with the number of sensors covering that cell. Thus, it is assumed that:

p(D|Ci, x1, ..., xM) = max j=1..M p(D|Ci, xj)   (7)

Therefore, the algorithm only considers the sensor with the highest control estimate among all the sensors that can see the same cell.

The sensors with lower estimates are considered redundant. Although from a practical point of view it may be useful to add information from a larger number of sensors (for example, to add further views of the area in order to improve control efficiency to some extent), there are situations in which it is important that no redundant information is received: an operator receiving redundant information will evaluate what is happening in the monitored scene more slowly and with greater difficulty.

No less important is the fact that using a minimum number of resources in a multiuser and multitasking system increases the capacity to serve a larger number of operators and tasks.

The optimal solution in the presence of the above constraint is found by considering all possible subsets of the global set of sensors to be used for optimal coverage.

For each subset, the optimal preset configuration is found as described in the previous section, "Algorithm for optimal coverage".

The solution to the problem is represented by the set with the minimum number of elements that has the highest detection probability.

It should be noted that there is no need to consider all possible subsets of sensors: if the detection probability of the optimal solution found among subsets with N elements is also attained by a subset with N-1 elements, then the optimal solution will be a configuration with N-1 sensors.

Therefore, the method preferably searches for a solution to the optimal coverage problem in subsets of sensors of increasing size.
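A hedged sketch of this minimum-sensor search follows: subsets are examined in increasing size, and the first subset that reaches the best achievable detection probability is returned. The data layout, names and the exact stopping test are illustrative assumptions.

```python
from itertools import combinations, product

# Hypothetical estimates: sensor s1 alone sees both cells.
estimates = {
    ("s1", "x1"): {"C1": 0.9, "C2": 0.9},
    ("s2", "x2"): {"C1": 0.9},
    ("s3", "x3"): {"C2": 0.9},
}
cells = ["C1", "C2"]
presets = {"s1": ["x1"], "s2": ["x2"], "s3": ["x3"]}

def quality(config):
    """Expression (6) over the given configuration."""
    return sum(max((estimates[sp].get(c, 0.0) for sp in config), default=0.0)
               for c in cells) / len(cells)

def best_config(sensors):
    """Best one-preset-per-sensor configuration for a subset of sensors."""
    options = [[(s, p) for p in presets[s]] for s in sensors]
    return max(product(*options), key=quality) if options else ()

def minimal_coverage(sensors):
    target = quality(best_config(sensors))   # best achievable quality
    for n in range(1, len(sensors) + 1):     # smallest subsets first
        for subset in combinations(sensors, n):
            cfg = best_config(subset)
            if abs(quality(cfg) - target) < 1e-9:
                return cfg
    return ()

# s1 alone matches the quality of s2 and s3 together, so one sensor wins.
assert minimal_coverage(["s1", "s2", "s3"]) == (("s1", "x1"),)
```

Because subsets are visited in increasing size, the first match is guaranteed to use the minimum number of sensors, mirroring the reasoning above.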

Time-varying optimal coverage

Consider now the situation in which the optimal solution cannot provide coverage of the whole environment, for example because the number of ready-to-use sensors is rather small.

In order to control the environment, a task referred to as "patrol" is executed, which comprises the following steps.

First of all, the configuration that maximizes the probability of detecting events occurring within the environment is found; this is achieved through the optimal coverage algorithm discussed above in the section with the same name.

Through the steps of said algorithm, the sensors are thus set to control a number (S) of cells that is smaller than the total number of cells (assumed to be equal to N).

This configuration is maintained for a predefined time; then the optimal coverage algorithm is applied to the N - S cells that were not controlled by the previously calculated sensor configuration.

The new sensor configuration is maintained for a predefined time, which in the preferred embodiment is set by the operator.

If the new configuration does not cover all the cells, the method again applies the optimal coverage algorithm to the remaining cells and maintains the new configuration for a predetermined time.

The procedure is repeated until all the cells have been covered.

It is possible that, as the probability of detecting events is maximized over a progressively decreasing number of cells, configurations will be found that include redundant sensors; for this reason, it may be advantageous to use the optimal coverage algorithm with the constraint that the lowest number of sensors must be used.

Thus, any sensors that are redundant for the coverage of the selected cells will be used to cover cells already observed in the preceding steps, so as to maximize the detection probability within the environment. As will be explained in detail below, in traditional video surveillance systems the patrol function cycles the sensors independently through their respective presets, without any interaction among them to cover the controlled area.
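The patrol steps above can be sketched as a loop that repeatedly covers the cells not yet visited, holding each configuration for a dwell time (time handling is omitted here). The greedy one-preset-per-round coverage step and all identifiers are illustrative assumptions, not the patent's exact algorithm.

```python
# Hypothetical arcs: one PTZ camera whose two presets split the environment.
arcs = {
    ("S2", "x21"): {"C1", "C2"},
    ("S2", "x22"): {"C3", "C4"},
}

def patrol_rounds(cells, arcs):
    """Return the sequence of presets used until every cell is covered."""
    uncovered, rounds = set(cells), []
    while uncovered:
        # Choose the preset covering the most still-uncovered cells.
        best = max(arcs, key=lambda sp: len(arcs[sp] & uncovered))
        if not arcs[best] & uncovered:
            break                      # remaining cells are unreachable
        rounds.append(best)
        uncovered -= arcs[best]
    return rounds

# One camera cannot see all four cells at once, so it patrols in two rounds.
rounds = patrol_rounds({"C1", "C2", "C3", "C4"}, arcs)
assert len(rounds) == 2
assert set(rounds) == {("S2", "x21"), ("S2", "x22")}
```

Each iteration of the loop corresponds to one application of the coverage step to the remaining cells, as described in the text.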

Optimal border coverage

Surveillance systems provide a feature called "object tracking", i.e. the tracking of a moving object within the controlled environment.

The cell occupied by this object is called the "active cell", while the adjacent cells are called "border cells".

The adjacency between cells is defined in the "cells/sensors" graph built when the system is set up.

In order not to lose the object while a sensor is positioned on the active cell, the other sensors are configured so as to provide optimal coverage of the border region.

The goal of the optimal border coverage algorithm is to maximize the following probability:

p(D | Ck, x1, ..., xM)

This probability corresponds to the probability of detecting an event, given that the event originates from cell k and occurs in one of the N border cells Ci of cell k, and that the M sensors are in certain positions xj.

Under conditions similar to those described for the calculation of the optimal coverage, the probability of detection is determined by the following relationship:

p(D | Ck, x1, ..., xM) = Σi p(Ci | Ck) · [1 − Πj (1 − p(D | Ci, xj))]

where p(D | Ci, xj) is the control estimate for sensor j, with preset position xj, watching cell Ci, while p(Ci | Ck) represents the probability that the object moves from the active cell k into border cell i.

According to a preferred embodiment of the invention, when creating the model of the environment during the setup stage, the probabilities of transition from one cell to another must also be determined.

These probabilities correspond to the terms p(Ci | Ck) in the above formula.

If no information about the probability of transition from one cell to an adjacent cell is specified, as in the embodiments of the invention described with reference to figures 2a-2c and 3a-3b, all the terms p(Ci | Ck) are equal, and the relation for calculating the probability of detecting an event in a border cell becomes:

p(D | Ck, x1, ..., xM) = (1/N) Σi [1 − Πj (1 − p(D | Ci, xj))]
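Under the assumption that the sensors detect independently, the border detection probability described above can be computed as sketched below; the combination rule for several sensors watching one cell is an assumption, and all names are illustrative:

```python
def border_detection_probability(transition, estimates):
    """Probability of detecting an event that moves from the active cell
    into one of its border cells.

    transition[i]   -- p(Ci|Ck): probability that the object moves from the
                       active cell k into border cell i
    estimates[i][j] -- p(D|Ci, xj): control estimate of sensor j (in its
                       current position xj) over border cell i

    Sensors are assumed to detect independently, so the probability that at
    least one detects the object in cell i is 1 - prod_j (1 - p(D|Ci, xj)).
    """
    total = 0.0
    for i, p_move in enumerate(transition):
        miss = 1.0
        for p_det in estimates[i]:
            miss *= 1.0 - p_det
        total += p_move * (1.0 - miss)
    return total

def uniform_transition(n):
    """With no transition information, all p(Ci|Ck) are taken equal to 1/N."""
    return [1.0 / n] * n
```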

Finally, since border coverage is a particular case of the optimal coverage problem, it may also be carried out using time-varying coverage techniques, similar to those described in the section "Time-varying optimal coverage".

Dynamic interaction and reconfiguration of sensors

In this surveillance system, the set of sensors that can participate in the task of providing optimal coverage of a given area changes over time, owing to the different priorities of the tasks to be performed and to the different operators requesting them.

By way of example, it is possible that, while the system is performing the "patrolling" function to verify the environment, an operator requests a video camera; if the operator has precedence over the automatic patrol routine, it is clear that the above procedure must then be performed without the video camera being operated by the operator.

In addition, any faulty sensors will change, over time, the set of ready-to-use sensors.

In order to cope with these situations, the system recalculates the optimal configuration for covering the area every time the set of sensors involved in the coverage changes.

For example, when the number of cameras that can be used for the task decreases, the remaining cameras covering the area will be managed in such a way as to provide wider shots with less zoom, or in accordance with the time-varying coverage procedure (described above), which changes the framing over time so as to cover all the cells.

In this latter case, the time each sensor stays in a given configuration can be reduced in order to ensure proper coverage.

On the contrary, if the number of sensors increases, the sensor configurations used will aim at smaller coverage areas, e.g. cameras with higher zoom levels, while increasing the time each sensor stays in a preset position.

Thus, the monitoring method according to the invention makes it possible to dynamically reconfigure the sensors.

The control system of the monitoring system, whether centralized (as in figure 1) or distributed (as described below in the section "Variant with a control system having a distributed architecture"), checks which sensors are available to it and decides in which tasks they should participate.

For example, when an event is being tracked, as soon as the object has been detected in the active cell, the sensors that control the border are configured in such a way as to ensure the best coverage of the border. These sensors are then controlled according to the optimal border coverage algorithm.

The other sensors are assigned the task of optimally covering the remaining cells, and are thus configured to maximize the probability of detecting events in the other cells.

The dynamic configuration of the sensors is made possible by the fact that the coverage problem is formulated as a function of the environment and of the "cells/sensors" graph, not of sensor adjacency.

This approach gives the sensor selection some degrees of freedom, in that an area can be covered in different ways by using sensors in different configurations.

These degrees of freedom make it possible to manage those situations in which the sensors involved in the coverage problem may vary.

Border coverage also provides a broader form of interaction: in addition to the interaction aimed at achieving optimal border coverage, there is interaction between the active sensor and the border sensors. When the active sensor, or the position of the active sensor, changes, the border will change as well, along with the sensors covering it and their configuration. This functionality is referred to as "navigation assistance" and will be described in detail in the next section.

Navigation assistance

The monitoring system according to the invention performs a function called "navigation assistance", which provides simple and efficient tracking of moving objects within the environment.

Tracking the movements of an object (object tracking) is a difficult task which, as is known, faces many problems associated with selecting the active camera (the camera generating the frames containing the object) and with switching from the active camera to the next camera to be activated in order to continue tracking the object.

The navigation assistance function solves these problems.

After an event has been detected (automatically or by the operator), the operator selects the camera shooting the event and activates the navigation assistance function.

The navigation algorithm detects the sensor selected by the operator (the active sensor), thus identifying it and obtaining information about its position (orientation in space).

By means of the "cells/sensors" graph built during the setup stage, the navigation algorithm determines the area of the environment controlled at that moment by the active sensor (the active zone), calculating the set of cells that form the active zone.

When the sensor controlling the active cell is manually operated by the operator, as in the case of a PTZ-type sensor, the latter may be in a position other than the preset positions recorded at setup; this is due to the fact that a PTZ-type sensor can move among an essentially continuous range of positions, whereas the method according to the invention requires that a discrete number of positions be recorded at setup.

On the basis of appropriate proximity metrics, the algorithm calculates the preset position that is closest to the position set by the operator.
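A minimal sketch of this proximity computation, assuming PTZ positions are represented as (pan, tilt, zoom) triples and using plain Euclidean distance as a stand-in for whatever metric the system actually adopts:

```python
import math

def nearest_preset(current, presets):
    """Return the preset PTZ position closest to the operator-set position.

    Positions are (pan, tilt, zoom) triples; the Euclidean distance below is
    an illustrative proximity metric, not the one prescribed by the patent.
    """
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(presets, key=lambda p: dist(current, p))
```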

Starting from the preset position calculated in this way, the algorithm then obtains the border of the active zone, i.e. the set of cells not belonging to the active zone but adjacent to it.
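Given the cell adjacency recorded at setup, the border of the active zone can be derived as sketched below; the data-structure names are illustrative:

```python
def zone_border(active_cells, adjacency):
    """Cells not belonging to the active zone but adjacent to it.

    adjacency maps each cell to the set of its neighbouring cells, as
    recorded in the "cells/sensors" graph built during setup.
    """
    active = set(active_cells)
    border = set()
    for cell in active:
        # neighbours of an active cell that are themselves outside the zone
        border |= adjacency.get(cell, set()) - active
    return border
```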

The calculation of the active cells and of the border is dynamic; therefore, if the active sensor changes its position, the navigation algorithm will repeat this calculation and update the border accordingly.

Using the optimal border coverage algorithm, the navigation algorithm selects the most appropriate sensors and positions them in such a way that coverage of the border of the active zone is maximized. Among all the sensors in the system, this procedure selects only those that can potentially detect the object if the latter exits the active zone, i.e. the zone of action of the active sensor.

If the sensors are video cameras, the navigation assistance function provides simultaneous display of the image from the active camera and of the images from the cameras controlling the border (adjacent cells).

When the object moves out of sight of the active camera, if the border is fully controlled, it will appear in one of the images from the adjacent cameras. Detecting the object in one of the adjacent images is a simple task for the operator, who can thus easily identify the new active camera and possibly take it under manual control.

Preferably, the images captured by the cameras are displayed on a touch screen.

In this embodiment of the invention, the operator, by touching the image containing the object, generates a signal that is used by the navigation assistance algorithm to identify the video camera responsible for the selected image and to execute, using this information, the above-described steps of the algorithm.

If the border is not fully controlled, the probability of losing the object is nevertheless minimized by using, for the optimal coverage of the other cells, those sensors not employed by the navigation task.

Image sequence for an area

Another function of the monitoring system according to the present invention is called "image sequence for an area".

This function overcomes the drawbacks of known sequences, which show the images shot by the cameras of the system one by one, according to a sequence defined by the operator at setup.

The method according to the invention requires that, during the setup stage, the operator select the area subject to control.

The area can be selected by marking a region on an electronic map or by specifying the set of cells forming the area of interest.

At this stage, the operator has only to specify the area to be monitored, the duration of the shots and, if required, the direction of the coverage.

Preferably, the operator gives no indication about the sensors to be used to create the sequence; however, the operator may impose constraints by selecting one or more sensors.

Based on this information, the method executes the optimal coverage algorithm with the constraint that only video cameras should be used and, in the preferred embodiment, also with the constraint that the lowest number of cameras should be used, in order to determine the configuration of cameras that will cover the area specified by the operator.

As discussed above, the optimal coverage algorithm takes into account the priorities and the availability of the sensors: if a sensor is busy performing another task, the algorithm will perform the calculation considering only the ready-to-use sensors.

The calculation of the sensors is dynamic and depends on the operations in which the sensors are engaged.

After selecting the set of sensors that will participate in the sequence, it is necessary to determine the temporal order in which the images of the sequence are displayed.

In one embodiment of the invention, the operator specifies the shooting order for the cells associated with the area subject to control in the sequence.

In a preferred embodiment of the invention, the image-sequence-for-an-area algorithm performs a final calculation step, determining the temporal order in which the sensors will be activated.

This final calculation is performed using the information about the positions of the sensors contained in the "cells/sensors" graph.

Preferably, the algorithm ensures the spatial continuity of the different images, so that sensors shooting adjacent cells will be adjacent in the sequence.
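One way to obtain such spatial continuity is a greedy ordering over the cell adjacency from the "cells/sensors" graph. The sketch below assumes each selected sensor has been assigned one cell to shoot; the patent does not specify the ordering algorithm:

```python
def order_for_continuity(shots, adjacency):
    """Order the shots of a sequence so that, whenever possible, consecutive
    shots cover adjacent cells.

    shots     -- mapping sensor -> cell it has been assigned to shoot
    adjacency -- cell -> set of adjacent cells (from the "cells/sensors" graph)

    Greedy sketch: start from an arbitrary sensor and always continue with a
    sensor whose cell is adjacent to the current one, if any remains.
    """
    remaining = dict(shots)
    sensor, cell = next(iter(remaining.items()))
    order = [sensor]
    del remaining[sensor]
    while remaining:
        nxt = next((s for s, c in remaining.items()
                    if c in adjacency.get(cell, set())), None)
        if nxt is None:  # no adjacent cell left: jump to any remaining sensor
            nxt = next(iter(remaining))
        order.append(nxt)
        cell = remaining.pop(nxt)
    return order
```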

The above-described image-sequence-for-an-area method allows the sequence of sensors used in the sequence to be calculated automatically, thus overcoming the efficiency limitations inherent in sequences specified manually by the operator.

In addition, the above-described image-sequence-for-an-area method provides dynamic selection of the sensors, so that it is possible to execute a suboptimal sequence covering the area of interest even in the presence of conflicts caused by the multi-user and multi-tasking environment, for example when two users or two tasks request the same sensor at the same time.

User interface

The invention uses an advanced interface that allows the operator to control the entire environment quickly and effectively.

The interface shows the operator the model of the environment in the form of the "cells/sensors" graph (figs) or of an electronic map.

This allows the operator to call up the image of an area by simply selecting the corresponding cell in the "cells/sensors" graph, or by pointing at it with the mouse or touching it (if a touch screen is used) on the electronic map.

This provides quick surveillance, because the observer can quickly and intuitively concentrate on the image of the area of interest.

The user's choice is read by the interface and sent to the control system, which then calls up the images (or, more generally, the information) of one or more sensors monitoring the selected area.

The sensors are selected according to the selected area and according to the preferences of the operator, who may, for example, decide to display a single shot of the area or to perform optimal coverage of the area.

These preferences can be set, for example, through a drop-down menu that can be opened by right-clicking on the area of interest.

Variant with a control system having a distributed architecture

The above-described monitoring system and method allow multi-user and multi-tasking control.

Therefore, it is possible that at a given time several tasks are requested, such as optimal coverage of different areas of the environment, or that the above-described navigation assistance function and image-sequence-for-an-area function are activated.

It is thus apparent that the computational power of the system must be sufficient to sustain the computational complexity resulting from the simultaneous activation of multiple algorithms.

In large surveillance systems with many sensors, this computational complexity cannot easily be handled by a control system with a centralized architecture, such as the control system shown in figure 1.

In a preferred embodiment of the invention, the control system that executes the algorithms of the various tasks is a distributed system, i.e. it consists of a set of computers communicating with one another over a data network.

The large number of computers also provides a good level of redundancy in the event of failures.

Preferably, the computers are located in different places remote from one another, in order to provide a higher level of protection against any attempt at their physical destruction.

According to this variant of the invention, the model of the environment and the "cells/sensors" graph built during the setup stage are stored in the memory of all the computers, so that each computer can autonomously execute the above-described coverage algorithms.

Each computer is associated with a subset of the sensors (preferably just one) and is therefore connected, through the "cells/sensors" graph, with a subset of the areas of the controlled environment.

A computer is involved in a task (for example, the image-sequence-for-an-area task or the navigation task) relating to a given area if one of the sensors managed by that computer watches, in at least one of its preset positions, at least one cell of that area.

The computer decides whether one of its sensors should participate in a task depending on the priority of the requested task. For example, if an image sequence for an area is requested, and the sensor involved is already being used for a navigation task, the computer may decide not to participate in the new task, because navigation has a higher priority than the image sequence for an area.
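This priority-based decision can be sketched as follows; the numeric priority scale and the task names are illustrative assumptions, not values taken from the patent:

```python
# hypothetical priority scale: higher value wins
PRIORITY = {"navigation": 3, "patrol": 2, "image_sequence": 1}

def should_join(requested_task, current_task):
    """Decide whether a computer lets its sensor join a newly requested task.

    The sensor joins only if it is idle (current_task is None) or if the
    requested task has a higher priority than the one currently using it.
    """
    if current_task is None:
        return True
    return PRIORITY[requested_task] > PRIORITY[current_task]
```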

Once the decision to participate in a certain task has been made, the computer needs to know which other sensors will also participate in that task, in order to calculate the optimal coverage.

This is made possible by the fact that each computer sends to all the other computers, over the data network, a piece of information indicating the task in which its sensors are involved.

Each computer thus knows the full set of sensors involved in a task and can therefore calculate the configuration of the sensors it manages in order to obtain the optimal coverage.

Each computer must be capable of executing at least one optimal coverage algorithm for the whole environment.

However, the number of tasks in which a computer is simultaneously involved decreases as the number of sensors associated with it decreases; it follows that the computational power of each individual computer belonging to the distributed system can be lower than the computational power required by a centralized control system.

The optimal solution would therefore be to use one computer per sensor or, even better, to embed the computers into the sensors, thus making the sensors intelligent.

Other variants of the monitoring method and system

The advantages of this monitoring system are apparent from the above description; it is also clear that many changes may be made to it, for example by concentrating or distributing the resources of the control system, or by using equivalent procedures to manage the same information required and processed by the above-described monitoring methods.

For example, the "cells/sensors" graph built at the end of the setup stage has been proposed because it allows the operator to understand the "cells/sensors" relationships very easily. However, for strictly numerical processing, the "cells/sensors" graph may be replaced with a "cells/sensors" table containing the same information about the relationships between cells and sensors.

1. A method for monitoring an environment (1) by means of a plurality of video surveillance or motion detection sensors (S1-S5), in which a control system (31) receives information from one or more sensors of said plurality and uses said information for monitoring said environment (1), the method comprising a setup stage, in which an operator configures the control system, and a working stage, in which the system (31) controls the plurality of video surveillance or motion detection sensors (S1-S5) for monitoring the environment, wherein, during the setup stage, the control system (31) receives input data from the operator for creating a model of the environment, said input data comprising:
textual and graphical designations of a plurality of cells corresponding to the areas of the environment subject to control,
"cell/sensor" relationships between said cells and said plurality of sensors, wherein for each sensor said plurality of cell/sensor relationships provides information associating at least one possible position (xi) of the sensor with at least one cell of said plurality, and
a plurality of control estimates assigned by the operator to said at least one possible position (xi) of each sensor, each control estimate of said plurality corresponding to the probability of detecting, in said at least one position, the occurrence of an event in the associated cell,
and wherein, during the working stage, in order to perform a monitoring function over an area subject to control, the control system uses said input data and:
finds, among said plurality, the usable sensors that can be used to perform said monitoring function, and
controls at least one of said usable sensors by placing it in a possible position (xi) selected on the basis of said cell/sensor relationships and of the control estimates associated with the possible positions (xi) of said plurality of sensors.

2. The method according to claim 1, in which said input data provided by the operator during setup comprise adjacency relationships between said cells.

3. The method according to claim 1, in which said selected possible position (xi) maximizes the probability of detecting events in said area subject to control.

4. The method according to claim 1, in which said control system controls a set of sensors of said plurality of usable sensors, placing them in respective possible positions (xi) according to a configuration selected from among the various possible combinations of the possible positions (xi) of said set of sensors.

5. The method according to claim 1, in which said plurality of sensors includes at least one video camera, and in which the control estimate assigned to at least one position of said video camera corresponds to a view estimate of the associated cell.

6. The method according to claim 5, in which said input data provided by the operator during setup comprise adjacency relationships between said cells.

7. The method according to claim 6, in which said selected possible position (xi) maximizes the probability of detecting events in said area subject to control.

8. The method according to claim 7, in which said control system controls a set of sensors of said plurality of usable sensors, placing them in respective possible positions (xi) according to a configuration selected from among the various possible combinations of the possible positions (xi) of said set of sensors.

9. The method according to claim 8, in which said configuration maximizes the probability of detecting events in said area subject to control.

10. The method according to claim 9, in which said area subject to control is associated with a first subset of cells of said plurality, and in which said configuration of sensors covers a subset of the cells subject to control, the method comprising the steps of:
a. determining the configuration of the sensors,
b. keeping said usable sensors in said configuration for a predefined time,
c. determining a second subset of cells not covered by said first configuration,
d. determining a second configuration of sensors that maximizes the probability of detecting events in said second subset.

11. The method according to claim 9, in which said probability is maximized under the constraint that the lowest number of sensors required to perform said monitoring function be used.

12. The method according to claim 11, in which said area subject to control is associated with a first subset of cells of said plurality, and in which said configuration of sensors covers a subset of the cells subject to control, the method comprising the steps of:
a. determining the configuration of the sensors,
b. keeping said usable sensors in said configuration for a predefined time,
c. determining a second subset of cells not covered by said first configuration,
d. determining a second configuration of sensors that maximizes the probability of detecting events in said second subset.

13. The method according to claim 12, in which, if said second configuration does not cover all the cells of said second subset, steps c) to d) are executed again until said second subset of cells is empty.

14. The method according to claim 13, in which, upon each calculation of said second configuration, the control system finds, among said plurality, those sensors that can be used to perform said monitoring function over said second subset of cells.

15. The method according to claim 12, in which the sensors of said plurality that are not used for executing said monitoring function are used to perform a second monitoring function over a second area subject to control.

16. The method according to any one of claims 1, 2, 5 or 6, in which, in order to perform said monitoring function, said control system selects the possible position (xi) associated with the highest control estimate for the area subject to control, and controls the corresponding sensor by placing it in said possible position (xi) with the highest control estimate.

17. The method according to any one of claims 1, 2, 3, 5, 6 or 7, in which said control system uses operations research methods to determine the sensors to be controlled in order to perform said monitoring function, and the positions in which said sensors are to be placed, said methods minimizing a cost function that depends on said control estimates.

18. The method according to any one of claims 1 to 15, in which said operator defines said cells by selecting sets of pixels on an electronic map of said environment, partially overlapping sets of pixels being grouped into a single cell.

19. The method according to any one of claims 1 to 15, in which said sensors include video cameras, and in which, during the setup stage, the video cameras are activated sequentially; for each activated video camera, the image from the video camera is displayed on a screen (32), and the operator enters the cell/video camera relationships and assigns the control estimates.



 

Same patents:

FIELD: radio engineering, communication.

SUBSTANCE: adaptive tracking measuring device has a discriminator, two scaling units, three adders, two delay units, an extrapolator, a squarer, an averaging unit, two multiplier units, a storage unit, a divider unit and a subtractor unit, connected to each other in a certain manner.

EFFECT: automatic adaptive control of parameters of the tracking measuring device without using a special manoeuvre detector with intense variation of the low of motion of the tracking object.

5 dwg

FIELD: information technology.

SUBSTANCE: in a CCD array television camera, between a photodetector section and a horizontal recorder, there is a memory section that is charge coupled to both, and a control pulse generator includes series-connected memory pulse former and second level converter, and an exposure pulse former includes series-connected interfacing unit and electric drive, a second monovibrator, a counter-halver, a first AND element, a second AND element and an OR element; the expansion slot of the computer motherboard is further fitted with a video card, which is matched on input/output channels, control and power with the computer bus, and the communication link includes additional cable conductors for connecting Start, Stop and Exposure Control signals on the computer with inputs of said signals in the television camera, wherein the video output of the video card is the fringe pattern image signal output of the television system.

EFFECT: high signal-to-noise ratio of the television camera and enabling adjustment operations on recursive filtering of a video signal directly from the computer.

5 cl, 7 dwg, 2 tbl

FIELD: physics.

SUBSTANCE: apparatus for inspecting waggons has a multi-section bar on one end of which a television camera is mounted, and on the middle part there is a housing with a microprocessor with a memory unit, a control unit, a display and a power supply. The apparatus if fitted with an additional television camera which is mounted at the opposite end of the multi-section bar, a switch and a transmit-receive module mounted in the housing. Outputs of the television cameras are connected through switches to the display and the input of the microprocessor, which is connected to the control unit; the input/output of the microprocessor is connected to the output/input of the transmit-receive module, and the auxiliary output of the microprocessor is connected to the control input of the switch.

EFFECT: high efficiency and easier inspection of top and bottom parts of a waggon.

4 dwg

FIELD: information technology.

SUBSTANCE: passenger carriage video monitoring system has at least one video camera mounted such that it is capable of scanning at least part of the interior of the passenger carriage. The video camera is connected to a video recorder to which a monitoring and control unit is reversibly connected. The video recorder can be connected to a data transmission system and a monitor mounted in the guard's compartment.

EFFECT: safer transportation of passengers.

7 cl

FIELD: information technology.

SUBSTANCE: apparatus for detecting and monitoring hazard with a built-in display system is meant for use in a public safety system and warning on emergency situations, adapted for detecting and identifying hazards in the surrounding medium, as well as displaying information of interest relating to the hazard or other public information.

EFFECT: detecting hazard and endangering activities at early stages thereof.

25 cl, 3 dwg

FIELD: information technology.

SUBSTANCE: intelligent information system for an aircraft passenger cabin comprises: first and second servers located, respectively, in the head and tail sections of the aircraft and connected by digital data buses and a bus for transmitting commands to a peer-to-peer network; main modules connected to digital data and command buses; seat modules located in each passenger seat and connected to corresponding main modules; cameras for monitoring passengers located in predetermined places in the passenger cabin and connected to at least one of the servers; lighting equipment of the passenger cabin connected to at least one of the servers; public address equipment for the passenger cabin connected to at least one of the servers.

EFFECT: design of an aircraft information system and wide range of equipment.

9 cl, 1 dwg

FIELD: physics.

SUBSTANCE: method is realised by using in the television camera a reducer, a shutter control unit, a shutter which is based on an electrochromic device, a "frame" and "window" signal generator, a video mixer, and a gate pulse generator, and using in the communication line between the television camera and the computer two additional cable conductors for connecting the "Initialisation/Shutter return" signal output on the computer to the input of that signal in the television camera and the "Window" signal output of the television camera to the input of that signal on the computer.

EFFECT: preventing blooming of the video signal and providing undistorted television monitoring of the movement of hot-rolled products being transported through live rollers in real time.

3 cl, 8 dwg, 2 tbl

FIELD: information technology.

SUBSTANCE: several auxiliary markers are formed, which consist of references of image areas for enabling tracking of the selected area of the scene; to generate references of auxiliary markers, selection of areas of the scene with marked selective features is carried out, as well as detection of areas of the scene with marked selective features and elimination of the noise microstructure is carried out based on results of decomposing the initial image with a Haar wavelet via fast discrete stationary two-dimensional wavelet-transform. Coordinate shifts of the auxiliary markers are used to increase accuracy when calculating coordinate shifts of the tracking point, as well as maintaining the tracking function in cases when information contact with the tracked object is lost, wherein to reduce the effect of reference rewriting on accuracy of localisation thereof on the current image, simultaneously with formation of a reference for each marker, its scaling series is created, which is used during changes in the scale of the scene.

EFFECT: high accuracy and reliability of tracking irrespective of selection of the tracking object.

18 dwg

FIELD: information technologies.

SUBSTANCE: system comprises a basic transceiving station 1 and one or more external transceiving stations 2. The basic transceiving station 1 includes a control panel 3, a control unit 4, a coding unit 5, a transceiver 6 with an antenna, to the receiving input of which the following serially connected components are connected - a low pass filter 7, a demodulator 8, an analog-to-digital converter 9, a memory unit 10, a digital-to-analog converter 11, a shaper 12 of a full television signal and a unit 13 of video monitoring devices, a frequency selector 14, connected between the output of the low pass filter 7 and the synchronisation input of the control unit 4 and a supply unit 15, and the external transceiving station 2 comprises a control unit 16, a switchboard 17, to the inputs of which one or more video cameras 18 are connected, and to the output - serially connected low pass filter 19, an analog-to-digital converter 20, a memory unit 21, a digital-to-analog converter 22, a modulator 23 and a transceiver 24 with an antenna, a decoding unit 25, connected between the receiving input of the transceiver 24 and appropriate inputs of the control unit 16, and also a video recorder 26, a programmable timer 27 and a supply unit 28.

EFFECT: provision of a video surveillance system with image transfer along narrow-band communication channels.

2 cl, 2 dwg

FIELD: information technology.

SUBSTANCE: "long" and "short" accumulation of signal charges in adjacent frames (half-frames) is realised in a CCD matrix; a multiplexed video signal is formed at the output of the photodetector, and a multiplexed television signal at the output of the television signal sensor. The multiplexed television signal is delayed by one frame (half-frame), and weighted summation of the immediate and delayed video signals is performed on the observed image: the immediate video signal is summed with a unit weighting coefficient, and the delayed video signal with a coefficient K selected from the inequality 0<K<1. This adjustment is augmented by selecting the optimum duration of the "short" accumulation in the CCD matrix.

EFFECT: high accuracy of recursive filtration as a result of a higher signal-to-noise ratio of the output interference pattern signal through further control of the accumulation time of the delayed component of said video signal.

3 cl, 5 dwg, 2 tbl
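The weighted summation described above — immediate frame with unit weight plus the one-frame-delayed signal with weight K, 0 < K < 1 — is the classic recursive (IIR) temporal filter. A minimal sketch follows; the (1 + K) normalisation that preserves mean brightness is an assumption added here for illustration and is not stated in the abstract.

```python
import numpy as np

def recursive_frame_filter(frames, K=0.5):
    """Recursive temporal filtering of a frame sequence.

    Each output frame is the weighted sum of the immediate frame
    (unit weight) and the previous output delayed by one frame
    (weight K, 0 < K < 1), normalised by (1 + K) so that a static
    scene keeps its brightness.  Frame-to-frame noise is averaged
    down, raising the signal-to-noise ratio.
    """
    assert 0.0 < K < 1.0
    out = np.array(frames[0], dtype=float)
    result = [out.copy()]
    for f in frames[1:]:
        out = (np.asarray(f, dtype=float) + K * out) / (1.0 + K)
        result.append(out.copy())
    return result
```

For a static scene the filtered noise standard deviation settles below that of any single input frame, which is the SNR gain the abstract refers to.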

FIELD: information technology.

SUBSTANCE: passenger carriage video monitoring system has at least one video camera mounted such that it is capable of scanning at least part of the interior of the passenger carriage. The video camera is connected to a video recorder to which a monitoring and control unit is reversibly connected. The video recorder can be connected to a data transmission system and a monitor mounted in the guard's compartment.

EFFECT: safer transportation of passengers.

7 cl

FIELD: information technology.

SUBSTANCE: the apparatus for detecting and monitoring hazards, with a built-in display system, is intended for use in a public safety and emergency warning system; it is adapted to detect and identify hazards in the surrounding environment and to display relevant information about the hazard or other public information.

EFFECT: detection of hazards and endangering activities at their early stages.

25 cl, 3 dwg

FIELD: physics.

SUBSTANCE: system has a sensor unit 1, a signalling unit 2, a video surveillance unit 3, an actuating device unit 4, first 5 and second 6 modems, an information processing and display unit 7, a radio channel 8, a switching unit 9, a video information superposition unit 10, a display unit 11, audio signal transmitting 12 and receiving 13 units, a video peephole 14, a video signal converter 15 and a low frequency amplifier with a microphone 16 (i=1, 2,…, n). The first modem 5 has driving generators 17.i, a first OR logic element 18, a modulating function former 19, an amplitude modulator 20, a modulating code former 21, a phase-shift modulator 22, a power amplifier 23 and a transmitting antenna 24.

EFFECT: high reliability of security and surveillance of residential and other secure facilities through the use of a radio channel carrying signals with combined amplitude modulation and phase-shift modulation.

3 dwg
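The modem chain above (modulating function former 19, amplitude modulator 20, modulating code former 21, phase-shift modulator 22) produces a carrier with combined amplitude and phase-shift modulation. The sketch below is a hypothetical baseband model only — the carrier frequency, sample rate, symbol length and binary-PSK mapping are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def am_psk_waveform(bits, env, fc=10.0, fs=200.0, sym_len=20):
    """Carrier with combined amplitude modulation and binary
    phase-shift keying.

    The amplitude follows the modulating function `env` (one value
    per symbol) and the phase flips by pi for each '1' bit, so both
    modulations coexist on one carrier.
    """
    t = np.arange(len(bits) * sym_len) / fs
    amp = np.repeat(np.asarray(env, dtype=float), sym_len)
    phase = np.repeat(np.where(np.asarray(bits) > 0, np.pi, 0.0), sym_len)
    return amp * np.cos(2 * np.pi * fc * t + phase)
```

A receiver would recover the envelope with an amplitude detector and the code with a coherent phase demodulator, mirroring the two modulator blocks in the transmit chain.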

FIELD: information technology.

SUBSTANCE: the system is equipped with two memory modules, two analysis and recognition units — one having a function for analysing and comparing records from the first frame to the last, and the other from the last frame to the first — and a second video monitor, where each television camera is connected through a data network to the corresponding analysis and recognition unit, which is connected to the corresponding memory module and the video monitor. The television cameras are arranged such that they can be simultaneously rotated in the horizontal plane by a rotary drive. The optical axes of the television cameras lie on a straight line and are directed oppositely. Motion detectors are placed on the security perimeter. The output of each motion detector is connected to the corresponding pulse former, whose output is connected to the input of an OR element, whose output is connected to the control inputs of the television cameras and the rotary drive. The efficiency of the security television system lies in the elimination of blind spots inside the perimeter of the secured object, which prevents intrusion into the facility.

EFFECT: high efficiency of inspecting a secure territory along its perimeter and prevention of intrusion, achieved owing to the fact that the security television system comprises two television cameras connected through a data network to a video monitor, and motion detectors connected to a signalling unit.

7 dwg

FIELD: physics.

SUBSTANCE: in the method, a video surveillance subsystem with video cameras and thermal imagers mounted alternately along the linear section of a boundary, with backlighting by backlight lasers, is used together with an information collecting and processing subsystem; information is continuously transmitted from the thermal imagers to a computer and a monitor connected to said thermal imagers, and simultaneously recorded in a memory unit, for example a computer hard disk. Vibration detectors having detecting elements and signal processing units are mounted on the posts of the enclosure.

EFFECT: reduced cost of the equipment used while ensuring video surveillance at night and during bad visibility, ensuring detection of optical devices directed towards a secure facility.

2 cl, 6 dwg

FIELD: information technologies.

SUBSTANCE: video surveillance is carried out inside an object by placing video equipment, for instance at least one digital camera and/or one video camera, to observe the inner space of a trading equipment object which is not a room; the field of view of the video equipment is directed so that it covers both the items located inside the object and the inner area where the object is most likely to be opened, for instance a cover or a door.

EFFECT: reduced possibility of penetration into trading equipment.

37 cl, 12 dwg

FIELD: information technology.

SUBSTANCE: the camera forms a background image containing at least part of its field of vision, with image fragments arranged into a composite background image. An image of the current field of vision of the camera is received in real time and placed into the background image formed by the camera; the position of the real-time image within the background image is determined and correlated with its position in the current field of vision of the camera. At the determination step, moving objects, new objects and disappearing objects are detected and selected in the field of vision of the camera; if a moving or new object appears in the field of vision during observation, it is added to the background image in real time, and the position of the image in the background image is measured in real time based on the change in the current field of vision of the camera. The camera is also fitted with an illumination device for creating an illuminating beam and a protective module for detecting objects lying in close proximity to the illumination device and switching off the illumination beam.

EFFECT: wider range of technical capabilities of video surveillance systems.

39 cl, 11 dwg
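The determination step above — placing the current field of vision into the composite background and measuring its position — amounts to locating a small image within a larger one. A minimal sketch, assuming an exhaustive sum-of-squared-differences search (a real system would use normalised cross-correlation or feature matching for robustness to lighting changes):

```python
import numpy as np

def locate_in_background(background, view):
    """Find the (row, col) offset at which the current camera view
    best matches the composite background, by exhaustive search over
    the sum of squared differences (SSD).
    """
    bh, bw = background.shape
    vh, vw = view.shape
    best_ssd, best_pos = None, (0, 0)
    for r in range(bh - vh + 1):
        for c in range(bw - vw + 1):
            ssd = np.sum((background[r:r + vh, c:c + vw] - view) ** 2)
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos
```

Once the offset is known, new or moving objects can be pasted into the background mosaic at the recovered coordinates, which is the real-time update the abstract describes.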

FIELD: radio engineering.

SUBSTANCE: the television camera comprises a second peak detector, an ADC, an accumulation-duration generator, a gate multivibrator, first and second RS flip-flops, an inverter, an AND element and an OR element, together with new connections between these blocks and the remaining blocks of the television camera; three additional cable strands are introduced into the line connecting the instruments of the transmitting and receiving sides of the television system.

EFFECT: expanded functional capabilities of the security television system by providing the operator with an image of the damage zone with a high signal-to-noise ratio.

5 cl, 6 dwg

FIELD: information technology.

SUBSTANCE: the television camera includes a peak detector, an analogue-to-digital converter, an interval former, a univibrator and RS flip-flops which, during the given interval, enable measurement of the current video signal value, its conversion into digital form and recording of said value at a defined moment in time in the counter of the interval former. The video camera then switches to a mode in which the accumulation interval is set to the optimum value based on the criterion of maximum signal-to-noise ratio.

EFFECT: provision of visual monitoring of the situation in a disturbance zone after detection of moving low-contrast objects, even at a low signal-to-noise ratio in the video signal at the monitor input.

3 dwg
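The criterion in the abstract above — choosing the accumulation interval that maximises the signal-to-noise ratio — can be illustrated with a simple search over candidate intervals. The sensor model (linear signal growth up to a full-well limit, with shot, dark-current and readout noise terms) and every parameter value below are assumptions introduced here for illustration, not figures from the patent.

```python
import numpy as np

def optimal_accumulation(t_candidates, s_rate=100.0, dark_rate=20.0,
                         read_noise=10.0, full_well=5e4):
    """Pick the accumulation interval with maximum SNR under a simple
    CCD model: signal (electrons) grows linearly until the well
    saturates; noise combines shot, dark-current and readout terms.
    """
    best_t, best_snr = None, -1.0
    for t in t_candidates:
        signal = min(s_rate * t, full_well)              # saturates at full well
        noise = np.sqrt(signal + dark_rate * t + read_noise ** 2)
        snr = signal / noise
        if snr > best_snr:
            best_t, best_snr = t, snr
    return best_t, best_snr
```

Too short an interval is read-noise limited, too long an interval saturates while dark noise keeps growing, so an intermediate interval wins — which is what a hardware interval former converges to in the camera.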

FIELD: information technology.

SUBSTANCE: the integrated security system for a facility includes a security and surveillance system having first and second modems, an information processing and display unit and a switching unit, and also has at each guarded facility a sensor unit, a signalling unit, a video surveillance unit, an actuating device unit, a video information alignment unit and a display unit; the system is further provided with a fire alarm system and a perimeter intruder alarm and video surveillance system for the territory of the facility.

EFFECT: broader functional capabilities owing to a fire alarm, perimeter intruder alarm and video surveillance of the territory of the facility.

3 cl, 8 dwg

FIELD: video surveillance.

SUBSTANCE: the method includes video surveillance of the state of a controlled object, wherein a high-frequency modulated television signal is sent into the object's television cable main. To receive signals on the state of S objects, each of which includes a group of N video surveillance units (each comprising a camera and a microphone), the video and audio signals from each group of N video surveillance units are combined at low frequency; the resulting composite video signal from each group is converted into a high-frequency modulated television signal and synchronised onto the unified cable main — a coaxial television cable — in an arbitrary combination of groups, via which the resulting S independent signals are sent to the input of visualisation and/or recording systems.

EFFECT: higher efficiency.

3 cl, 2 dwg
