Method of detecting data transmitted in visible light using the sensor of a standard camera

FIELD: electricity.

SUBSTANCE: the invention relates to the field of lighting systems and optical receivers, and more specifically to the detection of data embedded into the light output of lighting systems. The invention proposes a system and method for detecting a repeating first sequence of N symbols included in a first code, where the first code is embedded into the light output of a first light source of the lighting system. The sequence of images contains at least N frames, and each of the at least N different frames is acquired with a total exposure time comprising one or more exposure moments, such that in each of the N different frames the one or more exposure moments occupy different time positions relative to the repeating first sequence of N symbols. The system comprises a camera for acquiring sequences of scene images via particular patterns of shutter open/closed states and a processing module for processing the acquired sequence of images and detecting the repeating sequence of N symbols.

EFFECT: provision of a system for detecting data embedded into the light output with invisible high-frequency modulation using standard commercial cameras, such as those in mobile phones or webcams.

15 cl, 12 dwg

 

The technical field to which the invention relates

Embodiments of the present invention relate in general to the field of lighting systems and optical receivers, and more specifically to systems and methods for detecting data embedded in the light output of such lighting systems.

Background art

Data transmission in visible light refers to the transmission of data through the light output generated by light sources. This manner of data transfer is a promising way to provide localized wireless data exchange in the future, because a wide range of unlicensed frequencies is available for that purpose, and because the LEDs used to illuminate a room or space can simultaneously be used to transfer data. Eventually, light sources used for illumination may also become sources of data.

One of the technologies for data transmission in visible light is based on embedding data into the light output of a lighting device by modulating the light output of the lighting device in response to a data signal (such light output is sometimes called "coded light", abbreviated "CL"). Preferably, the light output is modulated at a frequency high enough that the modulation is invisible to consumers.

One possible application of CL involves lamps whose light output carries superimposed technical service information, such as the number of burning hours, dimming schemes, capabilities, network management address, lamp temperature, dimming level, failures, etc. Another application involves lamps in public places whose light output carries embedded data representing local weather, background information, local advertising for nearby shops/restaurants, road traffic information, music, etc.

The embedded data can be detected by an optical receiver which, for example, can be implemented in a remote control for controlling the lamp, or may be included in another module, such as a switch or a sensor device. One known CL detection technique involves pointing an optical receiver at a particular lamp and reading the data embedded in the light output of that lamp.

One of the drawbacks of this detection technique is that embedded data can be detected only at a single position at a time. It is instead desirable to simultaneously receive data transmitted at different positions within a given space. To this end, other detection techniques have been presented that use a camera to capture the scene and to record and process data streams at various positions within the scene in parallel. This type of camera-based detection requires a camera whose frame rate is at least equal to the modulation frequency used for embedding data into the light output of the lamp. Conventional commercial cameras, such as those used in mobile phones and webcams, have frame rates of 50-100 Hz, well below the modulation frequency required to embed data so that it is not noticeable to consumers. Although such cameras can be used to detect CL, the CL would have to be implemented either with low-frequency modulation or with modulation using color. Both of these implementation methods lead to variations of the light output of the lamp that are visible to consumers.

As follows from the above, there is a need in the art for a technology to detect data embedded in the light output of light sources that solves at least some of the problems described above.

Disclosure of the invention

The purpose of the invention is to provide a detection system and method suitable for determining data embedded into the light output of a light source with invisible "high-frequency" modulation, using conventional commercial cameras such as those used in the prior art.

In one embodiment of the present invention, a detection system is presented for determining a repeating first sequence of N symbols included in a first code. The first code is embedded in the light output of a first light source of the lighting system. The detection system includes at least a camera and a processing module. The camera is configured to acquire a sequence of images of a scene. Each image includes a plurality of pixels, each pixel representing the intensity of the total light output of the lighting system at a different physical position within the scene. The total light output of the lighting system includes the light output of the first light source in at least one physical position within the scene. The sequence of images includes at least N different images. Each of the at least N different images is acquired with a total exposure time comprising one or more exposure moments at different temporal positions within the repeating first sequence of N symbols. The processing module is configured to process the acquired sequence of images to determine the repeating first sequence of N symbols.

Such a detection system may be embodied, for example, in a remote control for controlling the lighting system, or may be included in another module, such as a switch, a sensor device, or a mobile phone. The processing module may be implemented in hardware, in software, or as a hybrid solution with both hardware and software components.

In addition, a corresponding method and a computer program are provided for detecting a repeating first sequence of N symbols included in a first code, wherein the first code is embedded in the light output of a first light source of the lighting system. The computer program can be downloaded to existing detection systems (e.g., existing optical receivers or mobile phones) or can be stored in detection systems upon their manufacture.

As used herein, the term "pixel" [of an image] refers to a unit of image data corresponding to a particular point within a scene. The image data comprises intensities (or derivatives thereof) of the total light output of the lighting system at different points within the scene. Arranging the image data in rows and columns of pixels is one way of representing a three-dimensional (3-D) scene in a 2-D image.

The duration of a repeating sequence of an embedded code (or, alternatively, the length of a repeating sequence of an embedded code measured in the number of binary values that make up the sequence) is referred to herein as the "code period". A particular code may be embedded into the light output of a light source by modulating a drive signal applied to the light source via binary or multilevel modulation using, for example, pulse width modulation, pulse density modulation, or amplitude modulation, as known in the art. As used herein, the term "drive signal" refers to the electrical signal which, when applied to the light source, causes the light source to generate the light output. Preferably, the code is embedded into the light output in such a way that the human eye cannot distinguish light output that includes the embedded code from light output that does not. This can be achieved, for example, by modulating the drive signal applied to the first light source at a high frequency.

The present invention is based on the recognition that, by capturing a sequence of images of the same scene that differ only in the relative times at which each image is acquired within the repeating sequence (such times are referred to herein as "exposure moments"), and by selecting the time positions and durations of the exposure moments within the camera frame time so that they correspond to particular coded bits of the light, a particular sequence of code bits embedded in the light output of a light source can be determined (the light output of that light source being represented within the scene). The summed duration of all exposure moments within one frame is called the "exposure time" of the camera. The required exposure time at particular exposure moments can be implemented by opening and closing the camera shutter, where the shutter may be either internal or external to the camera. The exposure time remains the same for all images; what changes is when the shutter opens relative to the embedded sequence of code bits (i.e., the time positions of the exposure moments within the repeating sequence change). For example, with the camera frame time set to 20 milliseconds (ms) and the exposure time set to 5 ms, one image may be acquired with the shutter open only during the first 5 ms of the frame (i.e., the shutter is closed for at least the remaining 15 ms of the frame), another image may be acquired with the shutter open only during the last 5 ms of the frame (i.e., the shutter is closed during the first 15 ms of the frame), and yet another image may be acquired with the shutter open during the first millisecond of the frame, then closed for a millisecond, then open for the next millisecond, then closed again for a millisecond, then open again for 3 ms, and then closed for the remainder of the frame.
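The shutter scheduling described above can be sketched as follows (a minimal Python illustration; the 20 ms frame time and 5 ms exposure time are the figures from the example, while the interval representation itself is an assumption of this sketch):

```python
# Each image is described by a list of (start_ms, duration_ms) shutter-open
# intervals within one camera frame; only the placement of the intervals
# changes between images, never the total exposure time.

FRAME_MS = 20      # camera frame time from the example above
EXPOSURE_MS = 5    # total exposure time per frame

image_schedules = [
    [(0, 5)],                     # shutter open during the first 5 ms
    [(15, 5)],                    # shutter open during the last 5 ms
    [(0, 1), (2, 1), (4, 3)],     # exposure split into three moments
]

for schedule in image_schedules:
    # The summed duration of all exposure moments equals the exposure time,
    # and every open interval lies within the frame.
    assert sum(d for _, d in schedule) == EXPOSURE_MS
    assert all(start + d <= FRAME_MS for start, d in schedule)
```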
The processing module may be configured to compare selected pixels of the acquired sequence of images against a threshold value that distinguishes one coded bit from another, in order to determine all code bits of the coded sequence.
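Under the assumption of binary symbols, this threshold comparison can be sketched as follows (the threshold and the intensity samples are hypothetical values chosen only for illustration):

```python
THRESHOLD = 1.0  # assumed value separating the two coded intensity levels

def decode_bits(samples, threshold=THRESHOLD):
    """Map each sampled pixel intensity to a code bit by thresholding."""
    return [1 if s > threshold else 0 for s in samples]

# One intensity sample per frame for a chosen pixel, over four frames:
bits = decode_bits([0.2, 1.8, 0.3, 0.1])  # -> [0, 1, 0, 0]
```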

Acquiring and processing the sequence of images thus makes it possible to determine data embedded in the light contribution of a light source within a 2-D scene. By carefully initiating the points in time at which the camera shutter opens, so as to capture different code bits of the coded light during each camera frame, a conventional camera with a relatively long frame time can be used. Therefore, the technique presented here is suitable for detecting invisible "high-frequency" CL using a less expensive camera than those used in the prior art.

The light sources described herein may comprise high/low pressure gas discharge sources, inorganic/organic LEDs, laser diodes, incandescent sources, or halogen sources. Data embedded into the light output of the lighting system may include localized identification of the light sources, their capabilities and current settings, or other types of information related to the light sources. However, it should be noted that the lighting system is not necessarily applied for the purpose of illuminating a space or area, but may also be used for data transmission as such. As an example, the lighting system may serve as a network access point. In such embodiments, at least part of the light output generated by the lighting system may be outside the visible spectrum (i.e., the light output of one of the light sources of the system may be outside the visible spectrum).

The embodiments according to claims 2 and 13 establish that processing the acquired image sequence may advantageously comprise determining the modulation type used for embedding the first code into the light output of the first light source, using the determined modulation type to determine one or more threshold values differentiating the different symbols of the first code, and determining the repeating first sequence of N symbols by comparing each pixel of a sequence of pixels of the acquired image sequence, corresponding to at least one physical position within the scene that contains the light output of the first light source, with at least one of the one or more determined threshold values.

The embodiment according to claim 3 establishes that the sequence of pixels used to determine the repeating first sequence of N symbols contains pixels that correspond to a physical position within the scene that includes only the light output of the first light source. Thus, a sequence of pixels that includes only a single data stream, embedded in the light output of a single light source, is used to determine the symbols.

The embodiment according to claim 4 provides for the inclusion of an intermediate sequence in the first code. The intermediate sequence may advantageously be used, for example, to establish synchronization between the repeating first sequence of N symbols and the detection system (claim 5), to determine the pixels that correspond to a physical position within the scene that includes only the light output of the first light source (claim 5), and/or to determine the modulation type used for embedding the repeating first sequence into the light output of the first light source (claim 6).

The embodiment according to claim 7 sets a duration for the given exposure time. This choice of duration is advantageous because it enables resolving individual symbols of the embedded code.

The embodiments according to claims 8 and 9 provide that, when the one or more exposure moments contain at least two exposure moments, the exposure moments can be consecutive (i.e., a single continuous exposure) or non-consecutive (i.e., the exposure within a camera frame is divided into several exposure moments). When all exposure moments within a camera frame are consecutive, the shutter that is usually internal to all cameras can be used to set the correct exposure moments within the total exposure time (i.e., to place the exposure moments at the required time positions within the camera frame time). Alternatively, the internal shutter can be set to remain open during the entire frame time of each frame, and a shutter external to the camera can be used to set the correct exposure moments within the total exposure time. An electronic shutter can be used for this purpose.

The embodiments according to claims 10 and 14 provide for multi-bit exposures within a camera frame, when the one or more exposure moments contain two or more exposure moments. Acquiring each image using multiple exposure moments per frame can make more efficient use of the available light, particularly in poor lighting conditions, by reducing the influence of noise on the detection process.

The embodiment according to claim 11 advantageously establishes the use, for acquiring the image sequence, of a camera with a frame rate lower than the modulation frequency used for embedding the first code into the light output of the first light source.

An embodiment of the invention will be described in more detail below. It should, however, be understood that this embodiment may not be construed as limiting the scope of the present invention.

Brief description of the drawings

Fig. 1 shows a schematic illustration of a lighting system installed in a structure, in accordance with one embodiment of the present invention;

Fig. 2 shows a schematic illustration of a lighting system, in accordance with one embodiment of the present invention;

Fig. 3 shows a schematic illustration of a code containing a repeating sequence of N symbols, in accordance with one embodiment of the present invention;

Fig. 4 shows a schematic illustration of a detection system, in accordance with one embodiment of the present invention;

Fig. 5 shows a schematic illustration of an exemplary code containing a repeating sequence of 4 symbols, and exemplary one-bit exposure moments in camera frames corresponding to such a code, in accordance with one embodiment of the present invention;

Fig. 6a presents a schematic illustration of a first image in a sequence of images captured when two light sources provide light contributions to a scene, in accordance with one embodiment of the present invention;

Fig. 6b presents a schematic illustration of a second image in a sequence of images captured when two light sources provide light contributions to a scene, in accordance with one embodiment of the present invention;

Fig. 6c presents a schematic illustration of a sequence of images captured when two light sources provide light contributions to a scene, in accordance with one embodiment of the present invention;

Fig. 7 shows a flowchart of method steps for determining a repeating sequence of N symbols, in accordance with one embodiment of the present invention;

Fig. 8 presents a schematic illustration of a sequence of pixels of the acquired image sequence that corresponds to one selected physical position within the scene containing the light output of a first light source, in accordance with one embodiment of the present invention;

Fig. 9 presents a schematic illustration of an exemplary code containing a repeating sequence of 4 symbols, and exemplary multi-bit exposure moments in camera frames corresponding to such a code, in accordance with one embodiment of the present invention; and

Fig. 10 presents a schematic illustration of an exemplary code containing a repeating sequence of N symbols, and an exemplary switching signal used to control a shutter external to the camera, in accordance with one embodiment of the present invention.

The implementation of the invention

In the following description, various specific details are presented to provide a more complete understanding of the present invention. However, it will be clear to those skilled in the art that the present invention can be practiced without one or more of these specific details. In other instances, well-known features have not been described in order not to obscure the present invention.

Fig. 1 shows a structure 100 (in this case, a room) with an installed lighting system 110. The lighting system 110 contains one or more light sources 120 and one or more controllers (not shown) that control the light sources 120. Driven by an electrical drive signal, the light sources 120 illuminate parts of the structure 100; the contributions to the total light from the various light sources 120 are shown as illumination contours 125a-125d. The light sources 120 may include high/low pressure gas discharge sources, organic/inorganic LEDs, laser diodes, incandescent sources, or halogen sources. The lighting system 110 may further comprise a remote control 130 providing the user with the ability to control the light sources 120.

Fig. 2 shows a schematic illustration of a lighting system 200 in accordance with one embodiment of the present invention. The lighting system 200 may be used as the lighting system 110 in the structure 100 shown in Fig. 1. As shown, the lighting system 200 includes at least a system controller 210 and a first light source 220-1 and is arranged to generate a light output 205 in accordance with light settings. In other embodiments, the lighting system may include additional light sources and, if necessary, additional controllers to individually control each of the additional light sources. Alternatively, a single controller may be arranged to control multiple light sources.

The lighting system 200 is configured to operate as follows. As shown in Fig. 2, light settings for the lighting system 200 are provided to a drive signal generator 230 (which, if necessary, may be included in the lighting system 200). The light settings indicate what the average light output 205 should express, such as light power, defined in terms of brightness and chromaticity. The light settings may be provided by the user via the remote control 130 or may be pre-programmed and supplied from an external module controlling scene settings. Alternatively, the light settings may be pre-programmed and stored in a memory device of the drive signal generator 230 or of the lighting system 200. The generator 230 converts the light settings into different electrical drive signals for the different light sources within the lighting system 200 and supplies these drive signals to the system controller 210. In the embodiment shown in Fig. 2, the drive signal generator 230 converts the light settings into a first drive signal for the first light source 220-1. The system controller 210, in turn, drives the different light sources with their respective drive signals to obtain the light output 205. In the embodiment shown in Fig. 2, the system controller 210 is arranged to drive the first light source 220-1 with the first drive signal to obtain a light output 225-1. In this embodiment, the light output 205 of the lighting system 200 contains the light output 225-1.

As described, the light settings indicate what the light output 205 of the lighting system 200 should express, for example in terms of the chromaticity of the light. A change in the color of the light output 205 may be achieved by differently dimming the intensities of the different light sources (additional, optional light sources not shown in Fig. 2) within the lighting system 200, by controlling the drive signals provided to the system controller 210 by the drive signal generator 230. For a constant dimming level of a light source, the drive signal supplied from the drive signal generator 230 to the system controller 210 contains a repeating pulse structure. Such a repeating pulse structure is referred to herein as a "drive structure".

Various methods for dimming light sources are known to those skilled in the art and are therefore not described in detail here. Such methods include, for example, pulse width modulation, pulse density modulation, or amplitude modulation.

The system controller 210 is further arranged to receive a signal 245 from a data source 240. The signal 245 comprises (at least) the data bits to be embedded into the light output 225-1 of the light source 220-1. The system controller 210 is capable of generating the code to be embedded into the light output 225-1 by arranging the data bits into repeating sequences. One example of such a code is shown in Fig. 3. As shown, the code 310 contains a first repeating sequence of N symbols (e.g., bits), shown as "sequence 1". In the following description, the symbols are called bits. However, it should be taken into account that whenever the word "bit" is used in this application, the broader definition of "symbol" applies, in which a plurality of bits may be represented by a single symbol. One example is multilevel symbols, where not only 0 and 1 but also multiple discrete levels are used for embedding data.

Every bit of the code 310 has a duration Tbit. Thus, the code period is equal to N*Tbit. The sequence may represent, for example, localized identification of the light source 220-1, its capabilities and current light settings, or another type of information that may or may not be related to the light source 220-1 or the lighting system 200.

In one embodiment, "sequence 1" can contain all of the data bits to be embedded into the light output 225-1. In another embodiment, the system controller 210 may divide the data bits to be embedded into sets of smaller length (packets). In this embodiment, the first code to be embedded in the light output 225-1 would include a repeating sequence of one set (for example, a repeating first packet), the next code would include a repeating sequence of another set (for example, a repeating second packet), and so on.

If necessary, the data bits to be embedded into the light output 225-1 may be encoded using channel coding (e.g., using convolutional coding and/or block coding) or using a cyclic redundancy check (CRC). This may be done to increase redundancy in the transmitted data sequences for correcting errors in bit detection and/or to check whether the detected data is valid. Alternatively, such encoding can be used for shaping the spectrum of the transmitted light signal, for example to reduce the visibility of the data embedded into the light output 225-1. For the latter approach, in one embodiment, Manchester encoding may be used. The advantage of this encoding is that it suppresses the low-frequency components in the light signal that cause visible flicker.
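As an illustration of why Manchester encoding suppresses these low-frequency components, a minimal sketch (the chip convention 0 → 01, 1 → 10 is one common choice, assumed here):

```python
# Manchester encoding maps each data bit to a two-chip symbol, so every bit
# period contains a transition and the encoded stream is DC-balanced,
# regardless of the data content.

def manchester_encode(bits):
    # Assumed convention: 0 -> (0, 1), 1 -> (1, 0).
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

encoded = manchester_encode([0, 1, 0, 0])
# Equal numbers of ones and zeros for any input: no DC component,
# hence no slow brightness drift that would be visible as flicker.
assert encoded.count(1) == encoded.count(0)
```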

In one embodiment, the code 310 may contain an intermediate sequence inserted between at least some of the instances of "sequence 1" (not shown in Fig. 3). For example, the intermediate sequence may be included between every instance of "sequence 1" in an embodiment in which the data bits are divided into packets, so that the intermediate sequence may be similar to a packet header containing information related to the packet, such as, for example, the modulation type and/or coding used on the transmission side, and/or an identification of the light source 220-1. Alternatively, the intermediate sequence may be included after every two or three instances of "sequence 1". In yet another embodiment, the intermediate sequence may be included randomly rather than periodically.
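This packet-header arrangement can be sketched as follows (the packet bits, header bits, and insertion interval below are all hypothetical values for illustration):

```python
# Build a transmitted bit stream by inserting an intermediate (header)
# sequence between repeated packet instances.

def build_stream(packet, header, copies, header_every=1):
    """Repeat `packet` `copies` times, prefixing a header every
    `header_every` packet instances."""
    out = []
    for i in range(copies):
        if i % header_every == 0:
            out.extend(header)
        out.extend(packet)
    return out

# Hypothetical 4-bit packet with a 2-bit header before every instance:
stream = build_stream(packet=[0, 1, 0, 0], header=[1, 1], copies=3)
# -> [1,1, 0,1,0,0, 1,1, 0,1,0,0, 1,1, 0,1,0,0]
```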

As explained below, the intermediate sequence can be used by the processing module of the detection system to obtain synchronization between the light source 220-1 and the detection system, and/or to provide additional information to the processing module that allows the processing module to determine the code bits of the first sequence of N bits.

The system controller 210 may embed the code 310 into the light output 225-1 of the light source 220-1 by modulating the drive signal to be applied to the light source 220-1 in response to the code 310. Different techniques for embedding a code into the light output of a light source are known to those skilled in the art and are therefore not described in detail here. The signal 245 may further include other similar codes to be embedded into the light outputs of other light sources. Each of the codes includes a different repeating sequence of N bits.

Fig. 4 shows a diagram representing a detection system 400 in accordance with one embodiment of the present invention. As shown in the drawing, the detection system 400 includes at least a camera 410, a shutter 420, and a processing module 430. If necessary, the detection system 400 also includes a storage device 440. The camera 410 is configured to acquire a sequence of images of a scene. The shutter 420 is configured to correctly set the times at which the images are taken by the camera 410 (images are acquired while the shutter 420 is open and are not acquired while the shutter 420 is closed). In various embodiments, the shutter 420 may comprise a conventional shutter internal to the camera 410, which can be opened and closed only once during the camera frame time (i.e., a single exposure of a given duration within a frame), or an electronic shutter located in front of the camera, which can be opened and closed multiple times during one frame.

Scenario 1: one-bit exposure per frame

First, consider an example scenario in which the lighting system 200 is arranged so that two light sources provide light contributions to a particular scene. Consider that the scene represents a part of the floor of the structure 100 shown in Fig. 1, where the first light source is one of the light sources 120 illustrated in Fig. 1, having the illumination contour 125b within the scene (i.e., on the floor), and the second light source is another light source 120 shown in Fig. 1, having the illumination contour 125c within the scene. The respective codes embedded into the light outputs of the first and second light sources comprise different repeating sequences of N bits.

For simplicity, consider that the data to be embedded into the light output of the first light source includes 4 bits. The code containing a repeating sequence of these 4 bits is shown in Fig. 5 as code 510 (i.e., the first repeating sequence of N bits contains a sequence of 4 bits). As shown, each of the bits has a duration Tbit. Therefore, the code period is 4*Tbit. In addition, consider that the individual bits of the sequence, bits c11, c12, c13, c14, contain 0, 1, 0, and 0, respectively, as illustrated by the signal 520 in Fig. 5. The signal 520 may be included in the signal 245 described with reference to Fig. 2.

As described above, data is embedded into the light output of the first light source by driving the first light source with a drive signal modulated in response to the code 510. In various embodiments, the system controller 210 may generate the modulated signal by modulating the drive signal using binary or multilevel modulation, for example pulse width modulation (PWM), pulse position modulation, pulse density modulation, or amplitude modulation. For example, to embed the binary value 0 of the signal 520 using PWM, the system controller 210 may include a drive structure within the drive signal having a pulse width a required for embedding the binary value "0" of the signal 520, and the system controller 210 may include a different drive structure within the drive signal, with a wider pulse width b, for embedding the binary value "1" of the signal 520. Because the ratio between the widths a and b is matched to the ratio between the numbers of ones and zeros in the signal 520, the data embedding in the light output of the lighting system can be made invisible to the human eye, since the time average of the modulated drive signal remains the same as that of the original drive signal. Other ways of modulating the drive signal depending on the signal 520 for embedding coded data into the light output of the lighting system will be apparent to a person skilled in the art.
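A minimal sketch of this pulse-width idea (the duty cycles standing in for the widths a and b are hypothetical, chosen so that with the three zeros and one one of the signal 520 the time average over one code period stays at the unmodulated 0.5 level):

```python
# A code bit selects one of two PWM duty cycles; with the widths chosen to
# compensate the ratio of zeros to ones in the code, the time-averaged drive
# level -- and hence the perceived brightness -- is unchanged.

def pwm_frame(duty, n_slots=100):
    """One PWM period rendered as n_slots on/off samples."""
    on = round(duty * n_slots)
    return [1] * on + [0] * (n_slots - on)

DUTY_0, DUTY_1 = 0.48, 0.56   # hypothetical widths a and b for bits 0 and 1

code = [0, 1, 0, 0]           # the repeating 4-bit sequence of the signal 520
signal = []
for bit in code:
    signal.extend(pwm_frame(DUTY_1 if bit else DUTY_0))

# Average drive level over one code period: (3*0.48 + 1*0.56) / 4 = 0.5.
avg = sum(signal) / len(signal)
```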

Similarly, consider that the data embedded into the light output of the second light source includes 4 bits, c21, c22, c23, c24. Again, each bit has a duration Tbit, and the code period is therefore equal to 4*Tbit. The above description applies equally to the data bits to be embedded into the light output of the second light source and, for brevity, is not repeated.

The detection system 400 can be configured to operate as follows to determine the bits of data embedded into the light output of the first light source.

Initially, the camera 410 is configured to acquire a sequence of images of a scene, where the scene is selected so that at least part of the scene includes the light output of the first light source. To this end, the camera frame time can be set to one bit longer than the code period, i.e., 5*Tbit, and the camera exposure can be set to contain a single exposure moment having a duration equal to the duration of one bit of the code, i.e., Tbit. In addition, the camera 410 may be configured to acquire the image during the first period Tbit of each frame. The exposures of the camera 410 so established are presented in row 50 of Fig. 5.
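The effect of this frame-time choice can be sketched as follows: because each frame is one bit longer than the 4-bit code period, each frame's exposure moment lands one bit further into the repeating sequence than the previous frame's (a minimal illustration using the code from Fig. 5):

```python
N = 4                 # code period in bits
FRAME_BITS = N + 1    # frame time is one bit longer than the code period

code = [0, 1, 0, 0]   # bits c11, c12, c13, c14 from the example above

sampled = []
for frame in range(N):
    # The exposure covers the first bit time of each frame; the code
    # repeats with period N, so the sampled bit index advances by one
    # per frame (FRAME_BITS mod N == 1).
    t_start_bits = frame * FRAME_BITS
    sampled.append(code[t_start_bits % N])

# After N frames every bit of the repeating sequence has been observed once.
```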

When an image is taken, the camera acquires the intensity of the light output of the lighting system at all positions within the scene. In this application, whenever the term "intensity" (of the light output) is used, it is understood to also include "derivatives of the intensity", such as, for example, light color, color temperature, light spectrum and changes in light intensity. The image is usually divided into a set of pixels, where each pixel represents the intensity of the light output of the lighting system at a different physical position within the scene. In the current scenario, the total light output of the lighting system contains a light contribution from the first light source and a light contribution from the second light source.

Consider that each image is divided into a 2-D grid of 150 pixels, with 10 pixels in the direction of the x-axis and 15 pixels in the direction of the y-axis. Since the exposure time of the camera 410 is set equal to one bit of the code, the intensity at a particular pixel of the image is affected by the value of the code bit encoded in the light output of the first light source and by the value of the code bit encoded in the light output of the second light source at the time the image is taken. The first image is shown in Fig. 6a (this image corresponds to frame 1 illustrated in Fig. 5). The illumination contour of the first light source within the scene is represented as a circle 610, and the illumination contour of the second light source is represented as a circle 620. Since the first image 600-1 is taken when the light output of the first light source is modulated with the code bit c11 and the light output of the second light source is modulated with the code bit c21, the intensity Ix,y at each pixel (x, y) can be calculated as follows:

Ix,y = Ax,y·c11 + Bx,y·c21,

where Ax,y and Bx,y are the values that the intensities of the light outputs of the first and second light sources, respectively, would have if the drive signals applied to the first and second light sources were not modulated with a code bit. Thus, as shown in Fig. 6a, the intensity at, for example, pixel (7,6) is equal to (A7,6·c11 + B7,6·c21). Further, as shown in Fig. 6a, the intensity at, for example, pixel (4,5) is equal to A4,5·c11 and the intensity at pixel (5,5) is equal to A5,5·c11, because the second light source does not provide any light contribution to the part of the scene represented by these pixels (these pixels lie outside the illumination contour 620), i.e. B4,5 = B5,5 = 0. Similarly, the intensity at, for example, pixel (7,9) is equal to B7,9·c21 and the intensity at pixel (8,12) is equal to B8,12·c21, because the first light source does not provide any light contribution to the part of the scene represented by these pixels (these pixels lie outside the illumination contour 610), i.e. A7,9 = A8,12 = 0. The intensity at, for example, pixel (9,2) is shown as zero because neither the first nor the second light source provides any light contribution to the part of the scene represented by this pixel (this pixel lies outside both illumination contours 610 and 620).
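The additive intensity model can be sketched directly; the numeric values of A and B below are illustrative, not taken from the patent.

```python
# Sketch of the additive intensity model: I = A*c1 + B*c2, where A and B
# vanish for pixels outside the respective illumination contours 610/620.

def intensity(A, B, c1, c2):
    """Pixel intensity given contributions A, B and code bits c1, c2."""
    return A * c1 + B * c2

I_7_6 = intensity(2.0, 3.0, 1, 1)  # inside both contours 610 and 620
I_4_5 = intensity(2.0, 0.0, 1, 1)  # outside contour 620, so B = 0
I_9_2 = intensity(0.0, 0.0, 1, 1)  # outside both contours, so A = B = 0
```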

As can be seen from the exposure 530, a second image of the sequence of images of the scene is taken during frame 2 of the camera 410. The second image is shown in Fig. 6b as image 600-2. Because the images are taken of the same scene, the illumination contours 610 and 620 remain the same as in the first image. Because the image 600-2 is taken when the light output of the first light source is modulated with the code bit c12 and the light output of the second light source is modulated with the code bit c22, the intensity at each pixel (x, y) can be calculated as follows:

Ix,y = Ax,y·c12 + Bx,y·c22.

Thus, as shown in Fig. 6b, the intensity at, for example, pixel (7,6) is equal to (A7,6·c12 + B7,6·c22). Further, as shown in Fig. 6b, the intensity at, for example, pixel (4,5) is equal to A4,5·c12, the intensity at pixel (5,5) is equal to A5,5·c12, the intensity at pixel (7,9) is equal to B7,9·c22, and the intensity at pixel (8,12) is equal to B8,12·c22. Again, the intensity at, for example, pixel (9,2) is shown as zero, since neither the first nor the second light source provides any light contribution to the part of the scene represented by that pixel.

Similarly, in frames 3 and 4, the camera 410 acquires, respectively, the third image (600-3) and the fourth image (600-4) of the image sequence. Again, given the way the exposure time and frame time of the camera 410 are configured relative to the embedded codes, the third image is taken when the light outputs of the first and second light sources are modulated with the code bits c13 and c23, respectively, and the fourth image is taken when the light outputs of the first and second light sources are modulated with the code bits c14 and c24, respectively. The sequence of images 600-1, 600-2, 600-3 and 600-4 is shown in Fig. 6c, where the different images are displayed extending in the direction t (where "t" stands for "time"), illustrating that the images were taken of the same scene but at different times.

After obtaining the sequence of images 600-1 to 600-4, the processing module 430 is configured to process the sequence of images to determine the first repeating sequence of N bits embedded in the light output of the first light source. Fig. 7 illustrates a flow diagram of the method steps of a method 700 for determining the first repeating sequence of N bits, in accordance with one embodiment of the present invention. While the method steps are described in conjunction with the elements of Fig. 4, persons skilled in the art will recognize that any system configured to perform the method steps, in any order, is within the scope of the present invention.

The method 700 may begin at step 710, where the processing module 430 is configured to select a sequence of pixels within the image sequence 600-1 to 600-4 corresponding to a physical position within the scene that includes only the light output of the first light source.

To illustrate, consider a sequence of pixels within the image sequence 600-1 to 600-4 corresponding to a particular physical position within the scene, where the particular physical position within the scene is the position corresponding to pixel (4,5) in the images 600-1 to 600-4. The sequence of pixels of the received image sequence corresponding to this position then contains pixel (4,5) from each image. This sequence is shown as sequence 810 in Fig. 8. The sequence 810 includes pixel (4,5) of the first image 600-1 (shown as pixel 820-1), pixel (4,5) of the second image 600-2 (shown as pixel 820-2), pixel (4,5) of the third image 600-3 (shown as pixel 820-3), and pixel (4,5) of the fourth image 600-4 (shown as pixel 820-4). As shown in Fig. 8, the intensities at the pixels 820-1 to 820-4 are equal to A4,5·c11, A4,5·c12, A4,5·c13 and A4,5·c14, respectively.
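Assembling such a per-pixel sequence can be sketched as follows; the [y][x] indexing convention and the tiny 2x2 "images" are assumptions made for illustration only.

```python
# Sketch: building the per-position pixel sequence (like sequence 810) by
# taking the same pixel from each image of the received sequence.
# Images are modeled as nested lists indexed [y][x] (an assumption).

def pixel_sequence(frames, x, y):
    """Intensity of pixel (x, y) in each frame, in frame order."""
    return [frame[y][x] for frame in frames]

# Four tiny 2x2 "images" where pixel (1,0) carries A*c1k with A = 2 and
# code bits 1, 0, 1, 1 (other pixels left dark for simplicity):
frames = [
    [[0, 2], [0, 0]],
    [[0, 0], [0, 0]],
    [[0, 2], [0, 0]],
    [[0, 2], [0, 0]],
]
seq = pixel_sequence(frames, 1, 0)   # one value per frame
```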

In one embodiment, determining that a sequence of pixels corresponding to a specific physical position within the scene includes only the light output of one (the first) light source can be performed as follows.

The system controller 210 may optionally include one intermediate sequence in the code to be embedded in the light output of the first light source and include another intermediate sequence in the code to be embedded in the light output of the second light source, where the processing module 430 has access to both intermediate sequences.

Each of the intermediate sequences may contain an identifier of a particular kind, such as, for example, the identifier of a specific light source. The number of different light sources that can be identified in this manner depends, as described above, on the length of the intermediate sequence. For example, for a synchronous Walsh-Hadamard code containing a repeating sequence of M binary values, M different light sources can be identified, which means that the processing module 430 may determine whether there is a light contribution from any of the M different light sources at a certain position within the scene and, optionally, determine the magnitude of that light contribution. Using an ordinary 50 Hz camera (that is, a camera that can take 50 images per second) configured to acquire an exposure with a duration of one bit in each frame, a sequence of images sufficient to resolve a 50-bit intermediate sequence of the embedded code can be obtained in 1 second.
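The timing claim reduces to a one-line calculation, sketched here for concreteness:

```python
# Sketch: detection time for an M-bit intermediate sequence when each
# camera frame resolves exactly one bit.

frame_rate = 50                   # frames per second (ordinary 50 Hz camera)
M = 50                            # intermediate-sequence length in bits
detection_time = M / frame_rate   # seconds needed to resolve the sequence
```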

In one embodiment, the lighting system 200 may directly provide the intermediate sequences to the processing module 430. In another embodiment, the detection system 400 may include a storage device 440 containing the intermediate sequences. In yet another embodiment, the detection system 400 may be configured to obtain the intermediate sequences from the received (wireless) light signals. Alternatively, the processing module 430 may obtain, by any of the means described above, not the intermediate sequences contained in each of the embedded codes themselves, but their derivatives (i.e. parameters from which an intermediate sequence can be obtained). For example, the processing module 430 may obtain the length of a specific known sequence, or a number referring to one of a set of possible sequences. The intermediate sequences can then be re-created by the processing module 430, potentially reducing the amount of data to be provided to the detection system 400. In another embodiment, the parameters describing an intermediate sequence may be extracted from the received light signals.

Having access to the intermediate sequences, the processing module 430 may be configured to correlate the sequence of pixels of the received image sequence corresponding to the selected physical position within the scene with each of the intermediate sequences. It should be noted here that in an embodiment where the code 510 additionally includes an intermediate sequence of M bits included, for example, before or after each occurrence of the repeating sequence, the number of images obtained by the camera 410 may be at least (4+M) images instead of only 4 images (since N=4) as described above. In an embodiment in which the code 510 additionally includes an intermediate sequence of M bits included, for example, before or after every second occurrence of the repeating sequence, the number of images obtained by the camera 410 may be at least (2*N+M) images. A person skilled in the art can derive the minimum number of images that the camera 410 needs to obtain in order to resolve all bits of the embedded code. In such embodiments with an intermediate sequence, the sequence of pixels of the resulting image sequence may differ from the sequence 810 illustrated in Fig. 8, since the sequence in this case would contain more than 4 values. In particular, it would contain one value from each received image, for example (4+M) values for the first embodiment described above and (8+M) values for the second embodiment described above. Similarly, the frame time of the camera 410 may then be set accordingly to obtain the minimum number of different images - the frame time could, for example, be set to (4+1+M)*Tbit for the first embodiment described above and (8+1+M)*Tbit for the second embodiment described above.

As a result of correlating the sequence of pixels of the received image sequence corresponding to the selected physical position within the scene with the intermediate sequence that is embedded in the light output of the first light source (referred to here as the "first intermediate sequence"), a correlation output with at least one peak is generated. A correlation output usually includes a number of "peaks", some of which are smaller than the others and represent artifacts of the correlation process. These smaller peaks are referred to here as "sub-peaks", and the term "peak" is used here to describe a peak in the correlation output that indicates the presence of a particular intermediate sequence within the sequence of pixels of the received image sequence corresponding to the selected physical position within the scene. A peak is typically clearly higher than the sub-peaks in the correlation output, and a person skilled in the art can readily identify such a peak. Therefore, based on the correlation of the sequence of pixels of the received image sequence corresponding to the selected physical position within the scene with the first intermediate sequence, the processing module 430 is configured to determine that a light contribution of the first light source is present at the selected physical position within the scene if the correlation output includes a peak. A similar correlation can be performed for any other selected physical position within the scene by correlating the sequence of pixels of the received images corresponding to that selected position with the first intermediate sequence.

On the other hand, as a result of correlating the sequence of pixels of the received image sequence corresponding to the selected physical position within the scene with the intermediate sequence that is embedded in the light output of the second light source (referred to here as the "second intermediate sequence"), a correlation output not containing a peak is generated since, as shown, for example, in Fig. 6a, the second light source does not provide any, or at most a negligible, light contribution to pixel (4,5). Thus, the processing module 430 may determine that the sequence of pixels (4,5) obtained from the sequence of images includes only, or primarily, the light output of one (the first) light source.

If, for example, the processing module 430 had instead started by determining whether, for example, the sequence of pixels (7,6) in the sequence of received images includes only the light output of one (the first) light source, by correlating the sequence of pixels (7,6) with each of the first and second intermediate sequences, then both correlation outputs would include a peak. The peak in each of these correlation outputs would indicate the presence of the light output of both light sources at the physical position within the scene corresponding to pixel (7,6). The processing module 430 could then proceed to analyze a different sequence of pixels until at least one sequence of pixels of the received image sequence is found that contains the light output of only the first light source.

For this embodiment of determining that a sequence of pixels corresponding to a specific physical position within the scene includes only the light output of one (the first) light source, the intermediate sequences should be sequences with good autocorrelation properties. When used in a system where each light source is assigned a unique intermediate sequence, these sequences are preferably orthogonal. Examples are Walsh-Hadamard sequences, where the length of the intermediate sequence is equal to the number of light sources to which an intermediate sequence must be assigned. These sequences, however, typically require synchronous operation, which is not always desirable because of the additional complexity. Therefore, another desirable property of the intermediate sequences is that they have good cross-correlation properties, i.e. a high degree of autocorrelation and low cross-correlation between sequences. Examples of such sequences include pseudo-random sequences, sequences generated by linear feedback shift registers, or other sequences that can be used in code-division multiple access (CDMA) data transmission systems.
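The correlation test described above can be sketched with length-4 Walsh-Hadamard rows in +/-1 form; the amplitude A and the pixel values are illustrative assumptions.

```python
# Sketch: deciding which source illuminates a pixel by correlating the
# pixel's sequence with each source's orthogonal intermediate sequence.

def correlate(seq, ref):
    """Zero-lag correlation of two equal-length sequences."""
    return sum(s * r for s, r in zip(seq, ref))

wh_src1 = [1, -1, 1, -1]   # intermediate sequence of the first source
wh_src2 = [1, 1, -1, -1]   # intermediate sequence of the second source

A = 2.0                               # contribution of source 1 at this pixel
pixel_seq = [A * b for b in wh_src1]  # only source 1 present here

peak1 = correlate(pixel_seq, wh_src1)  # large value: peak, source 1 present
peak2 = correlate(pixel_seq, wh_src2)  # zero: no peak, source 2 absent
```

The orthogonality of the Walsh-Hadamard rows is what makes the second correlation vanish exactly; for asynchronous operation, sequences with low cross-correlation at all shifts would be used instead, as noted above.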

In yet another embodiment, each light source is assigned a unique switching frequency, which is used as its identification code. This also provides low cross-correlation and high autocorrelation.

It will be clear to a person skilled in the art that for all scenarios described here, other methods of processing the sequence of images can be used for the processing module 430 to determine whether there is a light contribution of a particular light source at a selected physical position within the scene. For example, the processing module 430 may generate different candidate sequences of pixels, for example corresponding to the different intermediate sequences, and determine which of these sequences corresponds to the sequence of pixels of the received image sequence corresponding to the selected physical position, which may be done, for example, by a maximum-likelihood search. Other methods may also be considered.

In another embodiment, determining that a sequence of pixels corresponding to a specific physical position within the scene includes mainly the light output of one (the first) light source can be performed using a CRC embedded in the sequence of bits, which is used to determine whether the sequence of pixels contains only, or mainly, the light contribution of the first light source. To determine this, the process of Fig. 7 can be applied as described below. The CRC is then used to check whether the data has been detected correctly. This is likely not the case when the light contribution of the first light source is too weak, when the light contribution of the first light source is not present at the pixel, or when there is a strong light contribution from multiple light sources at the given pixel.

When determining whether sequences of pixels corresponding to physical positions within the scene contain only or mainly the light output of the first light source, the same data stream (corresponding to the light output of the first light source) is likely to be detected at a set of pixels. In one embodiment, the signals from these pixels can be combined during the subsequent determination of the individual bits of data embedded in the light output of the first light source. Alternatively, a selection of just a few pixels that resulted in successful detection of the first data packet embedded in the light output of the first light source can be used for the detection of subsequent packets. Pixels that do not include only the light output of the first light source can be ignored in the following steps.

In yet another embodiment, in the case where different light sources generate light output at different wavelengths, the detection system 400 can ensure that a sequence of pixels corresponding to a specific physical position within the scene includes only the light output of one (the first) light source by using a color filter, adapted to transmit only light at a specific wavelength (or wavelength range), when acquiring the image sequence.

The method may then proceed to optional step 720, where the processing module 430 may synchronize the detection system 400 with the first repeating sequence (i.e. the processing module 430 may determine where the first repeating sequence begins).

In this embodiment, the intermediate sequence may include a synchronization sequence, where the processing module 430 has access to the synchronization sequence. Access to the synchronization sequence can be established in the same way as access to the intermediate sequences, described above.

Having access to the synchronization sequence, the processing module 430 may be configured to correlate at least one selected sequence of pixels within the image sequence 600-1 to 600-4, corresponding to a physical position within the scene that includes only the light output of the first light source, with the synchronization sequence. The maximum in the correlation output then refers to the beginning of the synchronization sequence and is used to determine the beginning (or beginnings) of (at least some of) the first repeating sequence (or sequences) of the code 510. Sequences practically used for synchronization are sequences with good autocorrelation properties, such as Barker sequences, which have a high correlation at full alignment but low correlations for shifted versions.
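Synchronization by correlation can be sketched with the well-known Barker-7 sequence in +/-1 form; the surrounding "payload" values and the offset are illustrative assumptions.

```python
# Sketch: locating the start of the synchronization sequence by sliding
# a Barker-7 reference over the received stream and taking the peak.

barker7 = [1, 1, 1, -1, -1, 1, -1]

def xcorr_at(stream, ref, shift):
    """Correlation of `ref` against `stream` at the given shift."""
    return sum(stream[shift + i] * ref[i] for i in range(len(ref)))

stream = [1, -1, 1] + barker7 + [-1, 1]   # sync sequence starts at offset 3
corr = [xcorr_at(stream, barker7, s)
        for s in range(len(stream) - len(barker7) + 1)]
sync_offset = corr.index(max(corr))   # position of the correlation maximum
```

The maximum of `corr` equals the full sequence length (7) only at the true alignment, which is the sharp-peak property that makes Barker sequences practical for synchronization.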

The method 700 then proceeds to step 730, where the processing module 430 determines the type of modulation used for embedding the first sequence of N bits in the light output of the first light source. In one embodiment, this can be accomplished by reusing the intermediate sequence. A field, usually consisting of several bits within the intermediate sequence, may indicate which modulation was used for embedding the first sequence of N bits in the light output of the first light source. To this end, the processing module 430 also needs to have access to the type of modulation used for the intermediate sequence. The modulation type used for the intermediate sequence can be identified, for example, by including a specific identifier in the intermediate data, and should be provided to the processing module 430, as previously described for the identifiers of the different light sources or the synchronization sequences of the light sources.

Alternatively, the modulation of the intermediate sequence and/or the first sequence of N bits can always be performed with a fixed modulation format. Step 730 is then reduced to the processing module 430 obtaining the fixed modulation format, for example by reading the fixed modulation format from the storage device 440 or by receiving the fixed modulation format from the system 200.

In yet another embodiment, the processing module 430 may determine the modulation method used for embedding the first sequence of N bits in the light output of the first light source based on the selected sequence of pixels within the image sequence 600-1 to 600-4 corresponding to the physical position within the scene that includes only the light output of the first light source (for example, the sequence 810). For example, the processing module 430 may determine the number of levels of an amplitude modulation as a number equal to the number of amplitude levels in the sequence 810.

Once the type of modulation used for embedding the first sequence of N bits in the light output of the first light source is known, at step 740 the processing module 430 is configured to determine one or more threshold values differentiating the various bits of the first sequence of N bits. In other words, the processing module 430 determines the modulation alphabet. For example, for binary amplitude modulation, an intensity in a pixel of the sequence 810 above a certain level (the threshold value) can be considered to represent a data bit "1" embedded in the light output of the first light source, while an intensity in a pixel of the sequence 810 below that level can be considered to represent a data bit "0" embedded in the light output of the first light source. Similarly, for multi-level amplitude modulation, multiple threshold values can be determined. A person skilled in the art can envision various ways in which the processing module 430 may determine the threshold values depending, for example, on the correlation values of the intermediate sequence, the amplitude of the received sequence of pixels and/or the distribution of the received intensity levels.

The method 700 may then proceed to step 750, where the processing module 430 determines the first sequence of N bits by comparing each pixel of the sequence 810 with the at least one determined threshold value (that is, by applying the modulation alphabet obtained at step 740).
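Steps 740-750 can be sketched for binary amplitude modulation; taking the threshold midway between the observed extremes is one simple, assumed choice among the possibilities listed above.

```python
# Sketch of steps 740-750: derive a threshold for binary amplitude
# modulation and slice the pixel sequence into bits.

def recover_bits(pixel_seq):
    """Map each intensity to a bit using a mid-range threshold."""
    threshold = (max(pixel_seq) + min(pixel_seq)) / 2
    return [1 if v > threshold else 0 for v in pixel_seq]

seq_810 = [2.0, 0.0, 2.0, 2.0]   # A = 2 and code bits 1, 0, 1, 1
bits = recover_bits(seq_810)     # recovered first sequence of N bits
```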

Optionally, after step 750, in an embodiment where the data embedded in the light output of the first light source is embedded in a set of packets, the data bits determined at step 750 can be assembled into a data packet. The steps 710-750 may then be repeated for the second data packet embedded in the light output of the first light source, and so on. After all data bits of the data packets are identified, the data bits can be recombined to form the original data.

Similar processing of the received image sequence 600-1 to 600-4 can be performed to determine the repeating sequence of bits embedded in the light output of the second light source.

Even though in the current scenario the frame time of the camera is shown equal to 5*Tbit, in other embodiments the frame time of the camera can be set equal to any integer multiple of Tbit, as long as the frame time is not an integer multiple or an integer fraction of the code period (in that case, each image would contain the value of the same bit of the code). For example, if the code period is 7*Tbit, the frame time of the camera can be set to two bits more or two bits less than the code period, i.e. 9*Tbit or 5*Tbit. In addition, the camera 410 may be configured to acquire images during any period Tbit of each frame, not necessarily during the first period Tbit of each frame.
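The constraint on the frame time can be sketched by listing which code bit each frame samples; in this simple stepping model (an assumption of ideal alignment), all N bit positions are visited only when the frame time and N share no common factor.

```python
# Sketch: code bit sampled in each frame, for a frame time given in
# units of Tbit and a single-Tbit exposure at the start of each frame.

def sampled_bits(frame_time, n_bits, n_frames):
    """Bit position sampled in each successive frame."""
    return [(k * frame_time) % n_bits for k in range(n_frames)]

ok = sampled_bits(5, 7, 7)    # code period 7*Tbit, frame time 5*Tbit
bad = sampled_bits(8, 4, 4)   # frame time = 2x the code period: stuck on bit 0
```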

In one embodiment, the exposure instants could essentially coincide with whole bits of the embedded data (that is, each exposure begins essentially when a new bit of the code is applied to modulate the drive signal supplied to the light source and ends when the application of that bit ends). In other embodiments, the exposure instants may coincide not with whole bits but with carefully selected parts of the embedded data bits (e.g. with the middle 50% of the bits).

Moreover, in other embodiments relating to a repeating sequence of N bits, more than N images can be received and processed by the processing module 430. This can occur, for example, in an embodiment where the signal 520 includes an intermediate sequence of M bits. In that embodiment, the minimum number of images received and processed by the processing module 430 is (N+M) images. In another embodiment, receiving more images can be used to improve the probability of detecting the light sources and to improve the detection of the data bits embedded in the light output of the light sources. For example, when 2N images are obtained, averaging can be performed over the 2 sets of N images to further suppress the influence of noise during detection. This is particularly advantageous in low-light conditions, since in the proposed methods the exposure instants are usually short compared to the frame time.
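The averaging over 2 sets of N images can be sketched as follows; the noise values are illustrative.

```python
# Sketch: averaging 2 sets of N frames (2N images of the same repeating
# code) to suppress noise before bit detection.

def average_sets(pixel_seq, n):
    """Average frame k with frame k+n over a 2n-long pixel sequence."""
    return [(pixel_seq[k] + pixel_seq[k + n]) / 2 for k in range(n)]

noisy = [2.1, -0.1, 1.9, 2.2, 1.9, 0.1, 2.1, 1.8]
clean = average_sets(noisy, 4)   # close to the noise-free [2, 0, 2, 2]
```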

Scenario 2: multi-bit exposures per frame

Further, similarly to the first scenario, assume that the lighting system 200 is configured in such a way that two light sources can provide a light contribution to a particular scene, where the scene represents a part of the floor of the structure 100 shown in Fig. 1, the first light source is one of the light sources 120 illustrated in Fig. 1 having the illumination contour 125b within the scene (i.e. on the floor), and the second light source is one of the light sources 120 shown in Fig. 1 having the illumination contour 125c within the scene. Again, the respective codes embedded in the light outputs of the two light sources comprise different repeating sequences of N bits.

Again, consider that the repeating sequence of the code embedded in the light output of the first light source includes 4 bits, c11, c12, c13, c14, and that the repeating sequence of the code embedded in the light output of the second light source includes 4 bits, c21, c22, c23, c24. Again, each bit has a duration of Tbit and the code period is therefore equal to 4*Tbit. The code embedded in the light output of the first light source is shown in Fig. 9 as code 910 and may be included in the signal 245 described in Fig. 2.

The description of the first scenario in terms of how bits of a code can be embedded in the light output of a light source applies here as well and is therefore, for brevity, not repeated. However, for simplicity, since the determination of the repeating sequence of N bits is performed for each light source independently of the other light source, and since the processing module 430 begins with the selection of pixels of the received sequence of images for which essentially only the light output of one light source is present, the current scenario now focuses on an example where only one light source provides a light contribution to the scene. Of course, persons skilled in the art will understand that the illustrative example shown here can easily be extended to multiple light sources by applying the description of the first scenario in terms of how to select a sequence of pixels of the received images that includes only the light output of one (the first) light source (step 710 of the method 700, described above).

Again, the camera 410 is configured to acquire a sequence of images of the scene. To this end, the exposure of the camera can be set so that it contains multiple exposure instants, each exposure instant having a duration equal to the duration of one bit of the code, i.e. Tbit. In this case, the total exposure time of the camera 410, Texp, is the sum of the durations of all of the multiple exposure instants. In such a scenario, to identify the first repeating sequence of N symbols included in the code 910, the detection system 400 can be configured to operate in a manner different from that described in the first scenario.

First, consider that the multiple exposure instants are consecutive and that the camera 410 is configured to acquire the image during the first period Texp of each frame. The exposure of the camera 410, thus comprising three exposure instants in each frame, is shown in Fig. 9 as line 930.

When an image is taken, the camera acquires the intensity of the light output of the lighting system at all positions within the scene. In the illustrative example of the current scenario, the total light output of the lighting system contains only the light contribution of the first light source.

Because the camera exposure time is set to three consecutive bits of the code 910, the intensity at a certain pixel in the image is affected by the values of all code bits encoded in the light output 220-1 while the image is taken. Each pixel represents the intensity of the total light output of the lighting system at a different physical position within the scene. Since the first image is taken when the light output of the first light source is modulated with the code bits c11, c12 and c13 (see Fig. 9, where during frame 1 the exposure 930 overlaps the code 910), the intensity dx,y(1) at each pixel (x, y) can be defined as follows:

dx,y(1) = Ax,y·c11 + Ax,y·c12 + Ax,y·c13,   (1)

where Ax,y represents the value that the intensity would have if the drive signal applied to the first light source were not modulated with the code bits c11, c12 and c13, and the index (1) in dx,y(1) indicates that this intensity was obtained in frame 1.

As can be seen from the exposure 930, a second image of the sequence of images of the scene is taken during frame 2 of the camera 410. Since the second image is taken when the light output 220-1 of the light source is modulated with the code bits c12, c13 and c14 (see Fig. 9, where during frame 2 the exposure 930 overlaps the code 910), the intensity dx,y(2) at each pixel (x, y) can be defined as follows:

dx,y(2) = Ax,y·c12 + Ax,y·c13 + Ax,y·c14.   (2)

Similarly, for the third image, the intensity dx,y(3) at each pixel (x, y) can be defined as:

dx,y(3) = Ax,y·c13 + Ax,y·c14 + Ax,y·c11.   (3)

Finally, for the fourth image, the intensity dx,y(4) at each pixel (x, y) can be defined as:

dx,y(4) = Ax,y·c14 + Ax,y·c11 + Ax,y·c12.   (4)

The above intensity (1) to(4) for a particular pixel (x, y) obtained for the four images can be recorded as the intensity matrixdxy:

Using notation thatformula (5) can be rewritten as follows:

In the formula (6) module 430 treatment contains the intensity of thedxyfrom the obtained sequence of images, andHfrom the way the camera 410 and the shutter 420 is made for shooting images. Thus, equation (6) is an equation with one unknown, that is, Ax,yc. Being again rewritten in matrix notation, the module 430, the processing may determine the unknown Ax,ycas:

A_x,y·c = H^(-1)·d_x,y,    (7)

where, in accordance with common matrix notation, H^(-1) denotes the inverse of the matrix H, or the pseudoinverse for a non-square matrix H. Performing the calculation in accordance with formula (7) may be considered as applying a transformation, which takes into account the presence of multiple exposure instants within each camera frame time, to the received sequence of intensity values d_x,y. This transformation corrects for the multiple exposure instants per frame, resulting in a sequence, for pixel (x, y) of the obtained images, containing data in the form A_x,y·c (similar to the sequence 810 described in the first scenario) instead of data in the form A_x,y·H·c (that is, uncorrected for the multiple exposure instants per frame). Then, processing similar to that described in the first scenario (method 700) may be applied to A_x,y·c. In various embodiments, the transformation is applied to the sequence of pixels before some or all of the steps 710-750 described above.
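As a numerical illustration of equations (6) and (7), the sketch below (Python with NumPy; the intensity A and the code bits are made-up values, not taken from the text) builds the 4x4 matrix H implied by equations (1) to (4), simulates the four frame intensities, and recovers A_x,y·c with the pseudoinverse:

```python
import numpy as np

# Circulant 4x4 exposure matrix H implied by equations (1)-(4):
H = np.array([
    [1, 1, 1, 0],   # frame 1 integrates c11, c12, c13
    [0, 1, 1, 1],   # frame 2 integrates c12, c13, c14
    [1, 0, 1, 1],   # frame 3 integrates c13, c14, c11
    [1, 1, 0, 1],   # frame 4 integrates c14, c11, c12
])

A = 0.8                      # hypothetical unmodulated pixel intensity A_x,y
c = np.array([1, 0, 1, 1])   # hypothetical code bits c11..c14

d = H @ (A * c)              # intensities observed over four frames, eq. (6)

# Equation (7): undo the multi-instant exposure with the (pseudo)inverse of H.
Ac = np.linalg.pinv(H) @ d
print(np.allclose(Ac, A * c))   # True: the sequence A_x,y*c is recovered
```

This particular H is invertible (its determinant is 3), so the pseudoinverse coincides with the ordinary inverse; `pinv` also covers a non-square H obtained from more than N frames.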

While the description of this scenario has so far applied to the case where the multiple exposure instants within the frame time of the camera 410 are consecutive, a similar approach can be applied to multiple exposure instants that are not consecutive. Equations (6) and (7) would still hold, while the difference in selecting the different exposure instants for acquiring the images would be reflected in a different matrix H. Because the calculation in accordance with equation (7) requires determining the inverse of the matrix H, the multiple exposure instants for the image sequence must be selected so that the matrix H is invertible.
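The invertibility requirement for non-consecutive exposure instants can be checked directly. The following sketch (the per-frame bit offsets are illustrative assumptions, not values from the text) assembles H from a list of sampled bit positions per frame and tests whether that choice of instants is usable:

```python
import numpy as np

N = 4
exposure_offsets = [   # bit positions sampled in each frame (illustrative)
    (0, 1),            # frame 1: consecutive instants
    (0, 2),            # frame 2: non-consecutive instants
    (0, 3),
    (1, 2),
]

# Build H: row per frame, a 1 wherever that frame integrates a code bit.
H = np.zeros((N, N), dtype=int)
for frame, offsets in enumerate(exposure_offsets):
    for k in offsets:
        H[frame, k] = 1

# Equation (7) is only usable if H is invertible.
print(round(float(np.linalg.det(H))) != 0)   # True for this choice of instants
```

Not every pattern works: for example, sampling (0, 2), (1, 3), (0, 1), (2, 3) yields a singular H, so the instants must be chosen with this check in mind.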

Another difference between acquiring images with multiple consecutive and multiple non-consecutive exposure instants within the frame time of the camera 410 is how such exposures can be implemented. Since a shutter 420 that is integrated into the camera can be opened and closed only once within a frame, such a shutter normally can only be used to acquire images with a set of consecutive exposure instants within the frame time of the camera 410.

Conversely, a shutter 420 that is external to the camera 410 can be used to acquire images with both consecutive and non-consecutive exposure instants in each frame. Such a shutter may be embodied as an electronic shutter placed in front of the camera, which allows multiple open/close cycles of the shutter within one frame. In one embodiment, such a shutter can be switched by a digital code. One example of the open/close patterns achievable by using a shutter external to the camera 410 is illustrated as the switching signal 1030 in Fig. 10. Fig. 10 also illustrates an exemplary code 1010.

In one embodiment, the detection system 400 would then work as follows. The combined light signal falls on the shutter 420, which is placed in front of the camera 410. The shutter 420 is controlled by the switching signal 1030, which determines the open/closed state of the shutter. Assume that the switching frequency of the shutter 420 is the same as the frequency of the coded light, i.e. both use the same Tbit. The camera 410 then integrates the incoming light during the frame period Tframe (this also applies to the other embodiments described here). By switching the shutter 420, some bits of the code 1010 will be captured during the instants when the shutter is open, and others not. As a result, the output signal of the camera 410 behind the shutter 420 thus represents the sum of all bits during which the shutter was open.
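The shutter integration described above amounts to a dot product between the code bits and the open/close pattern within one frame. A minimal sketch, with made-up bit patterns (the actual code 1010 and switching signal 1030 are not reproduced here):

```python
import numpy as np

code = np.array([1, 0, 1, 1, 0, 0, 1])     # hypothetical coded-light bits, one per T_bit
shutter = np.array([1, 0, 1, 0, 1, 0, 1])  # open(1)/closed(0) pattern, same T_bit grid

A = 1.0                                    # unmodulated pixel intensity
# The camera integrates over T_frame, so it sums only the code bits
# transmitted while the shutter was open:
frame_value = A * np.sum(code * shutter)
print(frame_value)                         # 3.0: three open instants coincided with a 1
```

Repeating this with a different shutter pattern per frame produces exactly the rows of d_x,y in equation (6).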

For each frame, a different shutter code of duration Tframe is used, where Tframe = Tbit·Nshut. In an embodiment where the shutter 420 is embodied as a shutter external to the camera 410, Nshut is preferably equal to an integer multiple or an integer fraction of Ncode. In an embodiment where the shutter 420 is embodied as a shutter internal to the camera 410, Nshut is preferably not equal to an integer multiple or an integer fraction of Ncode. By selecting a suitable set of consecutive shutter codes for inclusion in the switching signal 1030, i.e. such that the code matrix H is invertible, the signal d(t) can be recovered after electrical processing of the signals output from the camera 410. The processing module 430 may then proceed to determine the sequence of N samples in the manner described above.

In one embodiment, the switching signal 1030 preferably contains as many 1s as possible, because the longer the shutter 420 is open, the more light is captured by the sensor of the camera 410. Suitable codes for this purpose are S-matrices, which are constructed from Hadamard matrices by removing the first row and column.
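One common construction of such an S-matrix is sketched below, under the assumption of the Sylvester Hadamard construction and the usual mapping of Hadamard entries -1 to an open shutter (1) and +1 to a closed shutter (0); the text itself only states that the first row and column are removed:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def s_matrix(n):
    """S-matrix: drop the first row and column of the n x n Hadamard matrix,
    then map -1 -> 1 (shutter open) and +1 -> 0 (shutter closed)."""
    H = hadamard(n)
    return ((1 - H[1:, 1:]) // 2).astype(int)

S = s_matrix(4)
print(S)   # 3x3 S-matrix: every row keeps the shutter open for 2 of 3 slots
```

The rows of the S-matrix serve as the per-frame shutter codes: roughly half of each frame is spent with the shutter open, and the resulting code matrix H is invertible as required by equation (7).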

All of the description of the first scenario is applicable here and is therefore not repeated, except for what relates to the duration of the frame period relative to the duration of the code period. In the scenario where each image is acquired with multiple exposure instants and the shutter 420 is embodied as a shutter external to the camera 410, the camera frame time is preferably set to an integer multiple or an integer fraction of the code period. For example, if the code period is 7·Tbit, the camera frame time can be set to twice the code period, i.e. 14·Tbit. In the scenario where each image is acquired with multiple exposure instants and the shutter 420 is embodied as a shutter internal to the camera 410, the camera frame time is preferably set not equal to an integer multiple or an integer fraction of the code period, as presented in the first scenario.

One advantage of the current scenario is that acquiring each image with multiple exposure instants per frame is more light-efficient, particularly in low-light conditions. Therefore, the effect of noise on the detection process can be reduced.

One advantage of the present invention is that, on the basis of image sequences obtained by selecting specific open/close patterns of the camera shutter, coded light that is modulated at a high frequency, and thus invisible to the human eye, can be detected using a conventional low-frame-rate camera. For example, a conventional 50 Hz camera can be used to detect coded light modulated at a frequency of 1 kHz or higher, which is well above the threshold of human visual perception.

While the exemplary embodiments presented here are suitable for synchronized lighting systems (i.e. systems in which the embedded codes of the various light sources start at the same time), it will be clear to the person skilled in the art how to extend the present description to asynchronous lighting systems (i.e. systems in which the embedded codes of the various light sources start at different times).

One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, without limitation: (i) non-writable storage media (e.g., read-only memory devices within a computer, such as CD-ROM disks readable by a CD-ROM drive, ROM chips, or any type of solid-state non-volatile memory), on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or any type of solid-state random-access memory), on which alterable information is stored.

While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. For example, aspects of the present invention may be implemented in hardware or software, or using a combination of hardware and software. In addition, while the embodiments are described in the context of data transmission through visible light, the present invention is also applicable to wavelength ranges other than visible light. Therefore, the scope of the present invention is defined by the claims that follow.

1. A detection system for determining a repeating first sequence of N symbols included in a first code, the first code being embedded into the light output of a first light source of an illumination system, the detection system comprising:
a camera configured to acquire a sequence of frames of a scene, each frame having a frame time, wherein:
each received frame includes a plurality of pixels, each pixel representing the intensity of the total light output of the illumination system at a selected physical position within the scene, and the total light output of the illumination system contains the light output of the first light source in at least one physical position within the scene, characterized in that:
the sequence contains at least N different frames, and
each frame of the at least N different frames is received with a total exposure time comprising one or more exposure instants, and in each of the N different frames the one or more exposure instants are placed at different positions relative to the repeating first sequence of N symbols,
moreover, for each frame the total exposure time is shorter than the frame time of the camera; and
a processing module, configured to process the received sequence of frames to determine a repeating first sequence of N symbols.

2. The detection system according to claim 1, wherein, to process the received sequence of frames to determine the repeating first sequence of N symbols, the processing module is configured to:
determine the type of modulation used for embedding the first code into the light output of the first light source,
determine, on the basis of the determined type of modulation, one or more threshold values differentiating the different symbols of the first code, and
determine the repeating first sequence of N symbols by comparing each pixel of a sequence of pixels of the received frame sequence, corresponding to the at least one physical position within the scene containing the light output of the first light source, with at least one of the one or more determined threshold values.

3. The detection system according to claim 2, wherein the processing module is further configured to determine that the at least one physical position within the scene containing the light output of the first light source includes only the light output of the first light source.

4. The detection system according to claim 2, wherein at least one first sequence of N symbols is separated from another first sequence of N symbols by an intermediate sequence of symbols.

5. The detection system according to claim 4, wherein, to process the received sequence of frames to determine the repeating first sequence of N symbols, the processing module is further configured to:
correlate the sequence of pixels of the received frame sequence, corresponding to the at least one physical position within the scene containing the light output of the first light source, with the intermediate sequence of symbols, and
determine, based on the correlation, the beginning of the at least one first sequence of N symbols, determine the beginning of the other first sequence of N symbols, and/or determine that the at least one physical position within the scene containing the light output of the first light source includes only the light output of the first light source.

6. The detection system according to claim 4, wherein the intermediate sequence of symbols indicates the type of modulation used for embedding the first code into the light output of the first light source.

7. The detection system according to any one of claims 1-6, wherein the duration of each of the one or more exposure instants is equal to the duration of one symbol of the first sequence of N symbols.

8. The detection system according to any one of claims 1-6, wherein the one or more exposure instants comprise two or more consecutive exposure instants.

9. The detection system according to any one of claims 1-6, wherein the one or more exposure instants comprise two or more non-consecutive exposure instants.

10. The detection system according to any one of claims 1-6, wherein the processing module is further configured to apply a transformation, taking into account the presence of more than one exposure instant within the total exposure time, to a sequence of pixels of the received frame sequence corresponding to the at least one physical position within the scene containing the light output of the first light source.

11. The detection system according to any one of claims 1-6, wherein the frame rate of the camera is lower than the modulation frequency used for embedding the first code into the light output of the first light source.

12. A method of determining a repeating first sequence of N symbols included in a first code, the first code being embedded into the light output of a first light source of an illumination system, the method comprising the steps of:
receiving a sequence of frames of a scene, each frame having a frame time, wherein:
each received frame includes a plurality of pixels, each pixel representing the intensity of the total light output of the illumination system at a different physical position within the scene, and the total light output of the illumination system contains the light output of the first light source in at least one physical position within the scene, wherein
the sequence contains at least N different frames, and
each frame of the at least N different frames is received with a total exposure time comprising one or more exposure instants, and in each of the N different frames the one or more exposure instants are placed at different positions relative to the repeating first sequence of N symbols,
moreover, for each frame the total exposure time is shorter than the frame time of the camera; and
processing the received sequence of frames to determine the repeating first sequence of N symbols.

13. The method according to claim 12, wherein processing the received sequence of frames to determine the repeating first sequence of N symbols comprises the steps of:
determining the type of modulation used for embedding the first code into the light output of the first light source,
determining, on the basis of the determined modulation type, one or more threshold values differentiating the different symbols of the first code, and
determining the repeating first sequence of N symbols by comparing each pixel of a sequence of pixels of the received frame sequence, corresponding to the at least one physical position within the scene containing the light output of the first light source, with at least one of the one or more determined threshold values.

14. The method according to claim 13, wherein the one or more exposure instants comprise two or more exposure instants, and processing the sequence of frames to determine the repeating first sequence of N symbols further comprises the step of:
applying a transformation, taking into account the presence of more than one exposure instant within the total exposure time, to a sequence of pixels of the received frame sequence corresponding to the at least one physical position within the scene containing the light output of the first light source.

15. A machine-readable medium containing code portions which, when executed by the detection system according to claim 1, cause the detection system to perform the steps according to any one of claims 12-14.



 
