Driving assistance device and method of detecting a vehicle near it

FIELD: transport.

SUBSTANCE: the proposed device comprises an image capture unit mounted on the vehicle to capture images sideways therefrom, a nearby vehicle detection unit, a light-projecting candidate object detector that detects an object projecting light with brightness equal to or higher than a first threshold value, a cause determination unit for determining whether or not the light-projecting candidate object indicates a nearby vehicle, and a unit that provides the driver with information on the nearby vehicle.

EFFECT: higher accuracy of nearby vehicle detection.

9 cl, 9 dwg

 

Technical field to which the group of inventions relates

[0001] The present group of inventions relates to a driving assistance device and a method of detecting a vehicle near it.

Background art

[0002] Patent Document 1 proposes a driving assistance device that detects pedestrians by extracting a pair of object regions aligned in the horizontal direction, as a left region and a right region corresponding to the legs of a pedestrian, and by detecting the change in brightness in those regions.

Citation list

Patent literature

[0003] Patent Document 1: Japanese Laid-Open Patent Application Publication No. 2010-205087.

Summary of the invention

Technical problem

[0004] However, the driving assistance device disclosed in Patent Document 1 cannot determine, by detecting a change in brightness, whether a detected shining object is located in the adjacent lane or in the next adjacent lane that adjoins the adjacent lane. Consequently, the accuracy of detecting a nearby vehicle (including a motorcycle or bicycle) may decrease.

[0005] The present invention, which has been made in order to solve the problem described above, aims at providing a driving assistance device that makes it possible to increase the accuracy of detecting nearby vehicles, and a method of detecting a vehicle near it.

Solution to the problem

[0006] In the driving assistance device according to an embodiment of the present invention, a road-surface reflection region detector determines, from images captured rearward and sideways of the own vehicle by an image capture unit, a high-brightness region in which the brightness is equal to or above a predetermined threshold. In addition, a predetermined-position brightness detector determines the brightness at a position separated from the own vehicle by a predetermined distance. The position separated from the own vehicle by the predetermined distance lies on the straight line connecting the position of the above high-brightness region and the image capture unit, in a state of viewing the own vehicle from above. Furthermore, a difference detector calculates the brightness difference between the brightness value of the high-brightness region and the brightness value at the position separated from the own vehicle by the predetermined distance, and a nearby vehicle detector detects an adjacent vehicle from the brightness difference.

Advantages of the invention

[0007] According to the present invention, a road surface reflecting the light of another vehicle in the adjacent lane or the light of another vehicle in the next adjacent lane is expected to be detected by detecting, from the captured images, a high-brightness region in which the brightness is equal to or above a predetermined threshold. In addition, the brightness is determined at a position located on the straight line connecting the position of the high-brightness region and the image capture unit, and separated from the own vehicle by a predetermined distance. Accordingly, if the high-brightness region is a road surface reflecting the light of another vehicle, it becomes possible to capture the position of the light of the other vehicle by detecting the position separated by the predetermined distance beyond the reflecting road surface. The difference between the brightness value of the high-brightness region and the brightness value at the position separated by the predetermined distance is then determined, and a nearby vehicle is detected within the captured images on the basis of the determined brightness difference. Accordingly, if the high-brightness region is a road surface reflecting the light of another vehicle, the brightness difference becomes large because a brighter area exists at the position separated by the predetermined distance due to the presence of the light of the other vehicle; otherwise, the brightness difference tends to be small. Therefore, the position of the light of another vehicle can be determined from the brightness difference, which makes it possible to increase the accuracy of detecting nearby vehicles.

Brief description of the drawings

[0008] Fig. 1 is a schematic configuration drawing of a driving assistance device according to a first embodiment, showing an example in which the driving assistance device is mounted on a vehicle.

Fig. 2 is a top view showing the state of movement of the vehicle shown in Fig. 1.

Fig. 3 is a block diagram showing the details of the computer shown in Fig. 1.

Fig. 4 shows details of the predetermined-position brightness detector shown in Fig. 3, where Fig. 4(a) shows an image captured by the camera, and Fig. 4(b) shows the position and the brightness values of the pixels above it.

Fig. 5 is a flowchart showing the method of detecting a nearby vehicle by the driving assistance device according to the present embodiment, and describes details of the processing of the predetermined-position brightness detector and the cause determination unit shown in Fig. 3.

Fig. 6 is a block diagram showing details of the computer of the driving assistance device according to the second embodiment.

Fig. 7 shows details of the predetermined-position brightness detector shown in Fig. 6, where Fig. 7(a) shows an image captured by the camera, and Fig. 7(b) shows the position and the brightness values of the pixels above it.

Fig. 8 is a flowchart showing the method of detecting a nearby vehicle by the driving assistance device according to the present embodiment, and describes details of the processing of the predetermined-position brightness detector and the cause determination unit shown in Fig. 6.

Fig. 9 is a block diagram showing details of the computer of the driving assistance device according to the third embodiment.

Detailed description of embodiments

[0009] First embodiment

First, a first embodiment of the present invention is described with reference to the drawings. Fig. 1 is a schematic configuration drawing of a driving assistance device 1 according to this embodiment, showing an example in which the driving assistance device 1 is mounted on a vehicle V. The driving assistance device 1 shown in Fig. 1, which provides the driver of the own vehicle V with various information obtained from the result of image capturing around the own vehicle V, includes a camera 10 (image capture unit/means), a vehicle speed sensor 20, a computer 30 and an alarm device 40.

[0010] The camera 10 shown in Fig. 1 is attached at a position of height h at the rear of the own vehicle V so that its optical axis is tilted downward from the horizontal by an angle θ1. The camera 10 is arranged to capture images of a detection area from this position. The vehicle speed sensor 20 determines the speed of the own vehicle V; for this, a sensor that detects the number of revolutions of a wheel, for example, can be used. The computer 30 detects a nearby vehicle (hereinafter, another vehicle is illustrated as an example of a nearby vehicle) located near the own vehicle V, based on the images captured by the camera 10. In addition, the computer 30 of the present embodiment is configured so as not to erroneously detect the position of another vehicle due to light, for example the headlight of another vehicle. The alarm device 40 warns the driver of the own vehicle V if a nearby vehicle detected by the computer 30 could come into contact with the own vehicle V.

[0011] Fig. 2 is a top view showing the traveling state of the vehicle shown in Fig. 1. As shown in Fig. 2, the camera 10 can capture images rearward and sideways of the own vehicle V. Detection areas A1 and A2 for detecting another vehicle are defined in the adjacent lanes that adjoin the lane in which the own vehicle V is traveling. The computer 30 determines whether or not another vehicle is present in the detection areas A1 and A2. Accordingly, the possibility of contact when the own vehicle V changes lanes can be determined.

[0012] Fig. 3 is a block diagram showing the details of the computer 30 shown in Fig. 1. It should be noted that Fig. 3 also shows the camera 10, the vehicle speed sensor 20 and the alarm device 40 to clarify the connection relationships.

[0013] As shown in Fig. 3, the computer 30 includes a viewpoint conversion unit 31, a difference detector 32, a moving object detector 33 (means for detecting moving objects), a reflection candidate region detector 34 (means for detecting a reflection candidate region), a predetermined-position brightness detector 35 (means for detecting brightness at a predetermined position) and a cause determination unit 36. It should be noted that the moving object detector 33 and the reflection candidate region detector 34 constitute the detection area setting unit (means for setting the detection area) in the present embodiment. In addition, the predetermined-position brightness detector 35 is the light-projecting candidate object detector (means for detecting a light-projecting candidate object) in the present embodiment.

[0014] The viewpoint conversion unit 31 receives the captured image data obtained by image capture through the camera 10, and converts the viewpoint of the received captured image data into bird's-eye view image data representing a state observed from a bird's-eye view. The state observed from a bird's-eye view means, for example, the state seen from the viewpoint of a virtual camera looking straight down from the sky. The viewpoint conversion is performed by a method described, for example, in Japanese Laid-Open Patent Application Publication No. 2008-219063.
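To make the viewpoint conversion concrete, the following is a minimal sketch of a homography-based mapping to a bird's-eye view. The patent defers to Japanese Laid-Open Patent Application Publication No. 2008-219063 for the actual method; the calibration points, the use of OpenCV and the function name to_birds_eye are illustrative assumptions only, and in practice the mapping would follow from the camera height h and tilt angle θ1.

```python
# A minimal sketch of the viewpoint conversion to a bird's-eye view via a
# projective (homography) mapping, assuming OpenCV is available. The calibration
# points below are hypothetical placeholders.
import cv2
import numpy as np

def to_birds_eye(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    # Hypothetical trapezoid on the road plane in the captured image...
    src = np.float32([[0.35 * w, 0.55 * h], [0.65 * w, 0.55 * h],
                      [0.95 * w, 0.95 * h], [0.05 * w, 0.95 * h]])
    # ...mapped to a rectangle in the bird's-eye view image.
    dst = np.float32([[0.25 * w, 0.0], [0.75 * w, 0.0],
                      [0.75 * w, float(h)], [0.25 * w, float(h)]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (w, h))
```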

[0015] The difference detector 32 sequentially receives the bird's-eye view image data obtained from the viewpoint conversion by the viewpoint conversion unit 31, and aligns the bird's-eye view image data of different input times based on the speed of the own vehicle V obtained from the vehicle speed sensor 20. After aligning the bird's-eye view image data of different times, the difference detector 32 determines the difference between the aligned images to generate difference image data.

[0016] The moving object detector 33 determines, from the difference image data generated by the difference detector 32, the difference at the positions corresponding to the detection areas A1 and A2, and divides each of the detection areas A1 and A2 into a moving object region R1 and a road surface region R2 by threshold processing. Here, if the moving object region R1 is detected, another vehicle may be present in that region, and the moving object detector 33 passes the result to the alarm device 40 as moving object detection determination data.
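The following is a minimal sketch of the alignment, differencing and threshold processing attributed to the difference detector 32 and the moving object detector 33, assuming grayscale bird's-eye images and a simple row shift derived from the vehicle speed; the function name, the shift model and the values DIFF_TH and px_per_m are hypothetical.

```python
# A minimal sketch of frame alignment, differencing and threshold processing.
import numpy as np

DIFF_TH = 30  # hypothetical brightness-difference threshold

def moving_object_mask(prev_bev: np.ndarray, curr_bev: np.ndarray,
                       speed_mps: float, dt_s: float, px_per_m: float = 20.0) -> np.ndarray:
    # Shift the earlier bird's-eye image by the distance travelled so that static
    # road texture cancels out in the difference.
    shift_px = int(round(speed_mps * dt_s * px_per_m))
    aligned_prev = np.roll(prev_bev, shift_px, axis=0)
    diff = np.abs(curr_bev.astype(np.int16) - aligned_prev.astype(np.int16))
    # Pixels with a large difference form the moving object region R1.
    return (diff >= DIFF_TH).astype(np.uint8)
```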

[0017] Here, the moving object region R1 can also be detected when the light of another vehicle is reflected from the road surface. In other words, as shown in Fig. 2, there can be a case in which another vehicle is present in the next adjacent lane that adjoins the adjacent lane, the light of the other vehicle is reflected from the road surface of the adjacent lane, and the reflected light is detected as the moving object region R1. In this case, issuing an alarm by means of the alarm device 40 leads to providing erroneous information to the driver of the own vehicle V. Therefore, the present embodiment is made with the possibility of preventing this situation by the following configuration.

[0018] The reflection candidate region detector 34 determines, within the image capture area of the images captured by the camera 10, a high-brightness region in which the brightness is equal to or above a predetermined threshold value. In particular, the reflection candidate region detector 34 identifies the high-brightness region, in which the brightness is equal to or above the predetermined threshold value, only within the moving object region R1 detected by the moving object detector 33.

[0019] When detecting the high-brightness region, the reflection candidate region detector 34 first sets the predetermined threshold value. The predetermined threshold value is determined based on the angle θ2 between the optical axis of the camera 10 and the straight line connecting the region to be detected (i.e., the moving object region R1) and the camera 10, in a state of viewing the own vehicle V from above, and is adjusted so that it is higher for smaller values of the angle θ2. In general, the light of another vehicle is projected forward. Therefore, the smaller the angle θ2, the greater the amount of light from the other vehicle that is captured by the camera 10. Therefore, the predetermined threshold value is adjusted to be higher as the angle θ2 becomes smaller.
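The following sketch illustrates one possible way to regulate the predetermined threshold value as a decreasing function of the angle θ2; the linear form and the numeric constants are hypothetical stand-ins for the experimentally set values mentioned below.

```python
# A minimal sketch of regulating the predetermined brightness threshold as a
# decreasing function of the angle theta2; form and constants are hypothetical.
def brightness_threshold(theta2_deg: float,
                         base: float = 120.0,
                         gain: float = 1.5,
                         max_angle_deg: float = 90.0) -> float:
    theta2_deg = max(0.0, min(theta2_deg, max_angle_deg))
    # The smaller theta2, the more headlight light reaches the camera,
    # so the higher the threshold.
    return base + gain * (max_angle_deg - theta2_deg)
```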

[0020] After setting the predetermined threshold value, the reflection candidate region detector 34 sets the moving object region R1 as a reflection candidate region R1, provided that the brightness value of the moving object region R1 is equal to or above the predetermined threshold value. It should be noted that the range of the settings and the rate of change of the threshold value are set experimentally, and the values given here are only approximate; thus the threshold value is not limited to that described above.

[0021] Additionally, in practice the reflection candidate region R1 may include, for example, noise, which can be excluded by applying an opening process to the binary image that expresses the reflection candidate region R1.

[0022] After specifying the reflection candidate region R1 as described above, the reflection candidate region detector 34 determines a characteristic point P1 and a characteristic brightness value P of the reflection candidate region R1. In this case, the reflection candidate region detector 34 sets the characteristic point P1 as the centroid position of the reflection candidate region R1 and sets the characteristic brightness value P as the brightness value at the centroid position. It should be noted that the characteristic point P1 and the characteristic brightness value P are not limited to the characteristic point and the characteristic value described above; the characteristic point P1 may be, for example, the point closest to the own vehicle in the reflection candidate region R1, and the characteristic brightness value P may be a statistical value such as the average brightness value or the 90th percentile value of the reflection candidate region R1.
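The following sketch illustrates computing the characteristic point P1 as the centroid of the reflection candidate region R1 and, as the alternative mentioned above, the 90th percentile of the region's brightness as the characteristic value P; the binary mask and grayscale array interfaces are assumptions made for illustration.

```python
# A minimal sketch of computing the characteristic point P1 (centroid of R1) and
# a characteristic brightness value P (here the 90th percentile alternative).
import numpy as np

def characteristic_point_and_brightness(mask: np.ndarray, gray: np.ndarray):
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("empty reflection candidate region R1")
    p1 = (float(xs.mean()), float(ys.mean()))          # characteristic point P1
    p_value = float(np.percentile(gray[ys, xs], 90))   # characteristic value P
    return p1, p_value
```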

[0023] The predetermined-position brightness detector 35 specifies a region having brightness values equal to or above a predetermined value and a number of pixels equal to or greater than a predetermined number, as a light projection candidate region (a region in which a next nearby vehicle is regarded as the light-projecting candidate object), within the area of the next adjacent lane in a certain range of angles around the straight direction connecting the position of the reflection candidate region R1 and the camera 10, in a state of viewing the own vehicle V from above. Here, although the predetermined number of pixels is assumed to be equal to or greater than three in the originally captured image, this is not a limitation, and a value equal to or greater than three may be set. Accordingly, projected light from a light source such as a street light, which is not a vehicle, can first be excluded from the light projection candidate region. Then, the brightness values are scanned and determined upward in real space from a characteristic position in the light projection candidate region to determine whether or not the light-projecting candidate object is a next adjacent vehicle.

[0024] In particular, the predetermined-position brightness detector 35 detects a position P2, which is the intersection point between the straight direction connecting the position of the reflection candidate region R1 and the camera 10, and the actual centerline of the next adjacent lane, in a state of viewing the own vehicle V from above. Then, the predetermined-position brightness detector 35 scans and detects the brightness upward in real space from the position P2.
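The following sketch illustrates one way to obtain the position P2 as the intersection, in the top view, of the ray from the camera through the reflection candidate region R1 with the centerline of the next adjacent lane; the coordinate convention (camera at the origin, x lateral, y longitudinal, units in metres) and the lateral offset of 5.25 m are assumptions made only for illustration.

```python
# A minimal sketch of locating the position P2 in the top view.
def position_p2(r1_xy: tuple, next_lane_center_x: float = 5.25) -> tuple:
    x1, y1 = r1_xy                       # characteristic point P1 of region R1
    if abs(x1) < 1e-6:
        raise ValueError("R1 lies on the camera axis; no lateral intersection")
    t = next_lane_center_x / x1          # scale factor along the camera-to-R1 ray
    return next_lane_center_x, t * y1    # P2 on the next adjacent lane centerline
```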

[0025] It should be noted that the reason for setting the position P2 as the intersection point with the actual centerline of the next adjacent lane is that the halo effect causes a very bright area to look larger than its true size, and thus the light from the headlights of another vehicle in the next adjacent lane can be captured even if that vehicle is shifted laterally to some extent; the actual centerline of the next adjacent lane is therefore taken as the reference.

[0026] Fig. 4 shows details of the predetermined-position brightness detector 35 shown in Fig. 3, where Fig. 4(a) shows an image captured by the camera 10, and Fig. 4(b) shows the position P2 and the brightness values of the pixels above it.

[0027] As shown in Fig. 4, after determining the position P2, the predetermined-position brightness detector 35 scans and detects the brightness upward from the position P2. In other words, the brightness values are obtained along the line indicated by reference sign 50 in Fig. 4(a).

[0028] Furthermore, the predetermined-position brightness detector 35 then determines a section having consecutive brightness values equal to or exceeding a predetermined value. As shown in Fig. 4(b), the fourth to tenth pixels above the position P2 form a section SE having consecutive brightness values equal to or exceeding the predetermined value.

[0029] Then, the cause determination unit 36 obtains a characteristic brightness value Q of the section SE. The characteristic brightness value Q may be the brightness value of the centroid (center) or may be a statistical value, similarly to the characteristic brightness value P. The cause determination unit 36 then calculates the difference of brightness values between the characteristic values P and Q. Furthermore, when the difference value exceeds a threshold value TH, the cause determination unit 36 determines that the reflection candidate region R1 is a reflection of another vehicle in the next adjacent lane from the road surface. Otherwise, the cause determination unit 36 determines that the reflection candidate region R1 is the light of another vehicle in the adjacent lane, and considers that a nearby vehicle is present.
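The following sketch illustrates extracting a section SE of consecutive bright pixels from the brightness values scanned upward from P2 and computing its characteristic value Q (here the mean over the section); the per-pixel threshold BRIGHT_TH and the array interface are hypothetical.

```python
# A minimal sketch of extracting the section SE and its characteristic value Q.
import numpy as np

BRIGHT_TH = 150  # hypothetical per-pixel brightness threshold

def section_se(column: np.ndarray, bright_th: int = BRIGHT_TH):
    """Return (start, end) indices of the first run of values >= bright_th, or None."""
    vals = list(column.astype(int)) + [-1]   # sentinel closes a trailing run
    start = None
    for i, v in enumerate(vals):
        if v >= bright_th and start is None:
            start = i
        elif v < bright_th and start is not None:
            return start, i
    return None

def characteristic_value_q(column: np.ndarray, se: tuple) -> float:
    return float(column[se[0]:se[1]].mean())  # statistical value Q of section SE
```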

[0030] It should be noted that the cause determination unit 36 also performs the following process to detect another vehicle in the adjacent lane with higher accuracy. The cause determination unit 36 first determines whether or not the number of pixels in the section SE is equal to or less than a set value. Here, a case in which the number of pixels in the extracted section SE, having consecutive brightness values, is equal to or less than the set value suggests a case in which the very bright area is small and smaller than the size of the light of another vehicle in the next adjacent lane. In this case, the cause determination unit 36 determines that the section SE is not the light of another vehicle in the next adjacent lane but the light of a nearby vehicle, and detects a nearby vehicle. If, on the other hand, the number of pixels in the section SE is not equal to or less than the set value, the cause determination unit 36 determines that the section SE is the light of another vehicle in the next adjacent lane. In other words, it determines that the reflection candidate region R1 is a reflection of another vehicle in the next adjacent lane from the road surface.

[0031] In addition, the cause determination unit 36 obtains the distance D between the lowest point P3 of the section SE and the position P2 determined by the predetermined-position brightness detector 35. The cause determination unit 36 then determines whether or not the distance D is equal to or less than a predetermined distance. If the distance is equal to or less than the predetermined distance, the position is too low for the light of another vehicle in the next adjacent lane, and thus it can be determined that this is not the light of another vehicle in the next adjacent lane. Accordingly, the cause determination unit 36 determines that the section SE is the light of a nearby vehicle, and detects a nearby vehicle. If, on the other hand, the distance D is not equal to or less than the predetermined distance, the cause determination unit 36 determines that the section SE is the light of another vehicle in the next adjacent lane. In other words, it determines that the reflection candidate region R1 is a reflection of another vehicle in the next adjacent lane from the road surface.

[0032] Summarizing the above, the driving assistance device in accordance with the present embodiment provides the driver with various pieces of information obtained from the result of image capturing around the own vehicle and includes: an image capture unit (camera 10) mounted on the own vehicle to capture images rearward and sideways of the own vehicle; a detection area setting unit (moving object detector 33 and reflection candidate region detector 34) that defines a detection area for detecting, from the image data captured by the image capture unit, the presence of a nearby vehicle in the adjacent lane; a light-projecting candidate object detector (predetermined-position brightness detector 35) that detects a light-projecting candidate object projecting light with brightness equal to or higher than a predetermined threshold value; a cause determination unit (cause determination unit 36) that determines whether or not the light-projecting candidate object indicates the presence of a nearby vehicle in the detection area specified by the detection area setting unit; and an information provider (alarm device 40) that provides the driver with information about the presence of a nearby vehicle if the cause determination unit determines that the light is projected from a nearby vehicle. It can be said that the cause determination unit determines whether or not the light is projected from a nearby vehicle by detecting the position in the captured image (corresponding to the distance D) of the light-projecting candidate object detected by the light-projecting candidate object detector.

[0033] The following describes the method of detecting a nearby vehicle by the driving assistance device 1 in accordance with the present embodiment. Fig. 5 is a flowchart showing the method of detecting a nearby vehicle by the driving assistance device 1 in accordance with the present embodiment, and describes details of the processing of the predetermined-position brightness detector 35 and the cause determination unit 36 shown in Fig. 3.

[0034] First, as shown in Fig. 5, the predetermined-position brightness detector 35 specifies a region having brightness values equal to or above a predetermined value, as a light projection candidate region RS, within the area of the next adjacent lane in a certain range of angles around the straight direction connecting the position of the reflection candidate region R1 and the camera 10, in a state of viewing the own vehicle V from above (S1).

[0035] Then, the predetermined-position brightness detector 35 determines whether or not the number of pixels in the light projection candidate region RS exceeds a predetermined number (a set pixel count value) (S2). If it is determined that the number of pixels in the light projection candidate region RS is not equal to or does not exceed the set value ("No" at S2), it is very likely that the light projection candidate region RS is light projected from a light source, such as a street lamp, that is not a vehicle, and thus the cause determination unit 36 determines that the light projection candidate region RS is a region in which a light source other than a nearby vehicle is detected (S3). If, on the other hand, it is determined that the number of pixels in the light projection candidate region RS is equal to or exceeds the set value ("Yes" at S2), it is very likely that the light projection candidate region RS is light projected from a nearby next vehicle or a nearby vehicle, and thus the predetermined-position brightness detector 35 scans and detects the brightness upward in real space from a characteristic position in the light projection candidate region RS to determine which vehicle it is.

[0036] If, on the other hand, it is determined that the number of pixels in the light projection candidate region RS is equal to or exceeds the set value ("Yes" at S2), it is very likely that the light projection candidate region RS is light projected from a nearby next vehicle or a nearby vehicle, and thus the predetermined-position brightness detector 35 scans and detects the brightness upward in real space from a characteristic position in the light projection candidate region RS to determine which vehicle it is (S4). In other words, the predetermined-position brightness detector 35 scans and detects the brightness upward in real space from the position P2, given as the characteristic position in the light projection candidate region RS, and determines a section SE having brightness values equal to or higher than the predetermined value (S4). Then, the cause determination unit 36 determines whether or not the number of pixels in the section SE is equal to or less than a predetermined value (S5).

[0037] If it is determined that the number of pixels in the section SE is not equal to or less than the predetermined value ("No" at S5), it is very likely that the section SE is the light of another vehicle in the next adjacent lane, as the number of pixels exceeds the predetermined value, and thus the cause determination unit 36 determines that the reflection candidate region R1 reflects the light of another vehicle in the next adjacent lane (S3). Thereafter, the process shown in Fig. 5 ends.

[0038] If, on the other hand, it is determined that the number of pixels in the section SE is equal to or less than the predetermined value ("Yes" at S5), the cause determination unit 36 computes the distance D between the lowest point P3 of the section SE and the specified position P2 (S6). Then, the cause determination unit 36 determines whether or not the distance D is equal to or less than a predetermined distance (S7). The distance D can also be set, without being limited to the section SE, as the distance from the camera to the light source of the next adjacent vehicle in the vehicle width direction. In addition, a combination of the above can be used. For example, if the distance in the height direction (section SE) is fixed at 70 cm, the distance in the vehicle width direction can be set in the range of 1-4.5 m, for example, or if the distance in the vehicle width direction is fixed at 6.5 m, the distance in the height direction can be set in the range of 60-90 cm. However, these values are only exemplary, and the settings are not limited to the above. The light source of the next adjacent vehicle is presumably identified by passing through this step.

[0039] If it is determined that the distance D is not equal to or less than the predetermined distance ("No" at S7), it is very likely that the section SE is the light of another vehicle in the next adjacent lane, as the distance D is at least not less than the distance to the light of another vehicle, and thus the cause determination unit 36 determines that the reflection candidate region R1 is a reflection of the light of another vehicle in the next adjacent lane (S3). Thereafter, the process shown in Fig. 5 ends.

[0040] If, on the other hand, it is determined that the distance D is equal to or less than the predetermined distance ("Yes" at S7), the cause determination unit 36 determines the characteristic brightness value Q of the section SE (S8). Then, the cause determination unit 36 determines whether or not the relation "characteristic brightness value Q" > "characteristic brightness value P" + "threshold value TH" is satisfied (S9).

[0041] If it is determined that the relation "characteristic brightness value Q" > "characteristic brightness value P" + "threshold value TH" is satisfied ("Yes" at S9), the difference of brightness values between the two values Q and P exceeds the threshold value TH, and thus the cause determination unit 36 determines that the reflection candidate region R1 reflects the light of another vehicle in the next adjacent lane (S3). Thereafter, the process shown in Fig. 5 ends.

[0042] If, on the other hand, it is determined that the relation "characteristic brightness value Q" > "characteristic brightness value P" + "threshold value TH" is not satisfied ("No" at S9), the difference of brightness values between the two values Q and P is less than the threshold value TH, and thus the cause determination unit 36 determines that the reflection candidate region R1 results from the light of another vehicle in the adjacent lane, and determines that another vehicle is present in the adjacent lane (S10). Thereafter, the process shown in Fig. 5 ends. It should be noted that the threshold value TH in this document is set, for example, to a brightness level in the range of 5-10.
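Taken together, steps S5-S10 amount to the following decision for a section SE found above P2. This is a minimal sketch: SET_PIXELS and PRED_DIST_M are hypothetical numbers chosen for illustration, and TH is taken from the stated range of 5-10.

```python
# A minimal sketch of the decision steps S5-S10 of Fig. 5.
SET_PIXELS = 12     # hypothetical set value for the pixel count of section SE
PRED_DIST_M = 0.9   # hypothetical predetermined distance between P3 and P2, metres
TH = 8              # brightness-difference threshold within the stated 5-10 range

def detect_adjacent_vehicle(se_pixels: int, distance_d: float,
                            p_value: float, q_value: float) -> bool:
    """Return True when another vehicle in the adjacent lane is detected (S10)."""
    if se_pixels > SET_PIXELS:       # "No" at S5: headlight-sized bright section
        return False                 # S3: R1 is a next-adjacent-lane reflection
    if distance_d > PRED_DIST_M:     # "No" at S7: P3 sits high enough above P2
        return False                 # S3: R1 is a next-adjacent-lane reflection
    if q_value > p_value + TH:       # "Yes" at S9: bright spot well above road glow
        return False                 # S3: R1 is a next-adjacent-lane reflection
    return True                      # S10: nearby vehicle in the adjacent lane
```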

[0043] It should be noted that if it is determined in the process of step S3 that the reflection candidate region R1 reflects the light of another vehicle in the next adjacent lane, the cause determination unit 36 outputs an alarm suppression signal, as shown in Fig. 3. Accordingly, a situation can be avoided in which an alarm is issued when there is no possibility of contact with another vehicle even if the own vehicle V changes lanes, and thus the alarm is not required.

[0044] If, on the other hand, it is determined in the process of step S10 that another vehicle is present in the adjacent lane, the cause determination unit 36 does not output the alarm suppression signal, and the alarm device 40, as expected, notifies the driver of the presence of a nearby vehicle via a buzzer or a lamp.

[0045] Summarizing the above, the driving assistance method in accordance with the present embodiment provides the driver with various pieces of information obtained from the result of image capturing around the own vehicle through the image capture unit (camera 10) and includes: an image capture step of capturing images rearward and sideways of the own vehicle by means of the image capture unit mounted on the own vehicle; a detection area setting step of setting a detection area for detecting, from the image data captured at the image capture step, the presence of a nearby vehicle in the adjacent lane; a light-projecting candidate object detection step of detecting a light-projecting candidate object that projects light with brightness equal to or higher than a predetermined threshold value; a cause determination step of determining whether or not the light-projecting candidate object indicates the presence of a nearby vehicle in the detection area set at the detection area setting step; and an information providing step of providing the driver with information about the presence of a nearby vehicle if it is determined at the cause determination step that the light is projected from a nearby vehicle. It can be said that the cause determination step determines whether or not the light is projected from a nearby vehicle by detecting the position in the captured image (corresponding to the distance D) of the light-projecting candidate object detected at the light-projecting candidate object detection step.

[0046] Thus, as described above, the driving assistance device 1 and the method of detecting a nearby vehicle according to this embodiment can determine a road surface reflecting the light of another vehicle in the adjacent lane or the light of another vehicle in the next adjacent lane, by determining, from the captured images, the reflection candidate region R1 having brightness equal to or higher than a predetermined threshold value. In addition, the brightness value is determined at a position located on the straight direction connecting the position of the reflection candidate region R1 and the camera 10 and separated from the own vehicle V by a predetermined distance. Accordingly, if the reflection candidate region R1 is a road surface reflecting the light of another vehicle, it becomes possible to capture the position of the light of the other vehicle by detecting the position separated by the predetermined distance beyond the reflecting road surface. Then, the difference between the brightness value P of the reflection candidate region R1 and the brightness value Q at the position separated by the predetermined distance is determined, and a nearby vehicle is detected within the captured images on the basis of the determined brightness difference. Accordingly, if the reflection candidate region R1 is a road surface reflecting the light of another vehicle, the brightness difference is large because a brighter area exists at the position separated by the predetermined distance due to the presence of the light of the other vehicle; otherwise, the brightness difference tends to be small. Therefore, the position of the light of another vehicle can be detected from the brightness difference, which makes it possible to increase the accuracy of detecting nearby vehicles.

[0047] In addition, the predetermined threshold value is adjusted so that it becomes higher as the angle θ2 between the optical axis of the camera 10 and the straight line connecting the region to be detected (i.e., the moving object region R1) and the camera 10 becomes smaller. Here, since the light of another vehicle is projected forward, the amount of light entering the camera 10 from a position closer to the optical axis of the camera 10 tends to be large. Consequently, a proper threshold value can be set by adjusting the predetermined threshold value so that it is higher for smaller values of the angle θ2.

[0048] In addition, the region in which a nearby vehicle may be present (i.e., the moving object region R1) is determined from the captured area, and the reflection candidate region R1 having brightness equal to or higher than the predetermined threshold value is determined within the range limited to that specific region. Consequently, the amount of processing can be reduced, without the need to perform the above process for unnecessary regions.

[0049] In addition, the position P2 is specified on the next adjacent lane, and the brightness values are scanned and determined upward in real space from the specified position P2. Therefore, the position of the light of another vehicle in the next adjacent lane can be captured with much higher accuracy.

[0050] In addition, the brightness values are scanned and determined upward in real space from the specified position P2 to extract the section SE having consecutive brightness values equal to or greater than a predetermined value, and it is determined that a nearby vehicle is detected if the number of pixels in the extracted section SE, having consecutive brightness values, is equal to or less than a set value. Therefore, if the number of pixels in the extracted section SE, having consecutive brightness values, is small and less than the size of a light, there is no erroneous determination that the section SE is the reflection of another vehicle in the next adjacent lane, which makes it possible to increase the detection accuracy.

[0051] In addition, if the distance between the lowest point P3 of the extracted section SE, having consecutive brightness values, and the specified position P2 is equal to or less than a predetermined distance, it is determined that a nearby vehicle is detected. Therefore, if the lowest point P3 of the extracted section SE, having consecutive brightness values, is at least not lower than the position of the light of another vehicle and the distance from the specified position P2 is small, there is no erroneous determination that the section SE is the reflection of another vehicle in the next adjacent lane, which makes it possible to increase the detection accuracy.

[0052] Second embodiment

Next, a second embodiment of the present invention is explained. A driving assistance device 2 and a method of detecting a nearby vehicle according to the second embodiment are similar to the device and method according to the first embodiment, except that part of the configuration and processing is different. The following describes the differences from the first embodiment.

[0053] Fig. 6 is a block diagram showing the details of the computer 30 of the driving assistance device 2 according to the second embodiment. It should be noted that Fig. 6 also shows the camera 10 and the alarm device 40 to clarify the connection relationships.

[0054] As shown in Fig. 6, in the computer 30 of the second embodiment, the cause determination unit 36 has a brightness difference detector 36a (means for detecting a brightness difference) and a nearby vehicle detector 36b (means for detecting a nearby vehicle). It should be noted that the moving object detector 33 and the reflection candidate region detector 34 constitute the detection area setting unit (means for setting the detection area) of the present embodiment. In addition, the predetermined-position brightness detector 35 is the light-projecting candidate object detector (means for detecting a light-projecting candidate object) of the present embodiment.

[0055] Fig. 7 shows details of the predetermined-position brightness detector 35 according to the present embodiment, shown in Fig. 6, where Fig. 7(a) shows an image captured by the camera 10, and Fig. 7(b) shows the brightness values of the pixels above the position P2.

[0056] As shown in Fig. 7, upon detecting the position P2, the predetermined-position brightness detector 35 scans and detects the brightness upward from the position P2. In other words, the brightness values are obtained along the line indicated by reference sign 50 in Fig. 7(a).

[0057] Furthermore, the predetermined-position brightness detector 35 then determines sections having consecutive brightness values equal to or exceeding a predetermined value. As shown in Fig. 7(b), the fourth to tenth pixels above the position P2 form a section SE1 having consecutive brightness values equal to or exceeding the predetermined value. Similarly, the thirteenth and fourteenth pixels above the position P2 form a section SE2 having consecutive brightness values equal to or exceeding the predetermined value.

[0058] As described above, after the sections SE1 and SE2, having consecutive brightness values equal to or above the predetermined value, have been determined by the predetermined-position brightness detector 35, the nearby vehicle detector 36b selects, from the sections SE1 and SE2 described above, the section SE1 having the largest number of consecutive brightness values.
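The following sketch illustrates selecting, among all runs of consecutive bright pixels found above P2, the section with the largest number of pixels (SE1 in Fig. 7), which is the part added by the nearby vehicle detector 36b in this embodiment; BRIGHT_TH and the array interface are hypothetical.

```python
# A minimal sketch of selecting the largest section of consecutive bright pixels.
import numpy as np

BRIGHT_TH = 150  # hypothetical per-pixel brightness threshold

def largest_section(column: np.ndarray, bright_th: int = BRIGHT_TH):
    """Return (start, end) of the longest run of values >= bright_th, or None."""
    vals = list(column.astype(int)) + [-1]   # sentinel closes a trailing run
    best, start = None, None
    for i, v in enumerate(vals):
        if v >= bright_th and start is None:
            start = i
        elif v < bright_th and start is not None:
            if best is None or (i - start) > (best[1] - best[0]):
                best = (start, i)
            start = None
    return best
```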

[0059] Then, the brightness difference detector 36a obtains the characteristic brightness value Q of the section SE1. The characteristic brightness value Q may be the brightness value of the centroid (center) or may be a statistical value, similarly to the characteristic brightness value P. Then, the brightness difference detector 36a calculates the difference of brightness values between the characteristic values P and Q. If the difference value exceeds the threshold value TH, the nearby vehicle detector 36b determines that the reflection candidate region R1 is a reflection of another vehicle in the next adjacent lane from the road surface. Otherwise, the nearby vehicle detector 36b determines that the reflection candidate region R1 is the light of another vehicle in the adjacent lane, and considers that a nearby vehicle is present.

[0060] It should be noted that the nearby vehicle detector 36b also performs the following process to detect another vehicle in the adjacent lane with higher accuracy. The nearby vehicle detector 36b first determines whether or not the number of pixels in the section SE1 is equal to or less than a set value. Here, a case in which the number of pixels in the extracted section SE1, having consecutive brightness values, is equal to or less than the set value suggests a case in which the very bright area is small and smaller than the size of the light of another vehicle in the next adjacent lane. In this case, the nearby vehicle detector 36b determines that the section SE1 is not the light of another vehicle in the next adjacent lane but the light of a nearby vehicle, and detects a nearby vehicle. If, on the other hand, the number of pixels in the section SE1 is not equal to or less than the set value, the nearby vehicle detector 36b determines that the section SE1 is the light of another vehicle in the next adjacent lane. In other words, it determines that the reflection candidate region R1 is a reflection of another vehicle in the next adjacent lane from the road surface.

[0061] In addition, the nearby vehicle detector 36b obtains the distance D between the lowest point P3 of the section SE1, determined by the predetermined-position brightness detector 35, and the position P2. The nearby vehicle detector 36b then determines whether or not the distance D is equal to or less than a predetermined distance. If the distance is equal to or less than the predetermined distance, the position is too low for the light of another vehicle in the next adjacent lane, and thus it can be determined that this is not the light of another vehicle in the next adjacent lane. Accordingly, the nearby vehicle detector 36b determines that the section SE1 is the light of a nearby vehicle, and detects a nearby vehicle. If, on the other hand, the distance D is not equal to or less than the predetermined distance, the nearby vehicle detector 36b determines that the section SE1 is the light of another vehicle in the next adjacent lane. In other words, it determines that the reflection candidate region R1 is a reflection of another vehicle in the next adjacent lane from the road surface.

[0062] The driving assistance device according to the second embodiment, similarly to the first embodiment, provides the driver with various pieces of information obtained from the result of image capturing around the own vehicle and includes: an image capture unit (camera 10) mounted on the own vehicle to capture images rearward and sideways of the own vehicle; a detection area setting unit (moving object detector 33 and reflection candidate region detector 34) that defines a detection area for detecting, from the image data captured by the image capture unit, the presence of a nearby vehicle in the adjacent lane; a light-projecting candidate object detector (predetermined-position brightness detector 35) that detects a light-projecting candidate object projecting light with brightness equal to or higher than a predetermined threshold value; a cause determination unit (cause determination unit 36) that determines whether or not the light-projecting candidate object indicates the presence of a nearby vehicle in the detection area specified by the detection area setting unit; and an information provider (alarm device 40) that provides the driver with information about the presence of a nearby vehicle if the cause determination unit determines that the light is projected from a nearby vehicle. It can be said that the cause determination unit determines whether or not the light is projected from a nearby vehicle by detecting the position in the captured image (corresponding to the distance D) of the light-projecting candidate object detected by the light-projecting candidate object detector.

[0063] The following describes the method of detecting a nearby vehicle by the driving assistance device 2 in accordance with the present embodiment. Fig. 8 is a flowchart showing the method of detecting a nearby vehicle by the driving assistance device 2 according to the present embodiment, and describes details of the processing of the predetermined-position brightness detector 35 and the cause determination unit 36 shown in Fig. 6.

[0064] First, as shown in Fig. 8, the predetermined-position brightness detector 35 specifies a region having brightness values equal to or above a predetermined value, as a light projection candidate region RS, within the area of the next adjacent lane in a certain range of angles around the straight direction connecting the position of the reflection candidate region R1 and the camera 10, in a state of viewing the own vehicle V from above (S11).

[0065] Then, the predetermined-position brightness detector 35 determines whether or not the number of pixels in the light projection candidate region RS exceeds a predetermined value (a set pixel count value) (S12). If it is determined that the number of pixels in the light projection candidate region RS is not equal to or greater than the set value ("No" at S12), it is very likely that the light projection candidate region RS is light projected from a light source, such as a street lamp, that is not a vehicle, and thus the cause determination unit 36 determines that the light projection candidate region RS is identified as a light source other than a nearby vehicle (S13). If, on the other hand, it is determined that the number of pixels in the light projection candidate region RS is equal to or exceeds the set value ("Yes" at S12), it is very likely that the light projection candidate region RS is light projected from a nearby next vehicle or a nearby vehicle, and thus the predetermined-position brightness detector 35 scans and detects the brightness upward in real space from a characteristic position in the light projection candidate region RS to determine which vehicle it is. Here, the set value of the number of pixels is equal to or greater than three in the originally captured image. However, this is not a limitation, and a value equal to or greater than three may be set.

[0066] If, on the other hand, it is determined that the number of pixels in the light projection candidate region RS is equal to or exceeds the set value ("Yes" at S12), it is very likely that the light projection candidate region RS is light projected from a nearby next vehicle or a nearby vehicle, and thus the predetermined-position brightness detector 35 scans and detects the brightness upward in real space from a characteristic position in the light projection candidate region RS to determine which vehicle it is (S14). In other words, the predetermined-position brightness detector 35 scans and detects the brightness upward in real space from the position P2, given as the characteristic position in the light projection candidate region RS, and determines the largest of the sections SE1 and SE2 having brightness values equal to or greater than the predetermined value (S14). Then, the nearby vehicle detector 36b determines whether or not the number of pixels in the largest section SE1 is equal to or less than a predetermined value (S15).

[0067] If it is determined that the number of pixels in the largest section SE1 is not equal to or less than the predetermined value ("No" at S15), it is very likely that the section SE1 is the light of another vehicle in the next adjacent lane, as the number of pixels exceeds the predetermined value, and thus the nearby vehicle detector 36b determines that the reflection candidate region R1 reflects the light of another vehicle in the next adjacent lane (S13). Thereafter, the process shown in Fig. 8 ends.

[0068] If, on the other hand, it is determined that the number of pixels in the largest section SE1 is equal to or less than the predetermined value ("Yes" at S15), the nearby vehicle detector 36b calculates the distance D between the lowest point P3 of the section SE1 and the specified position P2 (S16). Then, the nearby vehicle detector 36b determines whether or not the distance D is equal to or less than a predetermined distance (S17). The distance D can also be set, without being limited to the section SE1, as the distance from the camera to the light source of the next adjacent vehicle in the vehicle width direction. In addition, a combination of the above can be set. For example, if the distance in the height direction (section SE1) is fixed at 70 cm, the distance in the vehicle width direction can be set in the range of 1-4.5 m, for example, or if the distance in the vehicle width direction is fixed at 6.5 m, the distance in the height direction can be set in the range of 60-90 cm. However, these values are only exemplary, and the settings are not limited to the above. The light source of the next adjacent vehicle is presumably identified by passing through this step.

[0069] If it is determined that the distance D is not equal to or less than the predetermined distance ("No" at S17), it is very likely that the section SE1 is the light of another vehicle in the next adjacent lane, as the distance D is at least not less than the distance to the light of another vehicle, and thus the nearby vehicle detector 36b determines that the reflection candidate region R1 is a reflection of the light of another vehicle in the next adjacent lane (S13). Thereafter, the process shown in Fig. 8 ends.

[0070] If, on the other hand, it is determined that the distance D is equal to or less than the predetermined distance ("Yes" at S17), the brightness difference detector 36a determines the characteristic brightness value Q of the largest section SE1 (S18). Then, the brightness difference detector 36a determines whether or not the relation "characteristic brightness value Q" > "characteristic brightness value P" + "threshold value TH" is satisfied (S19).

[0071] If it is determined that the relation "characteristic brightness value Q" > "characteristic brightness value P" + "threshold value TH" is satisfied ("Yes" at S19), the difference of brightness values between the two values Q and P exceeds the threshold value TH, and thus the nearby vehicle detector 36b determines that the reflection candidate region R1 reflects the light of another vehicle in the next adjacent lane (S13). Thereafter, the process shown in Fig. 8 ends.

[0072] If, on the other hand, it is determined that the relation "characteristic brightness value Q" > "characteristic brightness value P" + "threshold value TH" is not satisfied ("No" at S19), the difference of brightness values between the two values Q and P is less than the threshold value TH, and thus the nearby vehicle detector 36b determines that the reflection candidate region R1 results from the light of another vehicle in the adjacent lane, and determines that another vehicle is present in the adjacent lane (S20). Thereafter, the process shown in Fig. 8 ends. It should be noted that the threshold value TH in this document is set, for example, to a brightness level in the range of 5-10.

[0073] It should be noted that if it is determined in the process of step S13 that the reflection candidate region R1 reflects the light of another vehicle in the next adjacent lane, the cause determination unit 36 outputs an alarm suppression signal, as shown in Fig. 6. Accordingly, a situation can be avoided in which an alarm is issued when there is no possibility of contact with another vehicle even if the own vehicle V changes lanes, and thus the alarm is not required.

[0074] If, on the other hand, it is determined in the process of step S20 that another vehicle is present in the adjacent lane, the cause determination unit 36 does not output the alarm suppression signal, and the alarm device 40, as expected, notifies the driver of the presence of a nearby vehicle via a buzzer or a lamp.

[0075] The driving assistance method according to the second embodiment also, similarly to the first embodiment, provides the driver with various pieces of information obtained from the result of image capturing around the own vehicle through the image capture unit (camera 10) and includes: an image capture step of capturing images rearward and sideways of the own vehicle by means of the image capture unit mounted on the own vehicle; a detection area setting step of setting a detection area for detecting, from the image data captured at the image capture step, the presence of a nearby vehicle in the adjacent lane; a light-projecting candidate object detection step of detecting a light-projecting candidate object that projects light with brightness equal to or higher than a predetermined threshold value; a cause determination step of determining whether or not the light-projecting candidate object indicates the presence of a nearby vehicle in the detection area defined at the detection area setting step; and an information providing step of providing the driver with information about the presence of a nearby vehicle if it is determined at the cause determination step that the light is projected from a nearby vehicle. It can be said that the cause determination step determines whether or not the light is projected from a nearby vehicle by detecting the position in the captured image (corresponding to the distance D) of the light-projecting candidate object detected at the light-projecting candidate object detection step.

[0076] Thus, as described above, the driving assistance device 2 and the method of detecting a nearby vehicle according to the present embodiment can detect a road surface reflecting the light of another vehicle in the adjacent lane or the light of another vehicle in the next adjacent lane, by determining, from the captured images, the reflection candidate region R1 having brightness equal to or higher than a predetermined threshold value. In addition, the brightness value is determined at a position located on the straight direction connecting the position of the reflection candidate region R1 and the camera 10 and separated from the own vehicle V by a predetermined distance. Accordingly, if the reflection candidate region R1 is a road surface reflecting the light of another vehicle, it becomes possible to capture the position of the light of the other vehicle by detecting the position separated by the predetermined distance beyond the reflecting road surface. Then, the difference between the brightness value P of the reflection candidate region R1 and the brightness value Q at the position separated by the predetermined distance is determined, and a nearby vehicle is detected within the captured images on the basis of the determined brightness difference. Accordingly, if the reflection candidate region R1 is a road surface reflecting the light of another vehicle, the brightness difference becomes large because a brighter area exists at the position separated by the predetermined distance due to the presence of the light of the other vehicle; otherwise, the brightness difference tends to be small. Therefore, the position of the light of another vehicle can be determined from the brightness difference, which makes it possible to increase the accuracy of detecting nearby vehicles.

[0077] In addition, the predetermined threshold value is adjusted so that it becomes higher as the angle θ2 between the optical axis of the camera 10 and the straight line connecting the area to be detected (i.e., the moving object region R1) and the camera 10 becomes smaller. Since the light of another vehicle is projected forward, the amount of light entering the camera 10 from a position closer to the optical axis of the camera 10 tends to be large. Therefore, a proper threshold value can be set by adjusting the predetermined threshold value so that it is higher for a smaller value of the above-mentioned angle θ2.
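One possible way to express the angle-dependent threshold of paragraph [0077] is sketched below; the linear form and the numeric values are assumptions made only for illustration.

```python
import math

def brightness_threshold_for_angle(theta2_rad, base_threshold=120.0,
                                   max_boost=80.0, max_angle_rad=math.pi / 2):
    """Illustrative sketch: return a brightness threshold that grows as the
    angle theta2 between the camera optical axis and the line toward the area
    to be detected becomes smaller, since light entering near the optical
    axis tends to be stronger. All constants are assumed values."""
    t = min(max(theta2_rad, 0.0), max_angle_rad) / max_angle_rad
    return base_threshold + max_boost * (1.0 - t)
```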

[0078] In addition, the area in which a nearby vehicle may be present (i.e., the moving object region R1) is determined from the captured area, and the reflection candidate region R1 having a brightness equal to or higher than the predetermined threshold value is determined within a range limited to that area. Consequently, the amount of processing can be reduced, since the above process does not need to be performed for unnecessary areas.
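A minimal sketch of the processing-reduction idea in paragraph [0078] follows; restricting the high-brightness search to a bounding box of the moving object region is an assumed representation, not the claimed data structure.

```python
import numpy as np

def reflection_candidates_in_region(img_gray, r1_bbox, second_threshold):
    """Illustrative sketch: search for high-brightness (reflection candidate)
    pixels only inside the bounding box of the moving object region R1
    rather than the whole frame. r1_bbox = (row_min, col_min, row_max,
    col_max) is an assumed representation."""
    row_min, col_min, row_max, col_max = r1_bbox
    sub = img_gray[row_min:row_max + 1, col_min:col_max + 1]
    rows, cols = np.where(sub >= second_threshold)
    # Return candidate pixel coordinates in full-image coordinates.
    return list(zip((rows + row_min).tolist(), (cols + col_min).tolist()))
```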

[0079] In addition, the position P2 is set in the next adjacent lane, and brightness values are scanned and determined upward in real space from the set position P2. Therefore, the position of the light of another vehicle in the next adjacent lane can be captured with much higher accuracy.

[0080] In addition, brightness values are scanned and determined upward in real space from the set position P2 to extract sections SE1 and SE2 having consecutive brightness values equal to or greater than a predetermined value, and it is determined that a nearby vehicle is detected if the number of pixels in the extracted section SE1 having consecutive brightness values is equal to or less than a specified value. Therefore, if the number of pixels in the extracted section SE1 having consecutive brightness values is small and less than the size of a light, the section SE1 is not erroneously determined to correspond to another vehicle in the next adjacent lane, which makes it possible to increase detection accuracy.
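The scanning and section extraction described in paragraphs [0079] and [0080] might be sketched as follows; scanning a single image column above P2 and the pixel-count test are simplifying assumptions, since the text describes scanning upward in real space.

```python
import numpy as np

def find_bright_sections_above(img_gray, p2_row, p2_col, value_threshold):
    """Illustrative sketch: scan upward (decreasing row index) from the set
    position P2 and extract runs ('sections') of pixels whose brightness is
    consecutively equal to or greater than value_threshold."""
    sections, current = [], []
    for row in range(p2_row, -1, -1):             # move up in the image
        if img_gray[row, p2_col] >= value_threshold:
            current.append(row)
        elif current:
            sections.append(current)
            current = []
    if current:
        sections.append(current)
    return sections                               # each section: bottom-up row indices

def pixel_count_indicates_nearby_vehicle(section, light_size_px):
    """Per [0080]: when the section has no more pixels than the assumed size
    of a light (light_size_px), a nearby vehicle is determined to be detected
    rather than attributing the section to the next adjacent lane."""
    return len(section) <= light_size_px
```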

[0081] In addition, if the distance between the lowest point P3 of the extracted section SE1 having consecutive brightness values and the set position P2 is equal to or less than a predetermined distance, it is determined that a nearby vehicle is detected. Therefore, when the lowest point P3 of the extracted section SE1 having consecutive brightness values is no lower than the position of a light of another vehicle and its distance from the set position P2 is small, the section SE1 is not erroneously determined to correspond to another vehicle in the next adjacent lane, which makes it possible to increase detection accuracy.
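The distance test of paragraph [0081] could be sketched as below; using a pixel-row distance in place of the predetermined real-space distance is an assumption made only for illustration.

```python
def lowest_point_close_to_p2(section_rows, p2_row, max_row_distance):
    """Illustrative sketch: take the lowest point P3 of the extracted section
    (largest row index, since image rows grow downward) and check whether it
    lies within the assumed maximum distance of the set position P2."""
    p3_row = max(section_rows)                    # lowest point P3 of the section
    return 0 <= (p2_row - p3_row) <= max_row_distance
```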

[0082] Third embodiment

Next, a third embodiment of the present invention will be explained. The device 3 for driving assistance and the method for detecting nearby vehicles according to the third embodiment are similar to the device and method according to the second embodiment, except that some configurations and processes differ. The following describes the differences from the second embodiment.

[0083] Fig. 9 is a block diagram showing the details of the computer 30 of the device 3 for driving assistance according to the third embodiment. Fig. 9 also shows the camera 10 and the alarm device 40 to clarify the connection relationships.

[0084] As shown in Fig. 9, the computer 30 of the third embodiment includes an optical-flow-based moving object detector 37 (detection area setting unit) instead of the viewpoint converter 31, the difference detector 32 and the moving object detector 33.

[0085] The optical-flow-based moving object detector 37 detects another vehicle from the motion vector of an object in the image, in the manner specifically described in Japanese Patent No. 4367475. In this case, the optical-flow-based moving object detector 37 detects the moving object region R1 and transmits it to the reflection candidate region detector 34, similarly to the moving object detector 33 described in the second embodiment.
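As a generic illustration of deriving a moving-object region from optical flow (and expressly not the specific method of Japanese Patent No. 4367475 cited above), one might sketch the following with OpenCV; the magnitude threshold is an assumed value.

```python
import cv2
import numpy as np

def moving_object_region_from_flow(prev_gray, curr_gray, mag_threshold=2.0):
    """Illustrative sketch: estimate dense optical flow between two
    consecutive grayscale frames and take the bounding box of noticeably
    moving pixels as a crude moving object region R1."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)      # per-pixel motion magnitude
    mask = magnitude > mag_threshold
    if not mask.any():
        return None
    rows, cols = np.where(mask)
    return (rows.min(), cols.min(), rows.max(), cols.max())
```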

[0086] It should be noted that the processes performed by the reflection candidate region detector 34, the predetermined-position brightness detector 35 and the cause determination unit 36 are similar to the processes according to the second embodiment. The moving object detector 33 and the reflection candidate region detector 34 constitute the detection area setting part (detection area setting means) of the present embodiment. The predetermined-position brightness detector 35 is the light-projecting candidate object detector (light-projecting candidate object detecting means) of the present embodiment.

[0087] Thus, as described above, the device 3 for driving assistance and the method for detecting nearby vehicles according to the third embodiment can increase the accuracy of detecting nearby vehicles, similarly to the second embodiment. Additionally, a proper threshold value can be set, whereby the amount of processing can be reduced. In addition, the position of the light of another vehicle in the next adjacent lane can be captured with much higher accuracy, which makes it possible to increase detection accuracy.

[0088] Furthermore, since the third embodiment includes the optical-flow-based moving object detector 37, the moving object region R1 can be determined even when there is no signal from the vehicle speed sensor 20.

[0089] As described above, although the present invention has been described based on the embodiments, the present invention is not limited to the embodiments described above, and modifications may be made, or the embodiments may be combined, within a range that does not deviate from the scope of the present invention.

[0090] For example, although the devices 1 and 2 for driving assistance are mounted on the vehicle V in the embodiments described above, they may, without being limited to the foregoing, be mounted on a motorcycle or an autonomous navigation robot. In addition, the nearby vehicle may be a motorcycle or a bicycle.

[0091] In addition, although the above-mentioned embodiments do not specifically refer to the weather while the own vehicle V is travelling, the reflection from the road surface (specular reflection) of the light of another vehicle becomes large when it is raining. Therefore, when it rains, a threshold value THR that is smaller than the threshold value TH used when it does not rain can be used as the threshold value in step S7 of Fig. 5 and step S17 of Fig. 8 (the steps of determining the reflection candidate region R1 produced by the reflection, from the road surface, of the light of another vehicle in the adjacent lane or of the light of another vehicle in the next adjacent lane). In this case, it is determined that the reflection candidate region R1 is the light of another vehicle in the adjacent lane (S8, S18) only when the difference between the brightness value P of the reflection candidate region R1 and the brightness value Q at the position separated by the predetermined distance is less than the threshold value THR. Therefore, it is possible to identify a nearby vehicle according to the weather in which the own vehicle V is moving.
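A minimal sketch of the weather-dependent threshold choice in paragraph [0091] follows; the numeric thresholds are assumptions, with the only constraint taken from the text being that the rain threshold THR is smaller than the dry-weather threshold TH.

```python
def reflection_threshold(raining, th_dry=60.0, th_rain=35.0):
    """Illustrative sketch: choose a smaller threshold THR when it rains,
    because road-surface specular reflection becomes stronger in rain.
    The numeric values are assumed."""
    return th_rain if raining else th_dry

def is_adjacent_lane_light(p_value, q_value, raining):
    """Per [0091]: the reflection candidate region is taken to be the light
    of another vehicle in the adjacent lane only when the brightness
    difference is below the threshold in use."""
    return abs(p_value - q_value) < reflection_threshold(raining)
```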

[0092] If, on the other hand, it is determined in the above-mentioned steps that the difference between the brightness value P of the reflection candidate region R1 and the brightness value Q at the position separated by the predetermined distance is above the threshold value THR, it is determined that the reflection candidate region R1 is a specular reflection of the light of another vehicle in the next adjacent lane. In this case, a masking process is applied to the specularly reflecting reflection candidate region R1 for a predetermined period. The period is set longer when the reflection candidate region R1 is further behind the own vehicle V (i.e., when the angle θ2 in Fig. 2 is smaller), or when the reflection candidate region R1 is in the overtaking lane with respect to the own vehicle V, than otherwise.
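The masking-period rule of paragraph [0092] could be sketched as follows; the base period, the extra period and the angle below which a region counts as "further behind the own vehicle" are all assumed values.

```python
def masking_period_s(theta2_rad, in_overtaking_lane,
                     base_period=1.0, extra_period=1.0, small_angle_rad=0.35):
    """Illustrative sketch: mask a specularly reflecting region for a base
    period, extending the period when the region is further behind the own
    vehicle (small theta2) or lies in the overtaking lane."""
    period = base_period
    if theta2_rad < small_angle_rad:
        period += extra_period
    if in_overtaking_lane:
        period += extra_period
    return period
```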

Industrial applicability

[0093] As is apparent from the above description, the present invention can provide a device for driving assistance and a method for driving assistance that can detect a nearby vehicle with high accuracy.

List of reference numerals

[0094] 1 - device for driving assistance

10 - camera (image capture unit/means)

20 - vehicle speed sensor

30 - computer

31 - viewpoint converter

32 - difference detector

33 - moving object detector (detection area setting unit)

34 - reflection candidate region detector (road surface reflection region detection means)

35 - predetermined-position brightness detector (predetermined-position brightness detection means)

36 - cause determination unit

36a - brightness difference detector (brightness difference detection means)

36b - nearby vehicle detector (nearby vehicle detection means)

37 - optical-flow-based moving object detector (detection area setting unit/means)

40 - alarm device

V - own vehicle.

1. A device for driving assistance, which provides the driver with various pieces of information based on the result of image capture by the own vehicle, comprising:
- an image capture unit mounted on the own vehicle to capture images behind and to the side of the own vehicle;
- a detection area setting unit that sets a detection area for detecting, from image data captured by the image capture unit, the presence of a nearby vehicle in an adjacent lane;
- a light-projecting candidate object detector that detects a light-projecting candidate object projecting light with a brightness equal to or higher than a first threshold value;
- a cause determination unit that determines whether or not the light-projecting candidate object indicates that the light projected from the detection area, set by the detection area setting unit, is light projected from a nearby vehicle; and
- an information providing unit that provides the driver with information about the presence of a nearby vehicle if the cause determination unit determines that the light projected from the detection area is light projected from a nearby vehicle,
wherein the cause determination unit determines whether or not the light projected from the detection area is light projected from a nearby vehicle by detecting the position, in the captured image, of the light-projecting candidate object detected by the light-projecting candidate object detector.

2. A device for driving assistance according to claim 1, wherein the detection area setting unit includes a road surface reflection region detector that detects, as the detection area, a high-brightness region in which the brightness is equal to or higher than a second threshold value, within the image capture area whose image is captured by the image capture unit.

3. A device for driving assistance according to claim 2, wherein the light-projecting candidate object detector includes a predetermined-position brightness detector that detects, as the light-projecting candidate object, a position which is located in the straight-line direction connecting the position of the high-brightness region detected by the road surface reflection region detector and the image capture unit and which is separated from the own vehicle by a predetermined distance as viewed from above the vehicle, and determines the brightness of that position, and
the cause determination unit includes:
- a brightness difference detector that determines the difference between the brightness value of the high-brightness region detected by the road surface reflection region detector and the brightness value of the position determined by the predetermined-position brightness detector; and
- a nearby vehicle detector that detects the nearby vehicle in the detection area based on the brightness difference determined by the brightness difference detector.

4. A device for driving assistance according to claim 2 or 3, wherein the road surface reflection region detector adjusts the second threshold value so that it becomes higher as the angle between the optical axis of the image capture unit and the straight line connecting the area to be detected and the image capture unit becomes smaller.

5. A device for driving assistance according to claim 2, wherein the road surface reflection region detector detects, as the detection area, the high-brightness region in which the brightness is equal to or higher than the second threshold value, within a range bounded by the region determined by the detection area setting unit.

6. A device for driving assistance according to claim 3, wherein the predetermined-position brightness detector sets the position located in the straight-line direction connecting the position of the high-brightness region detected by the road surface reflection region detector and the image capture unit, and located in a next adjacent lane that is further adjacent to the adjacent lane adjoining the lane of the own vehicle.

7. A device for driving assistance according to claim 6, wherein the predetermined-position brightness detector scans and detects brightness values upward in real space from the set position and extracts a section having consecutive brightness values equal to or greater than a predetermined value, and
the cause determination unit determines that a nearby vehicle is detected if the number of pixels in the section having consecutive brightness values extracted by the predetermined-position brightness detector is equal to or less than a specified value.

8. A device for driving assistance according to claim 7, wherein the cause determination unit determines that a nearby vehicle is detected if the distance between the lowest point of the section having consecutive brightness values extracted by the predetermined-position brightness detector and the set position is equal to or less than a predetermined distance.

9. A method for driving assistance, which provides the driver with various pieces of information based on the result of image capture by the own vehicle via an image capture unit, comprising:
- an image capture step of capturing images behind and to the side of the own vehicle by the image capture unit mounted on the own vehicle;
- a detection area setting step of setting a detection area for detecting, from image data captured in the image capture step, the presence of a nearby vehicle in an adjacent lane;
- a light-projecting candidate object detection step of detecting a light-projecting candidate object that projects light with a brightness equal to or higher than a predetermined threshold value;
- a cause determination step of determining whether or not the light-projecting candidate object indicates that the light projected from the detection area, set in the detection area setting step, is light projected from a nearby vehicle; and
- an information providing step of providing the driver with information about the presence of a nearby vehicle if it is determined in the cause determination step that the light projected from the detection area is light projected from a nearby vehicle,
wherein the cause determination step determines whether or not the light projected from the detection area is light projected from a nearby vehicle by detecting the position, in the captured image, of the light-projecting candidate object detected in the light-projecting candidate object detection step.



 
