Navigation device with information received from camera

FIELD: physics; navigation.

SUBSTANCE: the invention relates to navigation equipment for vehicles. The proposed navigation device can display directions on a display, receive a video signal from a camera and display a combination of the camera image and the directions on the display. The device, which is a portable navigation device, includes a built-in camera. The device can provide a menu option that enables the user to adjust the relative position of the displayed camera image with respect to the directions.

EFFECT: the proposed device displays instructions that the user can interpret quickly and easily.

15 cl, 12 dwg

 

The technical field to which the invention relates

The present invention relates to a navigation device configured to display navigation instructions on a display.

In addition, the present invention relates to a vehicle containing such a navigation device and to a method of providing navigation instructions. Moreover, the present invention relates to a computer program and a data carrier.

Background art

Prior-art navigation devices based on GPS (Global Positioning System) are well known and widely used as car navigation systems. Such a GPS-based navigation device is a computing device which, in functional connection with an external (or internal) GPS receiver, can determine its own global position. In addition, the computing device can determine a route between start and destination addresses entered by a user of the computing device. In a typical embodiment, the computing device uses software to calculate the "best" or "optimal" route between the start and destination locations using addresses from a map database. The "best" or "optimal" route is determined on the basis of predefined criteria and need not necessarily be the fastest or the shortest route.

The navigation device may, in a typical embodiment, be mounted on the dashboard of the vehicle, or may be implemented as part of the on-board computer of the vehicle or of the car radio. The navigation device may also be (part of) a hand-held system, such as a PDA.

Using positional information obtained from the GPS receiver, the computing device can determine its position at regular intervals and can display the current position of the vehicle to the user. The navigation device may include a memory device for storing map data and a display for displaying a selected portion of the map data.

In addition, it can provide instructions on how to follow a particular route, by displaying them on the display and/or by generating corresponding audible navigation instructions from a speaker (e.g. "turn left in 100 meters"). Graphics depicting the action to be performed (for example, a left arrow indicating a left turn ahead) can be displayed in a status bar, and can also be superimposed over the corresponding junctions/turns, etc. on the map itself.

Car navigation systems are known to allow the driver, while driving along a route calculated by the navigation system, to trigger a recalculation of the route. This is useful when the vehicle encounters road works or heavy congestion.

It is also known to give the user a choice of the route-calculation algorithm used by the navigation system, by selecting, for example, a "normal" mode or a "fast" mode (which calculates the route in less time, but does not explore as many alternative routes as the normal mode).

It is also known to calculate routes according to user-defined criteria; for example, the user may request a scenic route to be calculated by the device. The device software then calculates various routes and gives preference to those passing along their way the greatest number of points of interest (known as POIs) marked as being of, for example, scenic or architectural value.

Prior-art navigation devices display maps that are, for the most part, stylized or schematic representations of the real world. Many people find it difficult to translate this rather abstract version of the real world into something that can be easily recognized and understood. Navigation devices are also known that display a (pseudo) three-dimensional map projection, as seen from in front of and/or behind the vehicle. This simplifies the user's interpretation of the displayed map data, since it corresponds to the user's visual perception of the world. However, such a (pseudo) perspective view is still a stylized or schematic representation, which remains difficult for the user to interpret.

There remains a need to enable people to follow instructions shown on a display quickly and easily. This is especially important in personal navigation systems, such as those used as car navigation systems. It should be understood that the driver of a vehicle should spend as little time as possible viewing and interpreting the displayed map data, since his/her attention should be focused on the road and the traffic.

Disclosure of invention

Therefore, an aim of the present invention is to provide a navigation device that addresses at least one of the problems described above and displays instructions that the user can interpret easily.

To achieve this aim, the invention provides a navigation device according to the preamble, characterized in that the navigation device is further arranged to receive a signal from a camera, and is further configured to display on the display a combination of the camera signal and the navigation instructions.

By overlaying or combining the navigation instructions with the camera image, the driver is presented with an easily readable view that allows quick interpretation. The user does not need to translate an abstract representation of the real world, because the image is a one-to-one representation of the real view the user sees. The camera signal and the navigation instructions may be combined in any way, for example by overlaying one on top of the other, or by displaying them in different parts of the display. The combination may, on the other hand, be a combination in time, i.e. an alternating display of the camera signal and the navigation instructions, which may replace each other after a predetermined time interval (for example, 5 seconds) or as a result of input from the user.
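The "combination in time" described above can be sketched as a simple scheduling rule. The following is an illustrative sketch only; the function name and the 5-second interval are assumptions for demonstration, not taken from the patent:

```python
def select_view(elapsed_seconds, interval=5.0):
    """Return which view to show at a given elapsed time: the camera
    feed and the instruction view alternate every `interval` seconds."""
    # Even intervals show the camera feed, odd intervals the instructions.
    phase = int(elapsed_seconds // interval) % 2
    return "camera" if phase == 0 else "instructions"
```

User input (a button press, say) could simply toggle the same state instead of the timer advancing it.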

According to a further embodiment, the invention relates to a navigation device in which the camera is integrated with the navigation device. Such a navigation device does not require a signal from an external camera. The navigation device may, for example, be mounted on the dashboard of the vehicle so that the camera provides a view through the windshield.

According to a further embodiment, the invention relates to a navigation device in which the navigation instructions are one or more of a position arrow, a route, an arrow, a point of interest, a road, a building, or map data such as vector data, stored in at least one storage device such as a hard disk, a ROM, an electrically erasable programmable ROM or a random access memory. All kinds of navigation instructions can be displayed. It should be noted that such navigation instructions may also provide information that is not strictly necessary for navigation (route finding), but gives the user additional information.

According to a further embodiment, the invention relates to a navigation device further arranged to superimpose the navigation instructions over the camera image such that the position of the navigation instructions is in a predefined spatial relationship with respect to corresponding parts of the camera image. This provides the user with an image that can be interpreted very easily, since all navigation instructions can be displayed so that they coincide with the current position of the corresponding element in the camera image. For example, an arrow indicating a right turn can be superimposed over the camera image so that it coincides with the turn as seen in the camera image.
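One way to place an instruction so that it coincides with the corresponding element in the camera image is a pinhole projection from camera-relative coordinates to pixel coordinates. The patent does not prescribe a particular model; the sketch below is an illustrative assumption, with all names and parameters invented for the example:

```python
def project_to_image(point_cam, focal_px, cx, cy):
    """Project a 3-D point in camera coordinates (x right, y up,
    z forward, metres) to pixel coordinates with a pinhole model.
    focal_px is the focal length in pixels; (cx, cy) is the image centre."""
    x, y, z = point_cam
    if z <= 0:
        return None  # point is behind the camera; nothing to draw
    u = cx + focal_px * x / z
    v = cy - focal_px * y / z  # image v grows downward, world y grows upward
    return (u, v)
```

A turn located 10 m ahead and 2 m to the right would then be drawn at the pixel this function returns, so the overlaid arrow lands on the turn as seen by the camera.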

According to a further embodiment, the invention relates to a navigation device comprising a processor and positioning and orientation sensors, the positioning and orientation sensors being arranged to communicate with the processor, and the processor being configured to use readings from the positioning and orientation sensors to calculate the position and orientation of the camera and/or of the navigation device, on the basis of which the processor calculates the position of the navigation instructions on the display. Knowing the exact position and orientation of the camera and/or of the navigation device makes it possible to superimpose the navigation instructions over the camera signal more precisely.

According to a further embodiment, the invention relates to a navigation device that determines its geographic position using a positioning technology such as GPS, the European Galileo system or any other global navigation satellite system, or position detection based on terrestrial radio beacons.

According to a further embodiment, the invention relates to a navigation device in which the processor calculates the orientation of the camera about a first axis of rotation, which in use is substantially vertical, by comparing positions of the camera and/or of the navigation device, determined by the positioning device at successive moments in time. By comparing the positions of the camera and/or of the navigation device at consecutive moments in time, the direction of movement of the camera and/or of the navigation device can be calculated. From this, the orientation of the camera, and changes in that orientation, can be calculated.
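The direction of movement derived from two successive GPS fixes can be computed with the standard initial-bearing formula for two latitude/longitude points. The sketch below is illustrative (the function name and argument order are assumptions):

```python
import math

def heading_from_fixes(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees clockwise from north, from one GPS
    fix to the next -- the direction of travel, and hence the camera's
    approximate orientation about the vertical axis."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360.0
```

Comparing successive headings also yields the change in orientation (e.g. the vehicle entering a bend).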

According to a further embodiment, the invention relates to a navigation device comprising a compass that provides compass readings to the processor, the processor being configured to calculate the orientation of the camera about the first axis of rotation, which in use is substantially vertical, on the basis of the compass readings. A compass provides a simple, effective way to determine the orientation of the camera.

According to a further embodiment, the invention relates to a navigation device in which the orientation sensors comprise tilt sensors for determining the orientation of the camera about second and third axes of rotation, the second and third axes of rotation being substantially horizontal in use. In order to combine or superimpose the navigation instructions over the camera image more accurately, the angular orientation of the camera about the second and/or third axis is measured.
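As a rough illustration of why tilt must be compensated: when the camera pitches, the scene shifts vertically in the image by approximately the focal length in pixels times the tangent of the pitch angle. This small sketch is an assumption-laden simplification, not a method from the patent:

```python
import math

def tilt_pixel_offset(tilt_degrees, focal_px):
    """Approximate vertical shift (in pixels) of the scene in the image
    when the camera pitches by tilt_degrees; the overlay must be moved
    by the same amount to stay aligned with the road."""
    return focal_px * math.tan(math.radians(tilt_degrees))
```

The same relation applies about the roll axis for horizontal displacement, which is why both substantially horizontal axes are sensed.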

According to a further embodiment, the invention relates to a navigation device in which the processor uses pattern recognition technology to superimpose the navigation instructions over the camera image such that the position of the navigation instructions is in a predefined spatial relationship with respect to corresponding parts of the camera image. Using pattern recognition, the navigation instructions can be combined with and/or superimposed over the camera signal without needing to know the exact orientation of the camera. Positioning the navigation instructions over the camera image can be achieved using pattern recognition alone; however, pattern recognition can also be used in combination with the determined orientation of the camera in order to further increase precision.

According to a further embodiment, the invention relates to a navigation device that uses the map data as input for the pattern recognition. The use of map data can simplify the pattern recognition, for example making it easier to recognize roads when the device already knows from the map data where a road is. This makes the recognition more accurate and/or may reduce the computation time.

According to a further embodiment, the invention relates to a navigation device arranged to determine calibration corrections, store these calibration corrections, and apply the calibration corrections when combining the navigation instructions with the camera image. This is particularly effective when the navigation instructions are superimposed over the camera image so as to have a predefined spatial relationship with respect to it. Calibration corrections can be used to compensate for offset errors.
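In the simplest case, storing and applying calibration corrections might amount to fixed pixel offsets added to every overlay position, determined once (e.g. by the user nudging a reference mark into alignment). An illustrative sketch; the dictionary layout and key names are assumptions:

```python
def apply_calibration(u, v, calib):
    """Apply stored calibration offsets (in pixels) to an overlay
    position. calib is e.g. {'du': -3, 'dv': 5}; missing keys mean
    no correction along that axis."""
    return (u + calib.get("du", 0), v + calib.get("dv", 0))
```

The stored corrections persist across sessions, so a once-calibrated device keeps its overlays aligned despite small mounting offsets.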

According to a further embodiment, the invention relates to a navigation device configured to receive and read camera settings and to use the camera settings to calculate the position of the navigation instructions on the display. Different camera settings can result in different camera signals. Providing the navigation device with these camera settings can further increase the accuracy with which the navigation instructions are merged with the camera image.

According to a further embodiment, the invention relates to a navigation device configured to receive signals from more than one camera and to select one of the signals for display on the display. Signals from more than one camera, providing different perspectives, can, for example, be used by the pattern recognition technology to improve the quality of the recognition using the mathematical relationships between the views. More than one camera can also be used to offer the user a choice between different camera angles.

According to a further embodiment, the invention relates to a navigation device in which the camera is sensitive to electromagnetic radiation outside the range of the electromagnetic spectrum visible to the human eye.

According to a further embodiment, the invention relates to a navigation device in which the camera is an infrared camera. Such a camera allows the navigation device to be used at night.

According to a further embodiment, the invention relates to a navigation device in which the camera is arranged to zoom in and/or out. This allows the user to adjust the camera view according to his or her preferences.

According to a further embodiment, the invention relates to a navigation device in which the camera is arranged to zoom in and/or out depending on, for example, the speed of the navigation device/vehicle. This provides a camera signal that automatically adapts to the speed of the navigation device. Thus, when the speed of the navigation device is relatively high, the camera can zoom in to give the user a better view further ahead.
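A speed-dependent zoom could be implemented as a simple mapping from speed to zoom factor. The thresholds and factors below are illustrative assumptions, not values from the patent:

```python
def zoom_for_speed(speed_kmh, min_zoom=1.0, max_zoom=3.0):
    """Map vehicle speed to a camera zoom factor: look further ahead
    at higher speed. Linear ramp between 30 and 120 km/h; clamped
    outside that range."""
    if speed_kmh <= 30:
        return min_zoom
    if speed_kmh >= 120:
        return max_zoom
    frac = (speed_kmh - 30) / (120 - 30)
    return min_zoom + frac * (max_zoom - min_zoom)
```

The resulting factor could either be sent to the camera as a control signal or applied digitally by cropping and enlarging part of the received frame, as the text describes.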

According to a further embodiment, the invention relates to a dashboard comprising a navigation device as described above.

According to a further embodiment, the invention relates to a vehicle comprising a navigation device as described above.

According to a further embodiment, the invention relates to such a vehicle, wherein the vehicle comprises a vehicle tilt sensor for determining the tilt of the vehicle and providing vehicle tilt readings to the navigation device. This is an effective way to measure the tilt of the vehicle.

According to a further embodiment, the invention relates to a method of providing navigation instructions, the method comprising the steps of:

- displaying navigation instructions on a display, characterized in that the method further comprises the steps of:

- receiving a signal from a camera, and

- displaying on the display a combination of the camera image from the camera signal and the navigation instructions superimposed over the camera image.

According to a further embodiment, the invention relates to a computer program which, when loaded into a computer, enables the above method to be performed.

According to a further embodiment, the invention relates to a data carrier containing the computer program described above.

Brief description of drawings

Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying schematic drawings, in which corresponding reference symbols indicate corresponding parts and in which:

Figure 1 is a schematic block diagram of the navigation device;

Figure 2 is a schematic view of the navigation device;

Figure 3 is a schematic block diagram of the navigation device according to a variant embodiment of the invention;

Figure 4 schematically represents the vehicle containing the navigation device according to a variant embodiment of the invention;

Figure 5 schematically represents the navigation device according to a variant embodiment of the invention;

Figure 6 schematically represents the navigation device according to a variant embodiment of the invention;

Figure 7 schematically represents a camera according to a variant embodiment of the invention;

Figures 8a and 8b schematically represent displacements of the camera image on the display resulting from different camera tilts;

Figure 9 schematically represents a flowchart of the operation of the navigation device 10 according to a variant embodiment of the invention;

Figure 10 schematically represents the navigation device according to a variant embodiment of the invention;

Figure 11 represents the navigation device according to a variant embodiment of the invention; and

Figure 12 represents the navigation device according to a further variant embodiment of the invention.

Detailed description of the invention

Figure 1 shows a schematic block diagram of an embodiment of the navigation device 10, comprising a processor 11 for performing arithmetic operations. The processor 11 is arranged to communicate with memory units that store instructions and data, such as a hard disk 12, a read-only memory (ROM) 13, an electrically erasable programmable read-only memory (EEPROM) 14 and a random access memory (RAM) 15. The memory units may comprise map data 22. This map data may be two-dimensional map data (latitude and longitude), but may also comprise a third dimension (height). The map data may further comprise additional information, such as information about petrol/gas filling stations and points of interest. The map data may also comprise information about the shape of buildings and objects along the road.

The processor 11 may also be arranged to communicate with one or more input devices, such as a keyboard 16 or a mouse 17. The keyboard 16 may, for example, be a virtual keyboard presented on the display 18, which is a touch-sensitive screen. The processor 11 may further be arranged to communicate with one or more output devices, for example the display 18, a speaker 29, and one or more reading units 19 for reading, for example, floppy disks 20 or CD-ROMs 21. The display 18 may be a conventional computer display (such as an LCD), or may be a projection-type display, such as a head-up display used to project instrument data onto the windshield or windscreen of the car. The display 18 may also be a display arranged to function as a touch screen, allowing the user to enter instructions and/or information by touching the display 18 with a finger.

The processor 11 may additionally be arranged to communicate with other computing devices or communication devices via an input/output device 25. The input/output device 25 is shown arranged to communicate over a network 27.

The speaker 29 may be implemented as part of the navigation device 10. When the navigation device 10 is used as a car navigation device, the navigation device 10 may use the speakers of the car radio, the on-board computer or the like.

The processor 11 may additionally be arranged to communicate with a positioning device 23, such as a GPS receiver, which provides information about the position of the navigation device 10. According to this embodiment, the positioning device 23 is a GPS-based positioning device. However, it should be understood that the navigation device 10 may be implemented using other kinds of GNSS (global navigation satellite systems), such as the European Galileo system. Equally, it is not limited to satellite-based position/velocity systems, and can also be deployed using terrestrial beacon signals or any other kind of system that enables the device to determine its geographic position.

However, it should be understood that other memory units, input devices and reading units, or a different number of them, may be provided, as is well known to persons skilled in the art. Furthermore, one or more of these devices may be physically located remotely from the processor 11, if required. The processor 11 is shown as one unit, but may comprise several processors operating in parallel or controlled by one main processor, which may be located remotely from the others, as is well known to persons skilled in the art.

The navigation device 10 is shown as a computer system, but can be any signal processing system with analog and/or digital and/or software technology arranged to perform the functions described herein. It should be understood that although the navigation device 10 is shown in Figure 1 as a set of components, the navigation device 10 may be implemented as a single device.

The navigation device 10 may use navigation software, for example the Navigator navigation software from TomTom B.V. Navigator software can run on a touch-screen (i.e. stylus-controlled) Pocket PC PDA device, such as the Compaq iPaq, or on any other device that has a built-in GPS receiver 23. The combined system of PDA and GPS receiver is designed to be used as a car navigation system. The invention may also be implemented in any other form of navigation device 10, such as one with an integrated GPS receiver/computer/display, or a device designed for use outside a vehicle (e.g. for pedestrians) or for vehicles other than cars (e.g. aircraft).

Figure 2 shows the navigation device 10 described above.

When the navigation software runs on the navigation device 10, it shows a normal navigation mode on the display 18, as shown in Figure 2. This view may provide driving instructions using a combination of text, symbols, voice prompts and a moving map. Key user interface elements are the following: a 3D map occupies most of the screen. Note that the map may also be shown as a 2D map.

The map shows the position of the navigation device 10 and its immediate surroundings, rotated so that the direction in which the navigation device 10 is moving always points "up". A status bar 2 may run across the bottom quarter of the screen. The current position of the navigation device 10 (which the navigation device itself determines using conventional GPS positioning) and its orientation (inferred from the direction of travel) are shown by a position arrow 3. A route 4, calculated by the device (using route-calculation algorithms stored in the memory devices 12, 13, 14, 15 as applied to the map data stored in a map database in the memory devices 12, 13, 14, 15), is shown as a darkened path. On the route 4, all major actions (for example, bends, junctions, roundabouts and the like) are shown schematically by arrows 5 superimposed over the route 4. The status bar 2 also includes, on its left side, a schematic icon depicting the next action 6 (here, a right turn). The status bar 2 also shows the distance to the next action 6 (i.e. the right turn, here at a distance of 50 m), as extracted from a database of the entire route calculated by the device (i.e. a list of all the roads and related actions making up the route). The status bar 2 also shows the name of the current road 8, the remaining travel time 9 (here 2 minutes 40 seconds), the actual estimated arrival time 25 (11:36) and the distance to the destination 26 (1.4 km). The status bar 2 may additionally display further information, for example GPS signal strength, using a signal strength indicator in the style used in mobile phones.

As already mentioned, the navigation device may comprise input devices, such as a touch screen, that allow the user to call up a navigation menu (not shown). From this menu, other navigation functions can be initiated and controlled. Allowing navigation functions to be selected from a menu screen that is itself very easily invoked (for example, one step from the map display to the menu screen) greatly simplifies user interaction and makes it faster and easier. The navigation menu includes an option for the user to enter a destination.

The actual physical structure of the navigation device 10 itself may be fundamentally no different from any conventional handheld computer, apart from the integrated GPS receiver 23 or the GPS data received from an external GPS receiver. Hence, the memory devices 12, 13, 14, 15 store the route-calculation algorithms, the map database and the user interface software; the processor 11 interprets and processes user input (for example, via the touch screen) to enter the start and destination addresses and all other control inputs, and applies the route-calculation algorithms to compute the optimal route. "Optimal" may refer to criteria such as shortest time or shortest distance, or to other user-related factors.

More specifically, the user enters his or her starting position and the required destination into the navigation software running on the navigation device 10, using the input devices provided, such as the touch screen 18, the keyboard 16 and the like. The user then selects the manner in which the route is to be calculated: various modes are offered, such as a "fast" mode that calculates the route very quickly, but where the route may not be the shortest; a "full" mode that looks at all possible routes and locates the shortest, but takes longer to calculate; etc. Other options allow a user-defined route, e.g. a scenic route passing the most POIs (points of interest) marked as being of scenic or architectural value, or passing the most POIs of possible interest to children, or using the fewest junctions, etc.

Roads themselves are described in the map database that is part of (or otherwise accessed by) the navigation software running on the navigation device 10, as lines, i.e. vectors (e.g. start point, end point, direction for a road, with an entire road being made up of many hundreds of such segments, each uniquely defined by start point/end point/direction). A map is then the set of such road vectors, plus points of interest (POIs), plus road names, plus other geographic features such as park boundaries, river boundaries and so on, all of them defined in terms of vectors. All map features (e.g. road vectors, POIs, etc.) are defined in a coordinate system that matches or relates to the GPS coordinate system, enabling the device's position, as determined through the GPS system, to be located on the relevant road shown on the map.

Route calculation uses complex algorithms that are part of the navigation software. The algorithms are applied to score a large number of different potential routes. The navigation software then evaluates them against the user-defined criteria (or the device's defaults), such as a full-mode scan with a scenic route, past historic museums, and with no speed cameras. The route that best meets the defined criteria is then calculated by the processor 11 and stored in a database in the memory devices 12, 13, 14, 15 as a sequence of vectors, road names and actions to be performed at vector end points (e.g. corresponding to predetermined distances along each road on the route, such as "after 100 meters, turn left into the street").
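The evaluation of candidate routes against user criteria can be sketched as a scoring function over candidates. The cost model below is purely illustrative; the field names, weights and the two-criteria example are assumptions, not the patent's algorithm:

```python
def route_cost(route, prefer_scenic=False):
    """Illustrative cost for a candidate route: estimated travel time
    in minutes, minus a bonus per point of interest when a scenic
    route is requested."""
    cost = route["minutes"]
    if prefer_scenic:
        cost -= 2 * route["pois"]  # each POI 'buys' two minutes of detour
    return cost

def best_route(routes, prefer_scenic=False):
    """Return the candidate with the lowest cost under the chosen criteria."""
    return min(routes, key=lambda r: route_cost(r, prefer_scenic))
```

A real implementation would score many hundreds of candidates against several weighted criteria, but the selection step has the same shape.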

Figure 3 shows a schematic block diagram of the navigation device 10 according to the invention, in which corresponding reference numerals refer to corresponding parts as in Figures 1 and 2.

According to the invention, a camera 24 is provided, arranged to supply a real-time signal to the processor 11. In use, the camera 24 is positioned so that it records the road ahead of the user. When positioned in a vehicle, the camera 24 is positioned so that it records the road ahead of the vehicle. The camera 24 may be built into the navigation device 10 or may be physically separate from it. If separate, the camera 24 may be connected to the processor 11 via a cable or a wireless connection. The camera 24 may be positioned on the roof of the vehicle or at the front of the vehicle, for example close to the headlights.

The navigation device 10 may also be provided with more than one camera 24, to allow the user to switch between different camera angles. A rear-view camera may also be provided. The camera may be of any type, for example a digital camera or an analog camera. The image recorded by the camera 24 is displayed on the display 18.

The camera 24 may also be a camera sensitive to electromagnetic radiation outside the part of the electromagnetic spectrum visible to the human eye. The camera may be an infrared camera, which allows use at night.

Figure 4 shows an example of the navigation device 10 positioned on the dashboard of a vehicle 1. The navigation device 10 comprises a camera 24 aimed at the road ahead of the vehicle 1. Figure 4 further shows that the display 18 faces the user.

According to the invention, the navigation device 10 is configured to display the real-time camera signal on the display 18 and to combine or overlay one or more navigation instructions with it. The navigation instructions may be one or more of the following: the position arrow 3, the route 4, the arrow 5, points of interest, roads, buildings and any other navigation instructions stored in the navigation device 10. They may also include actual map data, for example vector data describing a road. A more detailed description of how this is done follows.

The image provided by the camera 24 will not be stable, owing to road irregularities, vibrations of the vehicle caused by the engine, etc. The navigation device may therefore be provided with software that removes these unwanted vibrations in order to provide a stable image. Software that removes unwanted vibrations from the images provided by the camera 24 is widely used in video cameras, where it is called "steady cam" (image stabilization). This is well known to persons skilled in the art.
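A minimal form of such stabilization separates slow, deliberate camera motion from high-frequency jitter, for example by smoothing the measured per-frame image offset with a moving average. The sketch below is illustrative only and far simpler than production "steady cam" algorithms; all names are assumptions:

```python
from collections import deque

class JitterFilter:
    """Smooth the measured per-frame image offset (one axis, pixels)
    with a moving average and return the correction to apply, so that
    high-frequency vibration is cancelled while slow, deliberate
    motion is followed."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def correction(self, measured_offset):
        self.history.append(measured_offset)
        smoothed = sum(self.history) / len(self.history)
        # Shift the frame by (smoothed - measured): the jitter component.
        return smoothed - measured_offset
```

The same filter would be run independently for the horizontal and vertical axes.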

The signal from the camera 24 may additionally be processed to improve image quality. This processing may include adjusting brightness and contrast, and any suitable filter may be applied. Filters can be used to improve image quality in rainy conditions.

The signal from the camera 24 may be displayed in real time, but may also be displayed as a static image that is updated at certain moments in time, for example every 0.5 seconds. A suitable interval between successive updates may be determined depending on the speed of the navigation device 10/vehicle and on changes in the direction of travel (such as bends).

The navigation device can also be configured to zoom in or out depending on, for example, the speed of the navigation device/vehicle. This zoom operation can be performed by sending a control signal to the camera 24, instructing it to perform a zoom operation. The zoom operation can, however, also be carried out by displaying only part of the received camera signal in the combined image on the display 18.
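
A simple speed-to-zoom mapping might be sketched as below; the thresholds and zoom range are hypothetical, chosen only to illustrate the idea that at higher speeds the relevant road section lies further ahead:

```python
def zoom_factor(speed_kmh, min_zoom=1.0, max_zoom=3.0):
    # Linear ramp between two illustrative speed thresholds.
    lo, hi = 50.0, 130.0
    if speed_kmh <= lo:
        return min_zoom
    if speed_kmh >= hi:
        return max_zoom
    t = (speed_kmh - lo) / (hi - lo)
    return min_zoom + t * (max_zoom - min_zoom)
```

The result can either be sent to the camera 24 as a control signal or used to crop the displayed part of the camera signal.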

An implementation option 1

Figure 5 depicts a first variant implementation of the present invention. Figure 5 shows a static image recorded by the camera 24 as displayed by the navigation device 10. As can be seen, an arrow 5 indicating a right turn is overlaid by the processor 11. According to this variant implementation, an image that is easy to perceive is displayed to the user, facilitating interpretation. This implementation has the advantage that it does not require complex mathematics or data.

Instead of the navigation instruction depicted in figure 5, other navigation instructions mentioned above can be displayed, including navigation instructions presented in perspective, for example, an arrow rendered in perspective.

An implementation option 2

Fig.6 shows another static image recorded by the camera 24. According to this example, the navigation device 10 overlays route 4 and arrow 5. Route 4 and arrow 5 are superimposed so that their positions on the display 18 are consistent with the picture presented by the camera 24. Fig.6 clearly shows that route 4 is displayed so that it is consistent with the road shown on the display 18. In addition, arrow 5 is displayed so that it accurately indicates a right turn in the image presented by the camera 24.

It should be understood that the implementation option shown in figure 5 can be easily obtained by blending or combining the images provided by the camera 24 with navigation instructions, such as arrow 5. However, in order to create an image as shown in Fig.6, more complex data processing is required in order to match the navigation instructions to the image presented by the camera 24. This will be described in more detail below.

In order to overlay navigation instructions so that they have a predefined spatial relationship with respect to the corresponding parts of the camera image, it is necessary to know the exact position of the camera, the direction in which it points, and the camera settings. If all this information is known, the processor 11 calculates the position of, for example, the road on the display 18 and overlays route 4.

First, the position of the camera 24 must be determined. This can easily be accomplished by using the GPS information determined by the processor 11 and/or the positioning device 23. The positioning information of the navigation device 10, and hence of the camera 24, is always available in the navigation device 10 according to the usage in the prior art.

Second, it is necessary to determine the orientation of the camera 24. This can be accomplished by using orientation sensors configured to communicate with the processor 11. The orientation sensors may be the positioning device 23 or the tilt sensors 27, 28. The tilt sensors 27, 28 can be gyroscopes.

Fig.7 depicts the camera 24 according to a variant implementation of the present invention. A first direction of rotation C relative to the axis must be defined, as shown in Fig.7. This can also easily be done using GPS information determined by the processor 11 and/or the positioning device 23. By comparing the positions of the navigation device 10 at consecutive moments of time, the direction of movement of the navigation device 10 can be determined. This information is also always available in the navigation device 10 according to the usage in the prior art. Suppose that the camera 24 is directed in the direction of travel of the navigation device 10. However, this is not a necessary condition, as will be further disclosed below.
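
The direction of movement derived from two consecutive GPS fixes can be sketched as the standard initial great-circle bearing; this is a minimal illustration of the comparison described above, with a hypothetical helper name:

```python
import math

def heading_from_fixes(lat1, lon1, lat2, lon2):
    # Initial bearing in degrees (0 = north, clockwise) from one GPS fix
    # to the next; when the camera points in the direction of travel this
    # approximates the first rotation direction C of the camera.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0
```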

The first direction of rotation C of the camera 24 may also be determined using a compass included in the design of the navigation device or of the camera 24. The compass may be an electronic compass or an analog compass. The compass provides readings that are transmitted to the processor 11. On the basis of the compass readings, the processor 11 determines the first rotation direction of the camera 24.

In order to further define the orientation of the camera 24, the camera 24 may be equipped with tilt sensors 27, 28, as shown in Fig.7. The tilt sensors 27, 28 are configured to measure the tilt of the camera 24. The first tilt sensor 27 is configured to measure the tilt in a second direction of rotation, as shown by the curved arrow A in Fig.7, i.e. rotation around an axis essentially perpendicular to the plane of the drawing. The tilt in the second direction of rotation determines the height of the horizon in the camera image displayed on the display 18. The effect of this rotation on the displayed camera image is shown schematically in Fig.8a.

The second tilt sensor 28 is configured to measure the tilt resulting from rotation around a third axis of rotation, which is the central axis of the camera 24, depicted in Fig.7 by the dotted line C. The effect of such a rotation on the displayed camera image is shown schematically in Fig.8b.

In use, the first rotation axis is essentially vertical, and the second and third axes of rotation are essentially perpendicular to the first axis of rotation and to each other.

The tilt values measured by the tilt sensors 27, 28 are sent to the processor 11. The tilt sensors 27 and 28 can also be designed as one integrated tilt sensor.

In addition, the camera settings, in particular the zoom factor of the lens of the camera 24, the camera angle, the focal length, etc., can be transferred to the processor 11.

On the basis of the information available to the processor 11 describing the position, direction and settings of the camera 24, the processor 11 computes the positions at which to display on the display 18 the roads, intersections, crossroads, points of interest, etc. corresponding to the map data stored in the memory devices 12, 13, 14, 15.
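
The computation the processor must perform can be sketched with an ideal pinhole camera model. This is a simplified illustration under assumed conventions (world axes east/north/up, no lens distortion, no roll); the focal length and display centre defaults are hypothetical, not values from the patent:

```python
import numpy as np

def project_to_display(point_world, cam_pos, yaw_deg, pitch_deg,
                       focal_px=800.0, cx=320.0, cy=240.0):
    # World axes: x east, y north, z up (metres). Yaw is the compass
    # heading of the camera (0 = north, clockwise); pitch > 0 tilts up.
    yaw = np.radians(yaw_deg)
    pitch = np.radians(pitch_deg)
    d = np.asarray(point_world, float) - np.asarray(cam_pos, float)
    # Rotate the offset into the camera's right/ahead axes.
    right = d[0] * np.cos(yaw) - d[1] * np.sin(yaw)
    ahead = d[0] * np.sin(yaw) + d[1] * np.cos(yaw)
    # Tilt by the pitch measured by the tilt sensor.
    fwd = ahead * np.cos(pitch) + d[2] * np.sin(pitch)
    up = -ahead * np.sin(pitch) + d[2] * np.cos(pitch)
    if fwd <= 0:
        return None  # point is behind the camera
    # Ideal pinhole projection onto the display.
    return cx + focal_px * right / fwd, cy - focal_px * up / fwd
```

A map point 10 m ahead at camera height projects to the display centre, while a point on the road surface lands below it, which is where route 4 would be drawn.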

Based on this information, the processor 11 may overlay navigation instructions, for example, route 4, arrow 5, points of interest (POI), etc. on top of the camera image displayed by the processor so that they coincide with the camera view. It may be useful to overlay the navigation instructions so that they appear to float on the surface of the road or have some other predefined spatial relationship with it.

Because the navigation device calculates how far away any road junction or turn (or other change of direction) is, it can calculate approximately what shape the navigation indication should have on the display 18 and where it should be positioned in order to match the actual position of the change of direction shown in the signal from the camera 24.

However, errors can occur for several reasons. First, the navigation device can be mounted on the dashboard of the vehicle in a variety of ways. For example, when determining the first direction of rotation of the camera 24 relative to the axis by comparing the positions of the navigation device 10 at successive moments of time, it is assumed that the camera points exactly forward. However, when the camera 24 is not carefully aligned with the vehicle, this may lead to a misalignment of the overlaid navigation instructions.

As described above, when the camera 24 is provided with a built-in compass, the first direction of rotation of the camera relative to the axis can be calculated by comparing the compass readings with the determined direction of movement of the navigation device 10. However, errors may still occur, resulting in a misalignment between the overlaid navigation instructions and the signal from the camera.

In addition, the tilt sensors 27, 28 may be capable of measuring only relative tilt rather than absolute tilt. This means that the navigation device 10 must be calibrated to allow accurate positioning of the navigation instructions over the camera image.

In order to compensate for such errors, the navigation device 10 may be provided with options in the menus that allow the user to adjust the relative position of the displayed camera image with respect to the displayed navigation instructions. Such adjustment can be performed by the navigation device 10 by changing the position at which the navigation instructions are displayed, and/or by changing the position at which the camera image is displayed, and/or by changing the orientation of the camera 24. For the latter option, the camera 24 may be provided with a drive device in order to change its orientation. The camera 24 may be actuated independently of the navigation device 10. When the camera is built into the navigation device 10, the drive unit may change the orientation of the navigation device 10, or of the camera 24 alone relative to the navigation device 10.

The user can simply use the arrow buttons to calibrate the navigation instructions so that they coincide with the camera image. For example, if the camera 24 is positioned so that it is inclined to the left relative to the axis, as shown in Fig.7, the navigation instructions appear to the right of the relevant parts of the camera image. The user can simply correct this error by using the left arrow button to drag the navigation instructions to the left. The navigation device 10 may optionally be configured to provide the user with an option to adjust the displayed rotational orientation of the overlaid navigation instructions relative to the displayed camera image.
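
Such an arrow-key calibration could be sketched as follows; the class, the step size and the key names are illustrative assumptions, not part of the patent:

```python
class OverlayCalibration:
    # Each arrow-key press nudges every navigation instruction by a fixed
    # number of pixels; the accumulated offset is stored and applied to
    # all subsequently drawn instructions.
    STEP = 2  # pixels per key press (illustrative)

    def __init__(self):
        self.dx = 0
        self.dy = 0

    def press(self, key):
        if key == "left":
            self.dx -= self.STEP
        elif key == "right":
            self.dx += self.STEP
        elif key == "up":
            self.dy -= self.STEP
        elif key == "down":
            self.dy += self.STEP

    def apply(self, u, v):
        # Corrected display position for an instruction drawn at (u, v).
        return u + self.dx, v + self.dy
```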

The navigation device 10 may also be configured to provide the user with options for correcting perspective differences that arise, for example, due to different mounting heights of the camera 24. A camera 24 positioned at the top of the car provides a different view of the road (a different perspective) than a camera 24 positioned on the dashboard or between the headlights of the vehicle. To make navigation instructions, for example a 3D direction (for example, a 3D arrow), or a vector representing a road, coincide with the camera view, a perspective deformation must be applied to the navigation instructions. This perspective deformation depends on the mounting height of the camera 24, the camera settings and the second direction of rotation of the camera 24 in the direction of arrow A as shown in Fig.7.
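
The dependence of this deformation on camera height and tilt can be illustrated with a small sketch that projects a flat arrow drawn on the road plane into display coordinates. It reuses the same simplified pinhole assumptions as above (hypothetical focal length and display centre, no distortion):

```python
import math

def warp_arrow(points_road, cam_height, pitch_deg,
               focal_px=800.0, cx=320.0, cy=240.0):
    # points_road are (lateral, distance) pairs in metres on the road
    # plane, relative to the point directly below the camera.
    p = math.radians(pitch_deg)
    out = []
    for x, yfwd in points_road:
        # A road point lies cam_height below the camera.
        fwd = yfwd * math.cos(p) - cam_height * math.sin(p)
        up = -yfwd * math.sin(p) - cam_height * math.cos(p)
        out.append((cx + focal_px * x / fwd, cy - focal_px * up / fwd))
    return out
```

With the camera level (pitch 0) and 1.2 m high, nearby road points land low on the display and distant ones approach the horizon line, which is exactly the foreshortening a higher-mounted camera changes.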

The processor 11 stores such entered calibration adjustments and applies them to all subsequently displayed images. All further changes in the measured position, direction and orientation of the camera 24 can be processed by the processor 11 in order to further ensure accurate overlay of the navigation instructions. This allows accurate compensation of any camera movement caused by changes in the direction of the vehicle or by bumps, sharp turns, accelerations, braking, etc., or by other causes of fluctuations in the orientation of the camera 24.

Fig.9 depicts a flowchart of the operation of the navigation device 10 according to the second variant embodiment of the invention. The steps shown in the flowchart can be performed by the processor 11. It should be noted that all steps associated with entering an address, destination, routing, etc. are omitted from this drawing, as these stages are well known in the prior art.

In the first step 101, the navigation device 10 is turned on and the user selects the camera mode. This is shown in Fig.9 as "Start".

In the second step 102, the processor 11 determines the position of the navigation device 10. This can be accomplished by using an input signal from the positioning device 23, such as a GPS device, as discussed above.

In the next step 103, the processor 11 determines the direction of movement of the navigation device 10. Again, the input signal from the positioning device 23 is used for this.

Next, at step 104, the processor 11 determines the orientation of the camera 24 and the camera settings. Again, the input signal from the positioning device 23 is used, together with the input signal from the tilt sensors 27, 28, in order to define the orientation of the camera 24.

At step 105, the camera image is displayed on the display 18 by the processor 11. At step 106, the processor 11 overlays a selected number of navigation instructions (e.g., arrow 3 of the current position, route 4, arrow 5, points of interest, roads, map data and the like). In order to do this, the collected information is used to calculate the position and shape of the displayed navigation instructions. If necessary, the user can calibrate the calculation by adjusting the position and/or shape of the overlaid navigation instructions. This optional step is shown as step 107.

Steps 102-107 may be repeated as often as necessary or desirable in use.
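
The loop of steps 102-107 can be sketched as a control loop; `nav` is a hypothetical device object supplying the operations the flowchart names, and this sketch only fixes their order:

```python
def run_camera_mode(nav, max_iterations=None):
    # Steps 102-107 of Fig.9, repeated as often as desired.
    trace = []
    i = 0
    while max_iterations is None or i < max_iterations:
        pos = nav.get_position()                     # step 102
        heading = nav.get_heading()                  # step 103
        pose = nav.get_camera_pose()                 # step 104
        nav.show_camera_image()                      # step 105
        nav.overlay_instructions(pos, heading, pose) # step 106
        if nav.user_wants_calibration():
            nav.calibrate()                          # optional step 107
        trace.append(i)
        i += 1
    return trace
```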

Virtual signs of other types, in addition to the arrow 5 indicating directions, can also be stored in the memory devices 12, 13, 14, 15. For example, icons related to road names, traffic signs, speed limits, speed cameras or points of interest can be stored in the memory devices 12, 13, 14, 15. All of them can be superimposed over the signal from the camera 24 at a spatial position in the displayed image that corresponds to the real-world elements to which the virtual signs relate. Here, the processor takes the two-dimensional map data of the navigation software, which include data about the positions of these real-world elements, and applies a geometric transformation to place them correctly in the overlay on the video.

If, for example, a vehicle with the navigation device 10 is traveling up or down a mountain, the tilt sensors 27, 28 detect a tilt in the direction of arrow A as shown in Fig.7. However, in order to correctly overlay the navigation instructions on the camera image so that they match it, this tilt should not be compensated for. This can be achieved by providing the navigation device with map data containing altitude information. On the basis of the altitude map data, the navigation device 10 calculates the tilt of the camera 24 that corresponds to the orientation of the road on which the vehicle is moving. This predicted tilt is compared with the tilt detected by the tilt sensors 27, 28. The difference between the predicted tilt and the detected tilt is used to adjust the position of the overlaid navigation instructions.
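
The comparison of predicted and measured tilt can be sketched as follows; this flat-segment approximation and the helper name are illustrative assumptions:

```python
import math

def tilt_correction(alt_here, alt_ahead, distance, measured_tilt_deg):
    # Predict the camera pitch implied by the road gradient (from map
    # altitude data) and return only the residual tilt that should be
    # compensated when positioning the overlay.
    predicted = math.degrees(math.atan2(alt_ahead - alt_here, distance))
    return measured_tilt_deg - predicted
```

On a flat road the whole measured tilt is treated as unwanted camera motion; on a uniform slope the measured tilt matches the prediction and no correction is applied.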

If the map data does not contain altitude information, the vehicle may be provided with a vehicle tilt sensor 30. The vehicle tilt sensor 30 is configured to provide indications of the inclination of the vehicle to the processor 11. The readings of the vehicle tilt sensor 30 are then compared with the readings of the tilt sensors 27, 28, and the difference, caused by unwanted vibrations and the like, is used to adjust the position of the overlaid navigation instructions.

It should be understood that all kinds of variations of the examples disclosed and shown above can be considered.

Figure 10 depicts an example in which the map data contain data describing objects along the road, such as a building 31. According to this example, the navigation instructions 3, 4, 5 that are superimposed on top of the building 31 can be shown with dashed or flashing lines. This allows the user to observe the map data and the route arrows 4 and 5 that would otherwise be invisible because of the buildings.

An implementation option 3

According to the third variant of implementation, the navigation instructions are superimposed over the camera image using pattern recognition technologies.

In recent years, considerable progress has been made in the real-time analysis of image frames (for example, a video signal such as that provided by the camera 24) in order to identify real objects in the video signal. The literature in this area is widely available; for example, reference can be made to US 5627915 (Princeton Video Image, Inc.), in which video from a scene, for example a sports stadium, is analyzed by pattern recognition software; an operator manually specifies high-contrast areas in the stadium, for example lines marked on the playing surface, the boundaries of the playing surface and billboards, and the software builds a geometric model of the entire stadium with the help of such landmarks. The software is then capable of analyzing the video signal in real time in search of these landmarks; it can then take a stored, computer-generated image (for example, an advertisement for a billboard) and apply a geometric transformation to it so that, when it is inserted into the video signal at a position specified relative to the geometric model using image synthesis technologies, it looks like a completely natural part of the scene to the viewer of the video.

Reference can also be made to US 2001/0043717 (Facet Technology); it discloses a system that can analyze video taken from a moving vehicle in order to recognize traffic signs.

In summary, the technical field of pattern recognition applied to real-time video analysis in order to identify real-world elements is a large and well-developed area.

In one implementation, the navigation device 10 uses pattern recognition software to recognize real-world elements in the video signal from the camera 24 and displays navigation instructions (such as arrow 5) on the display 18 in a predefined spatial relationship with the real-world elements recognized in the video. For example, the video may show the current road along which the navigation device 10 is moving, and the navigation instructions become three-dimensional directions (for example, a three-dimensional arrow) that are superimposed on top of this road. Bends in the road and other elements can be represented graphically or by means of icons, positioned so as to be superimposed over the real-world elements to which they relate.

The processor can be programmed so that it can recognize elements with high visual contrast that are associated with the road. The elements can also be vehicles moving in the same direction, or road markings (for example, markings of the edge of the road, markings of the dividing line and so on).

The navigation device 10 may, for example, be programmed with a geometric model of the road ahead: the model can be as simple as two lines. The model may simply be the vector data stored in order to generate the map data, as described above.

In use, the pattern recognition software then finds the visual elements in the real-time video signal provided by the camera 24 that correspond to the stored geometric model (e.g., the two lines). Once the positions of these elements have been determined, the road ahead has effectively been recognized. In a typical embodiment, this requires a translation and a transformation to be applied to the elements recognized in the video signal (e.g., the two lines) in order to match them with the stored model; the translation is an x-y translation that approximately aligns the recognized elements with the stored model. The transformation involves a perspective mapping to deal with different heights, and a relative rotation between the two lines to deal with different camera angles and the relative angle between the camera and the road. A similar transformation can instead be applied to the stored model to align and adapt it to the recognized elements.
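
The x-y translation step of this matching can be sketched as a least-squares fit; the helper name is hypothetical and point correspondences are assumed already paired (the full method also needs the perspective part of the transformation):

```python
import numpy as np

def align_translation(recognized_pts, model_pts):
    # Least-squares (dx, dy) shift mapping the stored model points onto
    # the elements recognized in the video frame: for paired points this
    # is simply the mean of the per-point differences.
    r = np.asarray(recognized_pts, float)
    m = np.asarray(model_pts, float)
    return tuple((r - m).mean(axis=0))
```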

As will be clear to experts in the art, it is useful for the pattern recognition algorithm to have the map data as input. Pattern recognition can be performed in a simpler and faster way when the algorithm has advance information about the templates to be recognized. This information can easily be obtained from the available cartographic data.

Because the transformation is known, it is relatively easy to shape the previously stored arrow icons so that their perspective, shape and orientation correspond to those of the road in any given video frame (various kinds of geometric transformations may be suitable for this), and then to overlay the direction arrow over the road shown on the display using conventional image synthesis. It may be useful to overlay the arrow so that it appears to float on the surface of the road, or has some other predefined spatial relationship with it.
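
Applying such a known transformation to a stored icon can be illustrated by mapping the icon's outline points through a 3x3 perspective (homography) matrix; this is a generic sketch of the geometric transformation step, not code from the patent:

```python
import numpy as np

def apply_homography(H, points):
    # Map (x, y) icon points through a 3x3 perspective transformation:
    # lift to homogeneous coordinates, multiply, then de-homogenize.
    p = np.asarray(points, float)
    homog = np.hstack([p, np.ones((len(p), 1))]) @ np.asarray(H, float).T
    return homog[:, :2] / homog[:, 2:3]
```

With the identity matrix the icon is unchanged; a matrix with a non-zero bottom row shrinks distant parts of the arrow, giving it the foreshortened look of the road.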

Because the navigation device 10 calculates how far away any road junction or turn (or other change of direction) is, it can calculate approximately what shape the navigation instructions should be given on the display 18 to match the actual position of the change of direction shown in the signal from the camera.

It should be understood that the navigation device 10 may also use a combination of the embodiments described above. For example, the navigation device may use the measured position and orientation to roughly determine the position of the navigation instructions on the display 18, and use pattern recognition technologies to determine their precise position on the display 18.

It should be understood that many alternatives and variations of the above embodiments can be considered. For example, other elements indicating road names, road signs (e.g., one-way street, no entry, exit, names of objects, terrain and so on), speed limits, speed cameras and points of interest stored in the memory devices 12, 13, 14, 15 can also be superimposed over the video signal, with the spatial location of such "virtual signs" in the video frame matching the real-world elements to which these "virtual signs" relate. Thus, a speed limit (for example, the text "30 mph") may be overlaid so that it appears to lie on, or be part of, the surface of the road with a speed limit of 30 mph. An icon representing a specific road sign may be superimposed over the video so that it appears in the location where the real sign would stand.

Other types of virtual signs, in addition to the arrows 5 indicating direction, can be stored in the memory devices 12, 13, 14, 15. For example, icons related to road names, road signs, speed limits, speed cameras, bus stops, museums, house numbers and points of interest can be stored in the memory devices 12, 13, 14, 15. They can also be superimposed over the video signal at a spatial position in the displayed video that corresponds to the real-world elements to which these "virtual signs" relate. Here, the software takes the two-dimensional map data from the navigation software, which is supplied with data about the locations of these real-world elements, and applies a geometric transformation that leads to their correct placement in the overlay on the video stream.

According to an additional alternative, the pattern recognition technologies can be configured to recognize objects on the road, such as, for example, other vehicles and trucks. When such objects are detected, the displayed route 4 can be shown as a dashed line, as shown in figure 11. This provides an image that is more easily interpreted by the user.

An implementation option 4

According to the fourth variant of implementation, the signal from the camera 24 and the navigation instructions, such as arrow 3 of the position, route 4, arrow 5, points of interest (POI), roads, buildings and map data, i.e. the vector data, are not overlaid but are displayed on the display 18 in a combined way.

This combination can be achieved by dividing the display into a first part and a second part, where the first part displays the signal from the camera and the second part displays the navigation instructions. The combination can also be made over time, i.e. the navigation device may be configured to alternately display the signal from the camera and the navigation instructions. This can be achieved by displaying the signal from the camera for a first period (for example, 2 seconds) and then displaying the navigation instructions for a second period (for example, 2 seconds). Alternatively, the navigation device may provide the user with an option to switch between the signal from the camera and the navigation instructions at will.
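
The time-based alternation can be sketched as a simple scheduling function, using the 2-second periods from the example above (the function name is illustrative):

```python
def active_view(t_seconds, camera_period=2.0, instr_period=2.0):
    # Which source the display shows at time t when the device alternates
    # between the camera signal and the navigation instructions.
    cycle = camera_period + instr_period
    return "camera" if (t_seconds % cycle) < camera_period else "instructions"
```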

Of course, more than one camera can be used. The user can be provided with an option to switch between the signal of a first camera and the signal of a second camera. The user can also choose to simultaneously display on the display 18 the signals from more than one camera.

According to an additional alternative, the user can zoom in (enlarge) or zoom out (reduce) the image. When zooming out, the display 18 will show a larger part of the environment of the navigation device 10. It should be understood that the user can select, for example, a "helicopter view", as shown in figure 2, which includes the position of the navigation device 10. This view shows the navigation device 10 (or the vehicle) as seen from behind. Of course, such a view cannot be provided by a camera fixed on the navigation device 10 or the vehicle. Therefore, the navigation device 10 may provide an image, as shown in Fig., where only part of the picture is the view from the camera, surrounded by map data and navigation instructions.

While specific embodiments of the invention have been described above, it should be recognized that the invention can be implemented in ways other than those described. For example, the invention can take the form of a computer program containing one or more sequences of machine-readable instructions describing the method disclosed above, or of a storage medium (e.g. semiconductor memory, magnetic or optical disk) having such a computer program stored on it. As will be clear to experts in the art, any software components can also be implemented as hardware components.

The above embodiments of the invention are to be considered illustrative and not limiting. Thus, it will be obvious to specialists in the art that modifications can be made to the described invention without departing from its scope, defined by the claims below.

1. A navigation device (10) configured to display navigation instructions on a display, to receive a video signal from a camera (24), and to display on the display a combination of the camera image from the video signal from the camera and the navigation instructions, characterized in that it is a personal navigation device that includes a camera (24) integral with it, and in that it is configured to provide an option from the menu that allows the user to adjust the relative position of the displayed camera image with respect to the navigation instructions.

2. The navigation device according to claim 1, in which the option from the menu that allows the user to adjust the relative position of the displayed camera image with respect to the navigation instructions acts to change the position at which the camera image is displayed, while the position at which the navigation instructions are displayed remains unchanged.

3. The navigation device according to claim 1 or 2, in which the option from the menu that allows the user to adjust the relative position of the displayed camera image with respect to the navigation instructions acts to change the position at which the navigation instructions are displayed, while the position at which the camera image is displayed remains unchanged.

4. The navigation device according to claim 1, in which the option from the menu that allows the user to adjust the relative position of the displayed camera image with respect to the navigation instructions acts to change both the position at which the navigation instructions are displayed and the position at which the camera image is displayed.

5. The navigation device according to any one of the preceding claims, which avoids complex data processing by displaying the navigation instructions in the form of perspective-rendered arrows.

6. The navigation device according to claim 1, in which the navigation instructions are one or more of a route (4) and an arrow (5).

7. The navigation device according to claim 6, which is configured to overlay the navigation instructions (4, 5) on top of the camera image so that the position of the navigation instructions (4, 5) is in a predefined spatial relationship with respect to the corresponding parts of the image.

8. The navigation device according to claim 1, which is configured to receive calibration adjustments, store these calibration adjustments, and apply the calibration adjustments when combining the navigation instructions (4, 5) and the camera image.

9. The navigation device according to claim 8, in which the calibration adjustments are used to correct offset errors.

10. The navigation device according to claim 1, which is configured to modify and display the combination of the video signal from the camera (24) and the navigation instructions (3, 4, 5) by displaying the navigation instructions together with a selected part of the map data, according to input made by the user.

11. The navigation device according to claim 10, which processes the signals using pattern recognition.

12. The navigation device according to claim 1, in which pattern recognition is used to determine characteristics of the real world.

13. The navigation device according to claim 1, in which image recognition is used to determine signs on the road.

14. The navigation device according to claim 1, which has a touch screen.

15. The navigation device according to claim 14, in which the option from the menu is selected via the touch screen.



 

Same patents:

FIELD: physics; measurement.

SUBSTANCE: invention relates to portable navigation systems particularly for installation in an automobile. The portable personal navigation device is programmed with possibility of linking any function, related to a basic set of functions, with a non-overlapping input sensory area, which is sufficiently large for reliable activation by touching with a finger. The invention is based on understanding that, a set of basic functions can be identified, and can then be reliably selected/activated by touching the input sensory area with a finger, where the input sensory area is sufficiently large for reliable activation. This is especially preferable for a navigation device installed in an automobile, in which the basic functions are those functions which are likely to be activated by the driver when driving the automobile.

EFFECT: design of a portable navigation device with a non-overlapping input sensory area, which is sufficiently large for reliable activation by touching with a finger.

18 cl, 4 dwg

FIELD: physics, measurement.

SUBSTANCE: device of information provision enables relevant confirmation of information content which facilitates movement of moving object and is represented by image display unit, even in conditions of vibration affecting image display unit at a level not lower than given value. Equipment includes image display unit mounted in vehicle and allowing display of information facilitating movement of vehicle, vibration sensor detecting vibration equal or exceeding specified level applied to image display unit, and transmitting detection output signal, and operation control unit modifying display mode for information presenting image display by image display unit into information including data content which can be recognised if detection output signal of vibration sensor indicates than image display unit is affected by vibration equal or exceeding specified level for time period longer or equal to specified period.

EFFECT: information provision device enabling reliable confirmation of information content that facilitates movement of a moving object.

8 cl, 6 dwg
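
The operation-control logic described above — switching to a reduced, still-recognisable display mode once vibration at or above a threshold persists for at least a given period — can be sketched as follows. The class name, method names, and threshold values are illustrative; the abstract only specifies that both a vibration level and a duration must meet given values:

```python
class DisplayModeController:
    """Switches display content when sustained vibration is detected."""

    def __init__(self, level_threshold, duration_threshold):
        self.level_threshold = level_threshold      # minimum vibration level
        self.duration_threshold = duration_threshold  # minimum sustained time
        self._vibrating_since = None

    def update(self, vibration_level, timestamp):
        """Feed one vibration-sensor sample; return the display mode."""
        if vibration_level >= self.level_threshold:
            if self._vibrating_since is None:
                self._vibrating_since = timestamp
            if timestamp - self._vibrating_since >= self.duration_threshold:
                # Sustained vibration: show only coarse, easily
                # recognisable information.
                return "simplified"
        else:
            # Vibration dropped below the threshold: restart the timer.
            self._vibrating_since = None
        return "detailed"
```

The mode only changes after the duration condition is met, so brief bumps do not alter the display.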

FIELD: physics, navigation.

SUBSTANCE: invention relates to a vehicle navigation system. The navigation system includes a vehicle, an information display (40) fitted in the vehicle, a portable GPS unit (10) and an interface (30) for transmitting data between the portable GPS unit and the information display fitted in the vehicle. The information display (40) is mounted in the vehicle so as to be visible to the driver. The portable GPS unit (10) includes a GPS sensor for determining the location of the GPS unit and a portable information display (20). The portable GPS unit (10) is fitted in a positioning unit in the vehicle such that the portable information display is visible to the driver. Data from the portable GPS unit (10) can be displayed on the information display fitted in the vehicle. In the first version, the portable GPS unit (10) includes a central processing unit (15) for storing several locations. The information display (40) fitted in the vehicle and the portable information display (20) display different information on the location of the GPS unit relative to the stored locations. An input device (50) is designed for transmitting a signal from the portable GPS unit (10) through the data transmission interface (30). The input device (50) is, in an alternative embodiment, fitted on the information display (40) fitted in the vehicle, or is fitted such that the driver can operate it without taking hands off the vehicle control elements. The input device (50) is designed for transmitting a signal to the portable GPS unit (10) for storing the location of the GPS unit in the central processing unit (15). In the second version, the information display (40) fitted in the vehicle displays data from the portable GPS unit (10) when receiving data from the data transmission interface (30), and displays data from a sensor fitted in the vehicle when the connection between the data transmission interface (30) and the portable GPS unit (10) is interrupted.

EFFECT: easier vehicle control.

31 cl, 6 dwg
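
The second version's fallback behaviour — the in-vehicle display showing GPS-unit data while the data transmission interface is connected, and vehicle-sensor data once that connection is interrupted — can be sketched in a few lines. The function and parameter names are illustrative assumptions:

```python
def select_display_source(gps_data, vehicle_sensor_data, link_up):
    """Choose what the in-vehicle information display shows.

    While the data transmission interface is up and delivering data,
    the portable GPS unit's data is shown; otherwise the display falls
    back to a sensor fitted in the vehicle. Names are illustrative.
    """
    if link_up and gps_data is not None:
        return gps_data
    return vehicle_sensor_data
```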

FIELD: physics.

SUBSTANCE: route guidance system includes: a unit for detecting the current location; processing apparatus for compiling a list of strips, which compiles a list of strips (Ls1) taking into account the connections between strips for groups of strips (Lk1 to Lk3) at road junctions in the road-list displaying area; processing apparatus for determining the visualisation region, which determines whether the number of strips in the list of strips (Ls1) is greater than the number of strips the display unit can show; and apparatus for processing and controlling the display region, which selects predetermined strips in the list of strips (Ls1) and displays only the selected strips. Strips which cannot be displayed can be deleted.

EFFECT: possibility of displaying a strip guide map which takes into account connections between the strips, thereby preventing deterioration of visibility of the guide map.

4 cl, 21 dwg
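
The display-region control described above — showing the full strip list when it fits, and otherwise keeping only predetermined strips and dropping the rest — can be sketched as follows. The function and parameter names are illustrative assumptions, not from the patent:

```python
def strips_to_display(strip_list, max_displayable, selected_indices):
    """Return the strips the guide map will actually show.

    If the list fits on the display unit, everything is shown;
    otherwise only the predetermined (selected) strips are kept and
    strips which cannot be displayed are deleted from the output.
    """
    if len(strip_list) <= max_displayable:
        return list(strip_list)
    # Keep only the predetermined strips, in list order, up to capacity.
    chosen = [strip_list[i] for i in selected_indices if i < len(strip_list)]
    return chosen[:max_displayable]
```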

FIELD: physics.

SUBSTANCE: destinations of a trip are predicted based on at least one of a prior and a likelihood derived at least in part from the received input data. The destination estimator component can use one or more of: a prior on personal destinations, the time of day and day of week, a ground-cover prior, the driving efficiency associated with possible locations, and a trip-time likelihood, to probabilistically predict the destination. In addition, data gathered from a population about the likelihood of visiting previously unvisited locations, and the spatial configuration of such locations, may be used to enhance the predictions of destinations and routes. The group of inventions simplifies probabilistic prediction of destinations.

EFFECT: output of distributions of probabilities on destinations and routes of a user from observations on content and partial trajectories.
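
The prior-times-likelihood combination underlying the estimator above can be illustrated with a minimal Bayesian sketch. The plain-dictionary inputs and the function name are assumptions made for illustration; the patent's actual priors (personal destinations, ground cover, trip time, etc.) would each contribute a factor of this form:

```python
def predict_destination(priors, likelihoods):
    """Combine a prior over destinations with trip-based likelihoods.

    posterior(d) is proportional to prior(d) * likelihood(obs | d),
    normalised over all candidate destinations, yielding a probability
    distribution over destinations as the abstract describes.
    """
    unnormalised = {d: priors[d] * likelihoods.get(d, 0.0) for d in priors}
    total = sum(unnormalised.values())
    if total == 0.0:
        return dict(priors)  # no evidence yet: fall back to the prior
    return {d: p / total for d, p in unnormalised.items()}
```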

FIELD: instrument making.

SUBSTANCE: adaptive modules, and connections between them, are introduced which allow combining current data on road traffic, weather and time with information on the driving habits of a particular driver. This information is used to form the profile of that driver, and the driver profile is used to adapt navigation instructions. Presenting adaptive instructions to a particular driver can contribute to safer road traffic.

EFFECT: enlarging functional capabilities.

19 cl, 6 dwg
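
One way to picture the adaptation above is a function that merges current traffic and weather data with a stored driver profile to adjust an instruction. The profile fields, distances, and doubling rule below are purely illustrative assumptions:

```python
def adapt_instruction(base_instruction, driver_profile, traffic, weather):
    """Adapt a navigation instruction using a driver profile.

    A toy sketch: a driver profiled as cautious receives earlier
    warnings when traffic or weather is bad. The "cautious" flag, the
    base warning distance, and the doubling are illustrative, not from
    the patent.
    """
    warn_distance = driver_profile.get("base_warning_m", 200)
    if driver_profile.get("cautious") and (traffic == "heavy" or weather == "bad"):
        warn_distance *= 2  # give a cautious driver more time to react
    return f"In {warn_distance} m: {base_instruction}"
```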

FIELD: information technology.

SUBSTANCE: the navigation device has apparatus for digital processing of sounds and their audible transmission; a memory which stores multiple data in the form of text pointers and pre-recorded sounds; apparatus for transmitting data between the processor of the device and the memory; and an operating system for controlling processing and the flow of data between the processor and the memory, and for determining whether said sounds are reproduced audibly through repeated evaluation of physical conditions against reference values stored in the memory. When a condition is satisfied, the device generates a sound from the pre-recorded sounds stored on the device, or a sound presented digitally by a text-to-speech (TTS) program component by transmitting to it a text pointer corresponding to the event, or a combination of the above. When an event requiring sound reproduction by the TTS program component is detected, the operating system consults a set of options selected or marked by the device user during configuration in order to determine the extent to which this event may be audibly indicated.

EFFECT: possibility of audible indication during enroute navigation of user-predefined information.

14 cl, 6 dwg
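
The decision flow described above — pre-recorded sound if one is stored for the event, otherwise TTS if the user's configuration marks the event as audible — can be sketched as follows. All names and the dictionary-based configuration are illustrative assumptions:

```python
def sound_for_event(event, prerecorded, user_options, tts):
    """Decide how an event is audibly indicated.

    `prerecorded` maps events to stored sounds; `user_options` holds
    the options the user selected during configuration; `tts` stands in
    for the text-to-speech component, which receives a text pointer.
    """
    if event in prerecorded:
        return prerecorded[event]        # stored sound takes precedence
    if user_options.get(event, False):
        return tts(event)                # hand the text pointer to TTS
    return None  # user opted out of audible indication for this event
```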

FIELD: instrument making.

SUBSTANCE: the colour pattern and screen content of a navigation device monitor are assessed and generated. For at least one specified ambient lighting condition, a signal specifying the ambient lighting conditions is monitored and evaluated to determine whether the display settings suit the current ambient lighting conditions; if required, the display settings are changed so that they correspond to the current ambient lighting conditions.

EFFECT: expansion of functional capabilities.

34 cl, 9 dwg
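
The check-and-adjust loop described above can be sketched as a function that maps an ambient-light signal to a settings profile and only changes the display when the current settings no longer match. The "day"/"night" profiles and the threshold value are illustrative assumptions, not from the patent:

```python
def adjust_display_settings(current_settings, ambient_signal, profiles):
    """Pick display settings matching the ambient lighting signal.

    A settings profile is assumed to exist for each lighting condition;
    if the current settings already suit the conditions, nothing
    changes. The 50-unit threshold and profile names are illustrative.
    """
    condition = "night" if ambient_signal < 50 else "day"
    target = profiles[condition]
    if current_settings == target:
        return current_settings  # already appropriate, no change needed
    return target
```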

FIELD: instrument making.

SUBSTANCE: in one version of the method, signals for interrupting the audio presentation are received. An interruption command is executed on the basis of commands supplied directly from the navigation device in response to reception of the interruption signals. While this operation is performed, the interrupted state of the audio presentation is maintained. This makes it possible to restore the state of the audio presentation process after each interruption command supplied directly from the navigation device.

EFFECT: enlarging functional capabilities.

12 cl
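
The maintained interruption state described above can be pictured as a small state holder: an interrupt command from the navigation device sets the state, and a reset restores the audio presentation afterwards. Class and method names are illustrative assumptions:

```python
class AudioPresentation:
    """Maintains an interruption state for the audio presentation.

    An interrupt command supplied directly from the navigation device
    puts the presentation into the interrupted state; the state can
    then be restored after each such command.
    """

    def __init__(self):
        self.interrupted = False

    def on_interrupt_signal(self):
        # Command issued directly by the navigation device.
        self.interrupted = True

    def reset(self):
        # Restore the audio presentation after the interruption.
        self.interrupted = False
```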
