Method and device for providing multilayer depth model of scene and signal containing multilayer depth model of scene
IPC classes for Russian patent Method and device for providing multilayer depth model of scene and signal containing multilayer depth model of scene (RU 2513894):
Method of drawing advanced maps (versions) / 2485593
In the method of drawing advanced maps based on a three-dimensional digital model of an area, involving central projection of points of the three-dimensional digital model of the area by a beam onto a plane, the mapping object is selected in the form of a three-dimensional digital model of the area and its boundaries in the horizontal projection are determined; settings of the advanced map to be drawn are given; optimality criteria for advanced display of the mapping object are selected; the values of the horizontal and vertical viewing angles are given; a preliminary path of observation points is drawn around the mapped object in the horizontal projection such that the mapped object fits into the section of horizontal viewing angles.
Method of recognising geometrically arranged objects / 2460138
Method of recognising geometrically arranged objects based on a graphical technique of constructing a spherical perspective on a plane does not include lists of measurements and postponements, and is based on plane-parallel displacements in conditions of changing projection planes.
Method for automatic linking panoramic landscape images / 2384882
Invention relates to image processing methods. Images are linked by forming a square grid. Units of the square grid are then mapped and incorrect mapping is eliminated by using a verification procedure during which the initial order of conjugated points on the grid is detected. Collineatory transformation between all images is then evaluated and the resultant image is subsequently formed using an adaptive blending procedure.
FIELD: physics, computer engineering. SUBSTANCE: invention relates to a method and a device for providing a multilayer depth model of a three-dimensional scene. The method of providing a multilayer model comprising primary view information for a primary view of the scene from a primary viewing direction (PVD) and occlusion information associated with the primary view information for use in rendering other views of the scene, wherein the primary view information comprises layer segments of the model which are depth-wise closest with respect to the primary viewing direction, and the occlusion information comprises further layer segments of the model and wherein the occlusion information comprises a safety region (SR) adjacent to a depth transition for which occlusion information is provided (J1, J2, J3, J4), and wherein the safety region comprises corresponding segments of the primary view information, and wherein the safety region is located on that side of the respective depth transition which is depth-wise furthest with respect to the primary viewing direction. EFFECT: reduced artefacts resulting from reduction of the multilayer depth model. 13 cl, 17 dwg
TECHNICAL FIELD TO WHICH THE INVENTION RELATES

The invention relates to a method and a device for providing a multilayer depth model of a three-dimensional scene, as well as to a signal containing a multilayer depth model of a scene.

PRIOR ART

Display devices suitable for displaying three-dimensional images are attracting more and more interest in research. In addition, considerable research is under way into how to provide end users with a satisfactory, high-quality visual experience. Three-dimensional (3D) displays add a third dimension to visual perception by providing each of the viewer's eyes with a different view of the scene being watched. This can be achieved by having the viewer wear glasses that separate the two displayed views. However, since glasses can be considered inconvenient for the user, in many scenarios preference is given to autostereoscopic displays, which use means at the display (such as lenses or barriers) to separate the views and send them in different directions, where they can individually reach the viewer's eyes. Stereo displays require two views, whereas autostereoscopic displays typically require a larger number of views (e.g., nine views). In order to support 3D viewing effectively, it is important to use an appropriate representation of the generated 3D content. For example, for different stereo displays the two views are not necessarily the same, and optimal visual perception usually requires adapting the content data to the specific combination of screen size and viewing distance. The same considerations generally apply to autostereoscopic displays. A popular approach to representing three-dimensional images is the use of one or more layered images in combination with a depth representation. For example, a foreground image and a background image, each with associated depth information, can be used to represent a three-dimensional scene.
In "High-quality video view interpolation using a layered representation", L. Zitnick et al., ACM TRANSACTIONS ON GRAPHICS, ACM, US, vol. 23, no. 3, 8 August 2004, it was shown that views may be rendered based on several input views with accompanying per-pixel depth maps, the so-called primary views. However, images rendered in this way still suffer from distortions caused by abrupt transitions between foreground and background. To address this problem, a two-layer representation of the primary views was proposed, in which boundary areas near depth discontinuities are treated separately and matting information is created for regions near foreground/background discontinuities. In this way, higher-quality images can be rendered. During rendering, the two primary views nearest to the new view are selected, and then all relevant layers are warped. The warped layers are combined based on their respective pixel depths, pixel opacities and proximity to the new view. Depending on context, the term "depth" is used to denote either information indicating the distances from the respective picture elements to the point of view, or information indicating the disparity of the corresponding picture elements between two respective points of view. Since disparity, i.e. the apparent shift of picture elements between the left-eye and right-eye views, is inversely proportional to depth, either representation can be used as input for rendering views from layered depth models. The use of an image-plus-depth representation has several advantages in that it allows two-dimensional views to be rendered with relatively low complexity and provides an efficient data representation compared with storing multiple views, thereby reducing, for example, the storage and transfer resource requirements for three-dimensional images (and video).
The approach also makes it possible to generate two-dimensional images for viewpoints and viewing angles that differ from those of the two-dimensional images included in the three-dimensional representation. Moreover, the representation can easily be adapted to support different display configurations. When rendering a view for a viewing angle different from that of the layered images, foreground pixels are shifted depending on their depth. This causes areas that were hidden at the original viewing angle to become visible. These areas are then filled in using the background layer or, if suitable background-layer information is not available, for example by repeating pixels of the foreground image. However, such pixel copying can lead to visible artifacts. Background-layer data are usually required only near object boundaries in the foreground image, and are thus highly compressible for most content. It is known that a multilayer depth model of a scene can be reduced from a multitude of layers to two layers: a top layer and an occlusion layer. In such a model, the occlusion layer is used to prevent the artifacts associated with pixel repetition or background interpolation. In prior-art systems such layers are provided for a subset of all possible occluded areas for a given range of viewing angles.

THE INVENTION

There is a problem in that reducing the size of such a layered depth model can lead to visible artifacts during rendering. Encoding the layered depth model can likewise lead to similar artifacts. This is especially true in cases where the use of occlusion data is enabled implicitly by the content of the material, for example by the presence of depth transitions in the reduced layered depth model, rather than through explicitly encoded metadata (which cannot be properly reduced using traditional methods).
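The pixel shifting and background filling described above can be sketched in a few lines. This is a minimal 1-D scanline sketch under assumed conventions (disparity inversely proportional to depth, a simple two-layer foreground/background model); it is not the patent's own rendering procedure, and the names and the `shift_scale` parameter are illustrative.

```python
# Hypothetical sketch of rendering a new view from a two-layer model
# (foreground + background scanlines, foreground with per-pixel depth).
# Note: a full renderer would z-test overlapping pixels; here any later
# write simply overwrites, which is sufficient for a single layer.

def render_view(fg, fg_depth, bg, shift_scale):
    """Shift foreground scanline pixels by a disparity inversely
    proportional to depth; fill disoccluded pixels from the background."""
    width = len(fg)
    out = [None] * width
    for x in range(width):
        d = int(round(shift_scale / fg_depth[x]))  # disparity ~ 1/depth
        nx = x + d
        if 0 <= nx < width:
            out[nx] = fg[x]
    # disoccluded areas: fall back to the background layer
    for x in range(width):
        if out[x] is None:
            out[x] = bg[x]
    return out
```

With no background layer available, the fallback loop would instead repeat neighboring foreground pixels, which is exactly the pixel copying that the text notes can produce visible artifacts.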
The object of the present invention is to reduce such artifacts resulting from size reduction. This object is achieved by a method of providing a multilayer depth model of a scene, the layers of the depth model containing primary view data for a primary view of the scene from a primary viewing direction and occlusion data associated with the primary view data for use in rendering other views of the scene, in which: the primary view data contain those layer segments of the model which are depth-wise closest with respect to the primary viewing direction, and the occlusion data contain further layer segments of the model; the occlusion data contain a safety region adjacent to a depth transition for which occlusion data are provided; the safety region contains the corresponding segments of the primary view data; and the safety region is located on that side of the respective depth transition which is depth-wise furthest with respect to the primary viewing direction. As a result of resizing or compressing a layered depth model containing occlusion data, for example for storage or transmission, the layered depth model, and in particular the occlusion data, will be degraded. A rendering device may then be unable to determine depth transitions correctly. In particular, where depth transitions are used to enable the use of occlusion data, this can lead to clearly visible artifacts. Such artifacts can be even more irritating than those produced by traditional methods, such as pixel repetition, because they are typically not constant over time and, as a result, stand out through their flickering. Alternatively, resizing may cause the rendering device to identify depth transitions where no such transitions existed before scaling. As a rule, however, artifacts arising from errors of the latter kind are less visible.
The present invention effectively ensures that in the safety regions, which represent areas adjacent to and outside a depth transition for which occlusion data are available, the occlusion data are identical to the primary view data. The result is better detection of depth transitions in a reduced signal, which typically affects both the depth data of the primary view and the depth data of the occlusion. As a result, the reliability of enabling the use of occlusion data from the video content itself is improved. Usually the primary view data and the occlusion data contain: image data, which provide color/texture data for use in rendering de-occluded areas, and depth data, which provide depth values for use in rendering de-occluded areas. Using the occlusion data, the appropriate textures that become visible from a different viewing angle can be applied, with suitable offsets of the corresponding occlusion texture, based on the viewing angle and the depth data. In one embodiment, a multilayer depth model containing more than two layers is compressed into a two-layer model. This model is itself again a multilayer depth model. In the latter multilayer depth model, the top layer contains the primary view data and the occlusion layer contains the occlusion data. An obvious advantage is that the top layer can actually be used on a traditional two-dimensional display device without further processing. If the original multilayer depth model also contains transparency, the top layer may be a composite layer containing data from several other layers. In one embodiment, the occlusion layer contains segments of the top layer wherever there are no further layer segments of the model that are depth-wise next closest to the top layer.
As a result, the occlusion layer is a more complete image for use in rendering, and there is no need to signal the availability of occlusion data in the occlusion layer or to construct a full occlusion layer dynamically. In one embodiment, the respective occlusion data segments in the occlusion layer are based on the maximum displacement resulting from the viewing angle of the primary view and the depth data of the scene. In another embodiment, the size of the respective segments is based on a predetermined width on the inner side of synchronous depth transitions in both the top and the composite layers. In yet another embodiment, the size of the respective segments is based on a predetermined width on the inner side and additionally on a predetermined width on the outer side of the depth transition for which occlusion data are available. Each of these embodiments provides an advantage: the first makes it possible to encode an occlusion layer that enables correct rendering within the limits of the viewing angle and depth; the second makes possible an implementation more convenient for storage; the third makes it possible to prevent compression artifacts, such as mosquito noise, when encoding the occlusion data. In a further embodiment, the size of the safety region is based on a predetermined maximum size-reduction ratio. This feature allows a content provider to define a range of resolutions over which size reduction of the occlusion information should lead to minimal scaling artifacts. As an example, consider content distributed at a resolution of 1920×1080 pixels for which resilience to size reduction down to 640×480 is desired. In this case, in order to preserve a safety region two pixels wide at the lowest resolution, a safety region seven pixels wide is introduced.
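The 1920×1080 to 640×480 example above amounts to a simple calculation. The sketch below reproduces it; the extra one-pixel margin for asymmetric filter kernels is an assumption suggested by a later passage of the text, not an explicit formula from the patent.

```python
# Illustrative computation of the full-resolution safety-region width
# needed to survive downscaling by a given ratio.

def safety_width(full_width, reduced_width, preserved_pixels, filter_margin=1):
    """Width (in full-resolution pixels) needed so that at least
    `preserved_pixels` of the safety region remain after downscaling."""
    ratio = full_width // reduced_width          # e.g. 1920 // 640 = 3
    return preserved_pixels * ratio + filter_margin
```

For the example in the text, `safety_width(1920, 640, 2)` yields the seven-pixel safety region mentioned above.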
Similarly, the size of the safety region may be based on the granularity of the codec used to encode the occlusion data. Since the protection against size reduction provided by the invention can be applied in the x and y directions independently, the safety region can be chosen independently for each respective direction, based on the desired robustness. In a preferred embodiment, the multilayer depth model also contains transparency data, at minimum for the primary view data. In accordance with this embodiment, the transparency value in the safety region adjacent to the depth transition is essentially transparent, and the transparency value at the other end of the safety region is essentially opaque. Strictly speaking, transparency in the safety region is not required for this variant of the invention, since the primary view data and the occlusion data are identical there. However, to make the multilayer depth model more resilient, it is preferable to apply a smooth transition, with or without additional safety zones. In a further preferred embodiment, the safety region contains a predetermined number of consecutive transparent pixels adjacent to the depth transition, possibly followed by a gradient from essentially transparent (adjacent to the consecutive transparent pixels) to essentially opaque at the end of the safety region farthest from the depth transition. Thanks to this particular choice of transparency values, any opacity/transparency/alpha values of the top layer can be largely preserved during size reduction. Similarly, a predetermined number of consecutive transparency values can be used at the end of the safety region after the gradient, to ensure that values in the immediate vicinity of the safety region are not "broken" during size reduction.
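The transparency profile just described (a transparent run at the transition, a gradient, then an opaque run) can be sketched as follows. Alpha 0.0 = transparent and 1.0 = opaque is an assumed encoding; the linear gradient is one possible choice, not mandated by the text.

```python
# Minimal sketch of the safety-region transparency profile: a run of
# fully transparent pixels next to the depth transition, a linear ramp,
# then a run of fully opaque pixels at the far end.

def safety_alpha(width, transparent_run, opaque_run):
    """Alpha values across a safety region, from the depth transition
    (index 0) toward its far end."""
    ramp_len = width - transparent_run - opaque_run
    alphas = [0.0] * transparent_run
    for i in range(ramp_len):
        alphas.append((i + 1) / (ramp_len + 1))   # linear gradient
    alphas += [1.0] * opaque_run
    return alphas
```

The constant runs at both ends are what keeps the top layer's own alpha values, and the pixels just outside the safety region, essentially unaffected after downscaling.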
Alternatively, instead of using a gradient between consecutive transparent and opaque pixels, it may be possible to use a sharp transition, provided that it is located on a block boundary of the codec. Consecutive transparent and/or consecutive opaque pixels can be used to position the transition correctly at the boundary. The resulting aligned transition can be encoded more efficiently. This, in turn, can help prevent the introduction of coding artifacts, leading to increased robustness. A further preferred embodiment provides a protective depth-pit region in the primary view data for an area containing a depth pit; that is, a downward depth transition followed by an upward depth transition within a first threshold number of pixels. For such a depth pit it is preferable that the depth data of the primary view be assigned either one of the depth values of the upper edges of the pit, or an average of the depth values of both upper edges, or a gradient between the depths of the upper edges, or an interpolated segment based on the depth values of both upper edges of the pit. In addition, it is preferable to set the transparency values between the adjacent depth discontinuities to essentially transparent. In this way the depth pit is better protected against size reduction. In a further preferred embodiment, the invention provides a protective depth-peak region in the primary view data for an area containing a depth peak; that is, an upward depth transition followed by a downward depth transition within a second threshold number of pixels. For such a depth peak it is preferable that the depth data of the primary view be assigned the depth value of the top of the peak within the protective depth-peak region, and that the transparency values in the protective depth-peak region outside the depth peak be set to essentially transparent. In this way the depth peak is better protected against size reduction.
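Of the depth-pit options listed above, the interpolated-segment variant can be sketched as below. The 1-D depth-scanline layout and the explicit pit indices are assumptions of this sketch; locating pits via the threshold number of pixels is left out for brevity.

```python
# Hedged sketch of depth-pit protection: the depth values inside the
# pit are replaced by a linear interpolation between the pit's two
# upper edges, so that downscaling filters no longer mix in the pit's
# extreme depth values.

def protect_depth_pit(depth, start, end):
    """Replace depth[start:end] (the pit interior) with a linear
    gradient between the edge depths depth[start-1] and depth[end]."""
    left, right = depth[start - 1], depth[end]
    n = end - start
    out = list(depth)
    for i in range(n):
        out[start + i] = left + (right - left) * (i + 1) / (n + 1)
    return out
```

The other variants named in the text (one edge's depth, or the average of both edges) would simply assign a constant instead of the gradient.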
The method in accordance with the present invention can be used in a method of processing a three-dimensional model of a scene, by obtaining a three-dimensional model of the scene and providing a multilayer depth model of the scene according to the present invention based on that three-dimensional model. The present invention also relates to a signal containing a multilayer depth model of a scene, the layers of the depth model containing encoded primary view data for a primary view of the scene from a primary viewing direction, the primary view data containing those layer segments of the model which are depth-wise closest with respect to the primary viewing direction, and encoded occlusion data associated with the primary view data for use in rendering other views of the scene, in which the occlusion data contain further layer segments of the model, the occlusion data contain a safety region adjacent to a depth transition for which occlusion data are provided, the safety region contains the corresponding segments of the primary view data, and the safety region is located on that side of the respective depth transition which is depth-wise furthest with respect to the primary viewing direction.
The present invention also relates to a device for providing a multilayer depth model of a scene, the layers of the depth model containing primary view data for a primary view of the scene from a primary viewing direction and occlusion data associated with the primary view data for use in rendering other views of the scene, the device being arranged to provide: primary view data containing those layer segments of the model which, with respect to their depth, are closest to the viewpoint of the primary view, and occlusion data containing further layer segments of the model, in which the occlusion data contain a safety region adjacent to a depth transition for which occlusion data are provided, the safety region contains the corresponding segments of the primary view data, and the safety region is located on that side of the respective depth transition which is depth-wise furthest with respect to the primary viewing direction. These and other aspects, features and advantages of the invention will become apparent from and be elucidated with reference to the embodiments described hereinafter.

BRIEF DESCRIPTION OF DRAWINGS

Embodiments of the invention will be described below, solely by way of example, with reference to the drawings, in which the same numerals denote elements with the same functions, and in which: Fig. 1A shows a horizontal cross-section of a multilayer three-dimensional depth model of a scene; Fig. 1B shows the top layer of the multilayer depth model of Fig. 1A; Fig. 1C shows the corresponding prior-art occlusion layer of the multilayer depth model of Fig. 1A; Fig. 1D illustrates the concept of occlusion safety regions for the multilayer depth model of Fig. 1A in accordance with an embodiment of the present invention; Fig. 2A illustrates adding transparency to the multilayer depth model of Fig. 1A; Fig.
2B illustrates adding transparency for the safety regions in accordance with an embodiment of the present invention; Fig. 3 illustrates adding transparency to a safety region adjacent to a depth transition in accordance with a preferred embodiment of the invention; Fig. 4A illustrates the process of protecting depth pits in the primary view data; Fig. 4B illustrates the process of protecting depth peaks in the primary view data; Fig. 5 illustrates a method according to the present invention; Fig. 6A illustrates a processing device and a signal in accordance with the present invention; Fig. 6B illustrates a device and a signal in accordance with the present invention; Fig. 6C illustrates a further device and a signal according to the present invention; Fig. 7A illustrates corresponding fragments of texture, depth, occlusion texture and occlusion depth data of a two-layer depth model in accordance with the current prior art; Fig. 7B illustrates corresponding fragments of texture, depth, occlusion texture and occlusion depth data of a two-layer depth model in accordance with an embodiment of the present invention; Fig. 7C illustrates a rendered view based on the input data of Fig. 7A; and Fig. 7D illustrates a rendered view based on the input data of Fig. 7B.

DETAILED DESCRIPTION OF THE INVENTION

The use of layered depth representations for rendering new views based on a layered depth representation has attracted the attention of researchers for a considerable period of time. In "Layered Depth Images", Shade et al., published in the Proceedings of ACM SIGGRAPH 1998, the storage of a three-dimensional model based on a multilayer image, and the rendering of content based on it, were described.
In "High-quality video view interpolation using a layered representation", Zitnick et al., published in the Proceedings of ACM SIGGRAPH 2004, a multilayer depth model of a scene was disclosed in which the multilayer depth model contains occlusion data in the form of boundary color, boundary depth and boundary alpha (opacity). It also provides a way of rendering views from the multilayer depth model. The authors of the present invention realized that in practice, when content is distributed using a multilayer depth model, it may be necessary to reduce the size of and/or compress the layered depth model. However, when this is done by a software or hardware scaler that is oblivious to the problem(s) addressed by the present invention, it generally leads to visible artifacts. An example of such artifacts is shown in Fig. 7C, where region 745 shows a zoomed-in view of region 740. This problem is particularly relevant in situations where the depth data of the multilayer depth model are used to control the use of the occlusion data. In such situations, reducing the size of the multilayer depth model may affect the primary view data and the occlusion data in such a way that a depth transition is no longer recognized correctly, or is recognized where it would not have been before the size reduction. A possible heuristic procedure for enabling the use of encoded occlusion data is given below. It is noted that it is purely illustrative and that other enabling mechanisms can be applied. However, it will be clear from this description that reducing the size of the depth data in the primary view and in the occlusion data may affect the correct enabling of the use of the occlusion data.
Occlusion data should be provided for a depth transition when: the low pixel of the primary view data and the occlusion data are similar, for example within a fixed threshold value; the high pixel of the primary view data at the transition is substantially larger than the occlusion data pixel, for example by more than a further fixed threshold value; and the high pixel of the primary view data is substantially larger than the low pixel of the primary view data. This prevents transitions from being detected where the primary view data are homogeneous and the occlusion data suddenly "run out". In the present invention, this size-reduction problem is addressed in that the invention provides a solution that allows the model to be rendered after size reduction in a way that leads to fewer artifacts, as shown in Fig. 7D, in which region 755 (compare with 745) shows the artifacts and region 750 shows a zoomed-in view of region 755. Fig. 1A presents a horizontal cross-section of a multilayer depth model of a three-dimensional scene. This particular model has five layers: the layer L1; the layer L2, comprising the segments L2,1 and L2,2; the layers L3 and L4; and the background layer L5. Although in this example all layers are represented by horizontal lines, i.e. they have the same depth throughout, this is not mandatory but merely a simplification for clarity. It should be noted that the depth within a layer will, as a rule, vary. In practice, when object-based encoding is used, such intra-layer depth variations typically belong to the depth profile of the object. When a layered depth image such as that of Fig. 1A is observed from the primary viewing direction (PVD), the resulting view will contain those layer segments of the model which, with respect to their depth, are closest to the viewpoint of the primary view. Taken together, these segments form the top layer TL.
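The illustrative heuristic above can be written down directly. The threshold values and the convention that larger depth values lie nearer the viewer (matching the text's "high pixel" / "low pixel" wording) are assumptions of this sketch; the text states the three conditions only qualitatively.

```python
# Hedged sketch of the heuristic for deciding whether occlusion data
# should be used at a depth transition. All threshold values are
# illustrative assumptions.

def occlusion_enabled(primary_low, primary_high, occl,
                      sim_thresh=8, jump_thresh=32):
    """True when occlusion data should be used at a depth transition.

    primary_low / primary_high: depth values on either side of the
    transition in the primary view; occl: occlusion-layer depth value."""
    similar = abs(primary_low - occl) <= sim_thresh          # condition 1
    high_above_occl = primary_high - occl > jump_thresh      # condition 2
    real_jump = primary_high - primary_low > jump_thresh     # condition 3
    return similar and high_above_occl and real_jump
```

Because all three conditions depend on depth values that a downscaling filter will blur together, the sketch also makes it easy to see how size reduction can flip the decision, which is precisely the failure mode the safety regions guard against.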
To define the top layer, the camera model used to create the occlusion layer is a camera placed at infinity, looking along the primary viewing direction indicated by PVD. The top layer of a layered depth image is the layer which, at a given position (x, y), is depth-wise closest to the camera; in this case the highest layer. Fig. 1C illustrates the segments of the multilayer depth model that form an occlusion layer in accordance with the prior art. The occlusion layer contains those further layer segments of the model which, with respect to their depth, are next closest to the top layer, as indicated in Fig. 1C by the occlusion layer OL. As a result, wherever there is a layer that is next in depth after the top layer, that layer provides the occlusion data. It should be noted that in the figures, "behind" with respect to the viewing direction PVD corresponds to "below". Since the occlusion layer in this example lies below the top layer, the occlusion layer generally changes layer at discontinuities or jumps in the top layer, as indicated by the jumps J1, J2, J3 and J4. Regarding the occlusion layer OL, it should be noted that, although in Fig. 1C the central segment below L1 is included in the occlusion layer, this is not mandatory. In fact, because there is only a very small probability that a rendered view will need this occlusion layer segment, it may be omitted from the model entirely, thus saving storage space. However, if these data are not included, it may be necessary to resort to interpolation or pixel repetition in order to fill such a hole in the occlusion layer during rendering. The foregoing clearly shows that heuristic procedures can play an important role in determining what is and what is not included in the occlusion layer. In Fig.
1D illustrates the occlusion data, here in the form of an occlusion layer OL2, in accordance with the present invention. As shown in Fig. 1D, occlusion-layer segments in the safety regions SR1, SR2, SR3 and SR4, adjacent to the simultaneous depth transitions J1, J2, J3 and J4 in both the top layer TL and the occlusion layer, are replaced by the corresponding top-layer segments TL1, TL2, TL3 and TL4. The safety region is located on the outer side of the depth transition, which corresponds to the side of the depth transition in the top layer that is depth-wise furthest with respect to the primary viewing direction PVD. The occlusion layers and depth layers shown in Fig. 1A-1D contain both texture data (here, the black/grey shading) and depth data (the z-component). It will be obvious to the skilled person that a multilayer depth model containing the primary view data and the occlusion data OL2, as shown in Fig. 1D, can be encoded in a large number of ways. A first approach, for example, is to encode the multilayer depth model as shown in Fig. 1A and, in addition, to encode the information relating to the safety regions TL1, TL2, TL3 and TL4 as if it were occlusion information. Another approach is to encode the top layer, as shown in Fig. 1B, and the occlusion layer OL2, as shown in Fig. 1D, and to use these two layers combined as a simplified layered depth model. However, regardless of how the multilayer depth model is encoded, it is the correspondence of the occlusion data OL2 to the top-layer data TL in the safety regions that distinguishes a multilayer depth model in accordance with the present invention. The occlusion layer as shown in Fig. 1C corresponds to a full occlusion layer, in which every pixel is assigned brightness and depth values. As a result, dynamic construction of the occlusion layer is not required. Fig. 2A shows a layered depth image including the top layer TL, corresponding to that shown in Fig. 1A.
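The two steps discussed around Fig. 1C and Fig. 1D can be sketched together: first splitting a multilayer model into a top layer and a prior-art occlusion layer, then overwriting the occlusion layer with top-layer data inside a safety region on the depth-wise furthest side of each depth transition. The 1-D list of (depth, texture) samples, the convention that a smaller depth value is closer to the camera, and the jump threshold are all assumptions of this sketch, not the patent's encoding.

```python
# Illustrative construction of the top layer (Fig. 1B), the prior-art
# occlusion layer (Fig. 1C), and the safety-region substitution that
# yields an occlusion layer in the spirit of OL2 (Fig. 1D).

def split_layers(model):
    """model[x] is a list of (depth, texture) segments at position x.
    Returns (top, occl): the depth-wise closest segment and the
    next-closest segment (None where only one layer is present)."""
    top, occl = [], []
    for segments in model:
        ordered = sorted(segments)                    # closest first
        top.append(ordered[0])
        occl.append(ordered[1] if len(ordered) > 1 else None)
    return top, occl

def apply_safety_regions(top, occl, width, jump_thresh=2):
    """Copy top-layer samples into the occlusion layer for `width`
    pixels on the deeper (depth-wise furthest) side of each jump."""
    out = list(occl)
    for x in range(1, len(top)):
        step = top[x][0] - top[x - 1][0]
        if step > jump_thresh:                        # deeper side: x onward
            for i in range(x, min(x + width, len(top))):
                out[i] = top[i]
        elif step < -jump_thresh:                     # deeper side: before x
            for i in range(max(0, x - width), x):
                out[i] = top[i]
    return out
```

After `apply_safety_regions`, the occlusion data next to each transition are identical to the primary view data, which is the property the invention relies on to keep depth transitions detectable after downscaling.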
In addition, Fig. 2A illustrates the use of transparency at the ends of the segments of the top layer TL in order to obtain smooth object edges when rendering. Here, transparency is used to prevent jagged edges. It should be noted that towards the ends of the respective layer segments the alpha values, corresponding to opacity, are reduced, which leads to transparent edges in the top layer of the multilayer depth model. Fig. 2B shows a layered image with depth including an occlusion layer in accordance with the embodiment of the present invention described in relation to Fig. 1D. Transparency data of the top layer, which in accordance with the embodiment of the present invention are provided with fixed transparency/opacity values in the safety regions, are included in Fig. 2B. Although, strictly speaking, transparency is not required in the safety regions, as shown above, the addition of such data can improve the downscaling or compression of the multilayer depth model. Fig. 3 shows a further refinement of the transparency within the safety region in accordance with a preferred embodiment of the invention. In this figure a predetermined number of consecutive fully transparent pixels is located adjacent to the depth transition. On the left side of the transition, a predetermined number of consecutive fully opaque pixels is likewise used. Thanks to these additional transparent/opaque pixels in the safety region, the influence of the transparency gradient of the safety region on the downscaled transparency of the top layer is minimized. Thanks to these pixels, the transparency data of the main view remain essentially unaffected by the transparency gradient in the safety region; that is, provided that the number of fully transparent pixels is chosen large enough to compensate for the size-reduction ratio. Although the invention has been described with safety regions along the horizontal direction, i.e.
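The arrangement just described, with fully transparent pixels adjacent to the depth transition, then a ramp, then fully opaque pixels, can be sketched as an alpha profile. The particular pixel counts (2 transparent, 3 ramp, opaque remainder) are illustrative assumptions.

```python
def edge_alpha(width=7, ramp=3, clear=2):
    """Build an alpha (opacity) profile for a safety region next to a
    depth jump: `clear` fully transparent pixels adjacent to the jump,
    then a linear ramp of `ramp` pixels, then fully opaque pixels up to
    `width`.  255 = opaque, 0 = transparent; all counts are illustrative.
    """
    profile = [0] * clear                      # fully transparent padding
    for i in range(1, ramp + 1):               # linear transparency ramp
        profile.append(round(255 * i / (ramp + 1)))
    profile += [255] * (width - len(profile))  # fully opaque remainder
    return profile

print(edge_alpha())  # [0, 0, 64, 128, 191, 255, 255]
```

Because the padding pixels at either end of the ramp are saturated at 0 or 255, a downscaling filter averaging over them leaves the main-view transparency essentially unchanged, which is the stated purpose of the refinement in Fig. 3.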
along the x-axis, the present invention can also be applied along the vertical direction. For example, consider content provided at a maximum resolution of 1920×1080 that should be reducible to 640×480 pixels, which corresponds to a size-reduction ratio of 3 in the horizontal direction; a safety region 7 pixels wide is then preferable in the horizontal direction. A safety margin of 2 pixels is preferable so that encoding does not affect the layer boundary. As a result, theoretically a safety region of 6 pixels would suffice. In practice, as a result of, for example, asymmetric filter kernels, a safety region 7 pixels wide is preferable. Since artifacts in the vertical direction are less important for most current applications, a safety margin of 3 pixels is used there. The use of safety regions for protecting occlusion data is illustrated in Figs. 7A-7D. Fig. 7A presents a set of fragments of input images representing the top layer and the occlusion layer of a multilayer depth model. Image 710 shows a fragment of a color image in which an elliptical object partially overlaps a rectangular striped object. Image 712 shows a fragment of the corresponding depth map, where a lighter tone is closer to the camera; images 710 and 712 together form the top layer. Similarly, the occlusion layer is presented by images 714 and 716, comprising the occluded color image and the occlusion depth map, respectively. Fig. 7B presents a set of input-image fragments 720, 722, 724 and 726 corresponding to the respective images 710, 712, 714 and 716, except that the occlusion layer has been established in accordance with the embodiment of the present invention. In particular, it can be seen that the color data and depth data of the top layer relating to the striped rectangular object cover a larger area of the occlusion layer in Fig. 7B than in Fig. 7A (i.e.
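The worked example above (reduction ratio 3, a 2-pixel coding margin, plus one extra pixel for asymmetric filter kernels, giving 7) can be sketched as a small formula. The decomposition into named parameters is an assumption; the patent only gives the resulting widths.

```python
import math

def safety_width(downscale, coding_margin=2, filter_slack=1):
    """Estimate the safety-region width in pixels for one direction.

    downscale:     linear size-reduction factor (e.g. 1920 / 640 = 3).
    coding_margin: pixels needed so that block coding does not touch
                   the layer boundary (the text suggests 2).
    filter_slack:  extra pixel(s) to absorb asymmetric filter kernels.
    All parameter names are illustrative; the text's worked example
    corresponds to 2 * 3 + 1 = 7.
    """
    return coding_margin * math.ceil(downscale) + filter_slack

print(safety_width(1920 / 640))  # 7
```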
less occlusion data remains for the region behind the striped rectangle). This results from the top-layer data having been "copied" into the occlusion layer. Fig. 7C shows a rendered view based on the input images whose fragments are shown in Fig. 7A, i.e. based on an occlusion layer in accordance with the current prior art (after encoding/decoding). Fig. 7C shows significant rendering artifacts, as marked in field 745 and again in the magnified field 740. Fig. 7D shows a rendered view based on the input images whose fragments are shown in Fig. 7B, i.e. based on an occlusion layer (after encoding/decoding) in accordance with the present invention. Fig. 7D shows significantly fewer artifacts, as marked in field 755 and again in the magnified field 750. An occlusion layer as described above can be used to store or distribute content. As a result, content that has been presented with an occlusion layer treated in this way can be downscaled using the downscaling tools widely available at the priority date. Such content is also more resilient to encoding/decoding. Moreover, there is no need to tailor scaling algorithms to the specific features of the occlusion format actually used; instead, as a rule, the occlusion data can be downscaled in a manner close to that of normal two-dimensional images. Alternatively, the described method can be applied directly before downscaling (not necessarily on the content-generation side). Or it can be used on the content-generation side with protective margins large enough to protect against coding, though not necessarily against large size-reduction factors, and then later with larger protective margins to account for forthcoming downscaling operations. Fig. 4A illustrates the concept of deep-hole protection DHP for the main-view data in accordance with another embodiment of the present invention. Fig. 4A shows a depth profile containing two holes.
Provided that such pits are sufficiently narrow, so that the width of the deep-hole protection field DHP is less than a first threshold value, for example 3 pixels when downscaling by a factor of three, a deep pit may disappear entirely after scaling. Fig. 4A illustrates a deep-pit protection scheme that is complementary to the safety regions of the occlusion data. In Fig. 4A the two holes in the main-view data are "patched" by setting the depth of sections D1 and D2 to values that linearly continue the high edges of the deep pits. In addition, the opacity of sections D1 and D2 is set to 0; that is, they are made transparent. As a result of this approach, the two deep pits in the main-view data are in effect replaced by two transparent windows, which are less likely to cause artifacts. Fig. 4B illustrates the concept of a deep-peak protection field DSP for the main-view data, in accordance with an embodiment of the present invention. Fig. 4B shows a depth profile containing a deep peak. Assuming that this peak is sufficiently narrow, so that its width is less than a certain number of pixels, for example 3 pixels when downscaling by a factor of three, the deep peak may disappear entirely after scaling. Fig. 4B illustrates a maximum-depth protection scheme that is complementary to the safety regions of the occlusion data. In Fig. 4B the peak in the main-view data is widened by setting the depth of sections D3 and D4 to values that continue the high edges of the peak. In addition, the opacity of sections D3 and D4 is set to 0, i.e. they are made transparent. As a result of this approach the peak is made wide enough to survive the scaling process, while its "side wings" are made transparent by means of the opacity values. Fig. 5 shows a block diagram of a method 500 of processing a multilayer depth model of a scene.
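The DHP idea can be sketched on one depth scanline. The convention that a larger depth value means closer to the camera, the width threshold and all names are illustrative assumptions; the patent describes the operation only in terms of Fig. 4A.

```python
def protect_deep_holes(depth, alpha, max_width=3):
    """Patch narrow deep pits in a main-view depth scanline (DHP sketch).

    depth: per-pixel depth values (larger = closer, an assumed convention).
    alpha: per-pixel opacity values (255 = opaque, 0 = transparent).
    A pit narrower than `max_width` pixels (the illustrative threshold for
    a downscale factor of three) is bridged linearly between its high
    edges and made fully transparent, so it survives scaling as a
    transparent window rather than vanishing into an artifact.
    """
    d, a, n = list(depth), list(alpha), len(depth)
    x = 1
    while x < n - 1:
        if d[x] < d[x - 1]:                       # falling edge: pit may start
            end = x
            while end < n and d[end] < d[x - 1]:
                end += 1
            if end < n and end - x <= max_width:  # narrow pit, closed both sides
                left, right = d[x - 1], d[end]
                span = end - (x - 1)
                for i in range(x, end):           # linear bridge across the pit
                    t = (i - (x - 1)) / span
                    d[i] = left + t * (right - left)
                    a[i] = 0                      # patched pixels become transparent
            x = end
        else:
            x += 1
    return d, a
```

The same traversal with the comparisons inverted would sketch the DSP case of Fig. 4B, where a narrow peak is widened and its side wings made transparent.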
The method includes a step of receiving 505 a three-dimensional model of the scene, which may be a layered depth model, a wireframe model, or another suitable three-dimensional model of the scene. The method also includes providing 510 a multilayer depth model of the scene, where the multilayer depth model contains main-view data for a main view of the scene along a principal viewing direction, and occlusion data associated with the main-view data for use when rendering other views of the scene. The main-view data, in turn, contain the segments of the model layers that, with respect to their depth, are closest relative to the principal viewing direction. The occlusion data contain additional layer segments of the model. The occlusion data contain a safety region, adjacent to a depth transition, for which occlusion data are provided. The safety region, in turn, contains the corresponding segments of the main-view data, and the safety region is located on the side of the depth transition where the depth is farther with respect to the principal viewing direction. Optionally, the method depicted in Fig. 5 may also contain a processing step 515 for the multilayer depth model, such as downscaling, encoding, storage, transmission and/or alternative rendering. Fig. 6A shows a device 600 for processing a three-dimensional model of the scene. The device includes receiving means 670 arranged to receive the three-dimensional model of the scene. The receiving means may be a receiver for receiving data over a wireless network or, alternatively, an input node for receiving the three-dimensional model of the scene, for example from a data carrier. The device 600 also includes a device 610 for providing a multilayer depth model 605 of the scene.
The layers of the depth model 605 contain, as applicable, main-view data for a main view of the scene along a principal viewing direction, and occlusion data associated with the main-view data for use when rendering other views of the scene. The device 610 is configured to provide main-view data that contain the segments of the model layers which, with respect to their depth, are closest to the viewpoint in the principal viewing direction, and occlusion data that contain additional layer segments of the model. The occlusion data contain a safety region, adjacent to a depth transition, for which occlusion data are provided, where the safety region contains the corresponding segments of the main-view data, and where the safety region is located on the side of the depth transition that is farther with respect to the principal viewing direction. Optionally, the device may also include processing means 680, which may be, for example, a general-purpose processor, an application-specific integrated circuit (ASIC) or another processing platform for processing the multilayer depth model 605. The processing may include, for example, downscaling, encoding, storage, transmission and/or alternative rendering. The device 610, as shown in Fig. 6A, may also be adapted to output the signal 615 directly to a consuming device 620 or, alternatively, as shown in Fig. 6B, the signal 615 may be distributed over a network 630, which may be, for example, a home network, another internal network or the Internet. Although the device 610 has been described for processing a three-dimensional model of the scene, the three-dimensional model of the scene can also be a multilayer depth model, which in a particularly advantageous case may take a two-layer form including a top layer 625, which may also be referred to as a composite layer, and an occlusion layer 635, as shown in Fig. 6C.
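The two-layer form just mentioned, a composite top layer plus an occlusion layer, can be sketched as a minimal data container. All field and class names here are illustrative assumptions, not terms from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Layer:
    color: List[Tuple[int, int, int]]  # per-pixel texture (RGB)
    depth: List[int]                   # per-pixel depth along the PVD
    alpha: List[int]                   # per-pixel opacity (255 = opaque)

@dataclass
class MultilayerDepthModel:
    # main-view data: the layer segments closest with respect to the PVD
    top: Layer
    # occlusion data: additional segments, including safety regions that
    # duplicate top-layer data next to depth transitions
    occlusion: Layer

model = MultilayerDepthModel(
    top=Layer([(0, 0, 0)], [10], [255]),
    occlusion=Layer([(128, 128, 128)], [5], [255]),
)
print(model.top.depth[0])  # 10
```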
It should be borne in mind that, in the above description, embodiments of the invention have for clarity been described with reference to distinct functional units and processors. However, it will be obvious that any suitable distribution of functionality between functional units or processors may be used without detracting from the invention. For example, functionality described as performed by separate units, processors or controllers may be executed in the same processor or controller. Hence, references to specific functional units should be considered only as references to suitable means for providing the described functionality, and not as indicating a strict logical or physical structure or organization. The invention can be implemented in any suitable form, including a device, a program, firmware, or any combination thereof. The invention can optionally be implemented, at least partially, as a computer program running on one or more data processors and/or digital signal processors. The elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. The functionality can accordingly be implemented in a single unit, in multiple units, or as part of other functional units. Thus, the invention can be implemented as a single unit, or may be physically and functionally distributed between different units and processors. Although the present invention has been described in connection with certain embodiments, this does not imply a limitation to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims. In addition, while a feature may appear to be described in connection with a specific embodiment, the person skilled in the art will recognize that various aspects of the described embodiments can be combined in accordance with the invention.
In the claims the term "contains" does not exclude other elements or steps. Moreover, although a number of means, elements or method steps may be described individually, they can be implemented by, for example, a single unit or processor. Additionally, although individual features may be included in different claims, it may be advantageous to combine them, and their inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also, the inclusion of a feature in one category of claims does not imply a limitation to this category, but rather indicates that the feature is equally applicable to other categories of claims, as appropriate. Moreover, the order of features in the claims does not imply any particular order in which the features must operate, and in particular the order of individual steps in a method claim does not imply that the steps must be performed in that order. Rather, the steps can be performed in any suitable order. In addition, references in the singular do not exclude a plurality. Thus, "first", "second", etc. do not exclude a plurality. Reference signs in the claims are provided merely as a clarifying example and should not be construed as limiting the scope of the claims in any way.
1. Method of providing (510) a multilayer depth model (605) of a scene, the layers of the depth model (605) containing main-view data for a main view of the scene along a principal viewing direction (PVD) and occlusion data associated with the main-view data for use when rendering other views of the scene, in which
2. The method according to claim 1, in which
3. The method according to claim 1, in which
4. The method according to claim 3, in which
5. The method according to any one of claims 1 to 4, in which the size of the occlusion data is based on one of:
6. The method according to any one of claims 1 to 4, in which
7. The method according to any one of claims 1 to 4, in which
8.
The method according to claim 7, in which
9. The method according to claim 7, in which
10. The method according to claim 7, in which
11. A processing method (500) for a three-dimensional model of a scene, the method comprising:
12. Device (610) for providing a multilayer depth model (605) of a scene, the layers of the depth model (605) containing main-view data for a main view of the scene along a principal viewing direction (PVD) and occlusion data associated with the main-view data for use when rendering other views of the scene, the device (610) being configured to provide:
13. Device (600) for processing a three-dimensional model of a scene, the device including: