Methods for automatic identification of participants for multimedia conference event
Related patents for Russian patent Methods for automatic identification of participants for multimedia conference event (RU 2488227):
Method of creating virtual space effect with composite video / 2461143
A first image of the room in which said videoconferencing participant is located is obtained; an image of the room in which the remote videoconferencing participants are located is obtained; second images containing the remote videoconferencing participants are selected therefrom; a third image is obtained by superposing the first and second images, wherein the position of the second images relative to the first image is chosen to create the most realistic virtual space effect, and the third image is displayed for said videoconferencing participant.
Facility to control sessions of secure video conferencing in coded communication network / 2460235
In a system with distributed access control functions, mechanisms are used to control videoconferencing sessions based on analysis of the data flows sent over signalling protocols to establish connections when transmitting audio and video data in IP networks.
System, method and multipoint control unit for providing multi-language conference / 2426247
The system includes conference terminals (CT) and a multipoint control unit (MCU). A CT can process the speech of a conference site and transmit it to the MCU, and can process and output audio data received from the MCU. At least one of the CTs is an interpreting terminal capable of interpreting the speech of a conference site according to the audio data transmitted from the MCU, processing the interpreted audio data and transmitting the processed audio data to the MCU. The MCU is adapted to mix the audio data from the CTs in different sound channels according to language type, and then transmit the mixed audio data to the CTs.
Connection of independent multimedia sources into conference communication / 2398362
The system includes a content node which has content and a network address and is connected to a network, and first and second user nodes connected to each other over the network in order to set up a conference. The first user node can provide the address of the content node over the network to the first and second user nodes, so that both nodes can access the content of said content node during the conference.
Control of conference layout and control protocol / 2396730
The system for organisation of teleconferences includes a network, a teleconference organisation unit and multiple units connected to each other to form a conference. Each unit has a video display with an image layout; at least one of the units individually controls, at least partially, the image layout of each unit in the conference with a specific format, which may be unique for each unit. The method of providing conference communication includes the stages of forming a conference from multiple units connected to each other via the network, wherein each unit communicates with the other units only when a change takes place.
System and methods of projecting content from computer devices / 2389067
At least one indicator corresponding to alternative display devices available for projecting content is displayed through a user interface mechanism. At least one alternative display device is selected through the user interface mechanism from the alternative display devices indicated as available by said at least one indicator, and the content is sent to the selected alternative display device or devices.
Notification about information services by means of broadcast or multi-address transmission / 2298288
The receiver receives IP-session notification signals from a ground-based digital video broadcasting system. Each IP-session notification (RM) contains information related to the notification at the next lower level, as well as information identifying the amount of notification data, such as the number of notification messages and a waiting-time allocation. The receiver is told how many messages (X1 to X3) it should expect before it has received all relevant notification messages. The waiting-time value informs the receiver how long it may wait before receiving messages X1 to X3 again, which may have altered content. After the original receipt of the messages, the receiver may remain in an inactive state (E), or may receive notification messages from another channel Y (D).
System for organization of video conference service, method for providing video conference service and service center / 2279767
The service processing module included in the service center, which controls the video conference, uses an expandable multilevel client-server architecture, wherein the service processing module of the n-th level treats the service processing module of the (n-1)-th level as a client, where n is a natural number greater than one.
Method for realization of combining multi-sets of multiple digital images and of bus interface equipment / 2273963
The bus interface has an image processing block for m sub-image channels, image processing blocks for combining images of the n sets, n sets of video data buses, address and control buses, a logic module for controlling access to pixel data and bus switching, and a module for forming clock and synchronization signals. Bus gating signals control the pixel data of the m sub-image channels and output them to one bus of the n-numbered video data group at a given moment in time. The sets of image processing blocks for combining images combine the pixel data of the sub-images on the buses of the n video data groups and form the data of the combined image.
Method for calling video communication service using video communication terminal and appropriate form of message data / 2263410
At least one multipoint communication control device is connected to video communication terminals through a communication network. The multipoint control device is called from a first video communication terminal, and the conference information necessary for convening a conference is sent to the multipoint communication control device through a main channel formed between the first video communication terminal and that device; the conference is then convened by the multipoint communication control device in accordance with the conference information.
Method for implementing radiophone based conference call and dynamic grouping / 2438266
The dynamic grouping function and the members of an intended group are selected through the man-machine interface of a radiophone; a control packet is generated in a coding format and with an extensible interface inherent to a preset protocol stack module, and is then transmitted in a control frame to the corresponding receiving end. The receiving end decodes the control frame and initiates a frequency modification module to modify the frequency information by adding the group ID of the intended group.
Methods and device for provision of direct telephone communication and teleconference service / 2407235
A method to control permitted conversation time in a direct telephone communication or teleconference service using a terminal and a controller includes a stage at which, in response to the controller's decision to grant a conversation request received from the terminal, an indication of the permitted conversation time for the granted voice data packet is transmitted to the terminal. This indication is intended for use in the terminal to track the remaining permitted conversation time during the granted voice data packet.
Method for establishment of connection / 2407210
A method is proposed for establishing a connection, via a private automatic exchange, between a calling subscriber and an internal subscriber. The method contains the following stages: establishing a connection from the calling subscriber to the private automatic exchange; sending information on multiple internal subscribers of this exchange in graphical form to the calling subscriber; selecting one of the internal subscribers presented in graphical form; and obtaining the selection information so that the private automatic exchange can establish the connection.
Method of designing two-level audio conferencing system for railway transport / 2400938
The invention is meant for audio conferencing between parties in spatially separated studios and personnel facilities. With a microprocessor control device in the switching stations and a special protocol of information-logical interaction between the communicating objects of the communication network, conferencing can be controlled remotely from one point, information on the state of the equipment can be transmitted, and the communication circuit can be configured; this speeds up control of the transportation process and upkeep of the railway transport infrastructure, and increases the reliability of the audio conferencing system.
Intelligent method, audio limiting unit and system / 2398361
The system for setting up teleconferences includes a network and multiple units, such as terminals, which exchange audio streams with each other over the network in order to set up a conference. Each terminal can detect an overload state, in which more simultaneous audio streams are being transmitted by the terminals than a predefined limit, and, together with the other terminals, control the number of simultaneously transmitted audio streams in order to end the overload state.
User equipment, method and system for managing simultaneous communication session / 2394393
The invention relates to communication sessions based on IP Multimedia Subsystem (IMS) services, and particularly to a system for managing simultaneous communication sessions for services such as the Push-to-Talk over Cellular (PoC) half-duplex multipoint service. The communication system includes user equipment (UE) that sends a Session Initiation Protocol (SIP) message containing the simultaneous-session support (SSS) setting associated with the UE to a home network which includes a server. The server stores the SSS setting information and establishes the server's communication-session operating mode for a simultaneous communication session of the UE based in part on the SSS setting information.
Operational communication system for section of rail road / 2393642
The system comprises switching stations, station and dispatcher communication channels, and speaking devices; a cellular communication gateway is connected to a switching station via an additional linear set of wayside communication to provide communication on a section of railroad from any point of a span covered by cellular communication.
Call setup in videophone network / 2364051
The invention concerns methods of video call setup in a mobile videophone network. The claimed call setup methods reduce setup delays by retransmitting NSRP packets in anticipation, before expiration of the typical waiting period required for reception of an NSRP acknowledgement (ACK) packet from the remote terminal. Retransmitted NSRP packets can be sent in free periods, when the modem level of the transmitting mobile terminal is not receiving other data for transfer to the remote mobile terminal, thus taking advantage of vacant bandwidth. Multiplexed data received by the modem can be stored in a local buffer at the modem level for retransmission. Thus, retransmission does not require regeneration of the original call setup messages or re-multiplexing; the previously generated and multiplexed data are instead stored in a buffer at the modem level for retransmission.
Method of mixing subscriber voice signals during voip-conferencing / 2341907
On a VoIP server, the encoded voice signal of each subscriber, received from the data transfer network, is decoded. The volume level of each subscriber's voice signal is measured. Signals whose volume level exceeds a preset level are summed, and the obtained sum is encoded and transferred to each subscriber. When transferring the obtained sum to a subscriber whose own voice signal exceeds the preset level, that subscriber's signal is subtracted from the sum.
Method and apparatus for generating security data and computer program product / 2487482
A method of generating security data for conducting a secure communication session between a first entity and at least a second entity according to a defined protocol for creating secure communication sessions, wherein the security data are data for executing the SSL or TLS protocol. The method comprises: the first entity installing a secure smart card associated with the first entity; generating at least a portion of the security data in the secure smart card based on instructions transmitted by the first entity; transmitting the security data from the secure smart card to the first entity; and setting up the secure communication session between the first entity and the at least second entity with the transmitted security data.
FIELD: radio engineering, communication. SUBSTANCE: the apparatus for automatic identification of participants in a multimedia conference event comprises a content-based annotation component that operates to receive a meeting invitee list for the multimedia conference event, receive multiple input media streams from multiple meeting consoles, and annotate the video content from each input media stream with identifying information for each participant within that stream to form a corresponding annotated media stream, wherein the identifying information for each participant moves with that participant when the participant moves within the video content. EFFECT: easier identification of participants in a virtual conference hall. 20 cl, 6 dwg
Prior art

[0001] Multimedia conferencing typically allows multiple participants to communicate and share different types of media content in a collaborative, real-time meeting over a network. A multimedia conference system may display the various types of media content using various graphical user interface (GUI) views. For example, one GUI view may include video images of the participants, another GUI view may include presentation slides, yet another GUI view may include text messages between participants, and so forth. In this manner, geographically dispersed participants can interact and exchange information in a virtual meeting environment similar to a physical meeting environment where all the participants are in one room.

[0002] In a virtual meeting environment, however, it can be difficult to identify the various participants. This problem typically grows as the number of participants increases, potentially leading to confusion and awkwardness among the participants. Techniques aimed at improving identification in a virtual meeting environment may enhance user experience and convenience.

The invention

[0003] Various embodiments may be generally directed to multimedia conference systems. Some embodiments may be particularly directed to techniques for automatic identification of participants for a multimedia conference event. A multimedia conference event may include multiple participants, some of whom may gather in a conference room, while others may participate in the event from a remote location.

[0004] In one embodiment, for example, an apparatus may comprise a content-based annotation component operative to receive a meeting invitee list for a multimedia conference event. The content-based annotation component may receive multiple input media streams from multiple meeting consoles. The content-based annotation component may annotate media frames of each input media stream with identifying information for each participant within each input media stream to form a corresponding annotated media stream. Other embodiments are described and claimed.

[0005] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.

Brief description of drawings

[0006] Fig. 1 illustrates an embodiment of a multimedia conference system.

[0007] Fig. 2 illustrates an embodiment of a content-based annotation component.

[0008] Fig. 3 illustrates an embodiment of a multimedia conference server.

[0009] Fig. 4 illustrates an embodiment of a logic flow.

[0010] Fig. 5 illustrates an embodiment of a computing architecture.

[0011] Fig. 6 illustrates an embodiment of an article of manufacture.

Detailed description

[0012] Various embodiments include physical or logical structures arranged to perform certain operations, functions or services. The structures may comprise physical structures, logical structures, or a combination of both. The physical or logical structures are implemented using hardware elements, software elements, or a combination of both.
References to particular hardware or software elements in the descriptions of embodiments, however, are meant as examples and not limitations. The decision to use hardware elements or software elements to implement an embodiment in practice may depend on a number of external factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints. Furthermore, the physical or logical structures may have corresponding physical or logical connections to exchange information between the structures in the form of electronic signals or messages. The connections may comprise wired and/or wireless connections as appropriate for the information or the particular structure. It is worthy to note that any reference to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification do not necessarily all refer to the same embodiment.

[0013] Various embodiments may be generally directed to multimedia conference systems arranged to provide meeting and collaboration services to multiple participants over a network. Some multimedia conference systems may be designed to operate with various packet-based networks, such as the Internet or World Wide Web ("web"), to provide web-based conferencing services. Such implementations are sometimes referred to as web conferencing systems. An example of a web conferencing system may include MICROSOFT(R) OFFICE LIVE MEETING, made by Microsoft Corporation, Redmond, Washington. Other multimedia conference systems may be designed to operate for a private network, business, organization or enterprise, and may utilize a multimedia conference server, such as MICROSOFT OFFICE COMMUNICATIONS SERVER, made by Microsoft Corporation, Redmond, Washington. It may be appreciated, however, that implementations are not limited to these examples.

[0014] A multimedia conference system may include, among other network elements, a multimedia conference server or other processing device arranged to provide web conferencing services. For example, a multimedia conference server may include, among other server elements, a server meeting component operative to control and mix different types of media content for a meeting and collaboration event, such as a web conference. A meeting and collaboration event may refer to any multimedia conference event offering various types of multimedia information in a real-time or live online environment, and is sometimes referred to herein simply as an "event", "multimedia event" or "multimedia conference event".

[0015] In one embodiment, the multimedia conference system may further include one or more computing devices implemented as meeting consoles. Each meeting console may be arranged to participate in a multimedia event by connecting to the multimedia conference server.
Different types of media information from the various meeting consoles may be received by the multimedia conference server during the multimedia event, which in turn distributes the media information to some or all of the other meeting consoles participating in the multimedia event. As such, any given meeting console may have a display with multiple media content views of different types of media content. In this manner, various geographically disparate participants can interact and exchange information in a virtual meeting environment similar to a physical meeting environment where all the participants are in one room.

[0016] In a virtual meeting environment, it may be difficult to identify the various participants of a meeting. The participants in a multimedia conference event are typically listed in a GUI view with a participant roster. The participant roster may have some identifying information for each participant, including a name, location, image, title, and so forth. The participants and the identifying information for the participant roster, however, are typically derived from the meeting console used to join the multimedia conference event. For example, a participant typically uses a meeting console to join a virtual meeting room for a multimedia conference event. Prior to joining, the participant provides various types of identifying information to perform authentication with the multimedia conference server. Once the multimedia conference server authenticates the participant, the participant is allowed access to the virtual meeting room, and the multimedia conference server adds the identifying information to the participant roster. In some cases, however, multiple participants may gather in a conference room and share various types of multimedia equipment connected to a single local meeting console to communicate with other participants having remote meeting consoles. Because there is only one local meeting console, a single participant in the conference room typically uses the local meeting console to join the multimedia conference event on behalf of all the participants in the conference room. In many cases, the participant using the local meeting console may not even be registered on the local meeting console. Consequently, the multimedia conference server may not have any identifying information for any of the participants in the conference room, and therefore cannot update the participant roster.

[0017] The conference room scenario poses additional problems for participant identification. The participant roster and the corresponding identifying information for each participant are typically shown in a GUI view separate from the other GUI views with multimedia content. There is no direct correspondence between a participant on the participant roster and the image of that participant in the streamed video content. Consequently, when the video content for a conference room contains images of multiple participants, it becomes difficult to map a given participant and identifying information to a particular participant in the video content.

[0018] To solve these and other problems, some embodiments are directed to techniques for automatic identification of participants for a multimedia conference event.
More specifically, some embodiments are directed to techniques for automatically identifying multiple participants in video content recorded from a conference room. In one embodiment, for example, an apparatus such as a multimedia conference server may comprise a content-based annotation component operative to receive a meeting invitee list for a multimedia conference event. The content-based annotation component may receive multiple input media streams from multiple meeting consoles, one of which may originate from a local meeting console in a conference room. The content-based annotation component may annotate media frames of each input media stream with identifying information for each participant within each input media stream to form a corresponding annotated media stream. The content-based annotation component may perform annotation so as to locate or position the identifying information in relative proximity to a participant in the video content, and move the identifying information when the participant moves within the video content. In this manner, the automatic identification technique may allow attendees of a multimedia conference event to more easily identify each other in a virtual conference room. As a result, the automatic identification technique can enhance the capabilities, scalability, modularity, extensibility or interoperability of an operator, device or network.

[0019] Fig. 1 illustrates a block diagram of a multimedia conference system 100. The multimedia conference system 100 may represent a general system architecture suitable for implementing various embodiments. The multimedia conference system 100 may comprise multiple elements. An element may comprise any physical or logical structure arranged to perform certain operations. Each element may be implemented as hardware, software, or any combination thereof, as desired for a given set of design parameters or performance constraints. Examples of hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application-specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field-programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include any software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, interfaces, software interfaces, application programming interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Although the multimedia conference system 100 as shown in Fig. 1 has a limited number of elements in a certain topology, it may be appreciated that the multimedia conference system 100 may include more or fewer elements in alternate topologies as desired for a given implementation. The embodiments are not limited in this context.
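As a concrete illustration of the moving-annotation behaviour described in paragraph [0018], the following minimal Python sketch draws a participant's identifying label next to a detected face and lets the label follow that face from frame to frame. It assumes the OpenCV library and its bundled Haar cascade; the function name annotate_frame and the nearest-face tracking shortcut are illustrative only and are not taken from the patent.

    import cv2

    # Stock OpenCV face detector, used here as a stand-in for the
    # patent's face-determination step.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def annotate_frame(frame, participants):
        """Overlay identifying information next to each participant.

        participants: dict mapping a display name to the (x, y) of that
        participant's last known face position; it is updated in place so
        the label moves with the participant across frames.
        """
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for name, (px, py) in participants.items():
            if len(faces) == 0:
                continue
            # Nearest detected face to the last known position: a crude
            # stand-in for real tracking by the positioning module.
            x, y, w, h = min(faces,
                             key=lambda f: (f[0] - px) ** 2 + (f[1] - py) ** 2)
            participants[name] = (int(x), int(y))
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
            cv2.putText(frame, name, (x, y - 8), cv2.FONT_HERSHEY_SIMPLEX,
                        0.6, (0, 255, 0), 2)
        return frame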
[0020] In various embodiments, the multimedia conference system 100 may comprise, or form part of, a wired communication system, a wireless communication system, or a combination of both. For example, the multimedia conference system 100 may include one or more elements arranged to communicate information over one or more types of wired communication links. Examples of a wired communication link may include, without limitation, a wire, cable, bus, printed circuit board (PCB), Ethernet connection, peer-to-peer (P2P) connection, backplane, switch fabric, semiconductor material, twisted-pair wire, coaxial cable, fiber-optic connection, and so forth. The multimedia conference system 100 may also include one or more elements arranged to communicate information over one or more types of wireless communication links. Examples of a wireless communication link may include, without limitation, a radio channel, an infrared channel, a radio-frequency (RF) channel, a Wireless Fidelity (WiFi) channel, a portion of the RF spectrum, and/or one or more licensed or license-free frequency bands.

[0021] In various embodiments, the multimedia conference system 100 may be arranged to communicate, manage or process different types of information, such as media information and control information. Examples of media information may generally include any data representing content meant for a user, such as voice information, video information, audio information, image information, textual information, numerical information, alphanumeric symbols, graphics, and so forth. Media information may also sometimes be referred to as "media content". Control information may refer to any data representing commands, instructions or control words meant for an automated system. For example, control information may be used to route media information through a system, to establish a connection between devices, to instruct a device to process media information in a predetermined manner, and so forth.

[0022] In various embodiments, the multimedia conference system 100 may include a multimedia conference server 130. The multimedia conference server 130 may comprise any logical or physical entity arranged to establish, manage or control a multimedia conference call between the meeting consoles 110-1-m over a network 120. The network 120 may comprise, for example, a packet-switched network, a circuit-switched network, or a combination of both. In various embodiments, the multimedia conference server 130 may comprise or be implemented as any processing or computing device, such as a computer, a server, a server array or server farm, a workstation, a minicomputer, a mainframe computer, a supercomputer, and so forth. The multimedia conference server 130 may comprise or implement a general or specific computing architecture suitable for communicating and processing multimedia information. In one embodiment, for example, the multimedia conference server 130 may be implemented using the computing architecture described with reference to Fig. 5. Examples of the multimedia conference server 130 may include, without limitation, MICROSOFT OFFICE COMMUNICATIONS SERVER, MICROSOFT OFFICE LIVE MEETING, and so forth.

[0023] A specific implementation of the multimedia conference server 130 may vary depending on the set of communication protocols or standards to be used for the multimedia conference server 130.
In one example, the multimedia conference server 130 may be implemented in accordance with the Internet Engineering Task Force (IETF) Multiparty Multimedia Session Control (MMUSIC) Working Group Session Initiation Protocol (SIP) series of standards and/or variants. SIP is a proposed standard for initiating, modifying and terminating an interactive user session that involves multimedia elements such as video, voice, instant messaging, online games and virtual reality. In another example, the multimedia conference server 130 may be implemented in accordance with the International Telecommunication Union (ITU) H.323 series of standards and/or variants. The H.323 standard defines a multipoint control unit (MCU) to coordinate conference call operations. In particular, the MCU includes a multipoint controller (MC) that handles H.245 signaling, and one or more multipoint processors (MP) to mix and process the data streams. Both the SIP and H.323 standards are essentially signaling protocols for Voice over Internet Protocol (VoIP) or Voice over Packet (VOP) multimedia conference call operations. It may be appreciated that other signaling protocols may be implemented for the multimedia conference server 130 and still fall within the scope of the embodiments.

[0024] In general operation, the multimedia conference system 100 may be used for multimedia conference calls. Multimedia conference calls typically involve communicating voice, video and/or data information between multiple endpoints. For example, a public or private packet network 120 may be used for audio conference calls, video conference calls, audio/video conference calls, collaborative document sharing and editing, and so forth. The packet network 120 may also be connected to the Public Switched Telephone Network (PSTN) via one or more suitable VoIP gateways arranged to convert between circuit-switched information and packet information.

[0025] To establish a multimedia conference call over the packet network 120, each meeting console 110-1-m may connect to the multimedia conference server 130 via the packet network 120 using various types of wired or wireless communication links operating at varying connection speeds or bandwidths, such as a lower-bandwidth PSTN telephone connection, a medium-bandwidth DSL modem or cable modem connection, and a higher-bandwidth intranet connection over a local area network (LAN), for example.

[0026] In various embodiments, the multimedia conference server 130 may establish, control and manage multimedia conference calls between the meeting consoles 110-1-m. In some embodiments, a multimedia conference call may comprise a live web-based conference call using a web conferencing application that provides full collaboration capabilities. The multimedia conference server 130 operates as a central server that controls and distributes the media information in the conference. It receives media information from the various meeting consoles 110-1-m, performs mixing operations for the multiple types of media information, and forwards the media information to some or all of the other participants. One or more of the meeting consoles 110-1-m may join a conference by connecting to the multimedia conference server 130. The multimedia conference server 130 may implement various admission control techniques to authenticate and add the meeting consoles 110-1-m in a secure and controlled manner.
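To make the central-server role of paragraph [0026] concrete, here is a deliberately small Python sketch of the receive-and-redistribute loop. It is a toy model under stated assumptions (frames arrive as opaque objects; transcoding, per-type mixing and authentication are omitted); the class name ConferenceServer and its methods are hypothetical and are not an actual MICROSOFT OFFICE COMMUNICATIONS SERVER API.

    import asyncio

    class ConferenceServer:
        """Toy model of the central mixing/distribution role of server 130."""

        def __init__(self):
            self.consoles = {}            # console id -> outbound frame queue

        def join(self, console_id):
            """Admit an authenticated meeting console and return its queue."""
            queue = asyncio.Queue()
            self.consoles[console_id] = queue
            return queue

        async def on_media(self, sender_id, frame):
            """Forward one media frame to every other joined console."""
            for console_id, queue in self.consoles.items():
                if console_id != sender_id:   # do not echo back to the sender
                    await queue.put(frame)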
[0027] In various embodiments, the multimedia conference system 100 may include one or more computing devices implemented as meeting consoles 110-1-m, arranged to connect to the multimedia conference server 130 over one or more communication connections via the network 120. For example, a computing device may implement a client application that can host multiple meeting consoles, each representing a separate conference at the same time. Similarly, the client application can receive multiple audio, video and data streams. For example, the video streams from all or a subset of the participants may be displayed as a mosaic on the participant's display, with a main video window for the current active speaker and panoramic views of the other participants in other windows.

[0028] The meeting consoles 110-1-m may comprise any logical or physical entity arranged to participate or engage in a multimedia conference call managed by the multimedia conference server 130. The meeting consoles 110-1-m may be implemented as any device that includes, in its most basic form, a processing system comprising a processor and memory, one or more multimedia input/output (I/O) components, and a wireless and/or wired network connection. Examples of multimedia I/O components may include audio I/O components (e.g., microphones, speakers), video I/O components (e.g., video camera, display), tactile I/O components (e.g., vibrators), user data I/O components (e.g., keyboard, trackball, keypad, touch screen), and so forth. Examples of the meeting consoles 110-1-m may include a telephone, a VoIP or VOP telephone, a packet telephone designed to operate on the PSTN, an Internet telephone, a video telephone, a cellular telephone, a personal digital assistant (PDA), a combined cellular telephone and PDA, a mobile computing device, a smart phone, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a handheld computer, a network appliance, and so forth. In some implementations, the meeting consoles 110-1-m may be implemented using a general or specific computing architecture similar to the computing architecture described with reference to Fig. 5.

[0029] The meeting consoles 110-1-m may comprise or implement respective client meeting components 112-1-n. The client meeting components 112-1-n may be designed to interoperate with the server meeting component 132 of the multimedia conference server 130 to establish, control or manage a multimedia conference event. For example, the client meeting components 112-1-n may comprise or implement the appropriate application programs and user interface controls to allow the respective meeting consoles 110-1-m to participate in a web conference facilitated by the multimedia conference server 130. This may include input equipment (e.g., a video camera, microphone, keyboard, mouse, controller, and so forth) to capture media information provided by the operator of a meeting console 110-1-m, and output equipment (e.g., a display, speakers, and so forth) to reproduce media information from the operators of the other meeting consoles 110-1-m. Examples of client meeting components 112-1-n may include, without limitation, MICROSOFT OFFICE COMMUNICATOR or the MICROSOFT OFFICE LIVE MEETING Windows Based Meeting Console, and so forth.

[0030] As shown in the illustrated embodiment of Fig. 1, the multimedia conference system 100 may include a conference room 150. An enterprise or business typically uses conference rooms to hold meetings.
Such meetings include multimedia conference events with participants located inside the conference room 150 and remote participants located outside the conference room 150. The conference room 150 may have various computing and communication resources available to support multimedia conference events and to provide multimedia information between one or more remote meeting consoles 110-2-m and a local meeting console 110-1. For example, the conference room 150 may include the local meeting console 110-1 located inside the conference room 150.

[0031] The local meeting console 110-1 may be connected to various multimedia input devices and/or multimedia output devices capable of capturing, communicating or reproducing multimedia information. The multimedia input devices may comprise any logical or physical device arranged to capture or receive multimedia information as input from operators within the conference room 150, including audio input devices, video input devices, image input devices, text input devices and other multimedia input equipment. Examples of multimedia input devices may include, without limitation, video cameras, microphones, microphone arrays, conference telephones, projection equipment for presentations, interactive projection equipment for presentations, voice-to-text components, text-to-voice components, voice recognition components, pointing devices, keyboards, touch screens, tablet computers, handwriting recognition devices, and so forth. An example of a video camera may include a ring camera, such as the MICROSOFT ROUNDTABLE made by Microsoft Corporation, Redmond, Washington. The MICROSOFT ROUNDTABLE is a videoconferencing device with a 360-degree camera that provides remote participants a panoramic video of everyone sitting around the conference table. The multimedia output devices may comprise any logical or physical device arranged to reproduce or display multimedia information as output to operators of the remote meeting consoles 110-2-m, including audio output devices, video output devices, image output devices, text output devices and other multimedia output equipment. Examples of multimedia output devices may include, without limitation, electronic displays, video projectors, speakers, vibrating units, printers, facsimile machines, and so forth.

[0032] The local meeting console 110-1 in the conference room 150 may include various multimedia input devices arranged to capture media content from the conference room 150, including the participants 154-1-p, and to stream the media content to the multimedia conference server 130. In the illustrated embodiment shown in Fig. 1, the local meeting console 110-1 includes a video camera 106 and an array of microphones 104-1-r. The video camera 106 may capture video content, including video content of the participants 154-1-p present in the conference room 150, and stream the video content to the multimedia conference server 130 via the local meeting console 110-1. Similarly, the array of microphones 104-1-r may capture audio content, including audio content from the participants 154-1-p present in the conference room 150, and stream the audio content to the multimedia conference server 130 via the local meeting console 110-1. The local meeting console may also include various media output devices, such as a display or video projector, to show one or more GUI views with video content or audio content from the other participants on remote meeting consoles 110-2-m, received via the multimedia conference server 130.
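The capture-and-stream role of the local meeting console in paragraph [0032] can be sketched as follows. OpenCV is assumed for video capture, and send_to_server is a hypothetical placeholder for the console's media connection to the server 130; neither name comes from the patent.

    import cv2

    def stream_room_video(camera_index, send_to_server):
        """Capture frames from a room camera (e.g. camera 106) and push
        them to the conference server as one input media stream."""
        cap = cv2.VideoCapture(camera_index)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break                    # camera stopped or disconnected
                send_to_server(frame)
        finally:
            cap.release()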
[0033] The meeting consoles 110-1-m and the multimedia conference server 130 may communicate media information and control information using various media connections established for a given multimedia conference event. The media connections may be established using various VoIP signaling protocols, such as the SIP series of protocols. The SIP series of protocols are application-layer control (signaling) protocols for creating, modifying and terminating sessions with one or more participants. These sessions include Internet multimedia conferences, Internet telephone calls and multimedia distribution. Members in a session can communicate via multicast, via a mesh of unicast relations, or via a combination of these. SIP is designed as part of the overall IETF multimedia data and control architecture, currently incorporating protocols such as the Resource Reservation Protocol (RSVP) (RFC 2205) for reserving network resources, the Real-time Transport Protocol (RTP) (RFC 1889) for transporting real-time data and providing quality-of-service (QOS) feedback, the Real-Time Streaming Protocol (RTSP) (RFC 2326) for controlling the delivery of streaming media, the Session Announcement Protocol (SAP) for advertising multimedia sessions via multicast, the Session Description Protocol (SDP) (RFC 2327) for describing multimedia sessions, and others. For example, the meeting consoles 110-1-m may use SIP as a signaling channel to set up the media connections, and RTP as a media channel to transport media information over the media connections.

[0034] In general operation, a scheduling device 108 may be used to generate a reservation for a multimedia conference event in the multimedia conference system 100. The scheduling device 108 may comprise, for example, a computing device having the appropriate hardware and software for scheduling multimedia conference events. For example, the scheduling device 108 may comprise a computer running MICROSOFT OFFICE OUTLOOK (R) application software, made by Microsoft Corporation, Redmond, Washington. The MICROSOFT OFFICE OUTLOOK application software comprises messaging and collaboration client software that may be used to schedule multimedia conference events. An operator may use MICROSOFT OFFICE OUTLOOK to convert a scheduling request into a MICROSOFT OFFICE LIVE MEETING event that is sent to a list of meeting invitees. The scheduling request may include a hyperlink to a virtual room for the multimedia conference event. An invitee may click on the hyperlink, and the meeting console 110-1-m launches a web browser, connects to the multimedia conference server 130 and joins the virtual room. Once there, the participants can present slides, annotate documents or brainstorm on the built-in whiteboard, among other tools.

[0035] An operator may use the scheduling device 108 to generate a multimedia conference event reservation for a multimedia conference event. The multimedia conference event reservation may include a meeting invitee list for the multimedia conference event. The meeting invitee list may comprise a list of the individuals invited to the multimedia conference event. In some cases, the meeting invitee list may include only those individuals who were invited to and accepted the multimedia event. A client application, such as the MICROSOFT OUTLOOK mail client, forwards the reservation request to the multimedia conference server 130.
The multimedia conference server 130 may receive the multimedia conference event reservation and retrieve the meeting invitee list and the associated information for the meeting invitees from a network device, such as an enterprise resource directory 160.

[0036] The enterprise resource directory 160 may comprise a network device that publishes a public directory of operators and/or network resources. A common example of network resources published by the enterprise resource directory 160 is network printers. In one embodiment, for example, the enterprise resource directory 160 may be implemented as MICROSOFT ACTIVE DIRECTORY (R). Active Directory is an implementation of Lightweight Directory Access Protocol (LDAP) directory services providing centralized authentication and authorization services for network computers. Active Directory also allows administrators to assign policies, deploy software and apply critical updates across an organization. Active Directory stores information and settings in a central database. Active Directory networks can vary from a small installation with a few hundred objects to a large installation with millions of objects.

[0037] In various embodiments, the enterprise resource directory 160 may include identifying information for the various meeting invitees to a multimedia conference event. The identifying information may include any type of information capable of uniquely identifying each of the meeting invitees. For example, the identifying information may include, without limitation, a name, a location, contact information, account numbers, professional information, organizational information (such as a title), personal information, connection information, presence information, a network address, a media access control (MAC) address, an Internet Protocol (IP) address, a telephone number, an email address, a protocol address (for example, a SIP address), equipment identifiers, hardware configurations, software configurations, wired interfaces, wireless interfaces, supported protocols and other desired information.

[0038] The multimedia conference server 130 may receive the multimedia conference event reservation, including the meeting invitee list, and retrieve the corresponding identifying information from the enterprise resource directory 160. The multimedia conference server 130 may use the meeting invitee list to assist in automatically identifying the participants in the multimedia conference event.
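The kind of per-invitee record the server might assemble from the enterprise resource directory (paragraph [0037]) and later extend with signatures (paragraph [0054]) can be sketched as a plain data structure. The dataclass and its field names below are illustrative choices, not mandated by the patent.

    from dataclasses import dataclass, field

    @dataclass
    class InviteeRecord:
        invitee_id: str                 # e.g. a SIP address or account number
        name: str
        location: str = ""
        title: str = ""
        face_signature: list = field(default_factory=list)   # known-face features
        voice_signature: list = field(default_factory=list)  # known-voice features

    def fetch_invitee_records(directory, meeting_invitee_list):
        """Resolve each invitee on the meeting list to a directory record."""
        return [directory[i] for i in meeting_invitee_list if i in directory]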
[0039] The multimedia conference server 130 may implement various hardware and/or software components for automatic identification of participants for a multimedia conference event. More specifically, the multimedia conference server 130 may implement techniques for automatically identifying multiple participants in video content recorded from a conference room, such as the participants 154-1-p in the conference room 150. In the illustrated embodiment shown in Fig. 1, for example, the multimedia conference server 130 includes a content-based annotation component 134. The content-based annotation component 134 may be arranged to receive the meeting invitee list for a multimedia conference event from the enterprise resource directory 160. The content-based annotation component 134 may also receive multiple input media streams from the multiple meeting consoles 110-1-m, one of which may originate from the local meeting console 110-1 in the conference room 150. The content-based annotation component 134 may annotate one or more media frames of each input media stream with identifying information for each participant within each input media stream to form a corresponding annotated media stream. For example, the content-based annotation component 134 may annotate one or more media frames of the input media stream received from the local meeting console 110-1 with identifying information for each participant 154-1-p within the input media stream to form a corresponding annotated media stream. The content-based annotation component 134 may perform annotation so as to locate or position the identifying information in relative proximity to a participant 154-1-p in the input media stream, and move the identifying information when the participant 154-1-p moves within the input media stream. The content-based annotation component 134 is described in more detail with reference to Fig. 2.

[0040] Fig. 2 illustrates a block diagram of the content-based annotation component 134. The content-based annotation component 134 may comprise a part or subsystem of the multimedia conference server 130. The content-based annotation component 134 may comprise multiple modules. The modules may be implemented using hardware elements, software elements, or a combination of hardware and software elements. Although the content-based annotation component 134 as shown in Fig. 2 has a limited number of elements in a certain topology, it may be appreciated that the content-based annotation component 134 may include more or fewer elements in alternate topologies as desired for a given implementation. The embodiments are not limited in this context.

[0041] In the illustrated embodiment shown in Fig. 2, the content-based annotation component 134 may comprise a media analysis module 210 communicatively coupled to a participant identification module 220 and a signature data store 260. The signature data store 260 may store various types of meeting invitee information 262. The participant identification module 220 is communicatively coupled to a media annotation module 230 and the signature data store 260. The media annotation module 230 is communicatively coupled to a media mixing module 240 and a positioning module 232. The positioning module 232 is communicatively coupled to the media analysis module 210. The media mixing module 240 may include one or more buffers 242.

[0042] The media analysis module 210 of the content-based annotation component 134 may be arranged to receive various input media streams 204-1-f. Each of the input media streams 204-1-f may comprise a stream of media content supported by the meeting consoles 110-1-m and the multimedia conference server 130. For example, a first input media stream 204-1 may represent a video stream and/or an audio stream from a remote meeting console 110-2-m. The first input media stream 204-1 may include video content containing only a single participant, the one using the meeting console 110-2-m. A second input media stream 204-2 may represent a video stream from a video camera, such as the video camera 106, and an audio stream from one or more microphones 104-1-r connected to the local meeting console 110-1. The second input media stream 204-2 may contain video content with multiple participants 154-1-p using the local meeting console 110-1. Other input media streams 204-3-f may have varying combinations of media content (e.g., audio, video or data) with varying numbers of participants.
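The data flow among the Fig. 2 modules can be summarised in a small Python sketch in which each callable stands in for one module (210, 220, 230 with 232, and 240). The function name process_streams and the callable-based wiring are illustrative only; the patent does not prescribe this decomposition.

    def process_streams(invitee_list, input_streams,
                        detect_participants,   # media analysis module 210
                        map_to_invitees,       # participant identification 220
                        overlay,               # annotation 230 + positioning 232
                        mix):                  # media mixing module 240
        """Wire the Fig. 2 pipeline: analyse each input media stream, map
        the detected participants to invitees, annotate, then mix."""
        annotated = []
        for stream in input_streams:
            participants = detect_participants(stream)
            identified = map_to_invitees(participants, invitee_list)
            annotated.append(overlay(stream, identified))
        return mix(annotated)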
[0043] The media analysis module 210 may determine the number of participants 154-1-p present in each input media stream 204-1-f. The media analysis module 210 may determine the number of participants 154-1-p using various characteristics of the media content within the input media streams 204-1-f. In one embodiment, for example, the media analysis module 210 may determine the number of participants 154-1-p using image analysis techniques on the video content from the input media streams 204-1-f. In one embodiment, for example, the media analysis module 210 may determine the number of participants 154-1-p using voice analysis techniques on the audio content from the input media streams 204-1-f. In one embodiment, for example, the media analysis module 210 may determine the number of participants 154-1-p using both image analysis and voice analysis. Other types of media content may be used as well.

[0044] In one embodiment, the media analysis module 210 may determine the number of participants using image analysis on the video content from the input media streams 204-1-f. For example, the media analysis module 210 may perform image analysis to determine certain characteristics of human beings, using any conventional techniques designed for detecting a person within an image or sequence of images. In one embodiment, for example, the media analysis module 210 may implement various types of face detection techniques. Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary digital images. It detects facial features and ignores everything else, such as buildings, trees and bodies. The media analysis module 210 may be arranged to implement a face detection algorithm capable of detecting local visual cues corresponding to distinct parts of a human face. When a face is detected, the media analysis module 210 may update an image counter value indicating the number of participants detected for a given input media stream 204-1-f. The media analysis module 210 may then perform various post-processing operations on the portion of image data having the image content for a given participant, in preparation for face recognition operations. Examples of such post-processing operations may include extracting the video content representing a face from an image or sequence of images, normalizing the extracted video content to a certain size (for example, a fixed-size matrix), and uniformly quantizing the RGB color space (for example, to 64 colors). The media analysis module 210 may output the image counter value and each processed portion of image data to the participant identification module 220.

[0045] In one embodiment, the media analysis module 210 may determine the number of participants using voice analysis on the audio content from the input media streams 204-1-f. For example, the media analysis module 210 may perform voice analysis to determine certain characteristics of human speech, using any conventional techniques designed for detecting a human voice within an audio segment or sequence of audio segments. In one embodiment, for example, the media analysis module 210 may implement various types of voice or speech detection techniques. When a human voice is detected, the media analysis module 210 may update a voice counter value indicating the number of participants for a given input media stream 204-1-f. The media analysis module 210 may perform various post-processing operations on the portion of audio data having the audio content from a given participant, in preparation for voice identification operations.
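A minimal sketch of the image-analysis path of module 210 follows: it counts the faces in one video frame with a stock OpenCV detector and prepares normalized face crops for later recognition, roughly matching the post-processing steps of paragraph [0044]. The 64x64 crop size is an assumption, and the colour quantization mentioned in [0044] is omitted for brevity.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def count_and_crop_faces(frame, size=(64, 64)):
        """Return the image counter value for one frame plus normalized
        face crops, ready for the recognition step in module 220."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        crops = [cv2.resize(frame[y:y + h, x:x + w], size)
                 for (x, y, w, h) in faces]
        return len(crops), crops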
[0046] Once a portion of audio data having audio content from a participant is identified, the media analysis module 210 may then identify the portion of image data corresponding to that portion of audio data. This may be accomplished, for example, by comparing time sequences for the portions of audio data with time sequences for the portions of image data, by comparing the portions of audio data with lip movements in the portions of image data, and by other audio/video matching techniques. For example, video content typically captures a number of media frames (e.g., still images) per second (usually on the order of 15-60 frames per second, although other rates may be used). These media frames 252-1-g, as well as the corresponding audio content (for example, every 1/15 to 1/60 of a second of audio data), are used as a frame for location determination by the positioning module 232. When audio is recorded, it is typically sampled at a much higher rate than the video (for example, while 15-60 images may be captured each second of video, thousands of audio samples may be captured). The audio samples may correspond to a particular video frame in a variety of different manners. For example, the audio samples ranging from when a video frame is captured to when the next video frame is captured may be the audio frame corresponding to that video frame. As another example, the audio samples centered about the capture time of a video frame may be the audio frame corresponding to that video frame. For example, if video is captured at 30 frames per second, the audio frame may range from 1/60 of a second before the video frame is captured to 1/60 of a second after the video frame is captured. In some situations the audio content may include data that does not correspond directly to the video content. For example, the audio content may be a music soundtrack rather than the voices of the participants in the video content. In these situations, the media analysis module 210 discards the audio content as an error and falls back to the previous state of the human detection techniques.

[0047] In one embodiment, for example, the media analysis module 210 may determine the number of participants 154-1-p using both image analysis and voice analysis on the media content from the input media streams 204-1-f. For example, the media analysis module 210 may perform image analysis to determine the number of participants 154-1-p as a first pass, and then perform voice analysis to confirm that number of participants 154-1-p as a second pass. The use of multiple detection techniques may improve the accuracy of the detection operations, at the cost of additional computational resources.
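The audio-to-video correspondence of paragraph [0046] amounts to simple arithmetic. The sketch below computes the window of audio samples centred on a video frame's capture time, under the assumed rates of 30 frames per second and 48 kHz audio (the paragraph itself leaves the rates open).

    def audio_window(frame_index, fps=30, sample_rate=48000):
        """Return (start, end) sample indices of the audio frame centred
        on a video frame's capture time: 1/60 s either side at 30 fps."""
        t = frame_index / fps                 # frame capture time, seconds
        half = 1.0 / (2 * fps)                # half window, seconds
        start = max(0, round((t - half) * sample_rate))
        end = round((t + half) * sample_rate)
        return start, end

    # Example: frame 10 at 30 fps and 48 kHz -> samples 15200..16800,
    # i.e. 1600 audio samples per video frame.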
[0048] The participant identification module 220 may be configured to map a meeting invitee to each detected participant. The participant identification module 220 may receive as input the invitee list 202 from the enterprise resource directory 160, media counter values (for example, the image count or the voice count) from the media analysis module 210, and portions of media data (for example, portions of image data or portions of audio data) from the media analysis module 210. The participant identification module 220 can then use a participant identification algorithm together with one or more of these three inputs to map an invitee to each detected participant.

[0049] As described previously, the invitee list 202 can contain a list of the people invited to the multimedia conference event. In some cases, the invitee list 202 may include only those people who were invited and accepted the multimedia event. In addition, the invitee list 202 can also include various types of information associated with a given invitee. For example, the invitee list 202 may include identifying information for a given invitee, authentication information for a given invitee, an identifier for the meeting console used by the invitee, and so forth.

[0050] The participant identification algorithm can be designed for relatively rapid identification of meeting participants, using a threshold decision based on the media counter values. Example pseudocode for this participant identification algorithm is as follows:

    Receive the list of meeting invitees
    For each input media stream:
        Determine the number of participants (N)
        If N == 1:
            Map the participant to the media source
        Else if N > 1:
            Query the signature data store for invitee signature information
            Compare the signatures with the portions of media data
    End

[0051] In accordance with the participant identification algorithm, the participant identification module 220 determines whether the number of participants in the first input media stream 204-1 equals one participant. If TRUE (for example, N = 1), the participant identification module 220 maps an invitee from the invitee list 202 to the participant in the first input media stream 204-1, based on the media source for the first input media stream 204-1. In this case, the media source for the first input media stream 204-1 may comprise one of the remote meeting consoles 110-2-m, as identified in the invitee list 202 or the signature data store 260. Since there is only one participant in the first input media stream 204-1, the participant identification algorithm assumes that the participant is not located in the conference room 150, and therefore maps the participant to the portion of media data directly from the media source. In this way the participant identification module 220 reduces or avoids the need to perform additional analysis on the portions of media data received from the media analysis module 210, thereby conserving computational resources.
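The following Python sketch renders the threshold decision of the pseudocode above in executable form. The stream attributes and the lookup_by_source and match_signature helpers are hypothetical placeholders for the media-source lookup and signature comparison performed by the modules described herein.

    # Sketch of the participant identification algorithm of paragraph [0050].
    # `lookup_by_source` and `match_signature` are hypothetical helper callables.
    def identify_participants(invitee_list, media_streams,
                              lookup_by_source, match_signature):
        mappings = []
        for stream in media_streams:
            n = stream.participant_count        # N from the media analysis module
            if n == 1:
                # Single participant: map directly from the stream's media source.
                mappings.append((stream, lookup_by_source(stream, invitee_list)))
            elif n > 1:
                # Multiple participants: fall back to signature comparison.
                for portion in stream.media_portions:
                    mappings.append((stream,
                                     match_signature(portion, invitee_list)))
        return mappings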
[0052] In some cases, however, multiple participants may gather in the conference room 150 and share various types of multimedia equipment connected to the local meeting console 110-1 in order to communicate with other participants who have remote meeting consoles 110-2-m. Since there is only one local meeting console 110-1, a single participant (e.g., participant 154-1) in the conference room 150 typically uses the local meeting console 110-1 to join the multimedia conference event on behalf of all participants 154-2-p in the conference room 150. Consequently, the multimedia conference server 130 may have identifying information for the participant 154-1, but no identifying information for the other participants 154-2-p in the conference room 150.

[0053] To handle this scenario, the participant identification module 220 determines whether the number of participants in the second input media stream 204-2 is greater than one participant. If TRUE (for example, N > 1), the participant identification module 220 maps an invitee to each participant in the second input media stream 204-2 on the basis of a face signature, a voice signature, or a combination of face signature and voice signature.

[0054] As shown in Fig. 2, the participant identification module 220 can be communicatively coupled to the signature data store 260. The signature data store 260 may store invitee information 262 for every invitee in the invitee list 202. For example, the invitee information 262 may include various invitee records corresponding to each invitee in the invitee list 202, and the invitee records may have invitee identifiers 264-1-a, face signatures 266-1-b, voice signatures 268-1-c, and identifying information 270-1-d. The various types of information stored in the invitee records can be obtained from various sources, such as the invitee list 202, the enterprise resource directory 160, previous multimedia conference events, the meeting consoles 110-1-m, third-party databases, or other available network resources.

[0055] In one embodiment, the participant identification module 220 may implement a face recognition system arranged to perform face recognition for participants based on the face signatures 266-1-b. A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video media frame from a video source. One way to do this is to compare selected facial characteristics from the image against a facial database. This can be achieved using any number of face recognition systems, such as eigenfaces, fisherfaces, hidden Markov models, neuronal motivated dynamic link matching, and so forth. The participant identification module 220 can receive a portion of image data from the media analysis module 210 and extract various facial characteristics from the portion of image data. The participant identification module 220 can retrieve one or more face signatures 266-1-b from the signature data store 260. A face signature 266-1-b may contain various facial characteristics extracted from a known image of the participant. The participant identification module 220 can compare the facial characteristics from the portion of image data with the various face signatures 266-1-b and determine whether there is a match. If there is a match, the participant identification module 220 may retrieve the identifying information 270-1-d that corresponds to the matching face signature 266-1-b, and output the portion of media data and the identifying information 270-1-d to the media annotation module 230. For example, assuming the facial characteristics of the portion of image data match the face signature 266-1, the participant identification module 220 may then retrieve the identifying information 270-1 corresponding to the face signature 266-1, and output the portion of media data and the identifying information 270-1 to the media annotation module 230.
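A minimal sketch of the signature comparison step follows, assuming facial characteristics have already been reduced to numeric feature vectors. The Euclidean distance and threshold used here stand in for whichever recognizer (eigenfaces, fisherfaces, hidden Markov models, and so forth) an implementation actually employs.

    # Hedged sketch of the face-signature comparison of paragraph [0055].
    # Feature extraction is assumed to happen upstream; only matching is shown.
    import math

    def best_signature_match(portion_features, signatures, threshold=0.6):
        """signatures: list of (invitee_id, feature_vector); returns id or None."""
        best_id, best_dist = None, float("inf")
        for invitee_id, vector in signatures:
            dist = math.sqrt(sum((a - b) ** 2
                                 for a, b in zip(portion_features, vector)))
            if dist < best_dist:
                best_id, best_dist = invitee_id, dist
        # Reject weak matches rather than mislabel a participant.
        return best_id if best_dist < threshold else None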
[0056] In one embodiment, the participant identification module 220 can implement voice identification arranged to perform voice identification for participants on the basis of the voice signatures 268-1-c. Voice identification is a computer application for automatically identifying or verifying a person from an audio segment or multiple audio segments. Voice identification can identify people based on their voices. Voice identification extracts various features from speech, models them, and uses them to recognize a person from his or her voice. The participant identification module 220 may receive the audio samples from the media analysis module 210 and extract various audio characteristics from the portion of audio data. The participant identification module 220 can retrieve voice signatures 268-1-c from the signature data store 260. A voice signature 268-1-c may contain various speech or voice characteristics extracted from a known speech or voice sample of the participant. The participant identification module 220 can compare the audio characteristics of the portion of audio data with the voice signatures 268-1-c and determine whether there is a match. If there is a match, the participant identification module 220 may retrieve the identifying information 270-1-d that corresponds to the matching voice signature 268-1-c, and output the corresponding portion of image data and the identifying information 270-1-d to the media annotation module 230.

[0057] The media annotation module 230 can be employed to annotate the media frames 252-1-g of each input media stream 204-1-f with the identifying information 270-1-d for each mapped participant within each input media stream 204-1-f, to form a corresponding annotated media stream 205. For example, the media annotation module 230 receives the various portions of image data and the identifying information 270-1-d from the participant identification module 220. The media annotation module 230 then annotates one or more media frames 252-1-g with the identifying information 270-1-d in relative proximity to the associated participant. The media annotation module 230 can determine exactly where to annotate the one or more media frames 252-1-g with the identifying information 270-1-d using positioning information received from the positioning module 232.

[0058] The positioning module 232 is communicatively coupled to the media annotation module 230 and the media analysis module 210, and operates to determine positioning information for an associated participant 154-1-p within a media frame or successive media frames 252-1-g of an input media stream 204-1-f. In one embodiment, for example, the positioning information may include a center coordinate 256 and a boundary region 258 for the associated participant 154-1-p.
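Purely as an illustration of the positioning information of paragraph [0058], the following Python data shape bundles a center coordinate 256, a boundary region 258, and an identifier location 272 per participant. The field names are assumptions for the example, not taken from the embodiments.

    # Illustrative data shape for the positioning information of paragraph [0058].
    from dataclasses import dataclass

    @dataclass
    class LocationInfo:
        center: tuple        # coordinate 256: (x, y) approximate region center
        boundary: tuple      # boundary region 258: (x, y, width, height) rectangle
        label_anchor: tuple  # identifier location 272: where to draw the name

    def rectangular_boundary(x, y, w, h):
        """Build LocationInfo with the label anchored at the upper-right corner."""
        return LocationInfo(center=(x + w // 2, y + h // 2),
                            boundary=(x, y, w, h),
                            label_anchor=(x + w, y))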
[0059] The positioning module 232 manages and updates positioning information for each region of the media frames 252-1-g of an input media stream 204-1-f that includes, or potentially includes, a human face. The regions of the media frames 252-1-g can be obtained from the portions of image data output by the media analysis module 210. For example, the media analysis module 210 may output positioning information for each region of the media frames 252-1-g used to form the portions of image data containing detected participants. The positioning module 232 may maintain a list of identifiers for the portions of image data, and associate positioning information with each portion of image data within the media frames 252-1-g. Additionally or alternatively, the regions of the media frames 252-1-g can be obtained by the positioning module 232 itself, by analyzing the media frames of the input media streams 204-1-f independently of the media analysis module 210.

[0060] In the illustrated example, the positioning information defining each region is described by a center coordinate 256 and a boundary region 258. A region of the video content that may include a participant is defined by the center coordinate 256 and the boundary region 258. The center coordinate 256 represents the approximate center of the region, while the boundary region 258 is some geometric shape around the center coordinate. The geometric shape may be of any desired size and may vary for a given participant 154-1-p. Examples of geometric shapes may include, but are not limited to, a rectangle, circle, ellipse, triangle, pentagon, hexagon, or other free-form shape. The boundary region 258 defines a region within the media frames 252-1-g that includes a face and is monitored by the positioning module 232.

[0061] The positioning information may additionally include an identifier location 272. The identifier location 272 may contain a position within the boundary region 258 at which to annotate the identifying information 270-1-d. The identifying information 270-1-d displayed for a participant 154-1-p can be placed anywhere within the boundary region 258. In general, the identifying information 270-1-d should be close enough to the mapped participant 154-1-p to facilitate the association between the video content for the participant 154-1-p and the identifying information 270-1-d for the participant 154-1-p from the perspective of a person viewing the media frames 252-1-g, while reducing or avoiding the possibility of partially or completely obscuring the video content for the participant 154-1-p. The identifier location 272 can be a static location, or may change dynamically according to factors such as the size of the participant 154-1-p, movement of the participant 154-1-p, changes in secondary objects in the media frames 252-1-g, and so forth.
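The short Python sketch below illustrates, under a rectangular-boundary assumption, how identifying information might be rendered at an identifier location inside a boundary region. The OpenCV drawing calls are stand-ins only; the embodiments do not prescribe a rendering method.

    # Illustrative annotation step consistent with paragraphs [0061]-[0062].
    import cv2

    def annotate_frame(frame, boundary, label_anchor, name):
        """boundary: (x, y, w, h) region 258; label_anchor: identifier location 272."""
        x, y, w, h = boundary
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
        # Draw the name at the anchor, clear of the participant's face itself.
        cv2.putText(frame, name, label_anchor,
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        return frame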
[0062] Once the media annotation module 230 receives the various portions of image data and the identifying information 270-1-d from the participant identification module 220, the media annotation module 230 retrieves the positioning information for the portions of image data from the positioning module 232. The media annotation module 230 annotates one or more media frames 252-1-g of each input media stream 204-1-f with the identifying information 270-1-d for each mapped participant within each input media stream 204-1-f on the basis of the positioning information. By way of example, assume that the media frame 252-1 includes participants 154-1, 154-2 and 154-3. Suppose, further, that the associated participant is participant 154-2. The media annotation module 230 can receive the identifying information 270-2 from the participant identification module 220 and the positioning information for the region within the media frame 252-1. The media annotation module 230 can then annotate the media frame 252-1 of the second input media stream 204-2 with the identifying information 270-2 for the associated participant 154-2, within the boundary region 258 around the center coordinate 256, at the identifier location 272. In the illustrated embodiment shown in Fig. 1, the boundary region 258 has a rectangular shape, and the media annotation module 230 positions the identifying information 270-2 at the identifier location 272 comprising the upper-right corner of the boundary region 258, in the space between the video content for the participant 154-2 and the edge of the boundary region 258.

[0063] Once a region of the media frames 252-1-g has been annotated with the identifying information 270-1-d for the associated participant 154-1-p, the positioning module 232 can monitor and track the movements of the participant 154-1-p across subsequent media frames 252-1-g of the input media streams 204-1-f using a tracking list. After assignment, the positioning module 232 monitors each of the identified regions for the mapped participants 154-1-p in the tracking list. The positioning module 232 uses a variety of visual cues to track a region from frame to frame in the video content. Each of the faces in a tracked region is an image of at least part of a person. As a rule, people can move while the video content is generated, for example standing, sitting, walking around, or shifting while seated in a chair. Rather than performing face detection in every media frame 252-1-g of an input media stream 204-1-f, the positioning module 232 tracks the regions that include a face (once detected) from frame to frame, which is generally less computationally expensive than repeated face detection.

[0064] The media mixing module 240 can be communicatively coupled to the media annotation module 230. The media mixing module 240 may be configured to receive multiple annotated media streams 205 from the media annotation module 230 and merge the multiple annotated media streams 205 into a mixed output media stream 206 for display on the multiple meeting consoles 110-1-m. The media mixing module 240 can use buffers 242 and various delay modules to synchronize the various annotated media streams 205. The media mixing module 240 may be implemented as an MCU as part of the content-based annotation component 134. Additionally or alternatively, the media mixing module 240 may be implemented as an MCU as part of the meeting server component 132 of the multimedia conference server 130.

[0065] Fig. 3 illustrates a block diagram of the multimedia conference server 130. As shown in Fig. 3, the multimedia conference server 130 can receive various input media streams 204-1-m, process the various input media streams 204-1-m using the content-based annotation component 134, and output multiple mixed output media streams 206. The input media streams 204-1-m can represent different media streams originating from the various meeting consoles 110-1-m, and the mixed output media streams 206 can represent identical media streams terminating at the various meeting consoles 110-1-m.
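As a rough sketch only, the following Python generator illustrates the buffering role that paragraph [0064] assigns to the media mixing module 240: frames from each annotated stream are buffered and then emitted together in lock-step. A real MCU would additionally align timestamps and apply the delay modules described above; both are omitted here.

    # Rough sketch of lock-step mixing of annotated streams, per paragraph [0064].
    from collections import deque

    def mix_streams(annotated_streams):
        """annotated_streams: list of iterables yielding (timestamp, frame)."""
        iterators = [iter(s) for s in annotated_streams]
        buffers = [deque() for _ in annotated_streams]
        while True:
            for buf, it in zip(buffers, iterators):
                if not buf:
                    try:
                        buf.append(next(it))
                    except StopIteration:
                        return                  # a stream ended; stop mixing
            # Emit one frame from every stream as a single mixed tick.
            yield tuple(buf.popleft() for buf in buffers)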
[0066] The computing component 302 can represent various computing resources to support or implement the content-based annotation component 134. Examples for the computing component 302 may include, without limitation, processors, memory, buses, chipsets, controllers, oscillators, system clocks, and other elements of a computing platform or system hardware architecture.

[0067] The communications component 304 may represent various communications resources for receiving the input media streams 204-1-m and sending the mixed output media streams 206. Examples for the communications component 304 may include, without limitation, receivers, transmitters, transceivers, network interfaces, network interface cards, radios, processors, baseband processors, filters, amplifiers, modulators, demodulators, multiplexers, mixers, switches, antennas, protocol stacks, or other elements of a communications platform or system hardware architecture.

[0068] The meeting server component 132 can provide various multimedia conference resources for setting up, controlling, or managing a multimedia conference event. The meeting server component 132 may include, among other elements, an MCU. An MCU is a device typically used to bridge multimedia conference connections. The MCU is normally an endpoint on the network that provides the capability for three or more meeting consoles 110-1-m and gateways to participate in a multipoint conference. An MCU typically contains a multipoint controller (MC) and various multipoint processors (MPs). In one embodiment, for example, the meeting server component 132 may implement hardware and software for MICROSOFT OFFICE LIVE MEETING or MICROSOFT OFFICE COMMUNICATIONS SERVER. It should be appreciated, however, that implementations are not limited to these examples.

[0069] Operations for the above-described embodiments may be further described with reference to one or more logic flows. It should be appreciated that the representative logic flows do not necessarily have to be executed in the order presented, or in any particular order, unless otherwise indicated. Moreover, the various activities described with respect to the logic flows can be executed in serial or parallel fashion. The logic flows may be implemented using one or more hardware elements and/or software elements of the described embodiments, or alternative elements, as desired for a given set of design and performance constraints. For example, the logic flows may be implemented as logic (e.g., computer program instructions) for execution by a logic device (e.g., a general-purpose or special-purpose computer).

[0070] Fig. 4 illustrates one embodiment of a logic flow 400. The logic flow 400 may represent some or all of the operations executed by one or more embodiments described herein.

[0071] As shown in Fig. 4, the logic flow 400 may receive a list of meeting invitees for a multimedia conference event at block 402. For example, the participant identification module 220 of the content-based annotation component 134 of the multimedia conference server 130 may receive the invitee list 202 and accompanying information for a multimedia conference event. Some or all of the invitee list 202 and accompanying information can be received from the scheduling device 108 and/or the enterprise resource directory 160.
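For illustration, the logic flow 400 can be read as a three-stage pipeline. In the Python sketch below, every helper passed in is a hypothetical placeholder for the corresponding module described above, not an interface defined by the embodiments.

    # Compact rendering of logic flow 400 (blocks 402, 404, 406) as a pipeline.
    # All three helpers are hypothetical stand-ins for the described modules.
    def logic_flow_400(receive_invitee_list, receive_streams, annotate_stream):
        invitees = receive_invitee_list()       # block 402: invitee list 202
        streams = receive_streams()             # block 404: input media streams
        # Block 406: annotate each input stream with identifying information.
        return [annotate_stream(stream, invitees) for stream in streams]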
[0072] The logic flow 400 may receive multiple input media streams from multiple meeting consoles at block 404. For example, the media analysis module 210 can receive the input media streams 204-1-f and output various portions of image data containing participants to the participant identification module 220. The participant identification module 220 can map the participants to the invitee identifiers 264-1-a from the invitee list 202 using the portions of image data and the various face recognition and/or voice identification techniques, and output the portions of image data and the corresponding identifying information 270-1-d to the media annotation module 230.

[0073] The logic flow 400 may annotate the media frames of each input media stream with identifying information for each mapped participant within each input media stream to form a corresponding annotated media stream at block 406. For example, the media annotation module 230 can receive the portions of image data and the corresponding identifying information 270-1-d from the participant identification module 220, retrieve the positioning information for the corresponding portions of image data from the positioning module 232, and annotate one or more media frames 252-1-g of each input media stream 204-1-f with the identifying information 270-1-d for each participant 154-1-p within each input media stream 204-1-f to form the corresponding annotated media stream 205.

[0074] Fig. 5 further illustrates a more detailed block diagram of a computing architecture 510 suitable for implementing the meeting consoles 110-1-m or the multimedia conference server 130. In a basic configuration, the computing architecture 510 typically includes at least one processing unit 532 and memory 534. Memory 534 may be implemented using any machine-readable or computer-readable media capable of storing data, including both volatile and non-volatile memory. For example, memory 534 may include read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), double-data-rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase-change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, or any other type of media suitable for storing information. As shown in Fig. 5, memory 534 may store various programs, such as one or more application programs 536-1-t and accompanying data. Depending on the implementation, examples of the application programs 536-1-t may include the meeting server component 132, the meeting client components 112-1-n, or the content-based annotation component 134.

[0075] The computing architecture 510 may also have additional features and/or functionality beyond the basic configuration. For example, the computing architecture 510 may include removable storage 538 and non-removable storage 540, which may also comprise various types of machine-readable or computer-readable media as previously described. The computing architecture 510 may also have one or more input devices 544, such as a keyboard, mouse, pen, voice input device, touch input device, measuring devices, sensors, and so forth. The computing architecture 510 may also include one or more output devices 542, such as displays, speakers, printers, and so forth.

[0076] The computing architecture 510 may further include one or more communications connections 546 that allow the computing architecture 510 to communicate with other devices.
The communications connections 546 may include various types of standard communications elements, such as one or more communications interfaces, network interfaces, network interface cards (NICs), radios, wireless transmitters/receivers (transceivers), wired and/or wireless communication media, physical connectors, and so forth. Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired communication media and wireless communication media. Examples of wired communication media may include a wire, cable, metal leads, printed circuit boards (PCBs), backplanes, switch fabrics, semiconductor material, twisted-pair wire, coaxial cable, fiber optic connections, a propagated signal, and so forth. Examples of wireless communication media may include, without limitation, acoustic, radio-frequency (RF), infrared, and other wireless media. As used herein, the terms machine-readable media and computer-readable media are intended to include both storage media and communication media.

[0077] Fig. 6 illustrates a diagram of an article of manufacture 600 suitable for storing logic for the various embodiments, including the logic flow 400. As shown, the article 600 may include a storage medium 602 to store logic 604. Examples of the storage medium 602 may include one or more types of computer-readable storage media capable of storing electronic data, including volatile or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of the logic 604 may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.

[0078] In one embodiment, for example, the article 600 and/or the computer-readable storage medium 602 may store logic 604 comprising executable computer program instructions that, when executed by a computer, cause the computer to perform methods and/or operations in accordance with the described embodiments. The executable computer program instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The executable computer program instructions may be implemented according to a predefined computer language, manner, or syntax, for instructing a computer to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language, such as C, C++, Java, Basic, Perl, Matlab, Pascal, Visual Basic, assembly language, and others.
[0079] Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include any of the examples previously provided for a logic device, and further include microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given implementation.

[0080] Some embodiments may be described using the expressions "coupled" and "connected," along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

[0081] It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth are used merely as labels, and are not intended to impose numerical requirements on their objects.
[0082] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

1. A method for automatic identification of participants for a multimedia conference event, comprising:
2. The method according to claim 1, comprising:
3. The method according to claim 2, comprising:
4. The method according to claim 2, comprising:
5. The method according to claim 2, comprising determining positioning information for an associated participant within a media frame or successive media frames of an input media stream, the positioning information comprising a center coordinate and a boundary region for the associated participant.
6. The method according to claim 2, comprising annotating the video content of each input media stream with identifying information for each mapped participant based on the positioning information for each mapped participant.
7. The method according to claim 2, comprising annotating the media frames of each input media stream with identifying information for each mapped participant within a boundary region around a center coordinate at an identifier location for the associated participant.
8. The method according to claim 2, comprising merging multiple annotated media streams into a mixed output media stream for display on multiple meeting consoles.
9. An article for automatic identification of participants for a multimedia conference event, comprising a storage medium containing instructions that, if executed, enable a system to:
10. The article according to claim 9, further comprising instructions that, if executed, enable the system to:
11. The article according to claim 9, further comprising instructions that, if executed, enable the system to:
12. The article according to claim 9, further comprising instructions that, if executed, enable the system to:
13. An apparatus for automatic identification of participants for a multimedia conference event, comprising a content-based annotation component operative to receive a list of meeting invitees for the multimedia conference event, receive multiple input media streams from multiple meeting consoles, and annotate the video content of each input media stream with identifying information for each participant within each input media stream to form a corresponding annotated media stream, wherein the identifying information for each participant moves with the participant when the participant moves within the video content.
14. The apparatus according to claim 13, in which the content-based annotation component comprises:
15. The apparatus according to claim 14, in which the participant identification module operates to determine that the number of participants in a first input media stream equals one participant, and map an invitee to the participant in the first input media stream based on the media source for the first input media stream.
16. The apparatus according to claim 14, in which the participant identification module operates to determine that the number of participants in a second input media stream is greater than one participant, and map an invitee to each participant in the second input media stream based on a face signature, a voice signature, or a combination of face signature and voice signature.
17. The apparatus according to claim 14, comprising a positioning module communicatively coupled to the media annotation module, the positioning module operative to determine positioning information for an associated participant within a media frame or successive media frames of an input media stream, the positioning information comprising a center coordinate and a boundary region for the associated participant.
18. The apparatus according to claim 14, in which the media annotation module operates to annotate the video content of each input media stream with identifying information for each mapped participant on the basis of the positioning information.
19. The apparatus according to claim 14, comprising a media mixing module communicatively coupled to the media annotation module, the media mixing module operative to receive multiple annotated media streams and merge the multiple annotated media streams into a mixed output media stream for display on multiple meeting consoles.
20. The apparatus according to claim 14, in which a multimedia conference server operates to control multimedia conference operations for the multimedia conference event between the multiple meeting consoles, the multimedia conference server comprising the content-based annotation component.