US20150124171A1 - Multiple vantage point viewing platform and user interface - Google Patents

Multiple vantage point viewing platform and user interface

Info

Publication number
US20150124171A1
US20150124171A1 (application number US14/096,869)
Authority
US
United States
Prior art keywords
image data
image
data
capture
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/096,869
Inventor
Kristopher King
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LiveStage° Inc
Original Assignee
LiveStage° Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US14/096,869 priority Critical patent/US20150124171A1/en
Application filed by LiveStage° Inc
Priority to US14/689,922 priority patent/US20150221334A1/en
Publication of US20150124171A1 publication Critical patent/US20150124171A1/en
Priority to US14/719,636 priority patent/US20150256762A1/en
Priority to US14/754,432 priority patent/US20150304724A1/en
Priority to US14/754,446 priority patent/US10664225B2/en
Priority to US14/941,582 priority patent/US10296281B2/en
Priority to US14/941,584 priority patent/US10156898B2/en
Priority to US15/943,525 priority patent/US20180227572A1/en
Priority to US15/943,504 priority patent/US20180227504A1/en
Priority to US15/943,540 priority patent/US20180227694A1/en
Priority to US15/943,471 priority patent/US20180227501A1/en
Priority to US15/943,550 priority patent/US20180227464A1/en

Classifications

    • H04N 21/21805: Source of audio or video content, e.g. local disk arrays, enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 21/2365: Multiplexing of several video streams
    • H04N 21/2368: Multiplexing of audio and video streams
    • H04N 21/242: Synchronization processes, e.g. processing of PCR [Program Clock References]
    • H04N 21/6193: Network physical structure; signal processing specially adapted to the upstream path, involving transmission via a satellite
    • H04N 5/268: Signal distribution or switching (studio circuits)
    • H04N 21/2187: Live feed

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Astronomy & Astrophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention provides methods and apparatus for generating and transmitting a multimedia, multi-vantage point platform for viewing audio and video data.

Description

    FIELD OF THE INVENTION
  • The present invention relates to methods and apparatus for generating streaming video captured from multiple vantage points. More specifically, the present invention presents methods and apparatus for capturing image data in two dimensional or three dimensional data formats and from multiple disparate points of capture, and assembling the captured image data into a viewing experience emulating observance of an event from at least two of the multiple points of capture.
  • BACKGROUND OF THE INVENTION
  • Traditional methods of viewing image data generally include viewing a video stream of images in a sequential format. The viewer is presented with image data from a single vantage point at a time. Simple video includes streaming of imagery captured from a single image data capture device, such as a video camera. More sophisticated productions include sequential viewing of image data captured from more than one vantage point and may include viewing image data captured from more than one image data capture device.
  • As video capture has proliferated, popular video viewing forums, such as YouTube™, allow users to choose from a variety of video segments. In many cases, a single event will be captured on video by more than one user and each user will post a video segment on YouTube. Consequently, it is possible for a viewer to view a single event from different vantage points. However, in each instance of the prior art, a viewer must watch a video segment from the perspective of the video capture device, and cannot switch between views in a synchronized fashion during video replay.
  • Consequently, alternative ways of viewing captured image data that allow for greater control by a viewer are desirable.
  • SUMMARY OF THE INVENTION
  • Accordingly, the present invention provides methods and apparatus for capturing image data from multiple vantage points and making the image data available across a distributed platform in a synchronized manner to a user via one or both of an interactive user interface and a predetermined sequence of video segments.
  • The image data captured from multiple vantage points may be captured as one or both of: two dimensional image data or three dimensional image data. The data is synchronized such that a user may view image data from multiple vantage points, each vantage point being associated with a disparate image capture device. The data is synchronized such that the user may view image data of an event or subject at an instance in time, or during a specific time sequence, from one or more vantage points.
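  • As an illustrative sketch only (the names below are assumptions, not part of this disclosure), the synchronization described above can be pictured as an index keyed by a shared capture clock, so that any vantage point can be queried for a given instant:

```javascript
// Minimal sketch: segments from every image capture device are keyed by a
// shared capture timestamp, so a viewer can request time-aligned image data
// from any vantage point for the same instant of an event.
const segmentsByTime = new Map(); // captureTimeMs -> Map(vantagePointId -> segmentUrl)

function registerSegment(vantagePointId, captureTimeMs, segmentUrl) {
  if (!segmentsByTime.has(captureTimeMs)) {
    segmentsByTime.set(captureTimeMs, new Map());
  }
  segmentsByTime.get(captureTimeMs).set(vantagePointId, segmentUrl);
}

// All vantage points available for one instant, e.g. to populate a
// multi-view interface pane.
function vantagePointsAt(captureTimeMs) {
  return segmentsByTime.get(captureTimeMs) || new Map();
}
```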
  • In some embodiments, a user may view multiple image capture sequences at once on a multi view interface pane. In additional embodiments, a user may sequentially choose one or multiple vantage points at a time. In still other embodiments, a user may view a sequence of video image data segments compiled by another user or “user producer,” such that the artistic preferences of amateur or professional users may be shared with other users.
  • Still further embodiments allow for multiple segments of image data to be combined with one or more of: unassociated images, unassociated video segments and editorial content to generate a hybrid of event imagery and external imagery. Such a hybrid may be generated, for example, with a device including a processor running executable software and a viewing screen with a graphical user interface (“GUI”).
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, that are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and, together with the description, serve to explain the principles of the invention:
  • FIG. 1 illustrates a block diagram of Content Delivery Workflow according to some embodiments of the present invention.
  • FIG. 2 illustrates a block diagram of Live Production Workflow according to some embodiments of the present invention.
  • FIG. 3 illustrates an exemplary user interface according to some embodiments of the present invention.
  • FIG. 4 illustrates additional features of an exemplary user interface according to some embodiments of the present invention.
  • FIG. 5 illustrates a controller that may be used in some embodiments of the present invention.
  • DETAILED DESCRIPTION
  • The present invention provides generally for the use of multiple camera arrays for the capture and processing of image data that may be used to generate visualizations of live performance imagery from a multi-perspective reference. More specifically, the visualizations of the live performance imagery can include oblique and/or orthogonal approaching and departing view perspectives for a performance setting. Image data captured via the multiple camera arrays is synchronized and made available to a user via a communications network. The user may choose a viewing vantage point from the multiple camera arrays for a particular instance of time or time segment.
  • In the following sections, detailed descriptions of embodiments and methods of the invention will be given. The descriptions of both preferred and alternative embodiments, though thorough, are exemplary only, and it is understood that, to those skilled in the art, variations, modifications and alterations may be apparent. It is therefore to be understood that the exemplary embodiments do not limit the broadness of the aspects of the underlying invention as defined by the claims.
  • Definitions
  • As used herein “Broadcast Truck” refers to a vehicle transportable from a first location to a second location with electronic equipment capable of transmitting captured image data, audio data and video data in an electronic format, wherein the transmission is to a location remote from the location of the Broadcast Truck.
  • As used herein, “Image Capture Device” refers to apparatus for capturing digital image data. An Image Capture Device may be one or both of: a two dimensional camera (sometimes referred to as “2D”) or a three dimensional camera (sometimes referred to as “3D”). In some exemplary embodiments, an image capture device includes a charge-coupled device (“CCD”) camera.
  • As used herein, Production Media Ingest refers to the collection of image data and input of image data into a storage for processing, such as Transcoding and Caching. Production Media Ingest may also include the collection of associated data, such a time sequence, a direction of image capture, a viewing angle, 2D or 3D image data collection.
  • As used herein, Vantage Point refers to a location of Image Data Capture in relation to a location of a performance.
  • As used herein, Directional Audio refers to audio data captured from a vantage point and from a direction such that the audio data includes at least one quality that differs from audio data captured from the same vantage point in a second direction, or from an omnidirectional capture.
  • Referring now to FIG. 1, a Live Production Workflow diagram 100 is presented with components that may be used to implement various embodiments of the present invention. Image capture devices 101-102, such as, for example, one or both of 360 degree camera arrays 101 and high definition cameras 102, capture image data of an event. In preferred embodiments, multiple vantage points each have both a 360 degree camera array 101 and at least one high definition camera 102 capturing image data of the event. Image capture devices 101-102 may be arranged for one or more of: planar image data capture; oblique image data capture; and perpendicular image data capture. Some embodiments may also include audio microphones to capture sound input which accompanies the captured image data.
  • Additional embodiments may include camera arrays with multiple viewing angles that are not complete 360 degree camera arrays, for example, in some embodiments, a camera array may include at least 120 degrees of image capture, additional embodiments include a camera array with at least 180 degrees of image capture; and still other embodiments include a camera array with at least 270 degrees of image capture. In various embodiments, image capture may include cameras arranged to capture image data in directions that are planar or oblique in relation to one another.
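  • One way to picture such an arrangement (a hedged sketch; the identifiers and field names are assumptions for illustration, not part of the disclosure) is as a configuration record pairing each vantage point with its array coverage and camera orientation:

```javascript
// Hypothetical capture configuration: each vantage point pairs a panoramic
// camera array with at least one high definition camera and records how the
// cameras are oriented relative to the event subject.
const vantagePoints = [
  { id: "stage-left",  array: { coverageDegrees: 360, cameras: 6 }, hdCameras: 1, orientation: "oblique" },
  { id: "center",      array: { coverageDegrees: 180, cameras: 3 }, hdCameras: 2, orientation: "perpendicular" },
  { id: "stage-right", array: { coverageDegrees: 120, cameras: 2 }, hdCameras: 1, orientation: "planar" },
];
```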
  • At 103, a soundboard mix may be used to match recorded audio data with captured image data. In some embodiments, in order to maintain synchronization, an audio mix may be latency adjusted to account for the time consumed in stitching 360 degree image signals into cohesive image presentation.
  • At 104, a Broadcast Truck includes audio and image data processing equipment enclosed within a transportable platform, such as, for example, a container mounted upon, or attachable to, a semi-truck, a rail car; container ship or other transportable platform. In some embodiments, a Broadcast Truck will process video signals and perform color correction. Video and audio signals may also be mastered with equipment on the Broadcast Truck to perform on-demand post-production processes.
  • At 105, in some embodiments, post processing may also include one or more of: encoding; muxing and latency adjustment. By way of non-limiting example, signal based outputs of HD cameras may be encoded to predetermined player specifications. In addition, 360 degree files may also be re-encoded to a specific player specification. Accordingly, various video and audio signals may be muxed together into a single digital data stream. In some embodiments, an automated system may be utilized to perform muxing of image data and audio data.
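  • The specification does not name a muxing tool; as one hedged sketch of this step, a Node.js process could drive ffmpeg, whose -itsoffset option delays an input so the audio mix can be latency-adjusted against the stitched 360 degree video:

```javascript
// Sketch only: mux the stitched 360 degree video with the soundboard mix,
// delaying the audio by the time consumed in stitching.
const { spawn } = require("child_process");

function muxWithLatencyAdjust(videoFile, audioFile, audioDelaySeconds, outFile) {
  return spawn("ffmpeg", [
    "-i", videoFile,                          // stitched 360 degree video
    "-itsoffset", String(audioDelaySeconds),  // shift the next input in time
    "-i", audioFile,                          // soundboard audio mix
    "-map", "0:v", "-map", "1:a",             // video from input 0, audio from input 1
    "-c:v", "copy", "-c:a", "aac",            // leave video untouched, encode audio
    outFile,                                  // single muxed output stream
  ]);
}
```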
  • At 104A, in some embodiments, a Broadcast Truck or other assembly of post processing equipment may be used to allow a technical director to perform line-edit decisions and pass through to a predetermined player's autopilot support for multiple camera angles.
  • At 106, a satellite uplink may be used to transmit post processed or native image data and audio data. In some embodiments, by way of non-limiting example, a muxed signal may be transmitted via satellite uplink at or about 80 megabits per second (Mb/s) by a commercial provider, such as PSSI Global™ or Sureshot™ Transmissions.
  • In some venues, such as, for example, events taking place at a sports arena, a transmission may take place via Level 3 fiber optic lines otherwise made available for sports broadcasting or other event broadcasting. At 107, Satellite Bandwidth may be utilized to transmit image data and audio data to a Content Delivery Network 108.
  • As described further below, a Content Delivery Network 108 may include a digital communications network, such as, for example, the Internet. Other network types may include a virtual private network, a cellular network, an Internet Protocol network, or other network that is able to identify a network access device and transmit data to the network access device. Transmitted data may include, by way of example: transcoded captured image data, and associated timing data or metadata.
  • Referring now to FIG. 2, a flow chart is illustrated with components of a multi-vantage point viewing system 200 according to the present invention. Production Media Ingest 201 may be accomplished via Image Capture Devices, such as 2D or 3D cameras 206. Cameras 206 may include, for example, CCD digital cameras that capture imagery in one or both of video or image frame formats. Specific examples of Image Capture Devices include: three dimensional digital cameras, such as, for example, charge-coupled device (“CCD”) cameras, and high definition cameras, which may also be digital CCD cameras. Image data may be captured in a digital format that may be proprietary or an industry standard format. High definition cameras may include 1920×1080 pixel digital professional video cameras based on CCD technology, sometimes referred to as “digital cinematography”.
  • Production Media Ingest 201 is accomplished via multiple Image Capture Devices 206 arranged at multiple Vantage Points, wherein multiple Vantage Points may include one or both of: more than one disparate physical point of capture location and more than one disparate point of capture viewing direction.
  • Production Media Ingest may also include synchronous time recordation indicating when individual image segments are captured. The synchronous time recordation may facilitate transcoding and caching 202 of image data, which in turn allows for video replay of an instance of an event from multiple perspectives, each perspective from a different vantage point.
  • At 202, Transcoding and Caching may include storage of image data with correlating data indicating a time and location of image data capture. Data indicating a specific time of capture may be linked to the image data in a manner that allows a user to choose from multiple image data sets. Each image data set may be associated with a disparate vantage point from which the image was captured. Uploaded video signal data may be transcoded to multiple formats and bitrates. Multiple bitrates may be used to allow an optimum viewing experience for individual users, wherein each user may use a bit rate appropriate to that user's available bandwidth.
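  • A hedged sketch of one such image data set follows; the field names, values and URLs are assumptions for illustration. Each entry correlates capture time and vantage point with one URL per transcoded bitrate, and a helper selects the highest rendition a user's bandwidth can sustain:

```javascript
// Hypothetical cached entry: image data stored with the time and vantage
// point of capture, transcoded to several bitrates (listed ascending).
const cacheEntry = {
  eventId: "show-2013-12-04",
  vantagePointId: "center-stage",
  captureTimeMs: 754000,
  renditions: [
    { bitrateKbps: 400,  url: "/seg/center/754000_400k.ts"  },
    { bitrateKbps: 1200, url: "/seg/center/754000_1200k.ts" },
    { bitrateKbps: 4500, url: "/seg/center/754000_4500k.ts" },
  ],
};

// Pick the highest bitrate the measured bandwidth can sustain, falling back
// to the lowest rendition when bandwidth is below all of them.
function selectRendition(entry, bandwidthKbps) {
  const fits = entry.renditions.filter(r => r.bitrateKbps <= bandwidthKbps);
  return fits.length ? fits[fits.length - 1] : entry.renditions[0];
}
```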
  • A Content Delivery Network 203 may include a digital communications network, such as, for example, the Internet. Other network types may include a virtual private network, a cellular network, an Internet Protocol network, or other network that is able to identify a network access device and transmit data to the network access device. Transmitted data may include, by way of example: transcoded captured image data, and associated timing data or metadata. In some embodiments, multiple content sites may be placed at locations proximate to a destination specified by users. Some embodiments may also use forward proxy ingest to reduce storage cost on the CDN so that only image data that is “watched” is pulled.
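  • The "only pull what is watched" behavior can be sketched as a pull-through cache (an assumption-level illustration, not an implementation of any particular CDN): the edge fetches a segment from origin on first request and serves the cached copy thereafter, so unwatched segments are never pulled:

```javascript
// Sketch of forward proxy ingest: segments are fetched from origin only on
// the first viewer request; subsequent requests hit the edge cache.
const edgeCache = new Map(); // segmentUrl -> Promise resolving to segment bytes

function getSegment(segmentUrl, originBase) {
  if (!edgeCache.has(segmentUrl)) {
    edgeCache.set(
      segmentUrl,
      fetch(originBase + segmentUrl).then(res => res.arrayBuffer())
    );
  }
  return edgeCache.get(segmentUrl);
}
```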
  • Post processing servers 209, such as, for example, one or more Revolver™ CMS Servers 209, may be utilized to organize the transmitted data and present it for viewing via one or more user network access devices 210. Cross Platform Players 205 may receive transmitted data, wherein the Cross Platform Players 205 may include user network access devices that receive data via the Content Delivery Network 203, directly from the Revolver Servers 209, via a cellular network, or via a private data network.
  • Referring now to FIG. 3, a sample video and audio production strategy is illustrated. A production area may include an event location 301, such as a stage and an audience viewing area 302. The event location 301 and the audience viewing area 302 include multiple vantage points, wherein each vantage point include one or more of a 360 degree camera arrays 303-308 and HD cameras 309-312. Audio pick-ups, such as one or more microphones 313 may also be included in the video and audio production strategy. In some embodiments, audio pickups may also be included in the 360 camera arrays, the HD cameras, or proximate to performers (not shown).
  • Referring now to FIG. 4, an exemplary user interface 400 is illustrated. The exemplary user interface is typically transmitted to a user's Network Access Device as digital data and displayed on a display screen. At 401, an identifier of an event, such as a performer's name, is provided. An event may be anyone, or anything, on which data capture is focused. By way of non-limiting example, a performer may be a music performer, a presenter, a machine, a workplace, a sporting event, a demonstration, a television show, an entertainment act, a speech, a classroom, a competition, or other subject in a time and place being recorded.
  • At 402-406, various Vantage Points are listed from which a user or other viewer may view captured image data. As illustrated, the event includes a band performing, and Vantage Points 401-406 include various views of the band performing. By way of non-limiting example, the Vantage Points include: a view of the main performer 401; a view of the band 402; a view of dancers 403; a view of stage left 404; a view of center stage 405; and a view of stage right 406.
  • In some embodiments, each Vantage Point may be associated with audio data captured in proximity to the individual Vantage Points 401-406. Audio may also include overdubbing related to what is being viewed in the image data associated with the Vantage Point. For example, overdubbing may explain who is performing and where and when. The overdubbing may also include editorial or instructional content.
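  • As a sketch of how vantage point audio and overdub commentary might be combined in a browser player (the standard Web Audio API is used here; the element variables and gain value are assumptions), each track gets its own gain control so commentary can sit under the performance audio:

```javascript
// Mix audio captured near the chosen vantage point with an optional overdub
// (commentary) track using the Web Audio API.
const audioCtx = new AudioContext();

function mixVantageAudio(vantageAudioEl, overdubAudioEl) {
  const vantageGain = audioCtx.createGain();
  const overdubGain = audioCtx.createGain();
  audioCtx.createMediaElementSource(vantageAudioEl).connect(vantageGain);
  audioCtx.createMediaElementSource(overdubAudioEl).connect(overdubGain);
  vantageGain.connect(audioCtx.destination);
  overdubGain.connect(audioCtx.destination);
  overdubGain.gain.value = 0.4; // keep commentary under the performance audio
  return { vantageGain, overdubGain };
}
```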
  • User controls 407 may also be included in the User Interface 400. User controls 407 may include, for example, user interactive devices that operate via software or firmware in correlation with a controller or a processor to: indicate a vantage point, indicate a direction, indicate a zoom preference, indicate a filter, and indicate a resolution such as, for example, high definition or standard.
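  • One possible shape for that control state is sketched below; the field names and player methods are assumptions for illustration, not part of the disclosure:

```javascript
// Hypothetical user control state; a controller applies each change to the
// active video and audio streams.
const controls = {
  vantagePointId: "center-stage",
  direction: 0,       // pan angle in degrees within a panoramic view
  zoom: 1.0,
  filter: "none",
  resolution: "hd",   // "hd" or "standard"
};

function onControlChange(key, value, player) {
  controls[key] = value;
  if (key === "vantagePointId" || key === "resolution") {
    player.switchStream(controls); // hypothetical: requires a new stream
  } else {
    player.updateView(controls);   // pan/zoom/filter applied in place
  }
}
```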
  • Referring now to FIG. 4, a user interface 400 is illustrated with additional user interactive controls 401-407.
  • The teachings of the present invention may be implemented with apparatus capable of embodying the innovative concepts described herein. Image presentation can be accomplished via a multimedia type user interface. Embodiments can therefore include a personal computer, handheld device, game controller, PDA, cellular device, smart device, High Definition Television or other multimedia device with user interactive controls, including, in some embodiments, voice activated interactive controls.
  • Apparatus
  • In addition, FIG. 5 illustrates a controller 500 that may be utilized to implement some embodiments of the present invention. The controller may be included in one or more of the apparatus described above, such as the Revolver Server, and the Network Access Device. The controller 500 comprises a processor unit 510, such as one or more semiconductor based processors, coupled to a communication device 520 configured to communicate via a communication network (not shown in FIG. 5). The communication device 520 may be used to communicate, for example, with one or more online devices, such as a personal computer, laptop or a handheld device.
  • The processor 510 is also in communication with a storage device 530. The storage device 530 may comprise any appropriate information storage device, including combinations of magnetic storage devices (e.g., magnetic tape and hard disk drives), optical storage devices, and/or semiconductor memory devices such as Random Access Memory (RAM) devices and Read Only Memory (ROM) devices.
  • The storage device 530 can store a software program 540 for controlling the processor 510. The processor 510 performs instructions of the software program 540, and thereby operates in accordance with the present invention. The processor 510 may also cause the communication device 520 to transmit information, including, in some instances, control commands to operate apparatus to implement the processes described above. The storage device 530 can additionally store related data in a database 530A and database 530B, as needed.
  • Specific Examples of Equipment
  • Apparatus described herein may be included, for example, in one or more smart devices such as: a mobile phone, a tablet, a traditional computer such as a laptop or microcomputer, or an Internet ready TV.
  • The above described platform may be used to implement various features and systems available to users. For example, in some embodiments, a user will provide all or most navigation. Software, which is executable upon demand, may be used in conjunction with a processor to provide seamless navigation of 360/3D/panoramic video footage with Directional Audio, switching between multiple 360/3D/panoramic cameras, so that the user experiences continuous audio and video.
  • Additional embodiments may include automatic, predetermined navigation amongst the multiple 360/3D/panoramic cameras of the system described. Navigation may be automatic to the end user, with the experience controlled by the director, producer, or other designated staff based on their own judgment.
  • Still other embodiments allow a user to record a user defined sequence of image and audio content with navigation of 360/3D/panoramic video footage, Directional Audio, and switching between multiple 360/3D/panoramic cameras. In some embodiments, user defined recordations may include audio, text or image data overlays. A user may thereby act as a producer with the Multi-Vantage Point data, including directional video and audio data, and record a User Produced multimedia segment of a performance. The User Produced segment may be made available via a distributed network, such as the Internet, for viewers to view and, in some embodiments, further edit the multimedia segments themselves.
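  • A User Produced segment could be represented as a plain, shareable edit decision list; the JSON shape below is an assumption sketched for illustration:

```javascript
// Hypothetical "user producer" recording: time-coded vantage point cuts plus
// optional overlays, shareable and further editable by other viewers.
const userProducedSegment = {
  author: "user123",
  eventId: "show-2013-12-04",
  cuts: [
    { vantagePointId: "main-performer", startMs: 0,     endMs: 42000 },
    { vantagePointId: "dancers",        startMs: 42000, endMs: 61000 },
    { vantagePointId: "stage-right",    startMs: 61000, endMs: 90000 },
  ],
  overlays: [
    { type: "text", atMs: 5000, body: "Opening number" },
  ],
};
```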
  • Directional Audio may be captured via an apparatus that is located at a Vantage Point and records audio from a directional perspective, such as a directional microphone in electrical communication with an audio storage device. Other apparatus that is not directional, such as an omnidirectional microphone, may also be used to capture and record a stream of audio data; however, such data is not directional audio data. A user may be provided a choice of audio streams captured from a particular vantage point at a particular time in a sequence.
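  • Offering that choice might look like the following sketch (the catalog shape and URLs are assumptions for illustration): each recorded stream notes whether it is directional, and the player lists every stream available at the chosen vantage point:

```javascript
// Hypothetical audio catalog: directional and omnidirectional streams
// recorded at each vantage point.
const audioCatalog = [
  { vantagePointId: "stage-left", directional: true,  direction: "toward-drums", url: "/a/sl_drums.aac" },
  { vantagePointId: "stage-left", directional: false, direction: null,           url: "/a/sl_omni.aac"  },
];

// Streams the user may choose from at a given vantage point.
function audioChoicesFor(vantagePointId) {
  return audioCatalog.filter(a => a.vantagePointId === vantagePointId);
}
```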
  • In some embodiments, a User may have manual control in auto mode. The User is able to take manual control via actions such as a swipe or equivalent gesture to switch between Multiple Vantage Points (MVPs) or between HD and 360 video.
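  • A minimal sketch of that swipe gesture, using standard DOM touch events (switchToSource, nextVantagePoint and previousVantagePoint are hypothetical helpers, not from the disclosure):

```javascript
// Swipe horizontally to switch vantage points while keeping playback position.
// A real player would wait for the new source to load before seeking.
const videoEl = document.querySelector("video");
let touchStartX = 0;

videoEl.addEventListener("touchstart", e => {
  touchStartX = e.touches[0].clientX;
});

videoEl.addEventListener("touchend", e => {
  const dx = e.changedTouches[0].clientX - touchStartX;
  if (Math.abs(dx) > 50) {            // treat as a deliberate swipe
    const t = videoEl.currentTime;    // remember where the viewer was
    switchToSource(dx > 0 ? previousVantagePoint() : nextVantagePoint());
    videoEl.currentTime = t;          // resume at the same instant
  }
});
```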
  • In some additional embodiments, a Mobile Remote App may auto-launch as soon as video is transferred from an iPad to a TV using Apple AirPlay. Using tools such as, for example, Apple's AirPlay technology, a user may stream a video feed from an iPad or iPhone to a TV that is connected to an Apple TV. When a user moves the video stream to the TV, the mobile remote application automatically launches on the iPad or iPhone that is connected/synched to the system. Computer systems may be used to display video streams and switch seamlessly between 360/3D/Panoramic videos and High Definition (HD) videos.
  • In some embodiments that implement Manual control, executable software allows a user to switch between 360/3D/Panoramic video and High Definition (HD) video without interruptions to a viewing experience of the user. The user is able to switch between HD and any of the multiple vantage points coming as part of the panoramic video footage.
  • In some embodiments that implement Automatic control, a computer implemented method (software) allows its users to experience seamless navigation between 360/3D/Panoramic video and HD video. Navigation is controlled either by a producer, a director, or a trained technician based on their own judgment.
  • Manual Control and Automatic Control systems may be run on a portable computer such as a mobile phone, a tablet, or a traditional computer such as a laptop or microcomputer. In various embodiments, functionality may include the following (a sketch of one time-coded tag record follows this list):
    • Panoramic Video Interactivity: tagging of human and inanimate objects in panoramic video footage, with interactivity for the user in tagging humans as well as inanimate objects, and sharing of these tags in real time with friends or followers in the user's social network/social graph;
    • Panoramic Image Slices: the ability to slice images/photos out of panoramic videos, with real time processing that allows users to slice images of any size from panoramic video footage over a computer;
    • the ability to share panoramic image slices from panoramic videos with other users of a similar application via email, SMS (short message service), or social networks;
    • the ability to “tag” human and inanimate objects within Panoramic Image Slices, with real time “tagging” of human and inanimate objects in the panoramic image;
    • allowing users to purchase objects or items of interest in interactive panoramic video footage, via a content and commerce layer on top of the video footage that recognizes objects already tagged for purchase or for adding to a user's wish list;
    • the ability to compare footage from various camera sources in real time: real time comparison of panoramic video footage from multiple cameras, captured by multiple users or otherwise, to identify the best footage based on aspects such as visual clarity, audio clarity, lighting, focus and other details;
    • recognition of unique users based on the devices used to capture the video footage (brand, model number, MAC address, IP address, etc.);
    • radar navigation showing which camera footage is being displayed on the screen amongst the many other sources of camera feeds, and a navigation matrix of panoramic video viewports in a particular geographic location or venue;
    • user generated content that can be embedded on top of the panoramic video and mapped exactly to the time codes of the video feeds, with time code mapping between production quality video feeds and user generated video feeds;
    • user interactivity, such as the ability to remotely vote for a song or an act while watching a panoramic video and affect the outcome at the venue; the software allows for interactivity on the user front and also aggregates the feedback in a backend platform accessible by individuals who can act on the interactive data;
    • the ability to offer “bidding” capability to a panoramic video audience over a computer network, with aspects of gamification wherein results may be based on multiple user participation (triggers based on conditions such as number of bids, type of bids and timing); and
    • a Heads Up Display (HUD) that identifies animate and inanimate objects in the live video feed, wherein identification may be tracked at an end server and associated data made available to front end clients.
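  • As referenced above, a time-coded tag record might be shaped as follows (the fields and values are assumptions sketched for illustration): a tag anchors a label to panoramic coordinates at a video time code so it can be shared with a social graph or linked to a purchasable item:

```javascript
// Hypothetical tag record for panoramic video interactivity.
const tag = {
  eventId: "show-2013-12-04",
  vantagePointId: "center-stage",
  timecodeMs: 612500,
  position: { yawDegrees: 48, pitchDegrees: -5 }, // where in the panorama
  subject: "lead guitarist",
  taggedBy: "user123",
  purchasable: false,            // true when linked to a commerce layer item
  sharedWith: ["social-graph"],
};
```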
  • Specific Examples of User Interface Functionality
  • In some embodiments, a player according to the present invention includes a Panoramic HTML5 Video Player, or other standardized player, referred to herein as a “KingPlaya”. KingPlaya is developed as an alternative to traditional Flash and video processing intensive panoramic players previously known in the industry.
  • KingPlaya may include a video processing configuration and a jQuery JavaScript library. When multiple camera elements are stitched into a single two dimensional (2D) panoramic image, without a KingPlaya video configuration, processor intensive video processing is required to recreate the image in the player, such as stitching a far right-hand edge of a first frame to a far left-hand edge of a second frame of a panoramic image in a repeated fashion.
  • The present invention stitches multiple or all camera elements, thereby creating a digital seam. The difference is that a stitch/seam created according to the present invention can be reassembled in the player without having to process any video whatsoever, as it is a clean vertical line break. The video configuration sections of this document address this in more detail.
  • A second element of KingPlaya may include a jQuery implementation. This implementation is dependent on the above video configuration, as that configuration allows the video to be reassembled with minimal to no processing. KingPlaya then duplicates the video into two identical frames sitting side by side in an A-B configuration, which seam together flawlessly. As the user navigates the 360 degree video, the A-B configuration is constantly re-evaluated by a return function in the jQuery draggable code. KingPlaya constantly asks itself whether an A-B configuration or a B-A configuration is more appropriate depending on the direction in which the user is moving the panoramic image. KingPlaya then rearranges the A-B or B-A configuration in real time so that by the time the user reaches the frame edge of either the A or B frame (depending on which is currently in the viewport), the alternative frame is already aligned and the user can continue panning. This process may be repeated in either direction without technical limitation.
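  • The wraparound can be sketched with jQuery UI's draggable (the specification names a jQuery implementation but not this exact code; the element ids and frame width are assumptions). Because the seam is a clean vertical break, snapping the strip position back by exactly one frame width lands on identical pixels, so panning never runs out of image:

```javascript
// #strip holds two identical copies of the panoramic video, #frameA and
// #frameB, side by side; FRAME_W is the pixel width of one copy.
const FRAME_W = 1920;

$("#strip").draggable({
  axis: "x",
  drag: function (event, ui) {
    // Re-evaluate A-B vs B-A from the drag position: keep one full copy
    // ahead of the viewport in the direction the user is panning.
    if (ui.position.left <= -FRAME_W) {
      ui.position.left += FRAME_W; // B fully in view: treat the strip as B-A
    } else if (ui.position.left >= 0) {
      ui.position.left -= FRAME_W; // A fully in view: treat the strip as A-B
    }
  },
});
```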
  • CONCLUSION
  • A number of embodiments of the present invention have been described. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the present invention.
  • Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
  • Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
  • Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
  • Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the claimed invention.

Claims (20)

1. Apparatus for capturing audio and video data of an event from multiple vantage points, the apparatus comprising:
multiple arrays of image capture devices deployed at multiple vantage points in relation to an event subject location;
one or more high definition cameras deployed in at least one vantage point in relation to an event subject location; and
a content delivery network for transmitting image data captured by the multiple arrays of image capture devices.
2. The apparatus of claim 1 additionally comprising apparatus for muxing image data captured by the multiple arrays of image capture devices and the one or more high definition cameras, wherein the content delivery network transmits muxed image data.
3. The apparatus of claim 2 additionally comprising a satellite uplink for transmitting the muxed image data.
4. The apparatus of claim 3 wherein the multiple arrays of image capture devices comprise three or more cameras arranged to capture image data in 360 degrees from respective multiple vantage points in relation to the event subject location.
5. The apparatus of claim 3 wherein the image capture devices comprise digital cameras.
6. The apparatus of claim 3 wherein the one or more high definition cameras comprise 1920×1080 pixel digital video cameras based on CCD technology.
7. The apparatus of claim 3 wherein the multiple vantage points comprise more than one disparate physical point of image capture at a venue.
8. The apparatus of claim 3 wherein the image capture devices are positioned to capture image data in more than one disparate viewing direction.
9. The apparatus of claim 3 additionally comprising editorial apparatus for combining images not captured by the image capture devices with captured image data.
10. The apparatus of claim 3 additionally comprising a motor vehicle transportable from a first location to a second location with electronic equipment capable of transmitting captured image data, audio data and video data in an electronic format, wherein the transmission is to a location remote from the location of the motor vehicle.
11. The apparatus of claim 3 additionally comprising apparatus for capturing audio data from a vantage point and from a direction such that the audio data includes at least one quality that differs from audio data captured from the vantage point and a second direction or from an omni-direction.
12. The apparatus of claim 3 wherein the image capture devices are arranged to capture image data at an angle oblique to the event subject location.
13. The apparatus of claim 3 wherein the image capture devices are arranged to capture image data at an angle perpendicular to the event subject location.
14. The apparatus of claim 3 wherein the image capture devices are arranged to capture image data at an angle planar to the event subject location.
15. The apparatus of claim 3 wherein the image capture devices are arranged to capture image data over arcs of between about 120 degrees and 270 degrees.
16. The apparatus of claim 3 wherein the image capture devices are arranged to capture image data over arcs of between about 270 degrees and 360 degrees.
17. The apparatus of claim 4 additionally comprising a processor for stitching image data into a cohesive image data presentation.
18. The apparatus of claim 17 additionally comprising a soundboard mixer synchronizing audio data with image data.
19. The apparatus of claim 18 wherein the processor and the soundboard mixer are enclosed within a transportable platform.
20. The apparatus of claim 19 wherein the transportable platform comprises a container attachable to a semi-truck.
US14/096,869 2013-11-05 2013-12-04 Multiple vantage point viewing platform and user interface Abandoned US20150124171A1 (en)

Priority Applications (12)

Application Number Priority Date Filing Date Title
US14/096,869 US20150124171A1 (en) 2013-11-05 2013-12-04 Multiple vantage point viewing platform and user interface
US14/689,922 US20150221334A1 (en) 2013-11-05 2015-04-17 Audio capture for multi point image capture systems
US14/719,636 US20150256762A1 (en) 2013-11-05 2015-05-22 Event specific data capture for multi-point image capture systems
US14/754,432 US20150304724A1 (en) 2013-11-05 2015-06-29 Multi vantage point player
US14/754,446 US10664225B2 (en) 2013-11-05 2015-06-29 Multi vantage point audio player
US14/941,582 US10296281B2 (en) 2013-11-05 2015-11-14 Handheld multi vantage point player
US14/941,584 US10156898B2 (en) 2013-11-05 2015-11-14 Multi vantage point player with wearable display
US15/943,525 US20180227572A1 (en) 2013-11-05 2018-04-02 Venue specific multi point image capture
US15/943,504 US20180227504A1 (en) 2013-11-05 2018-04-02 Switchable multiple video track platform
US15/943,540 US20180227694A1 (en) 2013-11-05 2018-04-02 Audio capture for multi point image capture systems
US15/943,471 US20180227501A1 (en) 2013-11-05 2018-04-02 Multiple vantage point viewing platform and user interface
US15/943,550 US20180227464A1 (en) 2013-11-05 2018-04-02 Event specific data capture for multi-point image capture systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361900093P 2013-11-05 2013-11-05
US14/096,869 US20150124171A1 (en) 2013-11-05 2013-12-04 Multiple vantage point viewing platform and user interface

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/687,752 Continuation-In-Part US20150222935A1 (en) 2013-11-05 2015-04-15 Venue specific multi point image capture

Related Child Applications (3)

Application Number Title Priority Date Filing Date
US14/532,659 Continuation-In-Part US20150124048A1 (en) 2013-11-05 2014-11-04 Switchable multiple video track platform
US14/689,922 Continuation-In-Part US20150221334A1 (en) 2013-11-05 2015-04-17 Audio capture for multi point image capture systems
US14/754,446 Continuation-In-Part US10664225B2 (en) 2013-11-05 2015-06-29 Multi vantage point audio player

Publications (1)

Publication Number Publication Date
US20150124171A1 true US20150124171A1 (en) 2015-05-07

Family

ID=53006740

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/096,869 Abandoned US20150124171A1 (en) 2013-11-05 2013-12-04 Multiple vantage point viewing platform and user interface
US14/532,659 Abandoned US20150124048A1 (en) 2013-11-05 2014-11-04 Switchable multiple video track platform

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/532,659 Abandoned US20150124048A1 (en) 2013-11-05 2014-11-04 Switchable multiple video track platform

Country Status (1)

Country Link
US (2) US20150124171A1 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9261526B2 (en) 2010-08-26 2016-02-16 Blast Motion Inc. Fitting system for sporting equipment
US9619891B2 (en) 2010-08-26 2017-04-11 Blast Motion Inc. Event analysis and tagging system
US9607652B2 (en) 2010-08-26 2017-03-28 Blast Motion Inc. Multi-sensor event detection and tagging system
US9940508B2 (en) 2010-08-26 2018-04-10 Blast Motion Inc. Event detection, confirmation and publication system that integrates sensor data and social media
US9626554B2 (en) 2010-08-26 2017-04-18 Blast Motion Inc. Motion capture system that combines sensors with different measurement ranges
US9604142B2 (en) 2010-08-26 2017-03-28 Blast Motion Inc. Portable wireless mobile device motion capture data mining system and method
US9396385B2 (en) 2010-08-26 2016-07-19 Blast Motion Inc. Integrated sensor and video motion analysis method
US20170257414A1 (en) * 2012-01-26 2017-09-07 Michael Edward Zaletel Method of creating a media composition and apparatus therefore
US20150058709A1 (en) * 2012-01-26 2015-02-26 Michael Edward Zaletel Method of creating a media composition and apparatus therefore
US10124230B2 (en) 2016-07-19 2018-11-13 Blast Motion Inc. Swing analysis method using a sweet spot trajectory
US11577142B2 (en) 2015-07-16 2023-02-14 Blast Motion Inc. Swing analysis system that calculates a rotational profile
US10974121B2 (en) 2015-07-16 2021-04-13 Blast Motion Inc. Swing quality measurement system
US9694267B1 (en) 2016-07-19 2017-07-04 Blast Motion Inc. Swing analysis method using a swing plane reference frame
US11565163B2 (en) 2015-07-16 2023-01-31 Blast Motion Inc. Equipment fitting system that compares swing metrics
CA3031040C (en) 2015-07-16 2021-02-16 Blast Motion Inc. Multi-sensor event correlation system
US10265602B2 (en) 2016-03-03 2019-04-23 Blast Motion Inc. Aiming feedback system with inertial sensors
US20170316806A1 (en) * 2016-05-02 2017-11-02 Facebook, Inc. Systems and methods for presenting content
WO2017218962A1 (en) * 2016-06-16 2017-12-21 Blast Motion Inc. Event detection, confirmation and publication system that integrates sensor data and social media
JP7404067B2 (en) * 2016-07-22 2023-12-25 ドルビー ラボラトリーズ ライセンシング コーポレイション Network-based processing and delivery of multimedia content for live music performances
CN107800946A (en) * 2016-09-02 2018-03-13 丰唐物联技术(深圳)有限公司 A kind of live broadcasting method and system
CN108289228B (en) 2017-01-09 2020-08-28 阿里巴巴集团控股有限公司 Panoramic video transcoding method, device and equipment
US10547704B2 (en) * 2017-04-06 2020-01-28 Sony Interactive Entertainment Inc. Predictive bitrate selection for 360 video streaming
FR3065604B1 (en) * 2017-04-21 2019-06-07 Peugeot Citroen Automobiles Sa METHOD AND DEVICE FOR CONTROLLING THE TRANSMISSIONS AND RECEPTIONS OF FRAMES IN A BIDIRECTIONAL VIDEO NETWORK
US10786728B2 (en) 2017-05-23 2020-09-29 Blast Motion Inc. Motion mirroring system that incorporates virtual environment constraints
US12112773B2 (en) 2021-04-22 2024-10-08 Andrew Levin Method and apparatus for production of a real-time virtual concert or collaborative online event

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9185361B2 (en) * 2008-07-29 2015-11-10 Gerald Curry Camera-based tracking and position determination for sporting events using event information and intelligence data extracted in real-time from position information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020145660A1 (en) * 2001-02-12 2002-10-10 Takeo Kanade System and method for manipulating the point of interest in a sequence of images
US20050117019A1 (en) * 2003-11-26 2005-06-02 Edouard Lamboray Method for encoding and decoding free viewpoint videos
US8004557B2 (en) * 2007-08-21 2011-08-23 Sony Taiwan Limited Advanced dynamic stitching method for multi-lens camera system
US20120113264A1 (en) * 2010-11-10 2012-05-10 Verizon Patent And Licensing Inc. Multi-feed event viewing
US20130093899A1 (en) * 2011-10-18 2013-04-18 Nokia Corporation Method and apparatus for media content extraction

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11314936B2 (en) 2009-05-12 2022-04-26 JBF Interlude 2009 LTD System and method for assembling a recorded composition
US12265975B2 (en) 2010-02-17 2025-04-01 JBF Interlude 2009 LTD System and method for data mining within interactive multimedia
US11232458B2 (en) 2010-02-17 2022-01-25 JBF Interlude 2009 LTD System and method for data mining within interactive multimedia
US20150139601A1 (en) * 2013-11-15 2015-05-21 Nokia Corporation Method, apparatus, and computer program product for automatic remix and summary creation using crowd-sourced intelligence
US11102543B2 (en) * 2014-03-07 2021-08-24 Sony Corporation Control of large screen display using wireless portable computer to pan and zoom on large screen display
US11501802B2 (en) 2014-04-10 2022-11-15 JBF Interlude 2009 LTD Systems and methods for creating linear video from branched video
US11900968B2 (en) 2014-10-08 2024-02-13 JBF Interlude 2009 LTD Systems and methods for dynamic video bookmarking
US11348618B2 (en) 2014-10-08 2022-05-31 JBF Interlude 2009 LTD Systems and methods for dynamic video bookmarking
US11412276B2 (en) 2014-10-10 2022-08-09 JBF Interlude 2009 LTD Systems and methods for parallel track transitions
US12132962B2 (en) 2015-04-30 2024-10-29 JBF Interlude 2009 LTD Systems and methods for nonlinear video playback using linear real-time video players
US12119030B2 (en) 2015-08-26 2024-10-15 JBF Interlude 2009 LTD Systems and methods for adaptive and responsive video
US11804249B2 (en) * 2015-08-26 2023-10-31 JBF Interlude 2009 LTD Systems and methods for adaptive and responsive video
US20190392868A1 (en) * 2015-08-26 2019-12-26 JBF Interlude 2009 LTD Systems and methods for adaptive and responsive video
US11164548B2 (en) 2015-12-22 2021-11-02 JBF Interlude 2009 LTD Intelligent buffering of large-scale video
US11856271B2 (en) 2016-04-12 2023-12-26 JBF Interlude 2009 LTD Symbiotic interactive video
CN105959716A (en) * 2016-05-13 2016-09-21 武汉斗鱼网络科技有限公司 Method and system for automatically recommending definition based on user equipment
US11553024B2 (en) 2016-12-30 2023-01-10 JBF Interlude 2009 LTD Systems and methods for dynamic weighting of branched video paths
US11528534B2 (en) 2018-01-05 2022-12-13 JBF Interlude 2009 LTD Dynamic library display for interactive videos
US11601721B2 (en) 2018-06-04 2023-03-07 JBF Interlude 2009 LTD Interactive video dynamic adaptation and user profiling
CN111564116A (en) * 2019-02-14 2020-08-21 迪士尼企业公司 Multiple vantage point light field picture element display
US10616547B1 (en) * 2019-02-14 2020-04-07 Disney Enterprises, Inc. Multi-vantage point light-field picture element display
US11563915B2 (en) 2019-03-11 2023-01-24 JBF Interlude 2009 LTD Media content presentation
US11997413B2 (en) 2019-03-11 2024-05-28 JBF Interlude 2009 LTD Media content presentation
US11490047B2 (en) 2019-10-02 2022-11-01 JBF Interlude 2009 LTD Systems and methods for dynamically adjusting video aspect ratios
US11750864B2 (en) 2019-12-18 2023-09-05 Yerba Buena Vr, Inc. Methods and apparatuses for ingesting one or more media assets across a video platform
US11284141B2 (en) 2019-12-18 2022-03-22 Yerba Buena Vr, Inc. Methods and apparatuses for producing and consuming synchronized, immersive interactive video-centric experiences
US11245961B2 (en) 2020-02-18 2022-02-08 JBF Interlude 2009 LTD System and methods for detecting anomalous activities for interactive videos
US12096081B2 (en) 2020-02-18 2024-09-17 JBF Interlude 2009 LTD Dynamic adaptation of interactive video players using behavioral analytics
US12047637B2 (en) 2020-07-07 2024-07-23 JBF Interlude 2009 LTD Systems and methods for seamless audio and video endpoint transitions
US11882337B2 (en) 2021-05-28 2024-01-23 JBF Interlude 2009 LTD Automated platform for generating interactive videos
US12284425B2 (en) 2021-05-28 2025-04-22 JBF Interlude 2009 LTD Automated platform for generating interactive videos
US12155897B2 (en) 2021-08-31 2024-11-26 JBF Interlude 2009 LTD Shader-based dynamic video manipulation
US11934477B2 (en) 2021-09-24 2024-03-19 JBF Interlude 2009 LTD Video player integration within websites
US12316905B2 (en) 2024-06-20 2025-05-27 JBF Interlude 2009 LTD Systems and methods for seamless audio and video endpoint transitions

Also Published As

Publication number Publication date
US20150124048A1 (en) 2015-05-07

Similar Documents

Publication Publication Date Title
US20150124171A1 (en) Multiple vantage point viewing platform and user interface
US20180227501A1 (en) Multiple vantage point viewing platform and user interface
US11025978B2 (en) Dynamic video image synthesis using multiple cameras and remote control
EP3238445B1 (en) Interactive binocular video display
CN106792246B (en) Method and system for interaction of fusion type virtual scene
JP6216513B2 (en) Content transmission device, content transmission method, content reproduction device, content reproduction method, program, and content distribution system
US10664225B2 (en) Multi vantage point audio player
US20150222935A1 (en) Venue specific multi point image capture
US20150304724A1 (en) Multi vantage point player
EP2884751A1 (en) A multimedia platform for generating and streaming content based on data provided by capturing devices corresponding to multiple viewpoints including subjective viewpoints
US9654813B2 (en) Method and system for synchronized multi-venue experience and production
US10205969B2 (en) 360 degree space image reproduction method and system therefor
US10156898B2 (en) Multi vantage point player with wearable display
US20150221334A1 (en) Audio capture for multi point image capture systems
US20160073013A1 (en) Handheld multi vantage point player
US20110304735A1 (en) Method for Producing a Live Interactive Visual Immersion Entertainment Show
WO2012100114A2 (en) Multiple viewpoint electronic media system
US20180227572A1 (en) Venue specific multi point image capture
CN101742096A (en) Multi-view interactive television system and method
US9438937B1 (en) Video server that provides a user tailored video stream consistent with user input using content of a primary stream and an enhanced stream
US20180227694A1 (en) Audio capture for multi point image capture systems
JP2020524450A (en) Transmission system for multi-channel video, control method thereof, multi-channel video reproduction method and device thereof
CN105704399A (en) Playing method and system for multi-picture television program
US20180227504A1 (en) Switchable multiple video track platform
WO2018027067A1 (en) Methods and systems for panoramic video with collaborative live streaming

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
