US9773350B1 - Systems and methods for greater than 360 degree capture for virtual reality - Google Patents
- Publication number
- US9773350B1 (Application US14/855,180)
- Authority
- US
- United States
- Prior art keywords
- subject
- virtual reality
- reality content
- real
- cameras
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active - Reinstated
Classifications
- G06T19/006—Mixed reality
- G02B13/06—Panoramic objectives; so-called "sky lenses" including panoramic objectives having reflecting surfaces
- G02B27/017—Head-up displays, head mounted
- G02B27/0172—Head mounted characterised by optical features
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
- H04N5/23238
- H04N5/247
- H04N5/77—Interface circuits between an apparatus for recording and a television camera
- H04N5/772—Interface circuits between a recording apparatus and a television camera, the recording apparatus and the television camera being placed in the same enclosure
- H04N9/87—Regeneration of colour television signals
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
Description
- the systems and methods described herein generally relate to systems and methods for capture of video information, generation of virtual reality content based thereon, and playback of the generated virtual reality content, and, in particular, various implementations relate to systems and methods to create a true point-of-view (POV) experience in virtual reality.
- simulated virtual reality is a convenient way to share certain experiences, information, and/or views with other people. Such simulations may be based on three-dimensional models of real-world objects. By adding more photo-realistic elements to the virtual reality content, the visual quality of the user experience may improve. High-definition real-world video information may be captured and subsequently processed to create objects and/or elements within a virtual reality that have a higher image quality than comparable images from computer models.
- One aspect of the disclosed technology relates to systems for one or more of the following features: capturing video information in the real world, generating and storing virtual reality content based on the captured video information, and retrieving and playing back the generated virtual reality content for users.
- video information may include both visual information and audio information (e.g. captured sound).
- "playing back" and derivatives thereof, with regard to video information, virtual reality content, and/or other information, may include any combination of retrieving information from storage for use and/or rendering, and streaming information for use and/or rendering.
- the captured video information may be live action.
- live action may refer to capturing live events as they occur.
- the video information may be captured in real-time, or in near-real-time.
- real-time may refer to capturing events at a rate sufficient to enable playback at the same rate as live action.
- near-real-time may refer to capturing events at a rate sufficient to enable playback at any rate ranging from one-hundredth of the rate of live action to one hundred times the rate of live action, e.g. using time-lapse playback.
- the captured video information may be first-person, as a user sees and experiences the real world around him.
- the disclosed technology creates a point-of-view experience in virtual reality that is closer to true point-of-view than existing systems employing a single camera, a pair of cameras that rotate and/or swivel around an axis between the pair of cameras, existing spherical capture configurations, and/or other existing virtual reality live capture hardware and/or techniques.
- the system may include one or more of a set of cameras, electronic storage, one or more support structures, one or more sensors, an electronic display, a user interface, one or more physical processors, computer program components, and/or other components.
- the term “camera” may include any device that captures images, including but not limited to a single lens-based camera, a sensor array, a solid-state camera, a mechanical camera, a digital camera, and/or other cameras.
- the set of cameras may be configured to capture real-world video information.
- the set of cameras may be configured to capture real-world information in real-time and/or in high-definition.
- one or more cameras in the set of cameras may be positioned at or near eye-level of a person. This person may be referred to as the first user, the first subject, the first person, the original user, the experience-capturing user, and/or other names that are similar to these references, including derivatives thereof.
- one or more cameras in the set of cameras may be arranged to capture video information that includes images captured in a downward direction, e.g. from the head of a person toward the ground.
- one or more cameras in the set of cameras may capture video information that includes images of a person who is supporting, carrying, and/or wearing the cameras, e.g. of the body of such a person, i.e. the first subject, etc.
- one or more cameras in the set of cameras may be arranged and configured to capture video information that includes the front and back of the torso and/or body of the first subject.
- one or more cameras in the set of cameras may be arranged to capture video information from multiple points of view.
- the multiple points of view may include one or more points of view that correspond to the first subject looking over his right shoulder and/or to his right, one or more points of view that correspond to the first subject looking over his left shoulder and/or to his left, one or more points of view that correspond to the first subject looking backwards, and/or other points of view.
- use of the term “over the shoulder” and derivatives thereof may refer to one or more points of view in which a user turns his or her head left, right, and/or back.
- the term “over the shoulder” and derivatives neither implies nor is limited to looking down.
- a particular point of view that corresponds to a subject looking over his left shoulder may include the point of view in which the subject turns his or her head left (e.g. by any number of degrees between zero and about 180 degrees) and views in any vertical direction.
- the vertical direction may be level with a default straight-forward point of view, or may be in a downward direction (e.g. by any number of degrees between zero and about 120 degrees), or may be in an upward direction (e.g. by any number of degrees between zero and about 120 degrees), and/or any combination thereof.
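- as an illustration of these angle conventions, the sketch below (not part of the patent) converts a horizontal head-turn angle and a vertical viewing angle into a unit gaze direction; the axis convention is an assumption chosen for the example.

```python
import math

def gaze_vector(yaw_deg: float, pitch_deg: float) -> tuple[float, float, float]:
    """Unit gaze direction for a head turned yaw_deg from straight ahead
    (positive = right) and pitched pitch_deg (positive = up). The axis
    convention (x right, y up, z forward) is assumed for illustration."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

print(gaze_vector(0, 0))     # (0.0, 0.0, 1.0): straight ahead
print(gaze_vector(90, -30))  # over the right shoulder, looking downward
```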
- the support structure may be configured to support one or more cameras.
- the support structure may be configured to be worn and/or carried by a subject, e.g. such that one or more cameras are positioned at or near eye-level of the (first) subject.
- the support structure may be configured such that one or more cameras are positioned at least at eye-level of the (first) subject.
- the support structure may be configured such that one or more cameras are positioned less than 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 inches above eye-level of the subject.
- the support structure may be configured such that one or more cameras are positioned less than 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 inches below eye-level of the subject.
- as a camera is positioned closer to eye-level, the point-of-view experience may be better.
- capturing video with a mounting system that connects to a backpack may result in an inferior experience because the one or more cameras may be mounted more than 6 inches higher than the actual eye-level of a subject, e.g. 1 foot higher, 2 feet higher, or more than 2 feet higher.
- the system may include a display support structure configured to support an electronic display.
- the display support structure may be configured to be carried and/or worn by users.
- a display support structure may include a helmet, goggles, head-up display, head-mounted display, and/or other structure to support an electronic display, a projector, and/or other structure to visually present information to a user.
- the one or more processors may be configured (e.g. via executable instructions) to execute computer program components.
- the computer program components may include a camera control component, a generation component, a storage component, a compression component, a playback component, a parameter determination component, a transmission component, and/or other components.
- the system may include and/or be embedded in a client computing platform.
- client computing platforms may include one or more of a desktop computer, a laptop computer, a handheld computer, a NetBook, a mobile telephone, a “smart phone”, a tablet, a mobile computing platform, a gaming console, a television, an electronic device, and/or other computing platforms. Users may interact with any of the computing platforms described in this disclosure, and/or any combination of computing platforms described in this disclosure.
- the camera control component may be configured to control one or more cameras.
- the camera control component may be configured to control (e.g. manage the operation of) a set of cameras to capture real-world video information, e.g. in real-time.
- the camera control component may be configured to control the operational characteristics of multiple cameras, including but not limited to aperture timing, exposure, focal length, depth of field, focus, light metering, white balance, resolution, frame rate, compression parameters, video format, sound parameters, and/or other operational characteristics of video cameras.
- the camera control component may be configured to synchronize multiple cameras, e.g. turn multiple cameras on and/or off.
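- by way of illustration, the following is a minimal sketch of how a camera control component might apply shared settings and synchronize the start of capture across a set of cameras; the `Camera` class and its methods are hypothetical stand-ins for a device SDK, not an API described in the patent.

```python
import time
from dataclasses import dataclass

@dataclass
class Camera:
    """Hypothetical camera handle; a real system would wrap a device SDK."""
    name: str

    def configure(self, **settings):
        # e.g. exposure, white balance, frame rate, resolution
        print(f"{self.name}: applying {settings}")

    def start(self, start_at: float):
        # Wait until the shared start time so all cameras begin together.
        time.sleep(max(0.0, start_at - time.time()))
        print(f"{self.name}: capturing")

def synchronized_start(cameras, settings, delay=1.0):
    """Apply identical settings, then start all cameras at one shared instant."""
    for cam in cameras:
        cam.configure(**settings)
    start_at = time.time() + delay  # common start time in the near future
    for cam in cameras:
        cam.start(start_at)

synchronized_start(
    [Camera("15a"), Camera("15b"), Camera("15c"), Camera("15d")],
    {"frame_rate": 60, "white_balance": "auto", "resolution": "1080p"},
)
```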
- the generation component may be configured to generate virtual reality content based on captured video information.
- the term “virtual reality” may include both the common concept of virtual reality as well as augmented reality.
- the generation component may be configured to combine video information from multiple cameras to generate virtual reality content.
- virtual reality content refers to information representing a virtual reality that may be played back and/or otherwise experienced, perceived, and/or interacted with by one or more users, e.g. through a virtual reality environment that supports sensory experiences. Note that this user may be a different person than the experience-capturing subject.
- the user experiencing, perceiving, and/or interacting with the virtual reality content may be referred to as the second user, the second subject, the second person, the derivative user, and/or other names that are similar to these references, including derivatives thereof.
- multiple users may experience, perceive, and/or interact with the virtual reality content, e.g. at the same time. These multiple users may be referred to as the second subjects, the second set of people, the derivative users, or the secondary subjects, and/or other names that are similar to these references, including derivatives thereof.
- the generation component may be configured to extract depth information, create stereoscopic information, stitch together and/or blend multiple images, and/or perform other operations on the captured video information.
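- as a concrete, non-authoritative example of the stitching step, OpenCV's high-level stitcher can blend overlapping frames into a single panorama; the file names below are placeholders, and the patent does not prescribe any particular stitching library.

```python
import cv2

# Frames captured at the same instant by adjacent, overlapping cameras
# (file names are placeholders).
frames = [cv2.imread(f) for f in ("cam_15a.png", "cam_15b.png", "cam_15c.png")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched_view.png", panorama)
else:
    # Typical failure: not enough overlap or texture to match features.
    print(f"stitching failed with status {status}")
```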
- the storage component may be configured to store, retrieve, and/or otherwise access information in and/or from electronic storage.
- the storage component may be configured to store virtual reality content in electronic storage.
- the storage component may be configured to communicate with electronic storage to access information, such as, by way of non-limiting example, virtual reality content.
- the compression component may be configured to compress information prior to, during, and/or after storage of the information in electronic storage.
- the compression component may be configured to perform lossless compression.
- the compression component may be configured to compress information with some loss of information upon decompression.
- the compression component may be configured to compress virtual reality content that is based on real-world video information corresponding to a view of a first number of degrees into a portion of a data structure used to store virtual reality content that corresponds to a view of a second number of degrees.
- the first number of degrees may be larger than the second number of degrees.
- the first number of degrees may be 180 degrees.
- the second number of degrees may be less than 180 degrees, 160 degrees, 150 degrees, 140 degrees, 120 degrees, and/or another number of degrees less than the first number of degrees.
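- the patent does not specify the compression scheme; one plausible reading is uniform angular resampling, sketched below, in which a frame spanning 180 degrees of view is decimated to fit a storage slot sized for a narrower view.

```python
import numpy as np

def squeeze_view(pixels: np.ndarray, src_deg: float = 180.0,
                 dst_deg: float = 120.0) -> np.ndarray:
    """Resample a frame spanning src_deg of view into the narrower slot of
    a storage layout sized for dst_deg, by dropping columns uniformly.
    A production system would low-pass filter before decimating."""
    h, w = pixels.shape[:2]
    dst_w = int(round(w * dst_deg / src_deg))
    cols = np.linspace(0, w - 1, dst_w).round().astype(int)
    return pixels[:, cols]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # placeholder 180-degree frame
stored = squeeze_view(frame)                        # fits a 120-degree slot
print(stored.shape)                                 # (1080, 1280, 3)
```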
- the playback component may be configured to play back, present, and/or otherwise use virtual reality content for and/or to a user, e.g. a second user.
- the playback component may be configured to play back virtual reality content on an electronic display (and/or another structure configured to visually present information to a user).
- the virtual reality content may include images of the body of the first user.
- the playback component may be configured to play back virtual reality content to a second user such that, responsive to the second user looking downward during playback, the virtual reality content includes imagery based on images of the body of the first user.
- operation of the playback component may be based on the actions, movement, and/or position of the second user.
- the actions, movement, and/or position of the second user may be measured and/or otherwise reflected by one or more sensors, including but not limited to one or more accelerometers.
- the second user may cause and/or otherwise effectuate a particular viewing direction.
- the playback component may be configured to play back virtual reality content to a second user such that, responsive to the second user looking over his right shoulder during playback, the virtual reality content includes imagery based on video information captured from the point of view of the first subject looking over his right shoulder.
- the playback component may be configured to play back virtual reality content to a second user such that, responsive to the second user looking over his left shoulder during playback, the virtual reality content includes imagery based on video information captured from the point of view of the first subject looking over his left shoulder.
- the playback component may be configured to play back virtual reality content to a second user such that, responsive to the second user changing his viewing direction to his right and/or over his right shoulder, the virtual reality content includes imagery based on video information captured from the point of view of the first subject looking over his right shoulder.
- the playback component may be configured to play back virtual reality content to a second user such that, responsive to the second user changing his viewing direction to his left and/or over his left shoulder, the virtual reality content includes imagery based on video information captured from the point of view of the first subject looking over his left shoulder.
- the playback component may be configured to match a change in viewing direction by the second user with corresponding virtual reality content that includes imagery based on the same viewing direction as the second user.
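- a minimal sketch of such matching logic, assuming the captured points of view are indexed by horizontal viewing direction (the angles below are illustrative, not from the patent).

```python
def nearest_point_of_view(user_yaw_deg: float, captured_yaws_deg: list[float]) -> float:
    """Pick the captured point of view whose horizontal viewing direction
    is closest to the second user's current yaw (angles wrap at 360)."""
    def angular_distance(a: float, b: float) -> float:
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(captured_yaws_deg, key=lambda yaw: angular_distance(user_yaw_deg, yaw))

# Captured points of view: forward, right shoulder, backward, left shoulder.
captured = [0.0, 90.0, 180.0, 270.0]
print(nearest_point_of_view(75.0, captured))   # 90.0 -> right-shoulder content
print(nearest_point_of_view(-60.0, captured))  # 270.0 -> left-shoulder content
```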
- the parameter determination component may be configured to determine parameters based on output signals generated by sensors.
- a sensor may be configured to generate output signals conveying information related to one or more angles of the second user's head and/or the second user's helmet (that may carry an electronic display that presents the virtual reality content).
- the parameter determination component may be configured to determine the one or more angles based on the generated output signals.
- the determined parameters may include one or more of position, location, movement, direction, acceleration, jerk, tilt, angle, derivatives thereof, and/or other parameters (including combinations of these parameters) pertinent to three-dimensional position and/or three-dimensional movement of a particular user, any body part of the particular user, any garment or object carried, worn, and/or otherwise supported by the particular user, and/or any object in a known and/or fixed relation to the particular user.
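- for example, head pitch and roll can be estimated from a static accelerometer reading, with gravity as the reference direction; the sketch below assumes a particular axis convention and ignores the sensor fusion a production system would need (yaw is not observable from gravity alone).

```python
import math

def pitch_and_roll(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Estimate pitch and roll (degrees) from a static accelerometer reading.
    Sign conventions vary by device; this one is assumed for illustration."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device level and at rest: gravity entirely on the z axis.
print(pitch_and_roll(0.0, 0.0, 9.81))  # (0.0, 0.0)
# Tilted so gravity appears partly along x.
print(pitch_and_roll(4.9, 0.0, 8.5))   # pitch of roughly -30 degrees
```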
- the transmission component may be configured to send, stream, and/or otherwise transmit information from the system to one or more second users, the display support structure, and/or another component of the system that is used to play back, present, and/or otherwise use virtual reality content for and/or to one or more users, e.g. the second user.
- the transmission component may be configured to retrieve information (e.g. virtual reality content) from the electronic storage.
- the transmission component may be configured to support streaming content (e.g. virtual reality content) to a user (or to the device a user is using to experience the virtual reality content).
- the sensors may be configured to generate output signals conveying information related to one or more parameters that are pertinent to the three-dimensional position and/or movement of a user.
- such information and/or parameters may be referred to as sensor information.
- the three-dimensional position (or angle, etc.) of a head-mounted display may be used as a proxy for the position (or angle, etc.) of the head of a user, and subsequently used as a basis to determine a viewing direction of the user and/or other pertinent parameters for the playback of virtual reality content.
- the sensors may include one or more accelerometers, tilt sensors, motion sensors, image sensors, cameras, position sensors, global positioning sensors (GPS), vibration sensors, microphones, altitude sensors, pressure sensors, degree-of-freedom sensors (e.g. 6-DOF and/or 9-DOF sensors), a compass, and/or other sensors.
- the electronic storage may be configured to electronically store information, including but not limited to virtual reality content and/or sensor information generated by the system.
- the electronic storage may be configured to electronically store sensor information, e.g. time-stamped sensor information.
- the electronic storage may store instructions, code, commands, models, and/or other information used to execute computer program components.
- the user interface may be configured to provide an interface between one or more users and the system.
- a user may be able to interact with the system by virtue of the user interface.
- the user interface may be configured to receive user input from a particular user, and direct and/or otherwise cause the system to operate in accordance with the received input.
- any association (or relation, or reflection, or indication, or correspondence) involving users/subjects, client computing platforms, cameras, points of view, and/or another entity or object that interacts with any part of the system and/or plays a part in the operation of the system, may be a one-to-one association, a one-to-many association, a many-to-one association, and/or a many-to-many association or N-to-M association (note that N and M may be different numbers greater than 1).
- One aspect of the disclosed technology relates to methods for one or more of the following features: capturing video information in real-time in the real world, generating and storing virtual reality content based on the captured video information, and retrieving and playing back the virtual reality content for users.
- FIG. 1 illustrates a system configured for one or more of capturing video information in real-time in the real world, generating and storing virtual reality content based on the captured video information, and retrieving and/or playing back the virtual reality content for users, in accordance with one or more implementations.
- FIGS. 2-5 illustrate methods for one or more of capturing video information in real-time in the real world, generating and storing virtual reality content based on the captured video information, and retrieving and playing back the virtual reality content for users, in accordance with one or more implementations.
- FIGS. 6A,6B,7A,7B,7C and 8 illustrate concepts and scenes related to the functionality of a system configured for capturing video information in real-time in the real world, generating and storing virtual reality content based on the captured video information, and retrieving and playing back the virtual reality content for users.
- FIG. 1 illustrates an example system 10 that is configured for one or more of capturing video information in real-time in the real world, generating and storing virtual reality content based on the captured video information, and retrieving and playing back the virtual reality content for users.
- system 10 may include one or more cameras 15 , one or more client computing platforms 14 , one or more sensors 16 , electronic storage 60 , an electronic display 19 , a support structure 17 , a display support structure 18 , a user interface 76 , one or more physical processors 110 , one or more computer program components, and/or other components.
- System 10 may be deployed, at least in part, using a network (e.g., a public network), and/or using commercial web services.
- one or more cameras 15 may include camera 15a, camera 15b, camera 15c, and camera 15d.
- the number of cameras in set of cameras 15 is not intended to be limited in any way by the depiction in FIG. 1 .
- Set of cameras 15 may include 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 16, 20, 36, 48, and/or any number greater than 2 cameras.
- set of cameras 15 may be configured to capture real-world information in real-time and/or in high-definition.
- one or more cameras in set of cameras 15 may be positioned at or near eye-level of a person (e.g. the first subject).
- one or more cameras in set of cameras 15 may be arranged to capture video information that includes images captured in a downward direction, e.g. from the head of a person toward the ground. In some implementations, one or more cameras in set of cameras 15 may capture video information that includes images of a person who is supporting, carrying, and/or wearing the cameras, e.g. of the body of such a person, i.e. the first subject. For example, in some implementations, one or more cameras in set of cameras 15 may be arranged and configured to capture video information that includes the front and back of the torso and/or body of the first subject. This video information may enable body-awareness for the second user (e.g. seeing imagery of a body below when looking down during playback).
- one or more cameras in set of cameras 15 may be arranged to capture real-world video information in real-time in a vertical plane traversing more than 360 degrees, by virtue of multiple cameras having overlapping views.
- the images captured in a downward direction by multiple cameras may overlap by at least 30 degrees, by about 40 degrees, by about 60 degrees, by about 80 degrees, and/or by another number of degrees.
- set of cameras 15 may be configured to capture more than 360 degrees vertically and/or in a vertical plane, e.g. by virtue of cameras having overlapping views and multiple points of view.
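- the arithmetic behind "more than 360 degrees" is straightforward: with overlapping fields of view, the total captured imagery exceeds the 360 degrees of unique viewing directions. The figures below are chosen purely for illustration, using roughly the 40-degree neighbor overlap mentioned above.

```python
# Four cameras spaced 90 degrees apart in a vertical plane, each with a
# 130-degree vertical field of view (illustrative values, not from the patent).
num_cameras = 4
spacing_deg = 90
fov_deg = 130

total_captured = num_cameras * fov_deg    # 520 degrees of imagery captured
overlap_per_pair = fov_deg - spacing_deg  # 40 degrees shared between neighbors
unique_coverage = num_cameras * spacing_deg  # 360 degrees of unique directions

print(total_captured, overlap_per_pair, unique_coverage)  # 520 40 360
```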
- FIG. 6A illustrates a first subject wearing a support structure 17 that carries two cameras: camera 15a and camera 15b.
- FIG. 6A depicts a vertical overlap 66 of about 40 degrees.
- cameras in set of cameras 15 may be arranged to capture video information from multiple points of view.
- the multiple points of view may include one or more points of view that correspond to the first subject looking over his right shoulder (or to his right), one or more points of view that correspond to the first subject looking over his left shoulder (or to his left), one or more points of view that correspond to the first subject looking backwards, and/or other points of view.
- cameras in set of cameras 15 may be arranged to capture video information in multiple vertical viewing directions per horizontal viewing direction.
- cameras in set of cameras 15 may be arranged to capture video information at horizontal viewing directions of zero degrees, 10 degrees to the right, 20 degrees to the right, 30 degrees to the right, 40 degrees to the right, 50 degrees to the right, 60 degrees to the right, 70 degrees to the right, 80 degrees to the right, 90 degrees to the right, 100 degrees to the right, 110 degrees to the right, 120 degrees to the right, 130 degrees to the right, 140 degrees to the right, 150 degrees to the right, 160 degrees to the right, 170 degrees to the right, 180 degrees to the right, 190 degrees to the right (i.e. turning past straight backwards), 200 degrees to the right, 210 degrees to the right, and/or another suitable number of degrees to the right.
- cameras in set of cameras 15 may be arranged to capture video information in multiple vertical viewing directions per horizontal viewing direction, including vertical viewing directions of zero degrees, 10 degrees upward, 20 degrees upward, 30 degrees upward, 40 degrees upward, 50 degrees upward, 60 degrees upward, 70 degrees upward, 80 degrees upward, 90 degrees upward, 100 degrees upward (i.e. turning upward past straight up), 110 degrees upward, 120 degrees upward, 10 degrees downward, 20 degrees downward, 30 degrees downward, 40 degrees downward, 50 degrees downward, 60 degrees downward, 70 degrees downward, 80 degrees downward, 90 degrees downward, 100 degrees downward (i.e. turning downward past straight down), 110 degrees downward, 120 degrees downward, and/or another suitable number of degrees upward or downward.
- multiple cameras in set of cameras 15 may be configured to capture video information from multiple backwards-looking points of view. For example, a particular point of view of the first subject looking backwards over his right shoulder may be different from a particular point of view of the first subject looking over his left shoulder. For example, the specific portion of the body of the first subject that is captured for both backward-looking points of view may be different. The corresponding virtual reality content may be different.
- a first point of view of looking straight ahead and a second point of view of looking to one side may have viewing directions that are, e.g., 90 degrees apart.
- the positions of the cameras capturing both points of view may be at least 6 inches apart, about 10 inches apart, about 12 inches apart, about 18 inches apart, and/or another distance apart. For example, the positions may be less than 18 inches apart.
- cameras in set of cameras 15 may be arranged to capture video information that more accurately mimics the movement of a person's eyes when he looks to the side and/or over his shoulder. For example, human eyes do not swivel around an axis located between the eyes, but rather move in an arc or curve responsive to a person rotating his or her neck.
- FIG. 6B illustrates a top view of a pair of eyes 68 in three different viewing directions: a viewing direction 69 a that corresponds to a person looking straight ahead, a viewing direction 69 b that corresponds to a person looking to his right by about 45 degrees, and a viewing direction 69 c that corresponds to a person looking to his left by about 45 degrees.
- FIG. 6B illustrates that the position of a pair of eyes moves as a person looks right or left, and that the movement of the eyes forms an arc or curve, for example along the dotted circle in FIG. 6B .
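- the geometry of FIG. 6B can be sketched as follows: the midpoint of the eyes rides on a circle around the neck axis, so the eyes translate along an arc as the head turns rather than swiveling about a fixed point between them. The radii below are rough anatomical guesses for illustration, not values from the patent.

```python
import math

def eye_positions(head_yaw_deg: float, arc_radius: float = 0.09,
                  interpupillary: float = 0.063):
    """Top-down eye positions (meters) for a given head yaw, with the neck
    axis at the origin. The eye midpoint rides on a circle of arc_radius,
    and each eye sits half the interpupillary distance to either side,
    perpendicular to the gaze direction."""
    yaw = math.radians(head_yaw_deg)
    mx, my = arc_radius * math.sin(yaw), arc_radius * math.cos(yaw)
    ox, oy = (interpupillary / 2) * math.cos(yaw), -(interpupillary / 2) * math.sin(yaw)
    return (mx - ox, my - oy), (mx + ox, my + oy)

print(eye_positions(0.0))   # eyes directly in front of the neck axis
print(eye_positions(45.0))  # eye midpoint has translated along the arc
```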
- one or more cameras in set of cameras 15 may be arranged to capture real-world video information in real-time in a horizontal plane traversing more than 360 degrees, by virtue of multiple cameras having overlapping views.
- the images captured in different horizontal directions by multiple cameras may overlap by at least 30 degrees, by about 40 degrees, by about 60 degrees, by about 80 degrees, and/or by another number of degrees.
- set of cameras 15 may be configured to capture more than 360 degrees horizontally and/or in a horizontal plane, e.g. by virtue of cameras having overlapping views and multiple points of view.
- Support structure 17 may be configured to support one or more cameras in set of cameras 15 .
- support structure 17 may be configured to be worn and/or carried by a subject, e.g. such that one or more cameras in set of cameras 15 are positioned at or near eye-level of the (first) subject.
- support structure 17 may be configured such that one or more cameras in set of cameras 15 are positioned at or near eye-level of the (first) subject.
- support structure 17 may be configured such that one or more cameras in set of cameras 15 are positioned less than 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 inches above eye-level of the first subject.
- support structure 17 may be configured such that one or more cameras in set of cameras 15 are positioned less than 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 inches below eye-level of the first subject. As a camera is positioned closer to eye-level, the point-of-view experience may be better.
- support structure 17 may include one or more of a forward-facing side, a backward-facing side, a right-facing side, a left-facing side, and/or other sides.
- a first subset of cameras from set of cameras 15 may be positioned at or near the forward-facing side of support structure 17 to capture video information that includes images of a first area around the first subject's feet. Some portion of the first area may include a second area on the backward-facing side of the first subject's feet, relative to the orientation of support structure 17. This may be similar to the natural view of a person looking down to his feet and seeing not only feet or shoes, but also a small area behind the feet or shoes. By way of non-limiting example, FIGS. 7A-7C illustrate the concept of a first-person view that includes an area around a person's feet.
- FIG. 7A depicts a floor mat 71 that includes numbers that are placed 1 foot apart.
- Floor mat 71 is circular and its diameter is more than 6 feet.
- FIG. 7B depicts a person (e.g. a first subject) standing on floor mat 71 at or near the number zero.
- the person depicted in FIG. 7B is wearing a support structure 17 (similar to a helmet) that carries a set of cameras 15 (each camera depicted as a square).
- FIG. 7C depicts the point-of-view 72 of the same person in FIG. 7B if the person were to look down to his feet.
- Point-of-view 72 includes not only the person's shoes 73 (and part of his body), but also an area behind the person's shoes 73 . As depicted in FIG. 7C , the edge of point-of-view 72 may coincide with the edge of floor mat 71 .
- a second subset of cameras from set of cameras 15 may be positioned at or near the backward-facing side of support structure 17 to capture video information that includes images of a third area around the first subject's feet. Some portion of the third area may include a fourth area on the forward-facing side of the first subject's feet, relative to the orientation of support structure 17 .
- system 10 may include display support structure 18 configured to support electronic display 19 .
- Display support structure 18 may be configured to be carried and/or worn by users.
- display support structure 18 may include a helmet, goggles, head-up display, head-mounted display, and/or other structure to support an electronic display, a projector, and/or other structure to visually present information to a user.
- one or more sensors 16 may include sensor 16a, sensor 16b, sensor 16c, and sensor 16d.
- the number of sensors is not intended to be limited in any way by the depiction in FIG. 1 .
- System 10 may include 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 16, 20, 36, 48, and/or any number greater than 2 sensors.
- One or more physical processors 110 may be configured to execute computer program components.
- the computer program components may include a camera control component 21 , a generation component 22 , a storage component 23 , a compression component 24 , a playback component 25 , a parameter determination component 26 , a transmission component 27 , and/or other components.
- One or more physical processors 110 may be configured to provide information processing capabilities in system 10 .
- physical processor 110 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information.
- although physical processor 110 is shown in FIG. 1 as a single entity, this is for illustrative purposes only.
- physical processor 110 may include a plurality of processing units. These processing units may be physically located within the same device, or physical processor 110 may represent processing functionality of a plurality of devices operating in coordination (e.g., “in the cloud”, and/or other virtualized processing solutions).
- Camera control component 21 may be configured to control one or more cameras (e.g., in set of cameras 15 ). In some implementations, camera control component 21 may be configured to control (e.g. manage the operation of) set of cameras 15 to capture real-world video information, e.g. in real-time. For example, camera control component 21 may be configured to control the operational characteristics of multiple cameras, including but not limited to aperture timing, exposure, focal length, depth of field, focus, light metering, white balance, resolution, frame rate, compression parameters, video format, sound parameters, and/or other operational characteristics of video cameras. For example, camera control component 21 may be configured to synchronize multiple cameras, e.g. turn multiple cameras on and/or off.
- One or more sensors 16 may be configured to generate output signals conveying information related to one or more parameters that are pertinent to the three-dimensional position and/or movement of a user.
- a sensor may be configured to generate output signals conveying information related to one or more angles of the first user's head and/or the first user's helmet.
- information and/or parameters based on the one or more sensors 16 may be captured at the same time as set of cameras 15 captures video information.
- information and/or parameters based on the one or more sensors 16 may be stored in electronic storage 60 .
- Generation component 22 may be configured to generate virtual reality content based on captured video information.
- generation component 22 may be configured to combine video information from multiple cameras to generate virtual reality content.
- generation component 22 may be configured to extract depth information, create stereoscopic information, stitch together and/or blend multiple images, and/or perform other operations on the captured video information.
- Storage component 23 may be configured to store, retrieve, and/or otherwise access information in and/or from electronic storage 60 .
- storage component 23 may be configured to store virtual reality content in electronic storage 60 .
- storage component 23 may be configured to store information and/or parameters based on the one or more sensors 16 in electronic storage 60 .
- storage component 23 may be configured to communicate with electronic storage 60 to access information, such as, by way of non-limiting example, virtual reality content.
- Compression component 24 may be configured to compress information prior to, during, and/or after storage of the information in electronic storage 60.
- compression component 24 may be configured to perform lossless compression.
- compression component 24 may be configured to compress information with some loss of information upon decompression.
- compression component 24 may be configured to compress virtual reality content that is based on real-world video information corresponding to a view of a first number of degrees into a portion of a data structure used to store virtual reality content that corresponds to a view of a second number of degrees.
- the first number of degrees may be larger than the second number of degrees.
- the first number of degrees may be 180 degrees.
- the second number of degrees may be less than 180 degrees, 160 degrees, 150 degrees, 140 degrees, 120 degrees, and/or another number of degrees less than the first number of degrees.
- Playback component 25 may be configured to play back, present, and/or otherwise use virtual reality content for and/or to a user, e.g. a second user.
- playback component 25 may be configured to play back virtual reality content on electronic display 19 (and/or another structure configured to visually present information to a user).
- the virtual reality content may include images of the body of the first user.
- operation of playback component 25 may be controlled by the second user (e.g. through movement and/or through interaction with user interface 76 ). Alternatively, and/or simultaneously, in some implementations, operation of playback component 25 may be controlled by a control application. Combinations of multiple types of operation are envisioned within the scope of this disclosure.
- playback component 25 may be configured to play back virtual reality content to a second user such that, responsive to the second user looking downward during playback, the virtual reality content includes imagery based on images including the body of the first user.
- operation of playback component 25 may be based on the actions, movement, and/or position of the second user.
- the actions, movement, and/or position of the second user may be measured and/or otherwise reflected by one or more sensors 16 , including but not limited to one or more accelerometers.
- the second user may cause and/or otherwise effectuate a particular viewing direction, e.g. by rotating his head.
- playback component 25 may be configured to play back virtual reality content to a second user such that, responsive to the second user looking over his right shoulder (or to his right) during playback, the virtual reality content includes imagery based on video information captured from the point of view of the first subject looking over his right shoulder (or to his right).
- the video information captured from the point of view of the first subject looking over his right shoulder (or to his right) may include all or part of the right shoulder, the right arm, the right hand, fingers on the right hand, and/or other body parts that the first user might see when looking over his right shoulder and/or to his right.
- playback component 25 may be configured to play back virtual reality content to a second user such that, responsive to the second user looking over his left shoulder (or to his left) during playback, the virtual reality content includes imagery based on video information captured from the point of view of the first subject looking over his left shoulder (or to his left).
- the video information captured from the point of view of the first subject looking over his left shoulder (or to his left) may include all or part of the left shoulder, the left arm, the left hand, fingers on the left hand, and/or other body parts that the first user might see when looking over his left shoulder and/or to his left.
- FIG. 8 depicts the concept of a person looking over his shoulder using a composite view including four areas: area 81 , area 82 , area 83 , and area 84 .
- Area 81 depicts a first user who has moved his left arm in front of his body and his right arm behind his body, thus turning his shoulders accordingly. His left arm reaches for a chair and his right arm points at a tricycle.
- Area 82 depicts the point-of-view of the same user as in area 81 , responsive to the user looking over his right shoulder.
- Area 84 depicts the point-of-view of the same user, responsive to the user looking over his left shoulder.
- Area 83 depicts the point-of-view of the same user, responsive to the user looking forward and slightly downward.
- a system similar to system 10 may simultaneously capture video information in multiple viewing directions corresponding to the point-of-view in area 82 , 83 , and 84 .
- playback component 25 may be configured to play back virtual reality content to a second user such that, responsive to the second user changing his viewing direction to his right and/or over his right shoulder, the virtual reality content includes imagery based on video information captured from the point of view of the first subject looking over his right shoulder and/or to his right.
- playback component 25 may be configured to play back virtual reality content to a second user such that, responsive to the second user changing his viewing direction to his left and/or over his left shoulder, the virtual reality content includes imagery based on video information captured from the point of view of the first subject looking over his left shoulder and/or to his left.
- playback component 25 may be configured to match a change in viewing direction by the second user with corresponding virtual reality content that includes imagery based on the point of view (of the first user) in same viewing direction as the second user.
- Parameter determination component 26 may be configured to determine parameters based on output signals generated by one or more sensors 16 .
- a sensor may be configured to generate output signals conveying information related to one or more angles of the second user's head and/or the second user's helmet (that may carry electronic display 19 that is configured to present the virtual reality content).
- Parameter determination component 26 may be configured to determine the one or more angles based on the generated output signals.
- the determined parameters may include one or more of position, location, movement, direction, acceleration, jerk, tilt, angle, derivatives thereof, and/or other parameters pertinent to the three-dimensional position and/or movement of the second user.
- Transmission component 27 may be configured to send, stream, and/or otherwise transmit information from system 10 to the second user, display support structure 18 , and/or another component of system 10 used to play back, present, and/or otherwise use virtual reality content for and/or to a user, e.g. the second user.
- transmission component 27 may be configured to retrieve information (e.g. virtual reality content) from electronic storage 60 .
- transmission component 27 may be configured to support streaming content (e.g. virtual reality content) to a user (or to the device a user is using to experience the virtual reality content).
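- a minimal sketch of such streaming, assuming the content is held in a file-like store and handed to a playback device in fixed-size chunks; the path and chunk size are placeholders, not details from the patent.

```python
def stream_content(path: str, chunk_size: int = 64 * 1024):
    """Yield stored virtual reality content in fixed-size chunks, as a
    transmission component might when streaming to a playback device."""
    with open(path, "rb") as source:
        while chunk := source.read(chunk_size):
            yield chunk

# e.g. hand the generator to an HTTP framework's streaming response
for chunk in stream_content("vr_content.bin"):
    pass  # send chunk over the network
```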
- One or more sensors 16 may be configured to generate output signals conveying information related to one or more parameters that are pertinent to the three-dimensional position and/or movement of a user.
- the three-dimensional position (or angle, etc.) of a head-mounted display may be used as a proxy for the position (or angle, etc.) of the head of a user, and subsequently used as a basis to determine a viewing direction of the user and/or other pertinent parameters for the playback of virtual reality content.
- one or more sensors 16 may include one or more accelerometers, tilt sensors, motion sensors, image sensors, cameras, position sensors, global positioning sensors (GPS), vibration sensors, microphones, altitude sensors, pressure sensors, degree-of-freedom sensors (e.g. 6-DOF and/or 9-DOF sensors), a compass, and/or other sensors.
- although components 21-27 are illustrated in FIG. 1 as being located and/or co-located within a particular component of system 10, in implementations in which physical processor 110 includes multiple processing units, one or more of components 21-27 may be located remotely from the other components.
- the description of the functionality provided by the different components 21 - 27 described herein is for illustrative purposes, and is not intended to be limiting, as any of components 21 - 27 may provide more or less functionality than is described.
- one or more of components 21 - 27 may be eliminated, and some or all of its functionality may be incorporated, shared, integrated into, and/or otherwise provided by other ones of components 21 - 27 .
- physical processor 110 may be configured to execute one or more additional components that may perform some or all of the functionality attributed below to one of components 21 - 27 .
- Electronic storage 60 in FIG. 1 comprises electronic storage media that electronically stores information.
- the electronic storage media of electronic storage 60 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with system 10 and/or removable storage that is connectable to system 10 via, for example, a port (e.g., a USB port, a FIREWIRE port, etc.) or a drive (e.g., a disk drive, etc.).
- the electronic storage media of electronic storage 60 may include storage that is connectable to system 10 wirelessly.
- Electronic storage 60 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EPROM, EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
- Electronic storage 60 may store sensor information (e.g. time-stamped sensor information such that specific sensor information may be correlated with specific captured video information), software algorithms, information determined by physical processor 110 or any computer program components, information received via a user interface, and/or other information that enables system 10 to function properly.
- electronic storage 60 may store virtual reality content, positional information (as discussed elsewhere herein), and/or other information.
- Electronic storage 60 may be a separate component within system 10 , or electronic storage 60 may be provided integrally with one or more other components of system 10 (e.g., physical processor 110 ).
- User interface 76 of system 10 in FIG. 1 may be configured to provide an interface between system 10 and a user through which the user can provide information to and receive information from system 10 .
- This enables data, results, and/or instructions and any other communicable items, collectively referred to as “information,” to be communicated between the user and system 10 .
- An example of information that may be conveyed to a user is the virtual reality content, etc.
- An example of information that may be conveyed by a user is a viewing direction and/or change in viewing direction, etc.
- Examples of interface devices suitable for inclusion in user interface 76 include a keypad, buttons, switches, a keyboard, knobs, levers, a display screen, a touch screen, speakers, a microphone, an indicator light, an audible alarm, and a printer.
- Information may be provided to a user by user interface 76 in the form of auditory signals, visual signals, tactile signals, and/or other sensory signals.
- user interface 76 may be integrated with a removable storage interface provided by electronic storage 60 .
- in this example, information may be loaded into system 10 from removable storage (e.g., a smart card, a flash drive, a removable disk, etc.) that enables the user(s) to customize system 10.
- Other exemplary input devices and techniques adapted for use with system 10 as user interface 76 include, but are not limited to, an RS-232 port, RF link, an IR link, modem (telephone, cable, Ethernet, internet or other). In short, any technique for communicating information with system 10 is contemplated as user interface 76 .
- FIGS. 2-3 illustrate exemplary methods 200 - 300 for capturing video information in real-time in the real world near a first subject and storing generated virtual reality content based on the captured video information.
- FIGS. 4-5 illustrate exemplary methods 400 - 500 for playing back virtual reality content that is generated based on video information captured near a first subject in the real world, wherein the virtual reality content is played back for a second subject.
- the operations of methods 200 - 500 presented below are intended to be illustrative and non-limiting examples. In certain implementations, methods 200 - 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of methods 200 - 500 are illustrated in FIGS. 2, 3, 4, and 5 and described below is not intended to be limiting.
- methods 200 - 500 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, and/or other mechanisms for electronically processing information).
- the one or more processing devices may include one or more devices executing some or all of the operations of methods 200 - 500 in response to instructions stored electronically on an electronic storage medium.
- the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of methods 200 - 500 .
- a set of multiple cameras is supported such that one or more individual cameras in the set are positioned at least at eye-level of the first subject while the cameras and/or support structure is being carried and/or worn by the first subject, and further such that the one or more individual cameras in the set are arranged to capture real-world video information that includes images captured in a downward direction.
- operation 202 is performed by a support structure the same as or similar to support structure 17 (shown in FIG. 1 and described herein).
- the set of cameras is controlled to capture real-world video information in real-time.
- the one or more individual cameras capture real-world video information that includes images of a body of the first subject.
- operation 204 is performed by a camera control component the same as or similar to camera control component 21 (shown in FIG. 1 and described herein).
- virtual reality content is generated based on the real-world video information captured by the set of cameras.
- the virtual reality content includes imagery based on the images of the body of the first subject.
- operation 206 is performed by a generation component the same as or similar to generation component 22 (shown in FIG. 1 and described herein).
- the virtual reality content is stored in the electronic storage.
- operation 208 is performed by a storage component the same as or similar to storage component 23 (shown in FIG. 1 and described herein).
- a set of multiple cameras is supported such that one or more individual cameras in the set are positioned at least at eye-level of the first subject while the cameras and/or a support structure is being carried and/or worn by the first subject, and further such that the one or more individual cameras in the set are arranged to capture video information from a first point of view that corresponds to the first subject looking over the right shoulder and from a second point of view that corresponds to the first subject looking over the left shoulder.
- operation 302 is performed by a support structure the same as or similar to support structure 17 (shown in FIG. 1 and described herein).
- the set of cameras is controlled to capture real-world video information in real-time.
- a first subset of cameras in the set capture real-world video information from the first point of view.
- a second subset of cameras in the set capture real-world video information from the second point of view.
- operation 304 is performed by a camera control component the same as or similar to camera control component 21 (shown in FIG. 1 and described herein).
- virtual reality content is generated based on the real-world video information captured by the set of cameras.
- the virtual reality content includes imagery based on the video information captured from the first point of view, and further includes imagery based on the video information captured from the second point of view.
- operation 306 is performed by a generation component the same as or similar to generation component 22 (shown in FIG. 1 and described herein).
- the virtual reality content is stored in electronic storage.
- operation 308 is performed by a storage component the same as or similar to storage component 23 (shown in FIG. 1 and described herein); a hypothetical code sketch of operations 302-308 follows below.
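Operations 302-308 follow the same capture-generate-store pipeline but split the rig into two subsets, one per over-the-shoulder point of view. A minimal sketch, assuming synthetic frames and illustrative names throughout:

```python
# A minimal, hypothetical sketch of operations 302-308: two camera subsets
# capturing the over-the-right-shoulder and over-the-left-shoulder points of
# view. Names and the synthetic frames are the editor's assumptions.
import numpy as np

def capture_subset(num_cameras: int) -> np.ndarray:
    """Stand-in for a real-time read from one subset of rig cameras."""
    return np.random.randint(0, 256, (num_cameras, 8, 8, 3), dtype=np.uint8)

# Operations 302-304 analogue: both rear views are captured at the same time,
# which is what pushes total coverage beyond a single 360-degree sphere.
vr_content = {
    "right_shoulder_pov": capture_subset(2),  # first subset of cameras
    "left_shoulder_pov": capture_subset(2),   # second subset of cameras
}
# Operations 306-308 analogue: store the generated two-viewpoint content.
np.savez("over_shoulder_content.npz", **vr_content)
```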
- information is stored electronically in electronic storage.
- the information includes virtual reality content generated based on real-world video information captured in real-time by a set of cameras.
- the real-world video information includes images of a body of a first subject who carried the set of cameras.
- operation 402 is performed by a storage component the same as or similar to storage component 23 (shown in FIG. 1 and described herein).
- at operation 404, the virtual reality content is accessed through communication with the electronic storage.
- operation 404 is performed by a storage component the same as or similar to storage component 23 (shown in FIG. 1 and described herein).
- the virtual reality content is played back on an electronic display for a second subject such that, responsive to the second subject effectuating a downward viewing direction, the virtual reality content includes imagery based on the images of the body of the first subject.
- operation 406 is performed by a playback component the same as or similar to playback component 25 (shown in FIG. 1 and described herein); a hypothetical code sketch of this playback behavior follows below.
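Operation 406 amounts to routing the viewer's gaze: when the second subject looks down, playback shows the body imagery captured from the first subject, so the viewer sees a body rather than a hole in the scene. A minimal sketch of that routing, with an assumed pitch threshold of -30 degrees:

```python
# A minimal, hypothetical sketch of operation 406: during playback, a downward
# viewing direction swaps in the imagery of the first subject's body. The
# threshold and view names are assumptions for illustration.
def select_view(pitch_deg: float, threshold_deg: float = -30.0) -> str:
    """Route the second subject's gaze; below the threshold, show the body."""
    return "first_subject_body" if pitch_deg < threshold_deg else "environment"

assert select_view(0.0) == "environment"            # looking straight ahead
assert select_view(-60.0) == "first_subject_body"   # looking down at the body
```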
- information is stored electronically in electronic storage.
- the information includes virtual reality content generated based on real-world video information captured in real-time by a set of cameras.
- the real-world video information includes video information from a first point of view that corresponds to a first subject looking over the right shoulder and further includes video information from a second point of view that corresponds to the first subject looking over the left shoulder.
- operation 502 is performed by a storage component the same as or similar to storage component 23 (shown in FIG. 1 and described herein).
- at operation 504, the virtual reality content is accessed through communication with the electronic storage.
- operation 504 is performed by a storage component the same as or similar to storage component 23 (shown in FIG. 1 and described herein).
- the virtual reality content is played back on an electronic display for a second subject such that, responsive to the second subject effectuating a first change in viewing direction to the right, the virtual reality content includes imagery based on the video information captured from the first point of view, and further such that, responsive to the second subject effectuating a second change in viewing direction to the left, the virtual reality content includes imagery based on the video information captured from the second point of view.
- operation 506 is performed by a playback component the same as or similar to playback component 25 (shown in FIG. 1 and described herein); a hypothetical code sketch of this playback behavior follows below.
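Operation 506 is the horizontal counterpart of operation 406: the second subject's head yaw selects between the two over-the-shoulder captures. A minimal sketch, with an assumed 90-degree threshold and illustrative view names:

```python
# A minimal, hypothetical sketch of operation 506: head yaw during playback
# selects between the two over-the-shoulder captures. The 90-degree threshold
# is an assumption, not a value from the patent.
def select_pov(yaw_deg: float, threshold_deg: float = 90.0) -> str:
    """Positive yaw = turning right; beyond the threshold, play the matching
    over-the-shoulder point of view instead of the forward view."""
    if yaw_deg >= threshold_deg:
        return "right_shoulder_pov"   # first change in viewing direction
    if yaw_deg <= -threshold_deg:
        return "left_shoulder_pov"    # second change in viewing direction
    return "forward_pov"

assert select_pov(120.0) == "right_shoulder_pov"
assert select_pov(-120.0) == "left_shoulder_pov"
assert select_pov(10.0) == "forward_pov"
```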
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Architecture (AREA)
- User Interface Of Digital Computer (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/855,180 US9773350B1 (en) | 2014-09-16 | 2015-09-15 | Systems and methods for greater than 360 degree capture for virtual reality |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462051103P | 2014-09-16 | 2014-09-16 | |
US14/855,180 US9773350B1 (en) | 2014-09-16 | 2015-09-15 | Systems and methods for greater than 360 degree capture for virtual reality |
Publications (1)
Publication Number | Publication Date |
---|---|
US9773350B1 true US9773350B1 (en) | 2017-09-26 |
Family
ID=59886580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/855,180 Active - Reinstated US9773350B1 (en) | 2014-09-16 | 2015-09-15 | Systems and methods for greater than 360 degree capture for virtual reality |
Country Status (1)
Country | Link |
---|---|
US (1) | US9773350B1 (en) |
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050207487A1 (en) | 2000-06-14 | 2005-09-22 | Monroe David A | Digital security multimedia sensor |
DE102005063198A1 (en) | 2005-12-28 | 2007-07-05 | Stefan Wiesmeth | Head or helmet camera e.g. video film camera, has support unit with two side areas and middle area, where form and dimensions of support unit and/or head-sided rear side of support unit correspond to geometry of human front part |
US20080030429A1 (en) | 2006-08-07 | 2008-02-07 | International Business Machines Corporation | System and method of enhanced virtual reality |
US20100240988A1 (en) | 2009-03-19 | 2010-09-23 | Kenneth Varga | Computer-aided system for 360 degree heads up display of safety/mission critical data |
US20120120103A1 (en) * | 2010-02-28 | 2012-05-17 | Osterhout Group, Inc. | Alignment control in an augmented reality headpiece |
US20110216060A1 (en) * | 2010-03-05 | 2011-09-08 | Sony Computer Entertainment America Llc | Maintaining Multiple Views on a Shared Stable Virtual Space |
US20120206452A1 (en) | 2010-10-15 | 2012-08-16 | Geisner Kevin A | Realistic occlusion for a head mounted augmented reality display |
US8625200B2 (en) | 2010-10-21 | 2014-01-07 | Lockheed Martin Corporation | Head-mounted display apparatus employing one or more reflective optical surfaces |
US8576276B2 (en) | 2010-11-18 | 2013-11-05 | Microsoft Corporation | Head-mounted display device which provides surround video |
DE202011003574U1 (en) | 2011-03-04 | 2012-06-12 | Willy Bogner Film Gesellschaft mit beschränkter Haftung | camera device |
US9521398B1 (en) | 2011-04-03 | 2016-12-13 | Gopro, Inc. | Modular configurable camera system |
WO2012166593A2 (en) | 2011-05-27 | 2012-12-06 | Thomas Seidl | System and method for creating a navigable, panoramic three-dimensional virtual reality environment having ultra-wide field of view |
US20130215281A1 (en) | 2011-10-24 | 2013-08-22 | Kenleigh C. Hobby | Smart Helmet |
WO2013176997A1 (en) | 2012-05-19 | 2013-11-28 | Skully Helmets, Inc. | Augmented reality motorcycle helmet |
US20140028704A1 (en) | 2012-07-30 | 2014-01-30 | Lenovo (Beijing) Co., Ltd. | Display Device |
WO2014033306A1 (en) | 2012-09-03 | 2014-03-06 | SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH | Head mounted system and method to compute and render a stream of digital images using a head mounted system |
WO2014071400A1 (en) | 2012-11-05 | 2014-05-08 | 360 Heros, Inc. | 360 degree camera mount and related photographic and video system |
US20160344999A1 (en) | 2013-12-13 | 2016-11-24 | 8702209 Canada Inc. | Systems and methods for producing panoramic and stereoscopic videos |
CN103731659A (en) | 2014-01-08 | 2014-04-16 | 百度在线网络技术(北京)有限公司 | Head-mounted display device |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190028691A1 (en) * | 2009-07-14 | 2019-01-24 | Cable Television Laboratories, Inc | Systems and methods for network-based media processing |
US11277598B2 (en) * | 2009-07-14 | 2022-03-15 | Cable Television Laboratories, Inc. | Systems and methods for network-based media processing |
US10324290B2 (en) * | 2015-12-17 | 2019-06-18 | New Skully, Inc. | Situational awareness systems and methods |
US20190293943A1 (en) * | 2015-12-17 | 2019-09-26 | New Skully, Inc. | Situational awareness systems and methods |
US20190122702A1 (en) * | 2016-03-31 | 2019-04-25 | Sony Corporation | Information processing device, information processing method, and computer program |
US10679677B2 (en) * | 2016-03-31 | 2020-06-09 | Sony Corporation | Information processing device and information processing method |
US10559126B2 (en) * | 2017-10-13 | 2020-02-11 | Samsung Electronics Co., Ltd. | 6DoF media consumption architecture using 2D video decoder |
US10845681B1 (en) | 2020-08-17 | 2020-11-24 | Stephen Michael Buice | Camera apparatus for hiding a camera operator while capturing 360-degree images or video footage |
Similar Documents
Publication | Title
---|---
CN109479010B (en) | Private communication by gazing at avatar
JP6725038B2 (en) | Information processing apparatus and method, display control apparatus and method, program, and information processing system
KR101986329B1 (en) | General purpose spherical capture methods
US10277813B1 (en) | Remote immersive user experience from panoramic video
EP3712840A1 (en) | Method and system for generating an image of a subject in a scene
US10681276B2 (en) | Virtual reality video processing to compensate for movement of a camera during capture
US9773350B1 (en) | Systems and methods for greater than 360 degree capture for virtual reality
US20130141419A1 (en) | Augmented reality with realistic occlusion
US20240153226A1 (en) | Information processing apparatus, information processing method, and program
WO2015122108A1 (en) | Information processing device, information processing method and program
WO2022199260A1 (en) | Static object stereoscopic display method and apparatus, medium, and electronic device
JP2019087226A (en) | Information processing device, information processing system, and method of outputting facial expression images
US10515481B2 (en) | Method for assisting movement in virtual space and system executing the method
CN112272817B (en) | Method and apparatus for providing audio content in immersive reality
EP3665656B1 (en) | Three-dimensional video processing
WO2020017435A1 (en) | Information processing device, information processing method, and program
US10410390B2 (en) | Augmented reality platform using captured footage from multiple angles
JP6775669B2 (en) | Information processing device
JP6518645B2 (en) | Information processing apparatus and image generation method
US20240205513A1 (en) | Video display system, information processing device, information processing method, and recording medium
WO2018173206A1 (en) | Information processing device
JP7044846B2 (en) | Information processing equipment
WO2023248832A1 (en) | Remote viewing system and on-site imaging system
Nowatzyk et al. | Omni-Directional Catadioptric Acquisition System
Legal Events
Code | Title | Description
---|---|---
AS | Assignment | Owner name: SILVR THREAD, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: CROSBY, TAI; REEL/FRAME: 043331/0223. Effective date: 20151216
STCF | Information on status: patent grant | Free format text: PATENTED CASE
FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
FP | Lapsed due to failure to pay maintenance fee | Effective date: 20210926
PRDP | Patent reinstated due to the acceptance of a late maintenance fee | Effective date: 20220829
FEPP | Fee payment procedure | Free format text: PETITION RELATED TO MAINTENANCE FEES FILED (ORIGINAL EVENT CODE: PMFP); PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PMFG); ENTITY STATUS SET TO MICRO (ORIGINAL EVENT CODE: MICR); SURCHARGE, PETITION TO ACCEPT PYMT AFTER EXP, UNINTENTIONAL (ORIGINAL EVENT CODE: M3558); ENTITY STATUS OF PATENT OWNER: MICROENTITY
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, MICRO ENTITY (ORIGINAL EVENT CODE: M3551); Year of fee payment: 4
STCF | Information on status: patent grant | Free format text: PATENTED CASE