US20040262051A1 - Program product, system and method for creating and selecting active regions on physical documents - Google Patents
Program product, system and method for creating and selecting active regions on physical documents
- Publication number
- US20040262051A1 US20040262051A1 US10/818,790 US81879004A US2004262051A1 US 20040262051 A1 US20040262051 A1 US 20040262051A1 US 81879004 A US81879004 A US 81879004A US 2004262051 A1 US2004262051 A1 US 2004262051A1
- Authority
- US
- United States
- Prior art keywords
- workstation
- active region
- page
- tablet
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/045—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using resistive elements, e.g. a single continuous surface or two parallel surfaces put in contact
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/044—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
- G06F3/0441—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means using active external devices, e.g. active pens, for receiving changes in electrical potential transmitted by the digitiser, e.g. tablet driving signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/044—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
- G06F3/0445—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means using two or more layers of sensing electrodes, e.g. using two layers of electrodes separated by a dielectric layer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/044—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
- G06F3/0446—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means using a grid-like structure of electrodes in at least two directions, e.g. using row and column electrodes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/044—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
- G06F3/0447—Position sensing using the local deformation of sensor cells
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
Definitions
- the present invention relates to printed media documents with electronic links to related, electronic information.
- Hyperlinks on web pages are well known today where a user can “click on” an icon, and in response, the web browser will fetch and display another web page linked to the icon. It was also known to define hyperlink active regions in a web page as rectangles, circles, and polygons, and associate them with a hyperlink address. They enable selected areas of a digital image (e.g., a GIF or JPEG image file) to be made “clickable” (i.e., active) so that a user can navigate from the web page containing the image to a number of other web pages or files, depending on which part of the image is selected.
- To create an imagemap, three things are required: an image, a database that relates each active region within the image to a hypertext reference, and a method of associating the database with the image.
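- As an illustration only (not part of the application text; the file name and URLs are invented), such an imagemap database could be represented as follows in Python:

```python
# Hypothetical imagemap record: an image, plus a small database relating
# each active region of the image to a hypertext reference.
imagemap = {
    "image": "asia_map.gif",                                       # invented file name
    "regions": [
        {"shape": "rect",   "coords": (10, 10, 120, 80),           "href": "http://example.invalid/a"},
        {"shape": "circle", "coords": (200, 150, 40),              "href": "http://example.invalid/b"},
        {"shape": "poly",   "coords": [(5, 5), (60, 5), (30, 50)], "href": "http://example.invalid/c"},
    ],
}
```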
- U.S. patent application 20020087598 entitled “Method and system for accessing interactive multimedia information or services by touching highlighted items on physical documents” was filed on Apr. 25, 2001 and published on Jul. 4, 2002. It discloses a system and method for manually selecting and electronically accessing multimedia information and/or services located on a user workstation or on one or a plurality of servers connected to a communication network. To make a selection, a person touches his or her finger to a word, letter, symbol, picture, icon, etc. that is electronically illuminated on the surface of a hard-copy document or any other physical surface. These illumination items are illuminated by a luminous signal (or light spot) generated by a transparent opto-touch foil, operating under the control of a user workstation.
- These illumination items act like hyperlinks.
- the user workstation receives from the opto-touch foil a signal indicating the position of the selected item. Then, the user workstation identifies and locates, by reference to a hyperlink table, the information and/or the service associated with the selected item. If the information and/or service is located in a remote server, the user workstation sends a request to this server for the information and/or service. If the information and/or the service is stored in the user workstation, then this information and/or service is accessed locally. The user workstation then displays the information or provides the requested service.
- the hyperlinked items are identified by the user as discrete illuminated points (light spots) emitted by the transparent opto-touch foil placed over the document.
- a “minimum distance” algorithm is used to identify the hyperlink item selected by the user.
- the distance from the coordinates of the point pressed by the user on the opto-touch foil is compared to the coordinates of all hyperlinked items (i.e., assimilated to illuminated points) defined on the document.
- the hyperlink item closest to the point that was pressed is the one deemed selected and triggered.
- Each hyperlink item (light spot) is associated with a unique hyperlink destination (i.e., with a single URL) giving access to a single item of multimedia information or a single service related to the selected item.
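- A minimal Python sketch of the prior-art “minimum distance” rule described above might look as follows; the function name and data layout are assumptions, not code from the cited application:

```python
import math

def select_by_minimum_distance(touch_xy, light_spots):
    """Sketch of the prior-art selection rule: the illuminated point closest
    to the pressed point is deemed selected. `light_spots` maps (x, y) spot
    coordinates to a single destination URL."""
    tx, ty = touch_xy
    (sx, sy), url = min(
        light_spots.items(),
        key=lambda item: math.hypot(item[0][0] - tx, item[0][1] - ty))
    return (sx, sy), url

# Two closely spaced spots illustrate why discrimination can be difficult.
spots = {(100, 100): "http://example.invalid/a", (104, 101): "http://example.invalid/b"}
print(select_by_minimum_distance((102, 100), spots))
```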
- An object of the present invention is to create and utilize indicia of active regions on a printed document in such a way as to facilitate user selection of an active region.
- Another object of the present invention is to create and utilize indicia of active regions on a printed document in such a way as not to mask the document.
- Another object of the present invention is to create and utilize indicia of active regions on a printed document in such a way as to more readily convey the subject matter of the hyper-linked information.
- Another object of the present invention is to create and utilize indicia of active regions on a printed document in such a way as to show the hyperlinked information related to an active region selected by the user.
- the present invention resides in a system, method and program product for presenting and selecting an active region of a physical document page so that a user can access corresponding information via a workstation.
- a transparent electro-luminescent tablet or other touch sensitive plate is positioned over the physical document page.
- the tablet or plate is coupled to the workstation.
- the physical document page is identified to the workstation.
- the workstation stores information defining an active region for the physical document page and a hyperlink to a web page or web file containing information related to content of the active region.
- the workstation directs the tablet or plate to display the active region over the physical document page.
- the tablet or plate conveys the touch point to the workstation, and the workstation displays on a computer screen the hyperlink.
- the invention also resides in a system, method and program product for presenting and selecting an active region of a physical document page so that a user can access corresponding information via a workstation.
- a transparent electro-luminescent tablet or other touch sensitive plate is positioned over the page.
- the tablet or plate is coupled to the workstation.
- the page is identified to the workstation.
- the workstation stores information defining an outline of the active region for the page and a hyperlink or information related to content of the active region.
- the workstation directs the tablet or plate to display the outline of the active region over the page.
- the tablet or plate conveys the touch point to the workstation, and the workstation displays on a computer screen the information related to the content of the active region.
- the invention also resides in a system, method and program product for presenting and simultaneously selecting first and second active regions of a physical document page so that a user can access corresponding information via a workstation.
- a transparent electro-luminescent tablet or other touch sensitive plate is positioned over the page.
- the tablet or plate is coupled to the workstation.
- the page is identified to the workstation.
- the workstation stores information defining outlines of the first and second active regions for the page and first and second hyperlinks or first and second documents related to contents of the first and second active regions, respectively.
- the outline for the second active region encompasses the outline for the first active region.
- the workstation directs the tablet or plate to display the outlines of the first and second active regions over the page.
- the tablet or plate conveys the touch point to the workstation, and the workstation displays on a computer screen the first and second hyperlinks or the first and second documents related to the contents of the first and second active regions.
- FIG. 1 illustrates a physical document where an active region has been defined according to the present invention, and also illustrates related hyperlinked information appearing on a user computer display.
- FIG. 2 illustrates a system, according to the present invention, used to create, display and use active regions on a physical document.
- FIG. 3 illustrates a stage in use of the system of FIG. 2.
- FIG. 4 illustrates a stage in the use of the system of FIG. 2 in which, when creating active regions on a physical document, the page number is entered on the user workstation.
- FIG. 5 illustrates how the system of FIG. 2 is used to create active regions on a transparent ELDT device.
- FIG. 6 illustrates how the system of FIG. 2 is used to associate an active region with at least one hyperlink.
- FIG. 7 shows a table within the system of FIG. 2 used to define hyperlinks corresponding to active regions.
- FIG. 8 shows relationships between active regions defined on a page of a physical document and the corresponding hyperlinks.
- FIG. 9 illustrates a stage in the use of the system of FIG. 2 where a user enters on the user workstation a document reference number, and the system then displays the active regions defined on the document.
- FIG. 10 illustrates a stage in the use of the system of FIG. 2 wherein a user specifies a page number of a multipage document, and the system displays the active regions for that page.
- FIG. 11 illustrates a stage in the use of the system of FIG. 2 where the user touches and thereby selects an active region displayed on the page.
- FIG. 12 illustrates a stage in the use of the system of FIG. 2 where hyperlink information for the multiple active regions that enclose, and are invoked by, the single touch point of FIG. 11 is retrieved and displayed to the user on the user computer.
- FIG. 13 illustrates a stage in the use of the system of FIG. 2 where a user selects one of the hyperlinks of FIG. 12, and the system fetches the information via the Internet.
- FIG. 14 illustrates a stage in the use of the system of FIG. 2 where the system displays on the user computer a web document corresponding to the hyperlink selected by the user in FIG. 13.
- FIG. 15 illustrates the internal structure of an electro-luminescent digitizing tablet (ELDT) which overlays the printed document within the system of FIG. 2.
- FIG. 16 is a flow chart illustrating the steps in creating active regions on a physical document within the system of FIG. 2.
- FIG. 17 is a flow chart illustrating the steps in using the active regions created in FIG. 16.
- FIG. 18 illustrates the components within the user computer and ELDT of the system of FIG. 2.
- FIG. 1 illustrates how the present invention is used to provide additional information or a translation for a foreign language newspaper 100 .
- the user places an Electro-luminescent display tablet (“ELDT”) over a page of the newspaper or other document of interest. Then, the user identifies the document to a user workstation. In response, the user workstation directs the ELDT to display/illuminate a perimeter of an active region 103 on the document. Then, the user touches a printed item 101 within the active region 103 with a stylus 102 in order to receive additional information about the content within the active region. In response, the user workstation 203 automatically displays an English translation 104 of the text printed in the highlighted region.
- the user workstation may display one or more hyperlinks to web pages or other documents or applications related to the subject of the selected item.
- a short summary of each hyperlink may be displayed adjacent to each hyperlink to advise the user what further information the hyperlink will elicit. If the user selects one of the hyperlinks using a mouse or keyboard, the user workstation will display the associated web page or other information.
- FIG. 2 illustrates the main components of the present invention and their operation.
- Physical documents 200 can be of any kind, for example, newspapers, legal documents, maps (e.g., topographical maps, political maps, historical maps, route maps, shaded relief maps, city maps, natural resources maps, railroad maps or other types of maps), fiction novels, academic text books, technical books, commercial catalogs or any other type of engraved, written, or printed surface.
- the document can be made of paper, plastic, wood or any other material.
- the title 205 , a numerical reference 206 (e.g., the ISBN number, or any other number assigned by the publisher or the user), or even the URL (i.e., the internet address) 207 of the publisher server may be printed, written or attached on the physical document (e.g., on the front cover, back cover or first page).
- the electro-luminescent digitizing tablet (ELDT) 201 comprises two superposed, functionally independent transparent foils 1201 and 1202 .
- Transparent digitizing tablet (DT) 1201 can be of a type commonly used to manufacture position-sensitive liquid crystal display devices (PSLCDs) for computers. The generated signals are generally proportional to the coordinates of the point that is pressed 1504 by the stylus 202 .
- Transparent electro-luminescent display (EL) 1202 can be a transparent, bright, self-emitting display that can emit light 1505 from either one or both surfaces.
- the combination of both foils (i.e., the digitizing tablet 1201 stacked over the electro-luminescent display 1202 ) forms the electro-luminescent digitizing tablet (ELDT) 201 .
- FIG. 15 illustrates an ELDT placed and aligned over a physical document 200 comprising a plurality of items 1507 (i.e., words, paragraphs, sections, pictures, icons, etc.) printed (or written, painted, engraved . . . ) on the surface of the document 200 .
- FIG. 15 also illustrates how the electro-luminescent display 1202 emits light 1505 illuminating and defining the polygonal or circular perimeters of the active regions of a printed document. This occurs when a user draws them with the stylus 202 and, subsequently, when the user selects them.
- the portions of the ELDT other than those displaying the active region perimeters allow light 1506 from the document 200 to pass through both transparent foils 1201 and 1202 to the reader, so that the surface of the physical document is fully visible except underneath the thin luminous lines delimiting the active regions.
- the ELDT 201 may communicate with the user workstation 203 over an infrared link, a serial wired connection or any other communication means (e.g., by means of a wireless connection operating in the globally available 2.4 GHz band of the “Bluetooth” specification, as promoted by the “Bluetooth Special Interest Group” and documented on the official Bluetooth Web site).
- This connection, wired or wireless, is represented by the reference number 204 .
- Known transparent digitizing tablets are produced, for example, by Calcomp corporation and Wacom Technology Company.
- One example of a transparent digitizing tablet that can be used for ELDT 201 is the Wacom PL Series of LCD pen tablet systems.
- the transparent electro-luminescent display 1202 may include a substrate having an array formed by a plurality of transparent scanning lines, transparent data lines crossing said scanning lines, and electro-luminescent (EL) elements (pixels) at the intersections of the scanning and data lines.
- the lines are used to determine the position of an applied stylus.
- Those transparent lines and contacts are made of a transparent conductive material, e.g., indium tin oxide (ITO).
- a transparent digitizing tablet is actually a layer that has a mesh of transparent wire sensors running through it. This mesh may look like moiré patterns on top of the display.
- These thin wires, when acted upon by a moving stylus, report the sequence of contact points. The movement of a pencil-like stylus over the tablet surface re-creates the drawing on a computer screen.
- this passive-matrix, light-emitting display may be made of an array of TOLEDs (Transparent Organic Light Emitting Devices) of the types used to create vision-area displays on windshields, cockpits, helmets and eyeglasses.
- a TOLED is a monolithic, solid-state device consisting of a series of “small molecule” organic thin films sandwiched between two transparent, conductive layers. When a voltage is applied across the device, it emits light. This light emission is based upon a luminescence phenomenon wherein electrons and holes are injected and migrate from the contacts toward the organic heterojunction under the applied electric field.
- When these carriers meet, they form excitons (electron-hole pairs) that recombine radiatively to emit light.
- TOLEDs are bright, self-emitting displays that can be directed to emit from either or both surfaces. This is possible because, in addition to having transparent contacts, the organic materials are also transparent over their own emission spectrum and throughout most of the visible spectrum.
- TOLED displays are today manufactured with standard silicon semiconductors. Since TOLEDs are thin-film, solid-state devices, they are very thin, lightweight and durable, ideal for portable applications like the present invention. TOLEDs can be bottom, top, or both bottom and top emitting. TOLED technology also has attractive advantages regarding transparency (TOLED displays can be nearly as clear as the glass or substrate they are built on and, when built between glass plates, are >85% transparent when turned off), energy efficiency (for longer battery life), full viewing angle, bright and high-contrast light emission, fast response time, and environmental robustness. Thus, TOLEDs are well suited for manufacturing the light-emitting, electro-luminescent display component used jointly with the transparent digitizing tablet in the present invention.
- One example of light emitting foil technology that may be used is that of the TOLEDs manufactured by UNIVERSAL DISPLAY CORPORATION.
- Pen like stylus 202 is a type commonly used as input devices for data processing and storage systems in place of conventional keyboards and mouse devices.
- the stylus 202 is used in combination with the digitizing tablet 1201 component of the ELDT 201 incorporating a resistive or capacitive digitizer or sheet material.
- the electro-luminescent display 1202 component of the ELDT displays the instantaneous position and path of movement of the stylus. In this way, the ELDT device displays the pattern, e.g. a written message, sketch or signature, traced thereon.
- a human uses the stylus 202 to draw active regions via the ELDT.
- a human uses the stylus 202 to select a portion of document content seen through the transparent ELDT device 201 . If that portion is within an active region, then the ELDT notifies the workstation 203 of the selection.
- stylus 202 can be a known wireless, pressure-sensitive Wacom UltraPen (trademark of Wacom Technology Company) stylus.
- the user workstation 203 can be a handheld device, such as a PDA or a cell phone, a personal computer, a network computer, an Internet appliance or a wireless IP enabled device, connected to the ELDT 201 .
- the user workstation 203 can be stand-alone or connected to a network (e.g. the Internet).
- User workstation 203 includes a wired, wireless or other connection 204 for connecting to the ELDT device 201 to transfer the information necessary to create active regions of physical documents, or to receive through a network and store active regions of a plurality of physical documents.
- the user workstation receives the coordinates, sensed by the ELDT device 201 , of the points selected by the user with the stylus on the physical document 200 in order to select active regions.
- a pulse driving circuit 1803 alternately transmits driving pulses to X-axis and Y-axis directions of the digitizing tablet 1201 for sensing the present position of the stylus 202 .
- the position of stylus 202 is detected by capacitive coupling sensed in the digitizing tablet 1201 .
- the stylus 202 senses a position signal in a potential distribution on the digitizing tablet 1201 using capacitive coupling and provides the position signal to the position sensing circuit 1806 .
- the position sensing circuit 1806 receives the present X-axis and Y-axis coordinate data of the stylus and converts the coordinate data into digitized position data.
- the microcontroller 1807 controls the pulse driving circuit 1803 and also transfers data of the position detected from the position sensing circuit 1806 to the user workstation 203 . Upon reception of the position data from position sensing circuit 1806 , the microcontroller 1807 analyses the position data to calculate the present position of the stylus 202 and updates the user workstation 203 accordingly.
- the user workstation 203 controls the EL display driving circuit 1804 , while the EL display driving circuit 1804 provides X-axis and Y-axis coordinate driving signals to the electro-luminescent display 1202 so that it can display the pixel on which the stylus is placed.
- the X-axis and Y-axis coordinates of the points (pixels) defining the geometric data of the active regions are fetched from the referenced physical page data in the Active Regions Table 1812 and are loaded into the Page Regions graphics memory 1811 (a graphics buffer for all regions defined to be active on a document's page).
- the EL display driving circuit 1804 retrieves from the Page Regions graphics memory 1811 the coordinates of those pixels of the active regions to be drawn and transforms those coordinates into driving signals sent to the electro-luminescent display 1202 .
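- As a hedged illustration of this display path only, the following Python sketch approximates how rectangular region outlines could be rasterized into a page-sized buffer analogous to the Page Regions graphics memory 1811 ; the buffer layout, resolution and corner ordering are assumptions:

```python
def rasterize_rectangle_outlines(rect_regions, width, height):
    """Hedged sketch: mark, in a page-sized pixel buffer, the perimeter pixels
    of rectangular active regions given as ((Xa, Ya), (Xd, Yd)) corner pairs,
    loosely mimicking the load of the Page Regions graphics memory.
    Coordinates are assumed to lie within the EL display resolution."""
    buf = [[0] * width for _ in range(height)]
    for (xa, ya), (xd, yd) in rect_regions:
        for x in range(xa, xd + 1):        # top and bottom edges
            buf[ya][x] = 1
            buf[yd][x] = 1
        for y in range(ya, yd + 1):        # left and right edges
            buf[y][xa] = 1
            buf[y][xd] = 1
    return buf

# Example: one region outline on a 320 x 240 pixel EL layer (invented size).
page_buffer = rasterize_rectangle_outlines([((10, 10), (200, 150))], 320, 240)
```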
- FIG. 16 illustrates the steps for creating active regions (imagemaps) on portions of physical documents and associating hyperlinks from the active regions to multimedia information or services.
- a user selects a physical document and identifies it to the user workstation 203 .
- the physical document comprises one or multiple pages.
- a document management program within user workstation 203 initiates an Active Regions Table associated with the physical document (step 1602 ).
- the document management program records in the Active Regions Table an identification of the selected physical document (step 1603 ).
- a user selects a page of the physical document and identifies the page to the document management program (step 1604 ).
- the document management program then records the page in the Active Regions Table for this document (step 1605 ).
- the user identifies to the document management program names of portions of the page which will correspond to active regions subsequently identified by the user (step 1606 ).
- the following steps 1607 - 1611 are performed for each active region defined by the user.
- the document management program assigns and stores an identifier of the active region in the Active Regions Table (step 1607 ).
- the user places and aligns a transparent ELDT device over the selected page of the physical document (step 1608 ).
- the user draws with the stylus the contour of the active region over a transparent ELDT device or otherwise defines the outline of the active region by specifying the junction points of the polygon or shape and size of the circle (step 1609 ).
- the active region is defined in terms of rectangles, circles or polygons.
- the document management program receives from the ELDT device and stores in the Active Regions Table geometric coordinates of the outline of the active region (step 1610 ).
- the user specifies to the document management program one or more hyperlinks for each of the active regions defined by the user (step 1611 ) and the document management program stores this information in the Active Regions Table.
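- A minimal Python sketch of the data recorded by steps 1601 - 1611 might look as follows; the class and field names are assumptions, the example coordinates are invented, and the document reference, title, URL and hyperlink values are taken from the examples given elsewhere in the text:

```python
from dataclasses import dataclass, field

@dataclass
class ActiveRegion:                      # one entry of the Active Regions Table
    region_id: str                       # e.g. "R2"
    name: str                            # e.g. "Aberdeen"
    shape: str                           # "RECTANGLE", "POLYGON" or "CIRCLE"
    coords: list                         # e.g. [(Xa, Ya), (Xd, Yd)] for a rectangle
    hyperlinks: list = field(default_factory=list)   # [(hyperlink name, URL), ...]

@dataclass
class ActiveRegionsTable:
    document_ref: str                    # reference number or ISBN, e.g. "071104"
    title: str                           # e.g. "Atlas of Asia"
    publisher_url: str                   # e.g. "http://www.geoworld.com"
    pages: dict = field(default_factory=dict)        # page number -> [ActiveRegion]

    def add_region(self, page_number, region):       # roughly steps 1605-1611
        self.pages.setdefault(page_number, []).append(region)

# Example entry; the coordinates are made up for illustration.
table = ActiveRegionsTable("071104", "Atlas of Asia", "http://www.geoworld.com")
table.add_region("133", ActiveRegion(
    "R2", "Aberdeen", "RECTANGLE", [(40, 60), (120, 110)],
    [("Aberdeen & Stanley Guide",
      "http://www.inm-asiaguides.com/hongkong/ehkgsou.htm")]))
```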
- FIG. 3 illustrates the foregoing process in more detail.
- the user opens on the user workstation 203 an Active Regions Table 304 for the selected document 200 and enters via the keyboard or mouse codes or names for identifying the document.
- the user types, in the corresponding fields of the Active Regions Table 304 , the physical document reference number or ISBN (e.g., “071104”) 305 , the document title (e.g., “Atlas of Asia”) 306 , the publisher's name (e.g., “GeoWorld Ltd.”) and the internet address of the publisher's Web server (e.g., the URL “http://www.geoworld.com”) 307 .
- active regions can be defined both by final users (e.g., readers) and by the publishers or editors of physical documents.
- when created by the final user, the Active Regions Table may be stored on the user's workstation.
- the publisher may create the Active Regions Table for a published document and store it on a publisher's Web server for distribution to final users. From those publisher's servers, final users (i.e., readers) can select and download the Active Regions Tables of published documents to the user's workstations.
- the URL 306 of the Publisher Server and the document reference number 307 , used to identify the document and locate the electronic document copy through the Web, must be printed or attached at a predefined, reserved place on the physical document 201 (e.g., on the front cover, back cover or first page). For each selection made by the user from a physical document 201 , a new entry must be created in the Selections Table 305 .
- FIG. 4 shows how the user (the publisher or the editor), while browsing or composing a physical document (e.g., an “Atlas of Asia”) 200 , finds on a page (e.g., “Page 133”) 507 portions of printed content (e.g., cities and islands on the “Hong Kong” map) representing interesting topics to which he or she would like to associate links to related multimedia information or services.
- the user first identifies in the Active Regions Table 304 the selected page of physical document 200 by typing 405 the Page Number (e.g., page “133”) 507 on the user workstation 203 .
- the selected Page Number 507 is recorded in the Active Regions Table 304 , associated with the selected document identification.
- After entering on the user workstation 203 the number 507 of the selected page, the following is done with the transparent ELDT device 201 to define active regions on this page.
- the user places the ELDT device 201 over the page and aligns 406 the ELDT device 201 with the borders of the page by some conventional means (e.g., by adjusting the upper left corner of the ELDT device with the upper left corner of the page). The user can still see the contents of the selected document's page through transparent ELDT device.
- FIG. 5 illustrates how to create active regions on a page 407 of a physical document after transparent ELDT device 201 is placed and aligned over the physical page.
- the user draws/traces by means of stylus 202 active regions over the transparent ELDT device.
- the shapes of the active regions are predefined, menu-selectable geometric forms, such as circles, rectangles or polygons. They are drawn or located by the user to enclose the portions of document content which the user wants to become selectable (i.e., active or “clickable”).
- FIG. 5 illustrates how the user chooses to define active regions comprising rectangles (e.g., R 0 ) and polygons (e.g., R 1 ) enclosing selected geographic regions represented on a map (e.g., on “Hong Kong” physical map).
- this operation can be done for each active region, by the user selecting the options “RECTANGULAR REGION”, or “POLYGONAL REGION” on the user workstation.
- the user draws or specifies by means of the stylus 202 the selected region (e.g., R 0 ) by marking the corners (e.g., “A”, “B”, “C”, “D”), defining the contour around it (or, as is the case of a rectangular region, by marking two points, i.e., the upper left corner “A”, and the lower right corner “D”).
- Coordinates of the vertices (e.g., “A”, “B”, “C”, “D”) of the region (e.g., R 0 ) are sensed by the ELDT device 201 and are transmitted to the user workstation 203 where they are recorded on a new entry created on the Active Regions Table 304 for the new active region.
- This new active region entry is associated with the selected document and page identification.
- Geometric parameters defining rectangular, polygonal or circular regions are stored on the Active Regions Table 304 .
- a pointer is created from the entry corresponding to the active region being defined to the geometric shape parameters and coordinates received from the ELDT device, which determine the shape and location of said active region on the physical page (e.g., rectangular region R 0 would be defined as RECTANGLE: (Xa,Ya),(Xd,Yd), where (Xa, Ya) are the coordinates of the upper left corner “A”, and (Xd, Yd) are the coordinates of the lower right corner “D” of this rectangular region).
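- As an illustration, a small Python helper could parse such geometry entries back into shape and coordinate data; only the RECTANGLE form is spelled out in the text, so treating other shapes as analogous coordinate lists is an assumption:

```python
import re

def parse_region_geometry(entry):
    """Hedged sketch: parse a geometry entry of the Active Regions Table.
    Only the rectangular form is spelled out in the text
    ("RECTANGLE: (Xa,Ya),(Xd,Yd)"); treating a polygon entry as a vertex
    list is an assumed analogue, and circles would need a centre/radius
    form not shown here."""
    shape, _, rest = entry.partition(":")
    points = [(int(x), int(y))
              for x, y in re.findall(r"\(\s*(-?\d+)\s*,\s*(-?\d+)\s*\)", rest)]
    return shape.strip().upper(), points

# "RECTANGLE: (10,10),(200,150)" -> ("RECTANGLE", [(10, 10), (200, 150)])
print(parse_region_geometry("RECTANGLE: (10,10),(200,150)"))
```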
- as active regions are being drawn by the user, they are highlighted by the transparent ELDT device, while the user views the page of the physical document placed underneath.
- alternatively, the active regions can be created by software, by processing active regions already defined in an electronic mapping of the printed document page.
- Active regions can be nested so that one can be included in another, or even overlap, so that multiple regions can share common portions of document content.
- FIG. 6 illustrates how the user associates one or a plurality of hyperlinks to active regions which have been created.
- for each active region (e.g., R 0 , R 1 ), the user assigns on the corresponding active region entry 601 created on the Active Regions Table 304 on the user workstation 203 , an active region name (e.g., “Hong Kong”, “Hong Kong Island”) 602 and one or a plurality of hyperlinks (comprising hyperlink names and URLs) 603 .
- the hyperlinks link to hypermedia information or services to be accessed, or textual comments or data related to the regions to be displayed, when the user selects the corresponding active regions.
- also associated with each active region entry (e.g., R 0 , R 1 ) are the geometric parameters and coordinates (e.g., RECTANGLE:(Xa,Ya),(Xd,Yd)) 604 specifying the shape and location of said active region on the physical page.
- FIG. 7 shows an example of the information on the Active Regions Table 304 corresponding to the active regions (e.g., R 0 : “Hong Kong”, R 1 : “Hong Kong Island”, R 2 : “Aberdeen”, R 3 : “New Territories”) created on a page (e.g. “Page 133”) of a physical document (e.g., an “ Atlas of Asia ”).
- a plurality of hyperlinks have been associated with each active region.
- the user can access different multimedia information or services from each active region.
- FIG. 8 illustrates the relationship between active regions (e.g., R 0 , R 1 , R 2 , R 3 ) defined on a page of a physical document and the associated reference and hyperlink information on the Active Regions Table.
- This figure illustrates also another principle of the present invention.
- when a user selects a point on the page, the hyperlink data stored on the Active Regions Table for all active regions enclosing the selected point (e.g., R 2 , R 1 , R 0 ) is displayed to the user on the user workstation 203 .
- in the example of FIGS. 8 and 10, there are three outlines R 0 , R 1 and R 2 which surround/encompass the touch point 800 .
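- A hedged Python sketch of this principle follows; the rectangular shapes, dictionary layout and example coordinates are assumptions, while the region names and the Aberdeen hyperlink come from the examples in the text:

```python
def rect_contains(rect, point):
    """Rectangle as ((Xa, Ya), (Xd, Yd)) upper-left / lower-right corners."""
    (xa, ya), (xd, yd) = rect
    x, y = point
    return xa <= x <= xd and ya <= y <= yd

def hyperlinks_for_touch(point, regions):
    """Collect (region name, hyperlink name, URL) triples for every region
    that encloses the touch point; nested regions such as R2 inside R1
    inside R0 all contribute their hyperlinks."""
    hits = []
    for region in regions:                               # assumed dict layout
        if rect_contains(region["rect"], point):
            for (label, url) in region["hyperlinks"]:
                hits.append((region["name"], label, url))
    return hits

# Nested example in the spirit of FIG. 8 (coordinates invented):
regions = [
    {"name": "Hong Kong", "rect": ((0, 0), (300, 200)),
     "hyperlinks": [("Hong Kong overview", "http://example.invalid/hk")]},
    {"name": "Hong Kong Island", "rect": ((50, 80), (250, 180)),
     "hyperlinks": [("Island map", "http://example.invalid/island")]},
    {"name": "Aberdeen", "rect": ((60, 120), (120, 160)),
     "hyperlinks": [("Aberdeen & Stanley Guide",
                     "http://www.inm-asiaguides.com/hongkong/ehkgsou.htm")]},
]
print(hyperlinks_for_touch((80, 140), regions))           # all three regions match
```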
- FIG. 17 illustrates the steps and programming by which a user can use the present invention to obtain more information about a topic in a physical document, after the active regions have been defined as described above with reference to FIG. 16.
- a user selects and identifies to the user workstation 203 a physical document (step 1701 ).
- the physical document comprises one or a plurality of pages.
- a user selects a page of the physical document (step 1702 ) for which the user would like additional information.
- the user places and aligns a transparent ELDT device over the selected page (step 1703 ).
- the ELDT device is connected to the workstation 203 .
- the user identifies the selected page to the user workstation (step 1704 ).
- the page comprises one or a plurality of active regions defined earlier.
- the document management program within workstation 203 , based on the page's Active Regions Table, identifies the active regions within the identified page (step 1705 ) and directs the ELDT to display their geometric outlines.
- the user selects an active region using the stylus (step 1706 ).
- the ELDT determines the position of the stylus on said transparent ELDT device and conveys the touch position coordinates to the document management program (step 1707 ).
- the document management program, by reference to the Active Regions Table, identifies the active region (or plurality of active regions encompassing the touch point) corresponding to the stylus position on said transparent ELDT device (step 1708 ).
- the document management program identifies hyperlinks defined for the active region (or plurality of active regions that encompass the touch point) (step 1709 ).
- the Active Regions Table includes, for each active region, an identification of the respective hyperlinks and the location of the associated hyperlinked information or service.
- the user selects one of the hyperlinks corresponding to the selected active region (or plurality of selected active regions) (step 1710 ).
- the workstation 203 accesses the information or service associated with the selected hyperlink (step 1711 ).
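- The runtime sequence of steps 1707 - 1711 could be sketched in Python as follows; the data layout, helper names and use of a selection callback are assumptions rather than the application's actual implementation:

```python
import webbrowser

def handle_touch(touch_xy, page_regions, choose_link=lambda links: links[0]):
    """Hedged sketch of steps 1707-1711 only; the data layout and the
    selection step (`choose_link`) are assumptions, not the patent's code.
    Each region dict carries a point-membership test and its hyperlinks."""
    candidates = [(r["name"], label, url)
                  for r in page_regions
                  if r["contains"](touch_xy)            # step 1708: enclosing regions
                  for (label, url) in r["hyperlinks"]]  # step 1709: their hyperlinks
    if not candidates:
        return None
    _, _, url = choose_link(candidates)                 # step 1710: user picks a link
    webbrowser.open(url)                                # step 1711: fetch / display it
    return url
```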
- FIG. 9 further illustrates the foregoing steps of FIG. 17. While flipping through the pages or reading the document, the user finds on a certain page (e.g. “Page 133”) 906 one or several items for which he or she would like to receive further information or access to some services.
- the active regions of a physical document may be created by the final user (e.g., by the same reader), or alternatively, by the editor or publisher of said physical document.
- if the Active Regions Table of the document has been created by the user, it should already be stored on, and immediately accessible from, the same user's workstation.
- if the Active Regions Table has been created by the document publisher, it will usually be stored on a publisher's Web server, from which the final user (i.e., the reader) can download it.
- the URL 306 of the Publisher Server and the document reference number 307 , used to identify the document and to locate and retrieve the associated Active Regions Table through the Web, are printed or attached at a predefined, reserved place on the physical document 201 (e.g., on the front cover, back cover or first page).
- the Active Regions Table of the selected physical document is already stored on the user workstation, or is accessible through a network (e.g., through the internet network) from the user workstation.
- to access the Active Regions Table 304 of the selected physical document 200 , the user enters, by means of any user interface (keyboard, mouse, voice recognition software and microphone, . . . ) or any other reading means (e.g., barcode reader . . . ), codes or names for identifying the document. In the embodiment illustrated in FIG. 9, the user identifies the document by typing on the user workstation the physical document reference number or ISBN (e.g., “071104”) 905 . If an Active Regions Table 304 has been defined for this document number, it is accessed and displayed by the user workstation. FIG. 9 also illustrates how, once the Active Regions Table 304 is accessed and displayed by the user workstation 203 , the user identifies the selected page of the physical document by typing the Page Number (e.g., page “133”) 906 .
- FIG. 10 shows how, once the user has identified to the system the selected page (e.g. “Page 133”) 906 of physical document 200 , the geometric data 604 of all active regions defined on this page is retrieved from the Active Regions Table 304 of the selected document.
- the user workstation controls the display of the active regions (e.g., R 0 , R 1 , R 2 , R 3 , R 4 ), which are displayed highlighted by the ELDT device 201 .
- This same figure shows how, by placing and aligning the transparent ELDT device over the physical page, the relationship between the active regions illuminated by the ELDT device and the content of the physical page becomes immediately apparent to the user.
- FIG. 11 illustrates how the user identifies an interesting item (e.g., “Aberdeen”) on the physical page 407 , and checks that this item is contained within an active region (e.g., R 2 ), illuminated by the ELDT device 201 .
- keeping the transparent ELDT device 201 aligned over the physical page, the user points with the stylus to a point 1103 on the ELDT device 201 over the item's position on the physical page.
- the coordinates of the point indicated by the user with the stylus are sensed by the ELDT device 201 and are transmitted to the user workstation 203 .
- FIG. 12 illustrates how, when the user selects with the stylus an item 1200 on a physical page 407 , the coordinates of the point sensed by the ELDT 201 are transmitted to the user workstation 203 . From those coordinates, using interior-point algorithms widely known to those skilled in the art and the geometric data 604 of the active regions (e.g., R 0 , R 1 , R 2 , R 3 , R 4 ) stored in the Active Regions Table 304 for the selected physical page 407 , the active regions that enclose the selected point are identified (e.g., R 2 , R 1 , R 0 ), and the information corresponding to those “selected” active regions 1200 is extracted from the Active Regions Table and displayed by the user workstation.
- This figure shows how, associated with a single point selected by the user on the physical page, a plurality of active regions, and a plurality of hyperlinks associated with each region, may be presented to the user on user workstation 203 .
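- One widely known interior-point test is ray casting; a minimal Python sketch (an illustration, not the application's own algorithm, with invented vertices in the example) is shown below:

```python
def point_in_polygon(px, py, vertices):
    """Ray-casting interior-point test: count crossings of a horizontal ray
    from (px, py) with the polygon edges; an odd count means the point is
    inside the polygon."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > py) != (y2 > py):                       # edge crosses the ray's level
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# A polygonal active region outline could be tested like this:
print(point_in_polygon(80, 140, [(50, 80), (250, 80), (250, 180), (50, 180)]))
```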
- FIG. 13 illustrates how, from the list of active regions (e.g., R 2 , R 1 , R 0 ) encompassing the selected point and the descriptive hyperlinks related to those active regions, the user selects and triggers a hyperlink 1201 (e.g., “Aberdeen & Stanley Guide”, defined on active region R 2 ) to access related multimedia information or a service from the Web.
- the destination address or URL associated with the selected hyperlink (e.g., http://www.inm-asiaguides.com/hongkong/ehkgsou.htm) 1304 is identified from the corresponding entry in the Active Regions Table and is sent from the user workstation 203 through the internet 1203 to the corresponding Web server 1202 (e.g., “www.inm-asiaguides.com”).
- FIG. 14 illustrates how the requested multimedia information or service 1201 (e.g., the HTML file named “ehkgsou.htm”) related to the item 1100 selected by the user on the physical page 407 is received from Web server 1202 (e.g., “www.inm-asiaguides.com”) and is finally played or displayed to the user on the user workstation 203 .
- both visualization modes can be provided as options to the user.
- in a first mode, all active regions defined on the selected page can be simultaneously displayed to the user, so that the user can identify all printed items for which additional information is accessible.
- in the second mode, only the selected active regions will be displayed to the user.
- the user may alternatively prefer to use an opto-touch foil and select illuminated active regions by touching the opto-touch foil with a finger, instead of using a stylus.
- the user can alternatively choose to place the transparent digitizing tablet or opto-touch foil under, instead of over, the physical document page.
- a customer receives complex computer equipment, with an installation manual comprising drawings and schematics of parts and subassemblies of the equipment.
- by placing the transparent ELDT device over any one of these schematics, the user can immediately see certain parts of the complex schematic illuminated as active regions. These illuminated parts are identified as hyperlink items and can be used for accessing additional information on a remote Web server or on a local computer.
- multimedia instructions showing how the part needs to be installed or serviced are displayed. It is not necessary to look through printed manuals to obtain this information. No complex navigation is required.
- a single printed copy of a general view of the equipment is sufficient to navigate with the system according to the present invention. The customer need only press with the stylus on the desired illuminated region on the surface of the installation manual.
- a subscriber reading a newspaper or magazine may be interested in seeing computer multimedia or TV video information associated with the articles he or she reads. While reading the sports pages (e.g., in the New York Times), key events can be instantly recalled and played on demand (e.g., the opening ceremony of the Athens Olympic Games, the last images of the “Tour de France”, the last tennis match at Wimbledon, etc.) simply by touching highlighted regions encompassing titles, news or editorial sections printed on the newspaper pages.
- a user flipping through the pages and glancing at printed figures and text in a newspaper written in a foreign language may select a section of the newspaper content (e.g., an article) and receive from the newspaper publisher's server the selected content translated into the user's native language (e.g., the selected article translated into English).
Abstract
A system, method and program product for presenting and selecting an active region of a physical document page so that a user can access corresponding information via a workstation. A transparent electro-luminescent tablet or other touch sensitive plate is positioned over the physical document page. The tablet or plate is coupled to the workstation. The physical document page is identified to the workstation. The workstation stores information defining an active region for the physical document page and a hyperlink to a web page or web file containing information related to content of the active region. The workstation directs the tablet or plate to display the active region over the physical document page. A user touches a point within the active region. In response, the tablet or plate conveys the touch point to the workstation, and the workstation displays the hyperlink on a computer screen. The active region can be identified by an outline that encompasses the active region. One such active region can encompass another such active region, so that touching a point within the inner active region elicits display of hyperlinks or documents related to both active regions.
Description
- The present invention relates to printed media documents with electronic links to related, electronic information.
- Electronic publishing is well known today. An enormous amount of content, including documents, books and other types of publications, is now accessible to users of personal computers or specialized e-book readers via the WWW or CD-ROM. Nevertheless, some people prefer the feeling and ease of reading a tangible newspaper, magazine or book.
- Hyperlinks on web pages are well known today where a user can “click on” an icon, and in response, the web browser will fetch and display another web page linked to the icon. It was also known to define hyperlink active regions in a web page as rectangles, circles, and polygons, and associate them with a hyperlink address. They enable selected areas of a digital image (e.g., a GIF or JPEG image file) to be made “clickable” (i.e., active) so that a user can navigate from the web page containing the image to a number of other web pages or files, depending on which part of the image is selected. To create an imagemap, three things are required: an image, a database that relates each active region within the image to a hypertext reference, and a method of associating the database with the image.
- U.S. patent application 20020087598 entitled “Method and system for accessing interactive multimedia information or services by touching highlighted items on physical documents” was filed on Apr. 25, 2001 and published on Jul. 4, 2002. It discloses a system and method for manually selecting and electronically accessing multimedia information and/or services located on a user workstation or on one or a plurality of servers connected to a communication network. To make a selection, a person touches his or her finger to a word, letter, symbol, picture, icon, etc. that is electronically illuminated on the surface of a hard-copy document or any other physical surface. These illumination items are illuminated by a luminous signal (or light spot) generated by a transparent opto-touch foil, operating under the control of a user workstation. These illumination items act like hyperlinks. When the user selects one of the illuminated items, the user workstation receives from the opto-touch foil a signal indicating the position of the selected item. Then, the user workstation identifies and locates, by reference to a hyperlink table, the information and/or the service associated with the selected item. If the information and/or service is located in a remote server, the user workstation sends a request to this server for the information and/or service. If the information and/or the service is stored in the user workstation, then this information and/or service is accessed locally. The user workstation then displays the information or provides the requested service.
- In U.S. patent application 20020087598, the hyperlinked items are identified by the user as discrete illuminated points (light spots) emitted by the transparent opto-touch foil placed over the document. When the user touches the foil, a “minimum distance” algorithm is used to identify the hyperlink item selected by the user. According to the minimum distance algorithm, the distance from the coordinates of the point pressed by the user on the opto-touch foil to the coordinates of each hyperlinked item (i.e., each item treated as an illuminated point) defined on the document is computed. The hyperlink item closest to the point that was pressed is the one deemed selected and triggered. Each hyperlink item (light spot) is associated with a unique hyperlink destination (i.e., with a single URL) giving access to a single item of multimedia information or a service related to the selected item.
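Purely as an illustration of the prior-art behavior described above (not code from either application), the minimum-distance selection can be sketched in Python as follows; the item representation is hypothetical:

```python
import math

def nearest_hyperlink_item(touch: tuple[float, float],
                           items: dict[str, tuple[float, float]]) -> str:
    """Return the name of the illuminated item (light spot) closest to the touch point.

    `items` maps an item name to the coordinates of its light spot; the single
    closest item is deemed selected, no matter how near the runner-up is.
    """
    return min(items, key=lambda name: math.dist(items[name], touch))

# Hypothetical example: two closely spaced spots can be hard to discriminate.
# nearest_hyperlink_item((10.2, 5.0), {"word A": (10.0, 5.0), "word B": (10.5, 5.0)})
```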
- The system disclosed in U.S. patent application 20020087598 may have difficulty discriminating between touch points adjacent to closely spaced hyperlink items. Also, the appearance of the illuminated spots on the transparent foil over the document may mask, to some degree, the print seen by the user. Also, the use of a light spot as the hyperlink item does not always convey the subject matter of the hyperlinked information.
- An object of the present invention is to create and utilize indicia of active regions on a printed document in such a way as to facilitate user selection of an active region.
- Another object of the present invention is to create and utilize indicia of active regions on a printed document in such a way as not to mask the document.
- Another object of the present invention is to create and utilize indicia of active regions on a printed document in such a way as to more readily convey the subject matter of the hyper-linked information.
- Another object of the present invention is to create and utilize indicia of active regions on a printed document in such a way as to show the hyperlinked information related to an active region selected by the user.
- The present invention resides in a system, method and program product for presenting and selecting an active region of a physical document page so that a user can access corresponding information via a workstation. A transparent electro-luminiscent tablet or other touch sensitive plate is positioned over the physical document page. The tablet or plate is coupled to the workstation. The physical document page is identified to the workstation. The workstation stores information defining an active region for the physical document page and a hyperlink to a web page or web file containing information related to content of the active region. The workstation directs the tablet or plate to display the active region over the physical document page. A user touches a point within the active region. In response, the tablet or plate conveys the touch point to the workstation, and the workstation displays on a computer screen the hyperlink.
- The invention also resides in a system, method and program product for presenting and selecting an active region of a physical document page so that a user can access corresponding information via a workstation. A transparent electro-luminiscent tablet or other touch sensitive plate is positioned over the page. The tablet or plate is coupled to the workstation. The page is identified to the workstation. The workstation stores information defining an outline of the active region for the page and a hyperlink or information related to content of the active region. The workstation directs the tablet or plate to display the outline of the active region over the page. A user touches a point within the outline. In response, the tablet or plate conveys the touch point to the workstation, and the workstation displays on a computer screen the information related to the content of the active region.
- The invention also resides in a system, method and program product for presenting and simultaneously selecting first and second active regions of a physical document page so that a user can access corresponding information via a workstation. A transparent electro-luminiscent tablet or other touch sensitive plate is positioned over the page. The tablet or plate is coupled to the workstation. The page is identified to the workstation. The workstation stores information defining outlines of the first and second active regions for the page and first and second hyperlinks or first and second documents related to contents of the first and second active regions, respectively. The outline for the second active region encompasses the outline for the first active region. The workstation directs the tablet or plate to display the outlines of the first and second active regions over the page. A user touches a point within the outline of the first active region. In response, the tablet or plate conveys the touch point to the workstation, and the workstation displays on a computer screen the first and second hyperlinks or the first and second documents related to the contents of the first and second active regions.
- FIG. 1 illustrates a physical document where an active region has been defined according to the present invention, and also illustrates related hyperlinked information appearing on a user computer display.
- FIG. 2 illustrates a system, according to the present invention, used to create, display and use active regions on a physical document.
- FIG. 3 illustrates a stage in use of the system of FIG. 2.
- FIG. 4 illustrates a stage in use of the system of FIG. 2 when creating active regions on a physical document, wherein the system is used to enter the page number on a user workstation.
- FIG. 5 illustrates how the system of FIG. 2 is used to create active regions on a transparent ELDT device.
- FIG. 6 illustrates how the system of FIG. 2 is used to associate an active region with at least one hyperlink.
- FIG. 7 shows a table within the system of FIG. 2 used to define hyperlinks corresponding to active regions.
- FIG. 8 shows relationships between active regions defined on a page of a physical document and the corresponding hyperlinks.
- FIG. 9 illustrates a stage in use of the system of FIG. 2 where a user enters on the user workstation a document reference number, and the system then displays the active regions defined on the document.
- FIG. 10 illustrates a stage in the use of the system of FIG. 2 wherein a user specifies a page number of a multipage document, and the system displays the active regions for that page.
- FIG. 11 illustrates a stage in the use of the system of FIG. 2 where the user touches and thereby selects an active region displayed on the page.
- FIG. 12 illustrates a stage in the use of the system of FIG. 2 where hyperlink information corresponding to the single touch point of FIG. 11 (and to the multiple active regions that enclose and are invoked by that touch point) is retrieved and displayed to the user on the user computer.
- FIG. 13 illustrates a stage in the use of the system of FIG. 2 where a user selects one of the hyperlinks of FIG. 12, and the system fetches the information via the Internet.
- FIG. 14 illustrates a stage in the use of the system of FIG. 2 where the system displays on the user computer a web document corresponding to the hyperlink selected by the user in FIG. 13.
- FIG. 15 illustrates the internal structure of an electro-luminiscent digitizing tablet (ELDT) which overlays the printed document within the system of FIG. 2.
- FIG. 16 is a flow chart illustrating the steps in creating active regions on a physical document within the system of FIG. 2.
- FIG. 17 is a flow chart illustrating the steps in using the active regions created in FIG. 16.
- FIG. 18 illustrates the components within the user computer and ELDT of the system of FIG. 2.
- FIG. 1 illustrates how the present invention is used to provide additional information or a translation for a
foreign language newspaper 100. The user places an Electro-luminescent display tablet (“ELDT”) over a page of the newspaper or other document of interest. Then, the user identifies the document to a user workstation. In response, the user workstation directs the ELDT to display/illuminate a perimeter of an active region 103 on the document. Then, the user touches a printed item 101 within the active region 103 with a stylus 102 in order to receive additional information about the content within the active region. In response, the user workstation 203 automatically displays an English translation 104 of the text printed in the highlighted region. (Alternately, an English translation can be read by a text-to-speech conversion system.) Alternately, in response to the user touching the identified item 101, the user workstation may display one or more hyperlinks to web pages or other documents or applications related to the subject of the selected item. A short summary may be displayed adjacent to each hyperlink to advise the user what further information the hyperlink will elicit. If the user selects one of the hyperlinks using a mouse or keyboard, the user workstation will display the associated web page or other information.
- FIG. 2 illustrates the main components of the present invention and their operation.
Physical documents 200 can be of any kind, for example, newspapers, legal documents, maps (e.g., topographical maps, political maps, historical maps, route maps, shaded relief maps, city maps, natural resources maps, railroad maps or other types of map), fiction novels, academic text books, technical books, commercial catalogs or any other type of engraved, written, or printed surface. The document can be made of paper, plastic, wood or any other material. For identifying a selected document 200 to the system, the title 205, a numerical reference 206 (e.g., the ISBN number, or any other number assigned by the publisher or the user), or even the URL (i.e., the internet address) 207 of the publisher server may be printed, written or attached on the physical document (e.g., on the front cover, back cover or first page).
- As shown in FIG. 15, the electro-luminiscent digitizing tablet (ELDT) 201 comprises two superposed, functionally independent transparent foils 1201 and 1202. Transparent digitizing tablet (DT) 1201 can be of a type commonly used to manufacture position-sensitive liquid crystal display devices (PSLCDs) for computers. The generated signals are generally proportional to the coordinates of the point that is pressed 1504 by the
stylus 202. Transparent electro-luminiscent display (EL) 1202 can be a transparent, bright, self-emitting display that can emit light 1505 from either one or both surfaces. The combination of both foils (i.e., the digitizing tablet 1201 stacked over the electro-luminiscent display 1202) forms electro-luminiscent digitizing tablet (ELDT) 201. - FIG. 15 illustrates an ELDT placed and aligned over a
physical document 200 comprising a plurality of items 1507 (i.e., words, paragraphs, sections, pictures, icons, etc.) printed (or written, painted, engraved . . . ) on the surface of the document 200. FIG. 15 also illustrates how the electro-luminiscent display 1202 emits light 1505 illuminating polygonal or circular perimeters that define the active regions of a printed document. This occurs when a user draws them with the stylus 202 and subsequently when the user selects them. The portions of the ELDT other than those displaying the active region perimeters allow light 1506 from the document 200 to pass through both transparent foils 1201 and 1202 to the reader, so that the surface of the physical document is fully visible except underneath the thin luminous lines delimiting the active regions.
- Referring again to FIG. 2, the
ELDT 201 may communicate with the user workstation 203 over an infrared link, a serial wired connection or any other communication means (e.g., by means of a wireless connection operating in the globally available 2.4 GHz band of the “Bluetooth” specification, as promoted by the “Bluetooth Special Interest Group” and documented on the Official Bluetooth Website). This connection, wired or wireless, is represented by the reference number 204.
- Known transparent digitizing tablets are produced, for example, by Calcomp Corporation and Wacom Technology Company. One example of a transparent digitizing tablet that can be used for
ELDT 201 is the WACOM PL Series of LCD pen tablet systems.
- The transparent electro-luminiscent display 1202 may include a substrate having an array formed by a plurality of transparent scanning lines, transparent data lines crossing said scanning lines, and electro-luminiscent (EL) elements (pixels) at the intersections of the scanning and data lines. The lines are used to determine the position of an applied stylus. Those transparent lines and contacts are made of a transparent conductive material, e.g., indium tin oxide (ITO). When integrated on top of a display surface, a transparent digitizing tablet is actually a layer that has a mesh of transparent wire sensors running through it. This mesh may look like moiré patterns on the top of the display. These thin wires, when acted upon by a moving stylus, report the sequence of contact points. The movement of a pencil-like stylus over a tablet surface re-creates the drawing on a computer screen.
- With today's technology, this passive-matrix, light-emitting display may be made of an array of TOLED's (Transparent Organic Light Emitting Devices) of the types used to create vision area displays on windshields, cockpits, helmets and eyeglasses. In its most basic form, a TOLED is a monolithic, solid-state device consisting of a series of “small molecule” organic thin films sandwiched between two transparent, conductive layers. When a voltage is applied across the device, it emits light. This light emission is based upon a luminescence phenomenon wherein electrons and holes are injected and migrate from the contacts toward the organic heterojunction under the applied electric field. When these carriers meet, they form excitons (electron-hole pairs) that recombine radiatively to emit light. As a result, TOLEDs are bright, self-emitting displays that can be directed to emit from either or both surfaces. This is possible because, in addition to having transparent contacts, the organic materials are also transparent over their own emission spectrum and throughout most of the visible spectrum.
- TOLED displays are today manufactured with standard silicon semiconductors. Since TOLEDs are thin-film, solid-state devices, they are very thin, lightweight and durable, ideal for portable applications like the present invention. TOLEDs can be bottom, top, or both bottom and top emitting. Also, TOLED technology has attractive advantages regarding transparency (TOLED displays can be nearly as clear as the glass or substrate they are on, and when built between glass plates, TOLEDs are >85% transparent when turned off), energy efficiency (for longer battery life), full viewing angle, bright and high-contrast light emission, fast response time, and environmental robustness. Thus, TOLEDs are well suited for manufacturing the light-emitting, electro-luminiscent display component used jointly with the transparent digitizing tablet for the present invention. One example of light emitting foil technology that may be used is that of the TOLEDs manufactured by UNIVERSAL DISPLAY CORPORATION.
- Pen-like stylus 202 is of a type commonly used as an input device for data processing and storage systems in place of conventional keyboards and mouse devices. The stylus 202 is used in combination with the digitizing tablet 1201 component of the ELDT 201, which incorporates a resistive or capacitive digitizer or sheet material. As such, information can be input by writing with the stylus on the ELDT device. The electro-luminiscent 1202 component of the ELDT displays the instantaneous position and path of movement of the stylus. In this way, the ELDT device displays the pattern, e.g., a written message, sketch or signature traced thereon. In the present invention, a human uses the stylus 202 to draw active regions via the ELDT. Subsequently, a human uses the stylus 202 to select a portion of document content seen through the transparent ELDT device 201. If that portion is within an active region, then the ELDT notifies the workstation 203 of the selection. One example of stylus 202 is a known wireless, pressure-sensitive Wacom UltraPen (trademark of Wacom Technology Company) stylus. - The
user workstation 203 can be a handheld device, such as a PDA or a cell phone, a personal computer, a network computer, an Internet appliance or a wireless IP enabled device, connected to the ELDT 201. The user workstation 203 can be stand-alone or connected to a network (e.g., the Internet). User workstation 203 includes a wired, wireless or other connection 204 for connecting to the ELDT device 201 to transfer the information necessary to create active regions of physical documents, or to receive through a network and store active regions of a plurality of physical documents. The user workstation receives the coordinates of the points selected by the user with the stylus on the physical document 200 to select active regions detected by the ELDT device 201.
- The components and operation of an embodiment of the present invention are now described with reference to FIG. 18. A
pulse driving circuit 1803 alternately transmits driving pulses in the X-axis and Y-axis directions of the digitizing tablet 1201 for sensing the present position of the stylus 202. The position of stylus 202 is detected by capacitive coupling sensed in the digitizing tablet 1201. The stylus 202 senses a position signal in a potential distribution on the digitizing tablet 1201 using capacitive coupling and provides the position signal to the position sensing circuit 1806. The position sensing circuit 1806 receives the present X-axis and Y-axis coordinate data of the stylus and converts the coordinate data into digitized position data. The microcontroller 1807 controls the pulse driving circuit 1803 and also transfers data of the position detected from the position sensing circuit 1806 to the user workstation 203. Upon reception of the position data from position sensing circuit 1806, the microcontroller 1807 analyses the position data to calculate the present position of the stylus 202 and updates the user workstation 203 accordingly. - The
user workstation 203 controls the EL display driving circuit 1804, while the EL display driving circuit 1804 provides X-axis and Y-axis coordinate driving signals to the electro-luminiscent display 1202 so that it can display the pixel on which the stylus is placed. Alternatively, during monitoring of previously defined active regions, the X-axis and Y-axis coordinates of the points (pixels) defining the geometry of the active regions are fetched from the referenced physical page data in the Active Regions Table 1812 and are loaded into the Page Regions graphics memory 1811 (a graphics buffer for all regions defined to be active on a document's page). The EL display driving circuit 1804 retrieves from the Page Regions graphics memory 1811 the coordinates of the pixels of the active regions to be drawn and transforms those coordinates into driving signals sent to the electro-luminiscent display 1202.
- FIG. 16 illustrates the steps for creating active regions (imagemaps) on portions of physical documents and associating hyperlinks from the active regions to multimedia information or services. In
step 1601, a user selects a physical document and identifies it to the user workstation 203. The physical document comprises one or multiple pages. In response to the selection of the physical document, a document management program within user workstation 203 initiates an Active Regions Table associated with the physical document (step 1602). The document management program records in the Active Regions Table an identification of the selected physical document (step 1603). Next, a user selects a page of the physical document and identifies the page to the document management program (step 1604). The document management program then records the page in the Active Regions Table for this document (step 1605). Next, the user identifies to the document management program names of portions of the page which will correspond to active regions subsequently identified by the user (step 1606).
- The following steps 1607-1611 are performed for each active region defined by the user. The document management program assigns and stores an identifier of the active region in the Active Regions Table (step 1607). Next, the user places and aligns a transparent ELDT device over the selected page of the physical document (step 1608). Next, the user draws with the stylus the contour of the active region over the transparent ELDT device, or otherwise defines the outline of the active region by specifying the junction points of the polygon or the shape and size of the circle (step 1609). The active region is defined in terms of rectangles, circles or polygons. Next, the document management program receives from the ELDT device and stores in the Active Regions Table the geometric coordinates of the outline of the active region (step 1610). Next, the user specifies to the document management program one or more hyperlinks for each of the active regions defined by the user (step 1611), and the document management program stores this information in the Active Regions Table.
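For illustration only, the records captured by steps 1601-1611 might be modeled as in the following Python sketch; the field names, types and helper are assumptions made for this example and are not taken from the application itself.

```python
from dataclasses import dataclass, field

@dataclass
class ActiveRegion:
    region_id: str                                  # e.g. "R0" (step 1607)
    name: str                                       # e.g. "Hong Kong"
    shape: str                                      # "RECTANGLE", "POLYGON" or "CIRCLE" (step 1609)
    coordinates: list[tuple[float, float]]          # outline geometry from the ELDT (step 1610)
    hyperlinks: list[tuple[str, str]] = field(default_factory=list)  # (label, URL) pairs (step 1611)

@dataclass
class ActiveRegionsTable:
    document_ref: str                               # e.g. ISBN or publisher reference (step 1603)
    title: str
    publisher_url: str
    pages: dict[int, list[ActiveRegion]] = field(default_factory=dict)

    def add_region(self, page: int, region: ActiveRegion) -> None:
        """Record a newly drawn active region for a page (steps 1605-1611)."""
        self.pages.setdefault(page, []).append(region)

# Hypothetical usage mirroring the "Atlas of Asia" example:
table = ActiveRegionsTable("071104", "Atlas of Asia", "http://www.geoworld.com")
table.add_region(133, ActiveRegion("R0", "Hong Kong", "RECTANGLE",
                                   [(10.0, 20.0), (180.0, 140.0)],
                                   [("Hong Kong Guide", "http://example.org/hk")]))
```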
- FIG. 3 illustrates the foregoing process in more detail. The user opens on the user workstation 203 an Active Regions Table 304 for the selected document 200 and enters, via the keyboard or mouse, codes or names for identifying the document. For example, the user types in the corresponding fields of the Active Regions Table 304 the physical document reference number or ISBN (e.g., “071104”) 305, the document title (e.g., “Atlas of Asia”) 306, the publisher's name (e.g., “GeoWorld Ltd.”) and the internet address of the publisher Web server (e.g., the URL “http://www.geoworld.com”) 307. It should be noted that active regions can be defined by final users (e.g., readers) as well as by editors or publishers of the physical documents. In the former case, the Active Regions Table may be stored on the user's workstation. In the latter case, the publisher may create the Active Regions Table for a published document and store it on a publisher's Web server for distribution to final users. From those publisher's servers, final users (i.e., readers) can select and download the Active Regions Tables of published documents to their workstations.
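As a rough sketch of the publisher-distribution case described above, a reader's workstation might retrieve a published Active Regions Table with something like the following; the URL pattern and JSON format are assumptions for this illustration, not part of the application.

```python
import json
from urllib.request import urlopen

def download_active_regions_table(publisher_url: str, document_ref: str) -> dict:
    """Fetch the Active Regions Table published for a document reference number.

    Assumes, purely for illustration, that the publisher serves the table as JSON
    at <publisher_url>/active-regions/<document_ref>.json.
    """
    url = f"{publisher_url.rstrip('/')}/active-regions/{document_ref}.json"
    with urlopen(url) as response:
        return json.load(response)

# Hypothetical call using the identifiers printed on the document cover:
# table = download_active_regions_table("http://www.geoworld.com", "071104")
```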
- The URL 306 of the Publisher Server and the document reference number 307, used to identify the document and locate the electronic document copy through the Web, must be printed or attached at a predefined reserved place on the physical document 201 (e.g., on the front cover, back cover or first page). For each selection made by the user from a physical document 201, a new entry must be created on the Selections Table 305.
- FIG. 4 shows how the user (the publisher or the editor) while browsing or composing a physical document (e.g., an “Atlas of Asia”) 200 finds on a page (e.g. “
Page 133”) 507 portions of printed content (e.g., cities and islands on the “Hong Kong” map) representing interesting topics to which he or she would like to associate links to related multimedia information or services. As represented in this figure, to create active regions from selected portions of this page, the user first identifies in the Active Regions Table 304 the selected page of physical document 200 by typing 405 the Page Number (e.g., page “133”) 507 on the user workstation 203. The selected Page Number 507 is recorded on the Active Regions Table 304, associated with the selected document identification.
- After entering on the
user workstation 203 the number 507 of the selected page, the following is done with the transparent ELDT device 201 to define active regions on this page. The user places the ELDT device 201 over the page and aligns 406 the ELDT device 201 with the borders of the page by some conventional means (e.g., by adjusting the upper left corner of the ELDT device to the upper left corner of the page). The user can still see the contents of the selected document's page through the transparent ELDT device.
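Because the ELDT is simply aligned with the page corner, tablet coordinates can be related to page coordinates with a trivial transform. The sketch below is an illustration only; the offset and scale parameters are hypothetical, not values taken from the application.

```python
def tablet_to_page(x: float, y: float,
                   offset_x: float = 0.0, offset_y: float = 0.0,
                   scale: float = 1.0) -> tuple[float, float]:
    """Map a digitizer coordinate to a page coordinate.

    Assumes the upper left corner of the ELDT is aligned with the upper left
    corner of the page (offsets near zero) and that both use the same units
    unless a scale factor is supplied.
    """
    return ((x - offset_x) * scale, (y - offset_y) * scale)
```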
- FIG. 5 illustrates how to create active regions on a page 407 of a physical document after the transparent ELDT device 201 is placed and aligned over the physical page. The user draws/traces, by means of stylus 202, active regions over the transparent ELDT device. The shapes of the active regions are predefined, menu-selectable geometric forms, such as circles, rectangles or polygons. They are drawn or located by the user to enclose the portions of document content which the user wants to become active (i.e., selectable or “clickable”). FIG. 5 illustrates how the user chooses to define active regions comprising rectangles (e.g., R0) and polygons (e.g., R1) enclosing selected geographic regions represented on a map (e.g., on the “Hong Kong” physical map). In the example shown in this figure, this operation can be done for each active region by the user selecting the options “RECTANGULAR REGION” or “POLYGONAL REGION” on the user workstation. Then, the user, keeping the transparent ELDT device 201 aligned over the selected page 407, draws or specifies by means of the stylus 202 the selected region (e.g., R0) by marking the corners (e.g., “A”, “B”, “C”, “D”) defining the contour around it (or, as in the case of a rectangular region, by marking two points, i.e., the upper left corner “A” and the lower right corner “D”). Coordinates of the vertices (e.g., “A”, “B”, “C”, “D”) of the region (e.g., R0) are sensed by the ELDT device 201 and are transmitted to the user workstation 203, where they are recorded on a new entry created on the Active Regions Table 304 for the new active region. This new active region entry is associated with the selected document and page identification. Geometric parameters defining rectangular, polygonal or circular regions are stored on the Active Regions Table 304. Internally, a pointer is created between the entry corresponding to the active region being defined and the geometry shape parameters and coordinates received from the ELDT device determining the shape and location of said active region on the physical page (e.g., rectangular region R0 would be defined as RECTANGLE: (Xa,Ya),(Xd,Yd), where (Xa, Ya) are the coordinates of the upper left corner “A” and (Xd, Yd) are the coordinates of the lower right corner “D” of this rectangular region). As active regions are being drawn by the user, they are highlighted by the transparent ELDT device, while the user views the page of the physical document placed underneath. In another embodiment of the present invention, the active regions can be created by software, by processing active regions already created by an electronic mapping of the printed document page.
- Active regions can be nested so that one can be included in another, or even overlap, so that multiple regions can share common portions of document content.
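To make the stored notation concrete, the following sketch encodes and parses geometry strings of the form given above (e.g. RECTANGLE: (Xa,Ya),(Xd,Yd)); the exact string format is chosen for this illustration and is not prescribed by the application.

```python
import re

def encode_rectangle(xa: float, ya: float, xd: float, yd: float) -> str:
    """Encode a rectangular region by its upper left (A) and lower right (D) corners."""
    return f"RECTANGLE: ({xa},{ya}),({xd},{yd})"

def decode_region(text: str) -> tuple[str, list[tuple[float, float]]]:
    """Parse a stored geometry string back into a shape name and a vertex list."""
    shape, _, coords = text.partition(":")
    points = [(float(x), float(y))
              for x, y in re.findall(r"\(([-\d.]+),([-\d.]+)\)", coords)]
    return shape.strip(), points

# decode_region("RECTANGLE: (10,20),(180,140)") -> ("RECTANGLE", [(10.0, 20.0), (180.0, 140.0)])
```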
- FIG. 6 illustrates how the user associates one or a plurality of hyperlinks to active regions which have been created. For each active region (e.g., R0, R1) that the user draws with the stylus over the
ELDT device 201 and the selected physical page 407, the user assigns, on the corresponding active region entry 601 created on the Active Regions Table 304 on the user workstation 203, an active region name (e.g., “Hong Kong”, “Hong Kong Island”) 602 and one or a plurality of hyperlinks (comprising hyperlink names and URLs) 603. The hyperlinks link to hypermedia information or services to be accessed, or textual comments or data related to the regions to be displayed, when the user selects the corresponding active regions. For each active region (e.g., R0, R1) 601 on the Active Regions Table 304 there is internally a pointer to the geometric parameters and coordinates (e.g., RECTANGLE:(Xa,Ya),(Xd,Yd)) 604 specifying the shape and location of said active region on the physical page.
- FIG. 7 shows an example of the information on the Active Regions Table 304 corresponding to the active regions (e.g., R0: “Hong Kong”, R1: “Hong Kong Island”, R2: “Aberdeen”, R3: “New Territories”) created on a page (e.g. “
Page 133”) of a physical document (e.g., an “Atlas of Asia”). A plurality of hyperlinks have been associated with each active region. Thus, the user can access different multimedia information or services from each active region.
- FIG. 8 illustrates the relationship between active regions (e.g., R0, R1, R2, R3) defined on a page of a physical document and the associated reference and hyperlink information on the Active Regions Table. This figure also illustrates another principle of the present invention. When the user selects a
point 800 on a physical document, the hyperlink data (stored on the Active Regions Table) of all active regions enclosing the selected point (e.g., R2, R1, R0) is displayed to the user on the user workstation 203. In the example illustrated in FIGS. 8 and 10, there are three outlines R0, R1 and R2 which surround/encompass the touch point 800. So, when the user touches point 800, the hyperlinks for all three active regions R0, R1 and R2 are displayed on the user workstation. Then, the user can select any or all of these hyperlinks to see the corresponding web pages on user workstation 203.
- FIG. 17 illustrates the steps and programming by which a user can use the present invention to obtain more information about a topic in a physical document, after the active regions were defined as described above with reference to FIG. 15. A user selects and identifies to the user workstation 203 a physical document (step 1701). The physical document comprises one or a plurality of pages. A user selects a page of the physical document (step 1702) for which the user would like additional information. Then, the user places and aligns a transparent ELDT device over the selected page (step 1703). The ELDT device is connected to the
workstation 203. Next, the user identifies the selected page to the user workstation (step 1704). The page comprises one or a plurality of active regions defined earlier. The document management program within workstation 203, based on the page's Active Regions Table, identifies the active regions within the identified page (step 1705) and directs the ELDT to display their geometric outlines. Next, the user selects an active region using the stylus (step 1706). Next, the ELDT determines the position of the stylus on said transparent ELDT device and conveys the touch position coordinates to the document management program (step 1707). Next, the document management program, by reference to the Active Regions Table, identifies the active region (or plurality of active regions that encompass the touch point) corresponding to the stylus position on said transparent ELDT device (step 1708). Next, by reference to the Active Regions Table, the document management program identifies the hyperlinks defined for the active region (or plurality of active regions that encompass the touch point) (step 1709). As explained above, the Active Regions Table includes, for each active region, an identification of the respective hyperlinks and the location of the associated hyperlinked information or service. Next, the user selects one of the hyperlinks corresponding to the selected active region (or plurality of selected active regions) (step 1710). In response, the workstation 203 accesses the information or service associated with the selected hyperlink (step 1711).
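A minimal sketch of the touch-handling flow of steps 1707-1709 is given below; the region dictionaries and the injected hit_test callable are assumptions for this illustration rather than the application's actual interfaces (a concrete geometric test is sketched further on).

```python
from typing import Callable, Iterable

Point = tuple[float, float]

def regions_at_point(regions: Iterable[dict], point: Point,
                     hit_test: Callable[[dict, Point], bool]) -> list[dict]:
    """Steps 1707-1708: find every active region whose outline encloses the touch point."""
    return [region for region in regions if hit_test(region, point)]

def hyperlinks_for_touch(regions: Iterable[dict], point: Point,
                         hit_test: Callable[[dict, Point], bool]) -> list[tuple[str, str]]:
    """Step 1709: collect the (label, URL) hyperlinks of all enclosing regions for display."""
    links: list[tuple[str, str]] = []
    for region in regions_at_point(regions, point, hit_test):
        links.extend(region.get("hyperlinks", []))
    return links
```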
- FIG. 9 further illustrates the foregoing steps of FIG. 17. While flipping through the pages or reading the document, the user finds on a certain page (e.g. “Page 133”) 906 one or several items for which he or she would like to receive further information or access to some services. As was discussed herein before, by means of the present invention, the active regions of a physical document may be created by the final user (e.g., by the same reader) or, alternatively, by the editor or publisher of said physical document. In the first case, because the Active Regions Table of the document has been created by the user, it should already be stored on and immediately accessible from the same user's workstation. In the second case, because the Active Regions Table has been created by the document publisher, it will usually be stored on a publisher's Web server for distribution to final users. In this second case, using the URL 306 of the Publisher Server and the document reference number 307, the final user (i.e., the reader) can access and download from the publisher server to the user workstation the Active Regions Table of the received document. As stated above, the URL 306 of the Publisher Server and the document reference number 307, used to identify the document and locate and retrieve through the Web the associated Active Regions Table, is printed or attached at a predefined reserved place on the physical document 201 (e.g., on the front cover, back cover or first page). In any case, in the following discussion it is assumed that the Active Regions Table of the selected physical document is already stored on the user workstation, or is accessible through a network (e.g., through the Internet) from the user workstation.
- To access the Active Regions Table 304 of the selected
physical document 200, by means of any user interface (keyboard, mouse, voice recognition software and microphone, . . . ) or any other reading means (e.g., barcode reader . . . ), the user enters codes or names for identifying the document. In the embodiment illustrated in FIG. 9, the user identifies the document by typing on the user workstation the physical document reference number or ISBN (e.g., “071104”) 905. If an Active Regions Table 304 has been defined for this document number, it is accessed and displayed by the user workstation. FIG. 9 also illustrates how, once the Active Regions Table 304 is accessed and displayed by the user workstation 203, the user identifies the selected page of the physical document by typing the Page Number (e.g., page “133”) 906.
- FIG. 10 shows how, once the user has identified to the system the selected page (e.g. “
Page 133”) 906 of physical document 200, the geometric data 604 of all active regions defined on this page is retrieved from the Active Regions Table 304 of the selected document. By means of an ELDT device driver, the user workstation controls the display of the active regions (e.g., R0, R1, R2, R3, R4), which are displayed highlighted by the ELDT device 201. This same figure shows how, by placing and aligning the transparent ELDT device over the physical page, the relationship between the active regions illuminated by the ELDT device and the content of the physical page becomes immediately apparent to the user.
- FIG. 11 illustrates how the user identifies an interesting item (e.g., “Aberdeen”) on the
physical page 407, and checks that this item is contained within an active region (e.g., R2) illuminated by the ELDT device 201. To select the item, the user, keeping the transparent ELDT device 201 aligned over the physical page, points with the stylus to a point 1103 on the ELDT device 201 over its position on the physical page. The coordinates of the point indicated by the user with the stylus are sensed by the ELDT device 201 and are transmitted to the user workstation 203.
- FIG. 12 illustrates how, when the user selects with the stylus an
item 1200 on a physical page 407, the coordinates of the point sensed by the ELDT 201 are transmitted to the user workstation 203. From those coordinates, by means of interior point algorithms widely known by those skilled in the art, and using the geometric data 604 of the active regions (e.g., R0, R1, R2, R3, R4) stored on the Active Regions Table 304 for the selected physical page 407, the active regions that enclose the point selected by the user are identified (e.g., R2, R1, R0), and the information corresponding to those “selected” active regions 1200 is extracted from the Active Regions Table and displayed by the user workstation. This figure shows how, associated with a single point selected by the user on the physical page, a plurality of active regions and a plurality of hyperlinks associated with each region may be presented to the user on user workstation 203.
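The “interior point” test mentioned above can be realized with the classic ray-casting algorithm; the following is a generic sketch of that well-known technique (together with trivial rectangle and circle tests), not code from the application.

```python
import math

Point = tuple[float, float]

def point_in_rectangle(p: Point, a: Point, d: Point) -> bool:
    """a = upper left corner, d = lower right corner, as in RECTANGLE:(Xa,Ya),(Xd,Yd)."""
    return (min(a[0], d[0]) <= p[0] <= max(a[0], d[0])
            and min(a[1], d[1]) <= p[1] <= max(a[1], d[1]))

def point_in_circle(p: Point, center: Point, radius: float) -> bool:
    return math.dist(p, center) <= radius

def point_in_polygon(p: Point, vertices: list[Point]) -> bool:
    """Even-odd ray casting: count crossings of a horizontal ray cast from p."""
    x, y = p
    inside = False
    for i in range(len(vertices)):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % len(vertices)]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                            # crossing lies to the right of p
                inside = not inside
    return inside
```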
- FIG. 13 illustrates how, from the list of active regions (e.g., R2, R1, R0) encompassing the selected point and the descriptive hyperlinks related to those active regions, the user selects and triggers a hyperlink 1201 (e.g., “Aberdeen & Stanley Guide”, defined on active region R2) to access related multimedia information or a service from the Web. The destination address or URL associated with the selected hyperlink (e.g., http://www.inm-asiaguides.com/hongkong/ehkgsou.htm) 1304 is identified from the corresponding entry in the Active Regions Table and is sent from the user workstation 203 through the internet 1203 to the corresponding Web server 1202 (e.g., “www.inm-asiaguides.com”).
- FIG. 14 illustrates how the requested multimedia information or service 1201 (e.g., the HTML file named “ehkgsou.htm”) related to the item 1100 selected by the user on the
physical page 407, is received from Web server 1202 (e.g., “www.inm-asiaguides.com”) and is finally played or displayed to the user on the user workstation 203.
- There are alternative embodiments of the methods for selecting and accessing active regions on a physical document. In a first alternative embodiment, only the active regions (if any) comprising the point indicated by the user with the stylus over the physical document are illuminated by the transparent ELDT device. This alternative embodiment of the disclosed method has the advantage of focusing the attention of the user exclusively on the domain (i.e., on the content portion) related to the topic of interest that has been selected by the user, and it introduces minimal interference with the visibility of the document page through the transparent ELDT device.
- In another embodiment of the present invention, both visualization modes can be provided as options to the user. In a first mode, all active regions defined on the selected page can be simultaneously displayed to the user, so that the user can identify all printed items for which additional information could be accessible. In the second mode, only the selected active regions will be displayed to the user.
- In another embodiment of the present invention, instead of using an ELDT device and a stylus for selecting and accessing multimedia information or services from active regions defined on physical documents, the user may alternatively prefer to use an opto-touch foil, and select illuminated active regions by touching with the finger over the opto-touch foil, instead of using a stylus.
- Furthermore, if the active regions illuminated by the ELDT device (or even by an opto-foil) can be seen by transparency through the physical page, the user can alternatively choose to place the transparent digitizing tablet or opto-foil under, instead of over, the physical document page.
- Several possible applications of the present invention are described below. Each of these applications uses the same previously described method and system.
- For example, a customer receives complex computer equipment, with an installation manual comprising drawings and schematics of parts and subassemblies of the equipment. With the transparent ELDT device over any one of these schematics, the user can immediately see certain parts of the complex schematic illuminated as active regions. These illuminated parts are identified as hyperlinks items and can be used for accessing additional information on a remote Web server or on a local computer. When the customer points to one of those illuminated regions, multimedia instructions showing how the part needs to be installed or serviced are displayed. It is not necessary to look through printed manuals to obtain this information. No complex navigation is required. A single printed copy of a general view of the equipment is sufficient to navigate with the system according to the present invention. The customer need only press with the stylus on the desired illuminated region on the surface of the installation manual.
- A subscriber reading a newspaper or magazine may be interested in seeing computer multimedia or TV video information associated with the articles he or she reads. While reading the sports pages (e.g., of the New York Times), key events can be instantly recalled and played on demand (e.g., the opening ceremony of the Athens Olympic Games, the last images of the “Tour de France”, the last tennis match at Wimbledon, etc.) simply by touching highlighted regions encompassing titles, news or editorial sections printed on newspaper pages.
- A user flipping through the paper pages and glancing at printed figures and text in a newspaper written in a foreign language (such as the Japanese-language sample of “Asahi Shimbun” shown in FIG. 1) may select a section of the newspaper content (e.g., an article) and receive from the newspaper publisher server the selected content translated into the user's native language (e.g., receive the selected article translated into English).
- Today, many toll-free calls originate from people reading advertisements in newspapers or magazines or in direct mail ads. According to the present invention, people can instantly access multimedia presentations of advertised products or services simply by pointing to the highlighted ads that have drawn their attention.
- Extensive reading is easier to do from paper, but animated video explanations and demonstrations are much more effective for some purposes. The two features can be tied together by creating active regions on selected items printed in a textbook. These hyperlinked active regions can, for example, link textbook pictures, paragraphs or sections, to live discussion groups with other students or to live interactions with professors and tutors.
- What has been described is merely illustrative of the application of the principles of the present invention. Other arrangements and methods can be implemented by those skilled in the art without departing from the spirit and scope of the present invention.
Claims (16)
1. A method for presenting and selecting an active region of a physical document page so that a user can access corresponding information via a workstation, said method comprising the steps of:
positioning a transparent electro-luminiscent tablet or other touch sensitive plate over said page, said tablet or plate being coupled to the workstation;
identifying said page to said workstation, said workstation storing information defining an outline of the active region for said page and a hyperlink or information related to content of the active region;
said workstation directing said tablet or plate to display the outline of said active region over said page; and
a user touching a point within said outline, and in response, said tablet or plate conveying the touch point to said workstation, and said workstation displaying on a computer screen said information related to the content of said active region.
2. A method as set forth in claim 1 further comprising the prior steps of:
positioning a transparent electro-luminiscent tablet or other touch sensitive plate over said page, said tablet or plate being coupled to a workstation;
an operator defining said outline over said active region, and in response, said tablet or plate transferring to said computer said information defining said outline of said active region.
3. A method as set forth in claim 2 further comprising the step of:
an operator entering into said computer said information related to the content of said active region and correlating it to said active region.
4. A method as set forth in claim 1 wherein said information related to the content of said physical document page within said active region comprises a hyperlink to a web page or web file.
5. A method as set forth in claim 4 wherein said information related to the content of said physical document page within said active region further comprises a summary or description of said web page or web file.
6. A method as set forth in claim 4 further comprising the step of said workstation accessing and displaying said web page or web file in response to the user selecting said hyperlink.
7. A method as set forth in claim 1 wherein said information related to the content of the physical document page within said active region comprises a plurality of hyperlinks to a respective plurality of web pages or web files.
8. A method as set forth in claim 7 further comprising the step of said computer accessing and displaying said web pages or web files in response to the user selecting said hyperlinks.
9. A method as set forth in claim 1 wherein said information related to the content of said active region is a language translation of said content.
10. A computer program product for presenting and selecting an active region of a physical document page so that a user can access corresponding information via a workstation, a transparent electro-luminiscent tablet or other touch sensitive plate being positioned over said page, said tablet or plate being coupled to the workstation, said computer program product comprising:
a computer readable medium;
first program instructions to receive identification of said page from the user, said workstation storing information defining an outline of the active region for said page and a hyperlink or information related to content of the active region;
second program instructions to direct said tablet or plate to display the outline of said active region over said page; and
in response to a user touching a point within said outline, third program instructions to receive from said tablet or plate information conveying the touch point, and direct display on a computer screen of said information related to the content of said active region; and wherein
said first, second and third program instructions are recorded on said medium.
11. A method for presenting and selecting an active region of a physical document page so that a user can access corresponding information via a workstation, said method comprising the steps of:
positioning a transparent electro-luminiscent tablet or other touch sensitive plate over said physical document page, said tablet or plate being coupled to the workstation;
identifying said physical document page to said workstation, said workstation storing information defining an active region for said physical document page and a hyperlink to a web page or web file containing information related to content of the active region;
said workstation directing said tablet or plate to display the active region over said physical document page; and
a user touching a point within said active region, and in response, said tablet or plate conveying the touch point to said workstation, and said workstation displaying on a computer screen said hyperlink.
12. A method as set forth in claim 11 wherein said workstation displays adjacent to said hyperlink a summary or description of said web page or web file.
13. A method as set forth in claim 11 further comprising the step of said workstation accessing and displaying said web page or web file in response to the user selecting said hyperlink.
14. A method as set forth in claim 11 wherein, in response to said user touching said point within said active region, said workstation displays another hyperlink to another web page or web file containing information related to content of the active region, and a summary or description of said other web page or web file.
15. A computer program product for presenting and selecting an active region of a physical document page so that a user can access corresponding information via a workstation, a transparent electro-luminiscent tablet or other touch sensitive plate being positioned over said physical document page, said tablet or plate being coupled to the workstation, said computer program product comprising:
a computer readable medium;
first program instructions to receive identification of said physical document page, said workstation storing information defining an active region for said physical document page and a hyperlink to a web page or web file containing information related to content of the active region;
second program instructions to direct said tablet or plate to display the active region over said physical document page; and
in response to a user touching a point within said active region, third program instructions to receive information from said tablet or plate identifying the touch point and directing said workstation to display said hyperlink on a computer screen; and
wherein said first, second and third program instructions are recorded on said medium.
16. A method for presenting and simultaneously selecting first and second active regions of a physical document page so that a user can access corresponding information via a workstation, said method comprising the steps of:
positioning a transparent electro-luminiscent tablet or other touch sensitive plate over said page, said tablet or plate being coupled to the workstation;
identifying said page to said workstation, said workstation storing information defining outlines of said first and second active regions for said page and first and second hyperlinks or first and second documents related to contents of said first and second active regions, respectively, wherein the outline for said second active region encompasses the outline for said first active region;
said workstation directing said tablet or plate to display the outlines of said first and second active regions over said page; and
a user touching a point within the outline of said first active region, and in response, said tablet or plate conveying the touch point to said workstation, and said workstation displaying on a computer screen said first and second hyperlinks or said first and second documents related to the contents of said first and second active regions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/842,192 US8196041B2 (en) | 2003-06-26 | 2007-08-21 | Method and system for processing information relating to active regions of a page of physical document |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03368054 | 2003-06-26 | ||
FR03368054.7 | 2003-06-26 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/842,192 Continuation US8196041B2 (en) | 2003-06-26 | 2007-08-21 | Method and system for processing information relating to active regions of a page of physical document |
Publications (2)
Publication Number | Publication Date |
---|---|
US20040262051A1 true US20040262051A1 (en) | 2004-12-30 |
US7310779B2 US7310779B2 (en) | 2007-12-18 |
Family
ID=33522489
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/818,790 Expired - Fee Related US7310779B2 (en) | 2003-06-26 | 2004-04-06 | Method for creating and selecting active regions on physical documents |
US11/842,192 Expired - Fee Related US8196041B2 (en) | 2003-06-26 | 2007-08-21 | Method and system for processing information relating to active regions of a page of physical document |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/842,192 Expired - Fee Related US8196041B2 (en) | 2003-06-26 | 2007-08-21 | Method and system for processing information relating to active regions of a page of physical document |
Country Status (1)
Country | Link |
---|---|
US (2) | US7310779B2 (en) |
Cited By (117)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050288943A1 (en) * | 2004-05-27 | 2005-12-29 | Property Publications Pte Ltd. | Apparatus and method for creating an electronic version of printed matter |
US20080091534A1 (en) * | 2006-10-17 | 2008-04-17 | Silverbrook Research Pty Ltd | Method of delivering an advertisement after receiving a hyperlink context |
US20090049388A1 (en) * | 2005-06-02 | 2009-02-19 | Ronnie Bernard Francis Taib | Multimodal computer navigation |
US7747749B1 (en) * | 2006-05-05 | 2010-06-29 | Google Inc. | Systems and methods of efficiently preloading documents to client devices |
AU2007312931B2 (en) * | 2006-10-17 | 2010-07-01 | Silverbrook Research Pty Ltd | Method of delivering an advertisement from a computer system |
WO2011076111A1 (en) * | 2009-12-22 | 2011-06-30 | 深圳市王菱科技开发有限公司 | Electrical reading device provided with window system on paper |
US20110167350A1 (en) * | 2010-01-06 | 2011-07-07 | Apple Inc. | Assist Features For Content Display Device |
US8065275B2 (en) | 2007-02-15 | 2011-11-22 | Google Inc. | Systems and methods for cache optimization |
US20120054168A1 (en) * | 2010-08-31 | 2012-03-01 | Samsung Electronics Co., Ltd. | Method of providing search service to extract keywords in specific region and display apparatus applying the same |
US20120066076A1 (en) * | 2010-05-24 | 2012-03-15 | Robert Michael Henson | Electronic Method of Sharing and Storing Printed Materials |
US8196041B2 (en) | 2003-06-26 | 2012-06-05 | International Business Machines Corporation | Method and system for processing information relating to active regions of a page of physical document |
US8224964B1 (en) | 2004-06-30 | 2012-07-17 | Google Inc. | System and method of accessing a document efficiently through multi-tier web caching |
US8275790B2 (en) | 2004-06-30 | 2012-09-25 | Google Inc. | System and method of accessing a document efficiently through multi-tier web caching |
WO2013167084A3 (en) * | 2012-12-10 | 2014-01-03 | 中兴通讯股份有限公司 | Intelligent terminal with built-in screenshot function and implementation method thereof |
US8676922B1 (en) | 2004-06-30 | 2014-03-18 | Google Inc. | Automatic proxy setting modification |
EP2757445A1 (en) * | 2011-09-13 | 2014-07-23 | Tsai, Hsiung-kuang | Visual interface system |
US8812651B1 (en) | 2007-02-15 | 2014-08-19 | Google Inc. | Systems and methods for client cache awareness |
US8970540B1 (en) * | 2010-09-24 | 2015-03-03 | Amazon Technologies, Inc. | Memo pad |
JP2015525396A (en) * | 2012-06-01 | 2015-09-03 | ジョン, ボヨンJeong, Boyeon | Method for digitizing paper document using transparent display or terminal equipped with air gesture and beam screen function and system therefor |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US20170140219A1 (en) * | 2004-04-12 | 2017-05-18 | Google Inc. | Adding Value to a Rendered Document |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10664153B2 (en) | 2001-12-21 | 2020-05-26 | International Business Machines Corporation | Device and system for retrieving and displaying handwritten annotations |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US12108191B1 (en) * | 2024-01-09 | 2024-10-01 | SoHive | System and method for drop-in video communication |
Families Citing this family (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7721197B2 (en) * | 2004-08-12 | 2010-05-18 | Microsoft Corporation | System and method of displaying content on small screen computing devices |
US8332401B2 (en) | 2004-10-01 | 2012-12-11 | Ricoh Co., Ltd | Method and system for position-based image matching in a mixed media environment |
US8176054B2 (en) | 2007-07-12 | 2012-05-08 | Ricoh Co. Ltd | Retrieving electronic documents by converting them to synthetic text |
US9171202B2 (en) | 2005-08-23 | 2015-10-27 | Ricoh Co., Ltd. | Data organization and access for mixed media document system |
US7885955B2 (en) * | 2005-08-23 | 2011-02-08 | Ricoh Co. Ltd. | Shared document annotation |
US7920759B2 (en) | 2005-08-23 | 2011-04-05 | Ricoh Co. Ltd. | Triggering applications for distributed action execution and use of mixed media recognition as a control input |
US9405751B2 (en) | 2005-08-23 | 2016-08-02 | Ricoh Co., Ltd. | Database for mixed media document system |
US7991778B2 (en) | 2005-08-23 | 2011-08-02 | Ricoh Co., Ltd. | Triggering actions with captured input in a mixed media environment |
US8825682B2 (en) | 2006-07-31 | 2014-09-02 | Ricoh Co., Ltd. | Architecture for mixed media reality retrieval of locations and registration of images |
US8521737B2 (en) | 2004-10-01 | 2013-08-27 | Ricoh Co., Ltd. | Method and system for multi-tier image matching in a mixed media environment |
US7812986B2 (en) * | 2005-08-23 | 2010-10-12 | Ricoh Co. Ltd. | System and methods for use of voice mail and email in a mixed media environment |
US7970171B2 (en) | 2007-01-18 | 2011-06-28 | Ricoh Co., Ltd. | Synthetic image and video generation from ground truth data |
US8989431B1 (en) | 2007-07-11 | 2015-03-24 | Ricoh Co., Ltd. | Ad hoc paper-based networking with mixed media reality |
US8195659B2 (en) | 2005-08-23 | 2012-06-05 | Ricoh Co. Ltd. | Integration and use of mixed media documents |
US9384619B2 (en) | 2006-07-31 | 2016-07-05 | Ricoh Co., Ltd. | Searching media content for objects specified using identifiers |
US8156427B2 (en) | 2005-08-23 | 2012-04-10 | Ricoh Co. Ltd. | User interface for mixed media reality |
US8276088B2 (en) | 2007-07-11 | 2012-09-25 | Ricoh Co., Ltd. | User interface for three-dimensional navigation |
US8005831B2 (en) | 2005-08-23 | 2011-08-23 | Ricoh Co., Ltd. | System and methods for creation and use of a mixed media environment with geographic location information |
US7702673B2 (en) | 2004-10-01 | 2010-04-20 | Ricoh Co., Ltd. | System and methods for creation and use of a mixed media environment |
US8600989B2 (en) | 2004-10-01 | 2013-12-03 | Ricoh Co., Ltd. | Method and system for image matching in a mixed media environment |
US7917554B2 (en) * | 2005-08-23 | 2011-03-29 | Ricoh Co. Ltd. | Visibly-perceptible hot spots in documents |
US8369655B2 (en) | 2006-07-31 | 2013-02-05 | Ricoh Co., Ltd. | Mixed media reality recognition using multiple specialized indexes |
US8144921B2 (en) | 2007-07-11 | 2012-03-27 | Ricoh Co., Ltd. | Information retrieval using invisible junctions and geometric constraints |
US8949287B2 (en) | 2005-08-23 | 2015-02-03 | Ricoh Co., Ltd. | Embedding hot spots in imaged documents |
US8156116B2 (en) | 2006-07-31 | 2012-04-10 | Ricoh Co., Ltd | Dynamic presentation of targeted information in a mixed media reality recognition system |
US8385589B2 (en) | 2008-05-15 | 2013-02-26 | Berna Erol | Web-based content detection in images, extraction and recognition |
US8838591B2 (en) | 2005-08-23 | 2014-09-16 | Ricoh Co., Ltd. | Embedding hot spots in electronic documents |
US9530050B1 (en) | 2007-07-11 | 2016-12-27 | Ricoh Co., Ltd. | Document annotation sharing |
US8086038B2 (en) | 2007-07-11 | 2011-12-27 | Ricoh Co., Ltd. | Invisible junction features for patch recognition |
US8184155B2 (en) | 2007-07-11 | 2012-05-22 | Ricoh Co. Ltd. | Recognition and tracking using invisible junctions |
US8510283B2 (en) | 2006-07-31 | 2013-08-13 | Ricoh Co., Ltd. | Automatic adaption of an image recognition system to image capture devices |
US9373029B2 (en) | 2007-07-11 | 2016-06-21 | Ricoh Co., Ltd. | Invisible junction feature recognition for document security or annotation |
US8856108B2 (en) | 2006-07-31 | 2014-10-07 | Ricoh Co., Ltd. | Combining results of image retrieval processes |
US8868555B2 (en) | 2006-07-31 | 2014-10-21 | Ricoh Co., Ltd. | Computation of a recognizability score (quality predictor) for image retrieval
US8335789B2 (en) | 2004-10-01 | 2012-12-18 | Ricoh Co., Ltd. | Method and system for document fingerprint matching in a mixed media environment |
US8001476B2 (en) | 2004-11-16 | 2011-08-16 | Open Text Inc. | Cellular user interface |
US8418075B2 (en) | 2004-11-16 | 2013-04-09 | Open Text Inc. | Spatially driven content presentation in a cellular environment |
US8676810B2 (en) | 2006-07-31 | 2014-03-18 | Ricoh Co., Ltd. | Multiple index mixed media reality recognition using unequal priority indexes |
US9063952B2 (en) | 2006-07-31 | 2015-06-23 | Ricoh Co., Ltd. | Mixed media reality recognition with image tracking |
US8073263B2 (en) | 2006-07-31 | 2011-12-06 | Ricoh Co., Ltd. | Multi-classifier selection and monitoring for MMR-based image recognition |
US9176984B2 (en) | 2006-07-31 | 2015-11-03 | Ricoh Co., Ltd | Mixed media reality retrieval of differentially-weighted links |
US8201076B2 (en) | 2006-07-31 | 2012-06-12 | Ricoh Co., Ltd. | Capturing symbolic information from documents upon printing |
US9020966B2 (en) | 2006-07-31 | 2015-04-28 | Ricoh Co., Ltd. | Client device for interacting with a mixed media reality recognition system |
US8489987B2 (en) | 2006-07-31 | 2013-07-16 | Ricoh Co., Ltd. | Monitoring and analyzing creation and usage of visual content using image and hotspot interaction |
US8091030B1 (en) * | 2006-12-14 | 2012-01-03 | Disney Enterprises, Inc. | Method and apparatus of graphical object selection in a web browser |
US20080235257A1 (en) * | 2007-03-23 | 2008-09-25 | Scott Henry Berens | Customizing the format of web document pages received at requesting computer controlled web terminals |
US7930642B1 (en) * | 2008-03-20 | 2011-04-19 | Intuit Inc. | System and method for interacting with hard copy documents |
TWI397850B (en) * | 2008-05-14 | 2013-06-01 | Ind Tech Res Inst | Sensing apparatus and scanning actuation method thereof |
KR20100046586A (en) * | 2008-10-27 | 2010-05-07 | Samsung Electronics Co., Ltd. | Map-based web search method and apparatus
US20100188342A1 (en) * | 2009-01-26 | 2010-07-29 | Manufacturing Resources International, Inc. | Method and System for Positioning a Graphical User Interface |
US8385660B2 (en) | 2009-06-24 | 2013-02-26 | Ricoh Co., Ltd. | Mixed media reality indexing and retrieval for repeated content |
KR20110014444A (en) * | 2009-08-05 | 2011-02-11 | Samsung Electronics Co., Ltd. | User interface method for web browsing, electronic device performing the method and recording medium thereof
EP2355472B1 (en) * | 2010-01-22 | 2020-03-04 | Samsung Electronics Co., Ltd. | Apparatus and method for transmitting and receiving handwriting animation message |
KR101294306B1 (en) * | 2011-06-09 | 2013-08-08 | LG Electronics Inc. | Mobile device and control method for the same
US9058331B2 (en) | 2011-07-27 | 2015-06-16 | Ricoh Co., Ltd. | Generating a conversation in a social network based on visual search results |
US9229928B2 (en) * | 2012-03-13 | 2016-01-05 | Nulu, Inc. | Language learning platform using relevant and contextual content |
EP2891068A4 (en) * | 2012-08-31 | 2016-01-20 | Hewlett Packard Development Co | Active regions of an image with accessible links |
US10319408B2 (en) | 2015-03-30 | 2019-06-11 | Manufacturing Resources International, Inc. | Monolithic display with separately controllable sections |
US10922736B2 (en) | 2015-05-15 | 2021-02-16 | Manufacturing Resources International, Inc. | Smart electronic display for restaurants |
US10269156B2 (en) | 2015-06-05 | 2019-04-23 | Manufacturing Resources International, Inc. | System and method for blending order confirmation over menu board background |
US10319271B2 (en) | 2016-03-22 | 2019-06-11 | Manufacturing Resources International, Inc. | Cyclic redundancy check for electronic displays |
JP2019526948A (en) | 2016-05-31 | 2019-09-19 | Manufacturing Resources International, Inc. | Electronic display remote image confirmation system and method
US10510304B2 (en) | 2016-08-10 | 2019-12-17 | Manufacturing Resources International, Inc. | Dynamic dimming LED backlight for LCD array |
US11895362B2 (en) | 2021-10-29 | 2024-02-06 | Manufacturing Resources International, Inc. | Proof of play for images displayed at electronic displays |
Family Cites Families (201)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3760360A (en) * | 1971-11-01 | 1973-09-18 | E Systems Inc | Matrix switch |
US4190831A (en) * | 1978-05-08 | 1980-02-26 | The Singer Company | Light pen detection system for CRT image display |
JPS5525118A (en) * | 1978-08-11 | 1980-02-22 | Hitachi Ltd | Data input device |
US4289333A (en) * | 1978-12-07 | 1981-09-15 | Think, Inc. | Method for locating features on a map |
US4277783A (en) * | 1979-07-02 | 1981-07-07 | Bell Telephone Laboratories, Incorporated | Light pen tracking method and apparatus |
US4263592A (en) * | 1979-11-06 | 1981-04-21 | Pentel Kabushiki Kaisha | Input pen assembly |
US4367465A (en) * | 1980-04-04 | 1983-01-04 | Hewlett-Packard Company | Graphics light pen and method for raster scan CRT |
US4348660A (en) * | 1980-09-09 | 1982-09-07 | Sheldon Industries Inc. | Automatically relegendable keyboard |
US4377810A (en) * | 1980-12-04 | 1983-03-22 | Data General Corporation | Light pen detection circuit and method |
JPS57201808A (en) * | 1981-06-08 | 1982-12-10 | Nippon Denso Co Ltd | Navigator to be mounted on car |
US4602907A (en) * | 1981-08-17 | 1986-07-29 | Foster Richard W | Light pen controlled interactive video system |
US4550310A (en) * | 1981-10-29 | 1985-10-29 | Fujitsu Limited | Touch sensing device |
US4454417A (en) * | 1982-02-05 | 1984-06-12 | George A. May | High resolution light pen for use with graphic displays |
US4523188A (en) * | 1982-10-25 | 1985-06-11 | The United States Of America As Represented By The Secretary Of The Army | Automated map and display alignment |
DE3275769D1 (en) * | 1982-12-22 | 1987-04-23 | Ibm | A method and apparatus for continuously updating a display of the coordinates of a light pen |
JPS59174713A (en) * | 1983-03-25 | 1984-10-03 | Nippon Denso Co Ltd | Vehicle mounted map display device |
GB8324318D0 (en) * | 1983-09-12 | 1983-10-12 | British Telecomm | Video map display |
US4532395A (en) * | 1983-09-20 | 1985-07-30 | Timex Corporation | Electroluminescent flexible touch switch panel |
US4591841A (en) * | 1983-11-01 | 1986-05-27 | Wisconsin Alumni Research Foundation | Long range optical pointing for video screens |
AU552619B2 (en) * | 1984-02-29 | 1986-06-12 | Fujitsu Limited | Co-ordinate detecting apparatus |
US4565947A (en) * | 1984-03-12 | 1986-01-21 | International Business Machines Corporation | Color cathode ray tube for use with a light pen |
US4697175A (en) * | 1984-03-30 | 1987-09-29 | American Telephone And Telegraph Company, At&T Technologies, Inc. | Lightpen control circuit |
JPS60254227A (en) * | 1984-05-30 | 1985-12-14 | Ascii Corp | Display controller |
US4620107A (en) * | 1984-09-10 | 1986-10-28 | Liprad Associates | Compensated light pen with variable attenuator |
SE455968B (en) * | 1985-03-01 | 1988-08-22 | Ericsson Telefon Ab L M | Optical cable |
JPS61261772A (en) * | 1985-05-16 | 1986-11-19 | Denso Corporation | Map display unit
US4642459A (en) * | 1985-05-17 | 1987-02-10 | International Business Machines Corporation | Light pen input system having two-threshold light sensing |
US5422812A (en) * | 1985-05-30 | 1995-06-06 | Robert Bosch Gmbh | Enroute vehicle guidance system with heads up display |
US4677428A (en) * | 1985-06-07 | 1987-06-30 | Hei, Inc. | Cordless light pen |
US6002799A (en) | 1986-07-25 | 1999-12-14 | Ast Research, Inc. | Handwritten keyboardless entry computer system |
US4849911A (en) * | 1986-09-25 | 1989-07-18 | Modern Body And Engineering Corp. | Method for inputting data to a computer aided design system
US4782328A (en) * | 1986-10-02 | 1988-11-01 | Product Development Services, Incorporated | Ambient-light-responsive touch screen data input method and system |
US4868912A (en) * | 1986-11-26 | 1989-09-19 | Digital Electronics | Infrared touch panel |
US4868919A (en) * | 1987-03-05 | 1989-09-19 | Sharp Kabushiki Kaisha | Color image copying device |
DE3809677A1 (en) * | 1987-03-19 | 1988-12-01 | Toshiba Kk | DISPLAY AND INPUT DEVICE |
GB2205669B (en) * | 1987-05-11 | 1990-12-19 | Toshiba Machine Co Ltd | Input display apparatus |
US4855725A (en) * | 1987-11-24 | 1989-08-08 | Fernandez Emilio A | Microprocessor based simulated book |
US5010323A (en) * | 1988-05-09 | 1991-04-23 | Hoffman Clifford J | Interactive overlay driven computer display system |
US4853498A (en) * | 1988-06-13 | 1989-08-01 | Tektronix, Inc. | Position measurement apparatus for capacitive touch panel system |
US4923401A (en) * | 1988-11-25 | 1990-05-08 | The United States Of America As Represented By The Secretary Of The Navy | Long range light pen |
JPH02202618A (en) * | 1989-02-01 | 1990-08-10 | Nippon I B M Kk | Display terminal equipment |
US4973960A (en) * | 1989-02-24 | 1990-11-27 | Amoco Corporation | Data entry method and apparatus |
JPH02278326A (en) * | 1989-04-19 | 1990-11-14 | Sharp Corp | Information input/output device |
US5402151A (en) * | 1989-10-02 | 1995-03-28 | U.S. Philips Corporation | Data processing system with a touch screen and a digitizing tablet, both integrated in an input device |
US5179368A (en) * | 1989-11-09 | 1993-01-12 | Lippincott Douglas E | Method and apparatus for interfacing computer light pens |
US5150457A (en) * | 1990-05-02 | 1992-09-22 | International Business Machines Corporation | Enhanced visualization using translucent contour surfaces |
US5063600A (en) * | 1990-05-14 | 1991-11-05 | Norwood Donald D | Hybrid information management system for handwriting and text |
USRE34476E (en) * | 1990-05-14 | 1993-12-14 | Norwood Donald D | Hybrid information management system for handwriting and text |
US5138304A (en) * | 1990-08-02 | 1992-08-11 | Hewlett-Packard Company | Projected image light pen |
US5315129A (en) * | 1990-08-20 | 1994-05-24 | University Of Southern California | Organic optoelectronic devices and methods |
US5239152A (en) * | 1990-10-30 | 1993-08-24 | Donnelly Corporation | Touch sensor panel with hidden graphic mode |
US5149919A (en) * | 1990-10-31 | 1992-09-22 | International Business Machines Corporation | Stylus sensing system |
US5187467A (en) * | 1991-01-10 | 1993-02-16 | Hewlett Packard Company | Universal light pen system |
JP3085471B2 (en) * | 1991-01-24 | 2000-09-11 | Sony Corporation | Remote commander
US5231698A (en) * | 1991-03-20 | 1993-07-27 | Forcier Mitchell D | Script/binary-encoded-character processing method and system |
US5202828A (en) * | 1991-05-15 | 1993-04-13 | Apple Computer, Inc. | User interface system having programmable user interface elements |
US5283557A (en) * | 1991-07-05 | 1994-02-01 | Ncr Corporation | Method for converting high resolution data into lower resolution data |
US5250929A (en) * | 1991-07-29 | 1993-10-05 | Conference Communications, Inc. | Interactive overlay-driven computer display system |
US5105544A (en) * | 1991-09-17 | 1992-04-21 | Othon Ontiveros | Geographical location pinpointer |
CA2058219C (en) * | 1991-10-21 | 2002-04-02 | Smart Technologies Inc. | Interactive display system |
US5315667A (en) * | 1991-10-31 | 1994-05-24 | International Business Machines Corporation | On-line handwriting recognition using a prototype confusability dialog |
US5495581A (en) * | 1992-02-25 | 1996-02-27 | Tsai; Irving | Method and apparatus for linking a document with associated reference information using pattern matching |
US5243149A (en) * | 1992-04-10 | 1993-09-07 | International Business Machines Corp. | Method and apparatus for improving the paper interface to computing systems |
US5311302A (en) * | 1992-07-02 | 1994-05-10 | Hughes Aircraft Company | Entertainment and data management system for passenger vehicle including individual seat interactive video terminals |
US5420607A (en) * | 1992-09-02 | 1995-05-30 | Miller; Robert F. | Electronic paintbrush and color palette |
US5739814A (en) * | 1992-09-28 | 1998-04-14 | Sega Enterprises | Information storage system and book device for providing information in response to the user specification |
US5915285A (en) | 1993-01-21 | 1999-06-22 | Optical Coating Laboratory, Inc. | Transparent strain sensitive devices and method |
US5528735A (en) * | 1993-03-23 | 1996-06-18 | Silicon Graphics Inc. | Method and apparatus for displaying data within a three-dimensional information landscape |
US5555354A (en) * | 1993-03-23 | 1996-09-10 | Silicon Graphics Inc. | Method and apparatus for navigation within three-dimensional information landscape |
DE69431055T2 (en) * | 1993-04-28 | 2003-01-23 | Nissha Printing | CLEAR TOUCH SENSIBLE PANEL |
EP0622722B1 (en) | 1993-04-30 | 2002-07-17 | Xerox Corporation | Interactive copying system |
KR100324989B1 (en) | 1993-11-08 | 2002-06-24 | Matsushita Electric Industrial Co., Ltd. | Input display integrated information processing device
US6681029B1 (en) | 1993-11-18 | 2004-01-20 | Digimarc Corporation | Decoding steganographic messages embedded in media signals |
US5748763A (en) | 1993-11-18 | 1998-05-05 | Digimarc Corporation | Image steganography system featuring perceptually adaptive and globally scalable signal embedding |
US5905251A (en) | 1993-11-24 | 1999-05-18 | Metrologic Instruments, Inc. | Hand-held portable WWW access terminal with visual display panel and GUI-based WWW browser program integrated with bar code symbol reader in a hand-supportable housing |
JP3174899B2 (en) * | 1993-12-21 | 2001-06-11 | Mitsubishi Electric Corporation | Image display device
US5664111A (en) | 1994-02-16 | 1997-09-02 | Honicorp, Inc. | Computerized, multimedia, network, real time, interactive marketing and transactional system |
JPH07334287A (en) * | 1994-06-02 | 1995-12-22 | Brother Ind Ltd | Small electronic device |
US5757127A (en) | 1994-06-10 | 1998-05-26 | Nippondenso Co., Ltd. | Transparent thin-film EL display apparatus with ambient light adaptation means |
US5737740A (en) * | 1994-06-27 | 1998-04-07 | Numonics | Apparatus and method for processing electronic documents |
US5897648A (en) * | 1994-06-27 | 1999-04-27 | Numonics Corporation | Apparatus and method for editing electronic documents |
US5624265A (en) * | 1994-07-01 | 1997-04-29 | Tv Interactive Data Corporation | Printed publication remote control for accessing interactive media
US5640193A (en) * | 1994-08-15 | 1997-06-17 | Lucent Technologies Inc. | Multimedia service access by reading marks on an object |
US5600348A (en) * | 1994-08-19 | 1997-02-04 | Ftg Data Systems | Adjustable tip light pen |
US5597183A (en) * | 1994-12-06 | 1997-01-28 | Junkyard Dogs, Ltd. | Interactive book having electroluminescent display pages and animation effects |
US5703436A (en) * | 1994-12-13 | 1997-12-30 | The Trustees Of Princeton University | Transparent contacts for organic devices |
US5707745A (en) * | 1994-12-13 | 1998-01-13 | The Trustees Of Princeton University | Multicolor organic light emitting devices |
JP3442893B2 (en) | 1995-01-27 | 2003-09-02 | Fujitsu Limited | Input device
EP1531379B9 (en) | 1995-02-13 | 2013-05-29 | Intertrust Technologies Corporation | Systems and methods for secure transaction management and electronic rights protection |
JPH08297267A (en) | 1995-04-25 | 1996-11-12 | Alps Electric Co Ltd | Liquid crystal display device with tablet |
US5654529A (en) * | 1995-05-03 | 1997-08-05 | Hewlett-Packard Company | Stylus-input computing system with erasure |
JPH08329011A (en) | 1995-06-02 | 1996-12-13 | Mitsubishi Corp | Data copyright management system |
JPH096518A (en) * | 1995-06-16 | 1997-01-10 | Wacom Co Ltd | Side switch mechanism and stylus pen |
US5572643A (en) * | 1995-10-19 | 1996-11-05 | Judson; David H. | Web browser with dynamic display of information objects during linking |
US6081261A (en) | 1995-11-01 | 2000-06-27 | Ricoh Corporation | Manual entry interactive paper and electronic document handling and processing system |
US6161126A (en) | 1995-12-13 | 2000-12-12 | Immersion Corporation | Implementing force feedback over the World Wide Web and other computer networks |
EP1021735B1 (en) | 1996-01-11 | 2004-06-30 | The Trustees of Princeton University | Organic luminescent coating for light detectors |
US6166834A (en) | 1996-03-15 | 2000-12-26 | Matsushita Electric Industrial Co., Ltd. | Display apparatus and method for forming hologram suitable for the display apparatus |
US5918012A (en) | 1996-03-29 | 1999-06-29 | British Telecommunications Public Limited Company | Hyperlinking time-based data files |
US6035330A (en) | 1996-03-29 | 2000-03-07 | British Telecommunications | World wide web navigational mapping system and method |
US5818425A (en) * | 1996-04-03 | 1998-10-06 | Xerox Corporation | Mapping drawings generated on small mobile pen based electronic devices onto large displays |
US5692073A (en) * | 1996-05-03 | 1997-11-25 | Xerox Corporation | Formless forms and paper web using a reference-based mark extraction technique |
US6512840B1 (en) | 1996-05-30 | 2003-01-28 | Sun Microsystems, Inc. | Digital encoding of personal signatures |
US6048630A (en) | 1996-07-02 | 2000-04-11 | The Trustees Of Princeton University | Red-emitting organic light emitting devices (OLED's) |
JP3189736B2 (en) | 1996-07-26 | 2001-07-16 | Denso Corporation | Composite display
US5844363A (en) * | 1997-01-23 | 1998-12-01 | The Trustees Of Princeton Univ. | Vacuum deposited, non-polymeric flexible organic light emitting devices |
US5990869A (en) | 1996-08-20 | 1999-11-23 | Alliance Technologies Corp. | Force feedback mouse |
US6407757B1 (en) | 1997-12-18 | 2002-06-18 | E-Book Systems Pte Ltd. | Computer-based browsing method and computer program product for displaying information in an electronic book form |
JP3634089B2 (en) | 1996-09-04 | 2005-03-30 | Semiconductor Energy Laboratory Co., Ltd. | Display device
US5850214A (en) * | 1996-09-17 | 1998-12-15 | Ameranth Technology Systems, Inc. | Information management system with electronic clipboard
US5903729A (en) | 1996-09-23 | 1999-05-11 | Motorola, Inc. | Method, system, and article of manufacture for navigating to a resource in an electronic network |
US5944769A (en) | 1996-11-08 | 1999-08-31 | Zip2 Corporation | Interactive network directory service with integrated maps and directions |
US5870767A (en) * | 1996-11-22 | 1999-02-09 | International Business Machines Corporation | Method and system for rendering hyper-link information in a printable medium from a graphical user interface |
US6088023A (en) | 1996-12-10 | 2000-07-11 | Willow Design, Inc. | Integrated pointing and drawing graphics system for computers |
US5861219A (en) * | 1997-04-15 | 1999-01-19 | The Trustees Of Princeton University | Organic light emitting devices containing a metal complex of 5-hydroxy-quinoxaline as a host material |
US5998803A (en) | 1997-05-29 | 1999-12-07 | The Trustees Of Princeton University | Organic light emitting device containing a hole injection enhancement layer |
US5834893A (en) * | 1996-12-23 | 1998-11-10 | The Trustees Of Princeton University | High efficiency organic light emitting devices with light directing structures |
US6046543A (en) | 1996-12-23 | 2000-04-04 | The Trustees Of Princeton University | High reliability, high efficiency, integratable organic light emitting devices and methods of producing same |
US6013982A (en) | 1996-12-23 | 2000-01-11 | The Trustees Of Princeton University | Multicolor display devices |
US6045930A (en) | 1996-12-23 | 2000-04-04 | The Trustees Of Princeton University | Materials for multicolor light emitting diodes |
US5811833A (en) * | 1996-12-23 | 1998-09-22 | University Of So. Ca | Electron transporting and light emitting layers based on organic free radicals |
US5986401A (en) | 1997-03-20 | 1999-11-16 | The Trustees Of Princeton University | High contrast transparent organic light emitting device display
US5995084A (en) | 1997-01-17 | 1999-11-30 | Tritech Microelectronics, Ltd. | Touchpad pen-input and mouse controller |
US5917280A (en) | 1997-02-03 | 1999-06-29 | The Trustees Of Princeton University | Stacked organic light emitting devices |
US5757139A (en) * | 1997-02-03 | 1998-05-26 | The Trustees Of Princeton University | Driving circuit for stacked organic light emitting devices |
US6067080A (en) | 1997-02-21 | 2000-05-23 | Electronics For Imaging | Retrofittable apparatus for converting a substantially planar surface into an electronic data capture device |
US6138072A (en) | 1997-04-24 | 2000-10-24 | Honda Giken Kogyo Kabushiki Kaisha | Navigation device |
US5932895A (en) | 1997-05-20 | 1999-08-03 | The Trustees Of Princeton University | Saturated full color stacked organic light emitting devices |
US5877752A (en) * | 1997-05-30 | 1999-03-02 | Interactive Computer Products, Inc. | Computer light pen interface system |
US6154213A (en) | 1997-05-30 | 2000-11-28 | Rennison; Earl F. | Immersive movement-based interaction with large complex information structures |
JP2959525B2 (en) | 1997-06-02 | 1999-10-06 | NEC Corporation | Data processing apparatus and method, information storage medium
GB9715516D0 (en) | 1997-07-22 | 1997-10-01 | Orange Personal Comm Serv Ltd | Data communications |
JP3478725B2 (en) | 1997-07-25 | 2003-12-15 | Ricoh Company, Ltd. | Document information management system
US5957697A (en) | 1997-08-20 | 1999-09-28 | Ithaca Media Corporation | Printed book augmented with an electronic virtual book and associated electronic data |
JP3746378B2 (en) | 1997-08-26 | 2006-02-15 | Sharp Corporation | Electronic memo processing device, electronic memo processing method, and computer-readable recording medium recording electronic memo processing program
NO309698B1 (en) | 1997-09-01 | 2001-03-12 | Nils Chr Trosterud | System for selling printed information from a vending machine |
US6279014B1 (en) | 1997-09-15 | 2001-08-21 | Xerox Corporation | Method and system for organizing documents based upon annotations in context |
US6256638B1 (en) | 1998-04-14 | 2001-07-03 | Interval Research Corporation | Printable interfaces and digital linkmarks |
US6150043A (en) | 1998-04-10 | 2000-11-21 | The Trustees Of Princeton University | OLEDs containing thermally stable glassy organic hole transporting materials |
US5953587A (en) | 1997-11-24 | 1999-09-14 | The Trustees Of Princeton University | Method for deposition and patterning of organic thin film |
US6013538A (en) | 1997-11-24 | 2000-01-11 | The Trustees Of Princeton University | Method of fabricating and patterning OLEDs |
US5959616A (en) | 1997-12-23 | 1999-09-28 | International Business Machines Corporation | Computer input stylus and color control system |
US5953001A (en) | 1997-12-23 | 1999-09-14 | International Business Machines Corporation | Computer input stylus and texture control system |
US6115008A (en) | 1998-02-12 | 2000-09-05 | Lear Automotive Dearborn, Inc. | Transparent EL display |
US5984362A (en) | 1998-03-13 | 1999-11-16 | Christman; Edwin Roy | Two-book map volume |
US6330976B1 (en) | 1998-04-01 | 2001-12-18 | Xerox Corporation | Marking medium area with encoded identifier for producing action through network |
US6097376A (en) | 1998-05-11 | 2000-08-01 | Rothschild; Omri | Light pen system for use with a CRT scanning display |
US6389541B1 (en) | 1998-05-15 | 2002-05-14 | First Union National Bank | Regulating access to digital content |
US6167382A (en) | 1998-06-01 | 2000-12-26 | F.A.C. Services Group, L.P. | Design and production of print advertising and commercial display materials over the Internet |
US6256649B1 (en) | 1998-06-17 | 2001-07-03 | Xerox Corporation | Animated spreadsheet for dynamic display of constraint graphs |
US6429846B2 (en) | 1998-06-23 | 2002-08-06 | Immersion Corporation | Haptic feedback for touchpads and other touch controls |
JP3287312B2 (en) | 1998-08-13 | 2002-06-04 | NEC Corporation | Pointing device
US6282539B1 (en) | 1998-08-31 | 2001-08-28 | Anthony J. Luca | Method and system for database publishing |
US6326946B1 (en) | 1998-09-17 | 2001-12-04 | Xerox Corporation | Operator icons for information collages |
US20010039587A1 (en) | 1998-10-23 | 2001-11-08 | Stephen Uhler | Method and apparatus for accessing devices on a network |
US6448979B1 (en) | 1999-01-25 | 2002-09-10 | Airclic, Inc. | Printed medium activated interactive communication of multimedia information, including advertising |
US6256009B1 (en) | 1999-02-24 | 2001-07-03 | Microsoft Corporation | Method for automatically and intelligently scrolling handwritten input |
AUPQ439299A0 (en) | 1999-12-01 | 1999-12-23 | Silverbrook Research Pty Ltd | Interface system |
US7079712B1 (en) | 1999-05-25 | 2006-07-18 | Silverbrook Research Pty Ltd | Method and system for providing information in a document |
SG121849A1 (en) | 1999-06-30 | 2006-05-26 | Silverbrook Res Pty Ltd | Method and system for conferencing using coded marks |
US6304898B1 (en) | 1999-10-13 | 2001-10-16 | Datahouse, Inc. | Method and system for creating and sending graphical email |
US6546397B1 (en) | 1999-12-02 | 2003-04-08 | Steven H. Rempell | Browser based web site generation tool and run time engine |
AU2079701A (en) | 1999-12-08 | 2001-06-18 | Sun Microsystems, Inc. | Technique for configuring network deliverable components |
US6674426B1 (en) | 2000-03-10 | 2004-01-06 | Oregon Health & Science University | Augmenting and not replacing paper based work practice via multi-modal interaction |
JP2001264128A (en) | 2000-03-22 | 2001-09-26 | Mitsubishi Electric Corp | Abnormality detector for sensor and controller for vehicle |
US6963334B1 (en) | 2000-04-12 | 2005-11-08 | Mediaone Group, Inc. | Smart collaborative whiteboard integrated with telephone or IP network |
US20010056439A1 (en) | 2000-04-26 | 2001-12-27 | International Business Machines Corporation | Method and system for accessing interactive multimedia information or services by touching marked items on physical documents |
US6738049B2 (en) | 2000-05-08 | 2004-05-18 | Aquila Technologies Group, Inc. | Image based touchscreen device |
US20010053252A1 (en) | 2000-06-13 | 2001-12-20 | Stuart Creque | Method of knowledge management and information retrieval utilizing natural characteristics of published documents as an index method to a digital content store |
US20020013129A1 (en) | 2000-06-26 | 2002-01-31 | Koninklijke Philips Electronics N.V. | Data delivery through beacons |
TW528967B (en) | 2000-08-29 | 2003-04-21 | Ibm | System and method for locating on a physical document items referenced in an electronic document |
EP1186986A3 (en) * | 2000-08-29 | 2005-11-16 | International Business Machines Corporation | System and method for locating on a physical document items referenced in an electronic document |
TW494323B (en) | 2000-08-29 | 2002-07-11 | Ibm | System and method for locating on a physical document items referenced in another physical document |
US7003308B1 (en) | 2000-09-12 | 2006-02-21 | At&T Corp. | Method and system for handwritten electronic messaging |
FR2814829B1 (en) | 2000-09-29 | 2003-08-15 | Vivendi Net | METHOD AND SYSTEM FOR OPTIMIZING CONSULTATIONS OF DATA SETS BY A PLURALITY OF CLIENTS |
US6824066B2 (en) | 2000-10-06 | 2004-11-30 | Leon H. Weyant | Electronic access security key card pamphlet |
US6940491B2 (en) | 2000-10-27 | 2005-09-06 | International Business Machines Corporation | Method and system for generating hyperlinked physical copies of hyperlinked electronic documents |
US6816615B2 (en) | 2000-11-10 | 2004-11-09 | Microsoft Corporation | Implicit page breaks for digitally represented handwriting |
US6741745B2 (en) | 2000-12-18 | 2004-05-25 | Xerox Corporation | Method and apparatus for formatting OCR text |
US7346841B2 (en) | 2000-12-19 | 2008-03-18 | Xerox Corporation | Method and apparatus for collaborative annotation of a document |
US6957384B2 (en) | 2000-12-27 | 2005-10-18 | Tractmanager, Llc | Document management system |
US6798907B1 (en) | 2001-01-24 | 2004-09-28 | Advanced Digital Systems, Inc. | System, computer software product and method for transmitting and processing handwritten data |
US6814642B2 (en) | 2001-04-04 | 2004-11-09 | Eastman Kodak Company | Touch screen display and method of manufacture |
US6424094B1 (en) | 2001-05-15 | 2002-07-23 | Eastman Kodak Company | Organic electroluminescent display with integrated resistive touch screen |
US20020184332A1 (en) | 2001-05-30 | 2002-12-05 | Kindberg Timothy Paul James | Physical registration method and system for resources |
US6904570B2 (en) | 2001-06-07 | 2005-06-07 | Synaptics, Inc. | Method and apparatus for controlling a display of data on a display screen |
US7154622B2 (en) | 2001-06-27 | 2006-12-26 | Sharp Laboratories Of America, Inc. | Method of routing and processing document images sent using a digital scanner and transceiver |
US20030001899A1 (en) | 2001-06-29 | 2003-01-02 | Nokia Corporation | Semi-transparent handwriting recognition UI |
US6798358B2 (en) | 2001-07-03 | 2004-09-28 | Nortel Networks Limited | Location-based content delivery |
US20030024975A1 (en) | 2001-07-18 | 2003-02-06 | Rajasekharan Ajit V. | System and method for authoring and providing information relevant to the physical world |
US6914695B2 (en) | 2001-08-08 | 2005-07-05 | International Business Machines Corporation | Process of operations with an interchangeable transmission device and apparatus for use therein for a common interface for use with digital cameras |
US20030048487A1 (en) | 2001-08-31 | 2003-03-13 | Johnston Kairi Ann | Variable resolution scanning |
US7131061B2 (en) | 2001-11-30 | 2006-10-31 | Xerox Corporation | System for processing electronic documents using physical documents |
US7050835B2 (en) | 2001-12-12 | 2006-05-23 | Universal Display Corporation | Intelligent multi-media display communication system |
US7305702B2 (en) | 2002-01-09 | 2007-12-04 | Xerox Corporation | Systems and methods for distributed administration of public and private electronic markets |
US7116316B2 (en) | 2002-03-07 | 2006-10-03 | Intel Corporation | Audible and visual effects as a result of adaptive tablet scanning |
US7181502B2 (en) | 2002-03-21 | 2007-02-20 | International Business Machines Corporation | System and method for locating on electronic documents items referenced in a physical document |
KR100804519B1 (en) | 2002-10-10 | 2008-02-20 | International Business Machines Corporation | Apparatus and method for selecting, ordering, and accessing copyrighted information in physical documents
US6814624B2 (en) | 2002-11-22 | 2004-11-09 | Adc Telecommunications, Inc. | Telecommunications jack assembly |
WO2005001710A2 (en) | 2003-06-26 | 2005-01-06 | International Business Machines Corporation | System and method for composing an electronic document from physical documents |
US7310779B2 (en) | 2003-06-26 | 2007-12-18 | International Business Machines Corporation | Method for creating and selecting active regions on physical documents |
- 2004
  - 2004-04-06 US US10/818,790 patent/US7310779B2/en not_active Expired - Fee Related
- 2007
  - 2007-08-21 US US11/842,192 patent/US8196041B2/en not_active Expired - Fee Related
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6537324B1 (en) * | 1997-02-17 | 2003-03-25 | Ricoh Company, Ltd. | Generating and storing a link correlation table in hypertext documents at the time of storage |
US20020087598A1 (en) * | 2000-04-26 | 2002-07-04 | International Business Machines Corporation | Method and system for accessing interactive multimedia information or services by touching highlighted items on physical documents |
US6771283B2 (en) * | 2000-04-26 | 2004-08-03 | International Business Machines Corporation | Method and system for accessing interactive multimedia information or services by touching highlighted items on physical documents |
US20050028092A1 (en) * | 2001-11-13 | 2005-02-03 | Carro Fernando Incertis | System and method for selecting electronic documents from a physical document and for displaying said electronic documents over said physical document |
US20030117378A1 (en) * | 2001-12-21 | 2003-06-26 | International Business Machines Corporation | Device and system for retrieving and displaying handwritten annotations |
US6980202B2 (en) * | 2001-12-21 | 2005-12-27 | International Business Machines Corporation | Method and system for creating and accessing hyperlinks from annotations relating to a physical document |
Cited By (161)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US10664153B2 (en) | 2001-12-21 | 2020-05-26 | International Business Machines Corporation | Device and system for retrieving and displaying handwritten annotations |
US8196041B2 (en) | 2003-06-26 | 2012-06-05 | International Business Machines Corporation | Method and system for processing information relating to active regions of a page of physical document |
US20170140219A1 (en) * | 2004-04-12 | 2017-05-18 | Google Inc. | Adding Value to a Rendered Document |
US9811728B2 (en) * | 2004-04-12 | 2017-11-07 | Google Inc. | Adding value to a rendered document |
US20050288943A1 (en) * | 2004-05-27 | 2005-12-29 | Property Publications Pte Ltd. | Apparatus and method for creating an electronic version of printed matter |
US8477331B2 (en) * | 2004-05-27 | 2013-07-02 | Property Publications Pte Ltd. | Apparatus and method for creating an electronic version of printed matter |
US8825754B2 (en) | 2004-06-30 | 2014-09-02 | Google Inc. | Prioritized preloading of documents to client |
US8275790B2 (en) | 2004-06-30 | 2012-09-25 | Google Inc. | System and method of accessing a document efficiently through multi-tier web caching |
US9485140B2 (en) | 2004-06-30 | 2016-11-01 | Google Inc. | Automatic proxy setting modification |
US8788475B2 (en) | 2004-06-30 | 2014-07-22 | Google Inc. | System and method of accessing a document efficiently through multi-tier web caching |
US8676922B1 (en) | 2004-06-30 | 2014-03-18 | Google Inc. | Automatic proxy setting modification |
US8639742B2 (en) | 2004-06-30 | 2014-01-28 | Google Inc. | Refreshing cached documents and storing differential document content |
US8224964B1 (en) | 2004-06-30 | 2012-07-17 | Google Inc. | System and method of accessing a document efficiently through multi-tier web caching |
US20090049388A1 (en) * | 2005-06-02 | 2009-02-19 | Ronnie Bernard Francis Taib | Multimodal computer navigation |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US7747749B1 (en) * | 2006-05-05 | 2010-06-29 | Google Inc. | Systems and methods of efficiently preloading documents to client devices |
US20080091534A1 (en) * | 2006-10-17 | 2008-04-17 | Silverbrook Research Pty Ltd | Method of delivering an advertisement after receiving a hyperlink context |
WO2008046130A1 (en) * | 2006-10-17 | 2008-04-24 | Silverbrook Research Pty Ltd | Method of delivering an advertisement from a computer system |
AU2007312931B2 (en) * | 2006-10-17 | 2010-07-01 | Silverbrook Research Pty Ltd | Method of delivering an advertisement from a computer system |
US20080091533A1 (en) * | 2006-10-17 | 2008-04-17 | Silverbrook Research Pty Ltd | Method of delivering an advertisement to a user interacting with a hyperlink |
US20080091532A1 (en) * | 2006-10-17 | 2008-04-17 | Silverbrook Research Pty Ltd | Method of delivering an advertisement from a computer system |
US20080097828A1 (en) * | 2006-10-17 | 2008-04-24 | Silverbrook Research Pty Ltd | Method of delivering an advertisement via related computer systems |
US8065275B2 (en) | 2007-02-15 | 2011-11-22 | Google Inc. | Systems and methods for cache optimization |
US8996653B1 (en) | 2007-02-15 | 2015-03-31 | Google Inc. | Systems and methods for client authentication |
US8812651B1 (en) | 2007-02-15 | 2014-08-19 | Google Inc. | Systems and methods for client cache awareness |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US10475446B2 (en) | 2009-06-05 | 2019-11-12 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
WO2011076111A1 (en) * | 2009-12-22 | 2011-06-30 | Shenzhen Wangling Technology Development Co., Ltd. | Electrical reading device provided with window system on paper
US20110167350A1 (en) * | 2010-01-06 | 2011-07-07 | Apple Inc. | Assist Features For Content Display Device |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US12087308B2 (en) | 2010-01-18 | 2024-09-10 | Apple Inc. | Intelligent automated assistant |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US20120066076A1 (en) * | 2010-05-24 | 2012-03-15 | Robert Michael Henson | Electronic Method of Sharing and Storing Printed Materials |
US20120054168A1 (en) * | 2010-08-31 | 2012-03-01 | Samsung Electronics Co., Ltd. | Method of providing search service to extract keywords in specific region and display apparatus applying the same |
US8970540B1 (en) * | 2010-09-24 | 2015-03-03 | Amazon Technologies, Inc. | Memo pad |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9335849B2 (en) | 2011-09-13 | 2016-05-10 | Hsiung-Kuang Tsai | Visual interface system |
EP2757445A1 (en) * | 2011-09-13 | 2014-07-23 | Tsai, Hsiung-kuang | Visual interface system |
EP2757445A4 (en) * | 2011-09-13 | 2015-07-15 | Hsiung-Kuang Tsai | Visual interface system |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
JP2015525396A (en) * | 2012-06-01 | 2015-09-03 | ジョン, ボヨンJeong, Boyeon | Method for digitizing paper document using transparent display or terminal equipped with air gesture and beam screen function and system therefor |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
WO2013167084A3 (en) * | 2012-12-10 | 2014-01-03 | 中兴通讯股份有限公司 | Intelligent terminal with built-in screenshot function and implementation method thereof |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US11556230B2 (en) | 2014-12-02 | 2023-01-17 | Apple Inc. | Data detection |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple Inc. | Intelligent automated assistant for media exploration |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US12108191B1 (en) * | 2024-01-09 | 2024-10-01 | SoHive | System and method for drop-in video communication |
Also Published As
Publication number | Publication date |
---|---|
US8196041B2 (en) | 2012-06-05 |
US7310779B2 (en) | 2007-12-18 |
US20080017422A1 (en) | 2008-01-24 |
Similar Documents
Publication | Title |
---|---|
US7310779B2 (en) | Method for creating and selecting active regions on physical documents |
US6771283B2 (en) | Method and system for accessing interactive multimedia information or services by touching highlighted items on physical documents |
US7373588B2 (en) | Method and system for accessing interactive multimedia information or services by touching marked items on physical documents |
US7181502B2 (en) | System and method for locating on electronic documents items referenced in a physical document |
US7472338B2 (en) | Method and apparatus for locating items on a physical document and method for creating a geographical link from an electronic document to the physical document |
US10664153B2 (en) | Device and system for retrieving and displaying handwritten annotations |
TW494323B (en) | System and method for locating on a physical document items referenced in another physical document |
US6980202B2 (en) | Method and system for creating and accessing hyperlinks from annotations relating to a physical document |
US7747949B2 (en) | System and method comprising an electronic document from physical documents |
KR100556331B1 (en) | A system and method for selecting an electronic document from a physical document and displaying the electronic document on the physical document |
US8423889B1 (en) | Device specific presentation control for electronic book reader devices |
US6940491B2 (en) | Method and system for generating hyperlinked physical copies of hyperlinked electronic documents |
WO2003038668A2 (en) | Internet browsing system |
IES20070382A2 (en) | A method and apparatus for providing an on-line directory service |
CN110765902B (en) | Digital protection and inheritance device for ancient and old newspapers |
EP1186986A2 (en) | System and method for locating on a physical document items referenced in an electronic document |
IES85037Y1 (en) | A method and apparatus for providing an on-line directory service |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CARRO, F.I.;REEL/FRAME:014790/0383 Effective date: 20040316 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
REMI | Maintenance fee reminder mailed |
LAPS | Lapse for failure to pay maintenance fees |
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20111218 |