US20010045965A1 - Method and system for receiving user input - Google Patents
- Publication number
- US20010045965A1 (application US 09/784,808)
- Authority
- US
- United States
- Prior art keywords
- user
- menu options
- menu
- software
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F3/0346—Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
- G06F16/9038—Presentation of query results
- G06F16/954—Retrieval from the web: navigation, e.g. using categorised browsing
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06F2203/04804—Transparency, e.g. transparent or translucent windows
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
- Y10S707/99931—Database or file accessing
- Y10S707/99933—Query processing, i.e. searching
- Y10S707/99944—Object-oriented database structure
- Y10S707/99945—Object-oriented database structure processing
Definitions
- This invention relates generally to visual programming. More particularly, in one embodiment, the invention relates to visual programming in which graphically represented software articles are manipulated to create custom software applications.
- users cannot interconnect a capability provided by one application sold by a first vendor with a second capability provided by a second application sold by a second, unrelated, vendor without significant expertise, programming skill, and effort.
- users cannot interface input devices, such as a mouse or a data source at a remote location, such as a World Wide Web (hereinafter “Web”) site with one or more application programs to create a custom application.
- the term “software article” will be understood to comprise software ranging from a single line of software code written in any programming language (including machine language, assembly language, and higher level languages), through blocks of software code comprising lines of software code, software objects (as that term is commonly used in the software arts), programs, interpreted, compiled, or assembled code, and including entire software application programs, as well as applets, data files, hardware drivers, web servers, servlets, and clients.
- a software article can be abstracted, and represented visually, using a specific visual format that will be explained in detail below. The visual representation of a software article can be referred to as an abstracted software article.
- the term “browser” will be understood to comprise software and/or hardware for navigating, viewing, and interacting with local and/or remote information. Examples of browsers include, but are not limited to, Netscape Navigator™ and Communicator™, Internet Explorer™, and Mosaic™.
- the invention, in one embodiment, provides systems and methods for a user having little or no programming skill or experience to use visual programming to create custom applications that can employ user input, information obtained from remote devices, such as information obtained on the web, and application programs.
- the systems and methods of the invention involve the use of one or more computers. In embodiments that involve a plurality of components, the computers are interconnected in a network.
- the systems and methods of the invention provide abstractions of software articles which include inputs such as a mouse or a keyboard, and outputs, such as a video display or a printer.
- An abstraction of a software article is an analog of an electronic circuit which provides functionality such as logic, memory, computational capability, and the like, and which includes inputs and outputs for interconnection to allow construction of a specific application circuit.
- the user can select software articles from a repository, such as a software library, and can place an abstracted software article on a computer display.
- the user can interconnect an output of one abstracted software article to an input of another abstracted software article using “wires.”
- “Wires” are linear graphical structures that are drawn on the computer display by the user.
- the user can draw “wires” using a pointing device such as a mouse.
- the user can construct a software application that performs a customized function by the selection and interconnection of abstracted operator software articles (also referred to as “operators”).
- the operator software articles represented by the abstractions communicate using a common language, with connections via a central hub software article (also referred to as a “hub”).
- a bidirectional software adapter (also referred to as an “adapter”) for each software article provides translation between the “native” communication language of the article and the common language of the system.
- the bidirectional software adapter is transparent to the user.
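The hub-and-adapter arrangement described above can be sketched in Python. This is an illustrative sketch, not part of the patent disclosure: the `Article`, `Adapter`, and `Hub` classes, and the use of a plain dict as the common language, are all assumptions made for illustration.

```python
class Article:
    """Stand-in for an operator software article with a native interface."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, value):
        # Native-format delivery of a value to this article.
        self.received.append(value)


class Adapter:
    """Bidirectional translator between an article's native format and
    the system's common message format (a plain dict in this sketch)."""
    def __init__(self, article):
        self.article = article

    def to_common(self, native_value):
        # Wrap a native value in the common message format.
        return {"source": self.article.name, "value": native_value}

    def to_native(self, message):
        # Unwrap a common message back into a native value.
        return message["value"]


class Hub:
    """Central hub: routes common-format messages between articles
    according to registered connections (source -> destination)."""
    def __init__(self):
        self.adapters = {}      # article name -> Adapter
        self.connections = []   # (source name, destination name)

    def register(self, article):
        self.adapters[article.name] = Adapter(article)

    def connect(self, source_name, dest_name):
        self.connections.append((source_name, dest_name))

    def publish(self, source_name, native_value):
        # Translate once into the common language, then deliver to every
        # connected destination through that destination's own adapter.
        message = self.adapters[source_name].to_common(native_value)
        for src, dst in self.connections:
            if src == source_name:
                adapter = self.adapters[dst]
                adapter.article.receive(adapter.to_native(message))
```

Because each adapter hides the translation step, the user never sees the common language, which matches the statement that the adapter is transparent to the user.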
- the systems and methods of the invention provide a readily understood, essentially intuitive, graphical environment for program development.
- the systems and methods of the invention provide feedback that eases program development and debugging.
- the systems and methods of the invention reduce the technological expertise needed to develop sophisticated applications.
- the systems and methods of the invention employ techniques of viewing material on a display that use panning (e.g. two-dimensional motion parallel to the plane of the display) and zooming (e.g. motion perpendicular to the plane of the display).
- the zooming and panning features enable the user to easily navigate the programmed design over many orders of magnitude to grasp both micro and macro operation.
- the zoom and pan are smooth and continuous, with nearly infinite degrees of zoom and a nearly infinitely sized display space.
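The pan-and-zoom behavior described above can be sketched as a simple "virtual camera" mapping points in the virtual display space to screen coordinates. This is an illustrative sketch, not the patent's implementation; the class and method names are hypothetical.

```python
class Camera:
    """A minimal virtual camera over a 2D virtual display space."""
    def __init__(self, x=0.0, y=0.0, zoom=1.0):
        self.x, self.y = x, y   # pan: position parallel to the display plane
        self.zoom = zoom        # zoom: motion perpendicular to the display plane

    def pan(self, dx, dy):
        # Two-dimensional motion parallel to the plane of the display.
        self.x += dx
        self.y += dy

    def zoom_by(self, factor):
        # factor > 1 zooms in; 0 < factor < 1 zooms out.
        self.zoom *= factor

    def world_to_screen(self, wx, wy):
        # Map a point in the (nearly infinite) virtual display space
        # to screen coordinates under the current camera.
        return ((wx - self.x) * self.zoom, (wy - self.y) * self.zoom)
```

Because pan and zoom are continuous floating-point quantities, the user can navigate the design over many orders of magnitude, as the text above describes.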
- the invention relates to a method of receiving user input.
- the method includes receiving user input identifying a location on a graphical user interface, displaying menu options, a first menu option appearing substantially at the identified location, the remaining menu options appearing at locations proximate to the identified location, and receiving user selection of one of the displayed menu options.
- the remaining menu options appear at locations equidistant from the identified location.
- receiving user input identifying a location involves determining the location of a cursor.
- the remaining menu options appear at regular radial intervals around the identified location.
- the method further includes providing hierarchical levels of menu options.
- receiving user selection of at least one of the menu options causes display of menu options at a different hierarchical level.
- the menu option located substantially at the identified location includes a menu option that causes display of menu options at a hierarchical level higher than the current level.
- the method further includes enabling a user to select menu options to present. In one embodiment, the method further includes selecting menu options to present based at least in part on an application context.
- the invention features a method of receiving user input.
- the method includes providing hierarchical levels of menu options, receiving user input identifying a location on a graphical user interface, the user input including the location of a cursor, displaying menu options from one hierarchical level, a first menu option appearing substantially at the identified location, the remaining menu options appearing at locations proximate to the identified location and being positioned at regular radial intervals around the identified location, the menu option located substantially at the identified location including a menu option that when activated causes a display of menu options at a hierarchical level one level higher than the current level, and receiving user selection of one of the displayed menu options.
- the remaining menu options appear at locations equidistant from the identified location.
- selecting one of the remaining menu options activates a predetermined function.
- selecting one of the remaining menu options causes display of menu options at a hierarchical level one level lower than the current level.
- the display of menu options at a hierarchical level one level lower than the level of the selected option involves the display of the selected option substantially at the identified location, and the display of one or more suboptions of the selected option, the suboptions being located proximate to the identified location and being positioned at regular radial intervals around the identified location.
- the one or more suboptions of the selected option are displayed based on application context.
- the remaining menu options appear at locations equidistant from the identified location.
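The radial menu layout described in these embodiments (a first option substantially at the identified location, the remaining options equidistant from it at regular radial intervals) can be sketched as follows. This is an illustrative sketch; the function name and the 60-pixel radius are assumptions, not values from the patent.

```python
import math

def radial_menu_positions(cx, cy, n_options, radius=60.0):
    """Return (x, y) display positions for a radial popup menu.

    Option 0 is placed at the identified location (cx, cy); the
    remaining options are placed on a circle of the given radius,
    so they are equidistant from the identified location and spaced
    at regular radial intervals around it.
    """
    positions = [(cx, cy)]            # first option at the identified location
    remaining = n_options - 1
    for i in range(remaining):
        angle = 2 * math.pi * i / remaining   # regular radial intervals
        positions.append((cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle)))
    return positions
```

In a hierarchical variant, the center position would typically hold the option that moves up one level, while selecting an outer option would redraw the menu with that option at the center and its suboptions on the surrounding circle.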
- the invention relates to a computer program, recorded on a computer readable medium, for receiving user input.
- the program includes instructions for causing a processor to receive user input identifying a location on a graphical user interface, to display menu options, a first menu option appearing about the identified location, the remaining menu options appearing at locations proximate to the identified location, and to receive user selection of one of the displayed menu options.
- the remaining menu options appear at locations equidistant from the identified location.
- the instructions that receive user input identifying a location include instructions that identify the location of a cursor.
- the remaining menu options are displayed at regular radial intervals around the identified location.
- the program further includes instructions that provide hierarchical levels of menu options, and the instructions that receive user selection of at least one of the menu options cause display of different menu options at a different hierarchical level.
- the menu option located substantially at the identified location comprises a menu option that causes display of menu options at a hierarchical level one level higher than the current level.
- the program further includes instructions that select menu options to present.
- selecting menu options to present is based at least in part on an application context.
- FIG. 1A is an image of a screenshot of a graphical user interface for a visual programming system, according to an embodiment of the invention
- FIG. 1B shows an example of a connection of a source software article to a destination software article, according to an embodiment of the invention
- FIG. 1C shows a connector having a repeated indication of a source, according to an embodiment of the invention
- FIG. 2 is an image of a screenshot of a graphical user interface for a visual programming system, according to an embodiment of the invention
- FIG. 3A is an image of a screenshot of a graphical user interface for a visual programming system, according to an embodiment of the invention.
- FIG. 3B is an image of a schematic of an unconnected abstracted software article, according to an embodiment of the invention.
- FIG. 3C is an image of a screenshot depicting several unconnected abstracted software articles, according to an embodiment of the invention.
- FIG. 4A is a diagram illustrating a “virtual” camera that a user can maneuver to zoom in and out of a graphic representation of an application as the application is being developed, according to an embodiment of the invention.
- FIG. 4B is a drawing depicting an embodiment of an encapsulation of a plurality of abstractions of software articles, according to principles of the invention.
- FIGS. 5A-5D are embodiments of graphic representations of software articles at varying levels of generality, according to principles of the invention.
- FIGS. 6A-6D are images of a hierarchy of radial popup menus, according to an embodiment of the invention.
- FIG. 7 is a flow diagram of a menu hierarchy, according to an embodiment of the invention.
- FIG. 8 is a diagram of a sample architecture for the visual programming system, according to an embodiment of the invention.
- FIG. 9 is a diagram of an embodiment of a computer network upon which the invention can be practiced.
- FIG. 10 is a conceptual diagram illustrating generation of a virtual display space in accord with an embodiment of the invention.
- FIG. 11 is a schematic view depicting multiple viewing perspectives in accordance with an embodiment of the invention.
- FIGS. 12A-12C are schematic views depicting data objects modeled as a node tree.
- FIG. 13 is a conceptual diagram illustrating the use of a plurality of templates in accordance with the invention.
- FIG. 14 is a flowchart depicting a method of rendering detail in accordance with an embodiment of the invention.
- FIG. 15 is an illustrative example of rendering detail in accordance with an embodiment of the invention.
- FIG. 16 depicts illustrative embodiments of breadcrumb trails in accordance with the invention.
- FIG. 17 illustrates use of search terms in accordance with an embodiment of the invention
- FIG. 18 illustrates operation of a visual wormhole, in accordance with an embodiment of the invention
- FIG. 19 is a schematic view depicting a viewing system architecture in accordance with an embodiment of the invention.
- FIG. 20 is a schematic view depicting the conversion of a file system directory tree into a hierarchical structure of data objects in accordance with an embodiment of the invention
- FIG. 21 is a schematic view depicting the conversion of a Web page to a hierarchical structure of data objects in accordance with an embodiment of the invention.
- FIG. 22 is a schematic view depicting the conversion of a Web page to a hierarchical structure of data objects in accordance with an embodiment of the invention
- FIG. 23 is a schematic diagram depicting the conversion of an XML™ hierarchical structure of data objects to the ZML™ format in accordance with an embodiment of the invention.
- FIG. 24 depicts a method of downloading data from/to a server to/from a PDA client, respectively, in accordance with an embodiment of the invention
- FIG. 25 depicts illustrative display images of user viewing perspectives as rendered by a PDA in accordance with an embodiment of the invention
- FIG. 26 depicts illustrative display images of user viewing perspectives as rendered by a wireless telephone in accordance with an embodiment of the invention.
- FIG. 27 is an embodiment of a handheld wireless navigation device that can be used as a user control in conjunction with the viewing system, according to principles of the invention.
- FIG. 1A shows a screenshot of a visual programming system user interface 100 .
- the interface 100 enables a programmer to quickly assemble an application by assembling and interconnecting different abstractions of software articles 102 a - 102 d . This enables the user to focus on problem-solving using existing abstractions of software articles as the programmer's toolset, instead of spending time writing code to connect the different pieces.
- the interface 100 provides rich graphic feedback, making application development more intuitive, faster, and enjoyable.
- FIG. 1A shows graphic representations of different abstractions of software articles 102 a - 102 d .
- Each abstraction of a software article 102 a - 102 d provides one or more software services.
- software articles include components (e.g., COM/DCOM Objects (Component Object Model/Distributed Component Object Model), ActiveX components, and JavaBeans™), software routines (e.g., C++, Pascal, and Java™ routines), functions provided by commercial products (e.g., Microsoft Excel™ and Word™, MatLab™, and Labview™), and access to a database (e.g., using ODBC (Open Database Connectivity)).
- a software article may also handle HTTP calls needed to access Internet sites (e.g., retrieving stock prices from a URL (Universal Resource Locator)), e-mail, and FTP (File Transfer Protocol) services.
- the actual instructions for the software article need not reside on the device displaying the graphical representations.
- the user interface presents abstractions of software articles 102 a - 102 d as “black boxes” by representing the software article as having simple input and/or output ports corresponding to software article input and output parameters.
- black box is used to denote a circuit element having one or more inputs and one or more outputs, but whose detailed internal structure and contents remain hidden from view, or need not be known to the application engineer in detail.
- a “black box” is typically characterized by a transfer function, which relates an output response of the “black box” to an input excitation, thereby providing an engineer with the necessary information to use the “black box” in an application.
- As shown in FIG. 1A, the boxes are not in fact “black,” but can instead present pictures and text that indicate their function and current state.
- the boxes are “black boxes” to the extent that the programmer does not need to know or understand the precise manner in which the particular box accomplishes the task that it is designed to perform.
- a user simply connects an output port 103 of an abstraction of a source software article 102 a to the input ports 105 of one or more destination abstractions of software articles 102 b , as shown in FIG. 1B.
- the connection is performed by drawing a line or a “wire 104 a ,” using a pointing device such as a mouse, from an output port 103 on one abstraction of a software article to an input port 105 of an abstraction of a software article.
- the wire 104 a can indicate a direction of information flow.
- the drawing is performed by locating a cursor at a desired end of the line, depressing a mouse button, moving the mouse cursor by manipulating the mouse from the beginning of the line to the desired end location of the line, and releasing the mouse button.
- the user can use a trackball, a keyboard, voice commands or any other convenient suitable input device.
- a keyboard can be used by activating particular keys for the functions of moving the cursor (e.g., arrow keys), starting the line (e.g., the combination <Control>-D to denote “pen down”) and ending the line (e.g., the combination <Control>-U to denote “pen up”).
- an application can involve connecting a plurality of abstractions of software articles.
- an application can involve connecting an output of an abstraction of a software article to an input of the same abstraction of a software article, as when a recursive action is required.
- multiple functional aspects of a single software article can be interconnected to create a program.
- the system can indicate to the user whether a proposed connection is allowable.
- the wire used for a connection is green to indicate an acceptable connection (for example, from an output port to an input port), and the wire turns red to indicate an unacceptable connection (such as from an output port to another output port).
- the CommonFace system builds up a connection table which defines the inputs and outputs of all the wires.
- the system uses the connection table to determine how data, commands, and the like are translated and transmitted from software article to software article to perform the programmed operations.
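The connection table described above can be sketched in Python. The class, the (article, port) tuples, and the acceptability rule follow the description (a wire is allowed only from an output port to an input port, shown green; an output-to-output wire is rejected, shown red), but every name here is a hypothetical illustration.

```python
class ConnectionTable:
    """Records which output ports are wired to which input ports."""
    def __init__(self):
        # Each entry: (source (article, port), destination (article, port)).
        self.table = []

    def allowed(self, src_kind, dst_kind):
        # Acceptable connection: output port -> input port (green wire).
        # Anything else, e.g. output -> output, is rejected (red wire).
        return src_kind == "output" and dst_kind == "input"

    def add_wire(self, src, src_kind, dst, dst_kind):
        if not self.allowed(src_kind, dst_kind):
            return False
        self.table.append((src, dst))
        return True

    def destinations(self, src):
        # Determine where data from a given output port must be sent.
        return [dst for s, dst in self.table if s == src]
```

The system would consult `destinations` at run time to decide how data and commands flow from article to article.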
- the destination abstraction of a software article 102 b continually receives data or calls from the abstraction of the source software article 102 a .
- a user draws a connection 104 b from the output port 103 of an abstraction of a “mouse” software article 102 b to the input port 105 of an abstraction of an “output” software article 102 c .
- the abstraction of the “mouse” software article 102 b appears to continually feed the abstraction of the “output” software article 102 c with data describing user manipulation of a mouse.
- the abstraction of the “output” software article 102 c appears to display these values in real-time.
- connection 104 c from an output port 103 of the abstraction of the “mouse” software article 102 b carries data to an input port 105 of the abstraction of the “fireworks” software article 102 d .
- an identifier 103 ′ of a connection 104 a or wire corresponding to an output port 103 can be repeated at either end of the connection 104 a , so that the user can see which source is connected to which destination, without having to retrace the entire connection 104 a from input port 105 back to output port 103 .
- the user can select a connection, and the system automatically traverses the connection to display the opposite end of the connection and any associated abstractions.
- selecting a representation of a software article causes the system both to display the graphical representation of the software article and to instantiate the underlying software article.
- selecting an Excel™ spreadsheet causes both the display of an abstraction, or graphical representation, of the software article and the instantiation of an Excel™ spreadsheet itself.
- the graphical representation of the software article serves both to identify the related software article and to indicate its state.
- an Excel™ spreadsheet comprising two columns and two rows can be represented by a graphical representation indicating an Excel™ spreadsheet having two inputs and two outputs, while a spreadsheet with three rows and two columns can be represented by a graphical representation indicating an Excel™ spreadsheet having three inputs and two outputs.
- one graphical representation generates output on its own. In one embodiment, one graphical representation generates output based on input not its own. In one embodiment, one graphical representation performs a function without inputs and without outputs.
- Features referred to as panning and zooming, the operation of which is described in greater detail below with respect to FIGS. 10-25, are shown in FIGS. 1A, 2 and 3A.
- the user interface 100 provides a “virtual camera” that enables a user to smoothly pan over and zoom in and out of a work space.
- the “virtual camera” is described in more detail below.
- the detail shown of each abstraction of an article and connection image is a function of the virtual distance between the abstraction and a virtual viewing position of the user, represented by the virtual camera. For example, in FIG. 1 a user has moved the virtual camera far away from the work space.
- This view displays all abstractions of the application articles 102 a - 102 d and connections 104 a - 104 c . However, in this view, details, such as the names of the software article ports, are difficult to discern.
- the interface displays greater detail of the portion of the system in view. While zooming in from the virtual camera position of FIG. 1A moves some of the application features out of view, a user can see greater detail, such as the names of the ports 106 a - 106 c , and article configuration information 108 , such as a control that determines whether transmission of data between articles is “automatic” or “manual”.
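The distance-dependent rendering described above can be sketched as a simple threshold rule: the farther the virtual camera is from an abstraction, the less detail is drawn. The thresholds and level names below are arbitrary assumptions chosen only to illustrate the idea; the patent does not specify them.

```python
def detail_level(virtual_distance):
    """Choose how much of an abstracted software article to render,
    based on the virtual distance between it and the camera."""
    if virtual_distance > 100:
        return "icon"           # far away: article bodies and wires only
    if virtual_distance > 20:
        return "ports"          # closer: port names become legible
    return "configuration"      # close up: configuration controls shown
```

A renderer would call this per article each frame, so zooming in smoothly reveals port names and then configuration information, as in FIGS. 1A through 3A.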
- FIG. 3A, which depicts the details of an abstraction of a “mouse” software article 102 b , further illustrates the results of zooming in.
- the abstraction of the “mouse” software article 102 b includes, on the left, input ports, respectively labeled “MIN X” 302 , “MAX X” 304 , “MINY” 306 and “MAX Y” 308 .
- These inputs define the range of motion of an object, such as a cursor, in Cartesian coordinates, such as columns and rows, respectively, of a display.
- An input depicted by the line or wire 310 is shown going to port “MIN X” 302 from a source not shown.
- On the right side of the abstraction are three output ports, respectively labeled “HORIZONTAL” 320 , “VERTICAL” 322 , and “CLICK” 324 .
- the output ports indicate the horizontal position of a mouse, the vertical position of the mouse, and the state of a mouse button, respectively.
- the abstraction further includes a simulated mouse pad 340 and a simulated mouse 342 , which moves about the mouse pad in conformity with the input signals obtained from an associated input device, such as a real mouse pointer operated by the user.
- FIG. 3B is an image of a schematic of a generalized unconnected abstracted software article 344 .
- an input port 346 which can accept input information from an abstracted software article.
- the input port 346 has a concave form indicative of the ability to accept input information.
- an output port 348 which can transmit information to an abstracted software article.
- the output port 348 has a convex form indicative of the ability to transmit output information.
- At the center of FIG. 3B is a body 350 which represents the processing and control functions of the abstracted software article 344 .
- the body 350 can also be used to express visually, for the benefit of a user, information about the capabilities, functions and/or features of the abstracted software article 344 , and the software article that it represents.
- the body 350 can be augmented with text indicative of some feature or description that informs the user of the purpose and capabilities of the abstracted software article 344 , and the software article that it represents.
- FIG. 3C is an image of a screenshot 352 depicting several unconnected embodiments of abstracted software articles.
- an embodiment of a generic abstraction of an “input” software article 354 having a single output port labeled “VALUE” 356 and having two indicators 358 and 359 , that correspond, respectively, to the indicators 330 and 332 described above.
- the abstracted “input” software article 354 is useful for accepting input of a value from, for example, a keyboard, a touchscreen, or a digital input such as an analog-to-digital converter.
- At the lower left is another depiction 360 of an embodiment of the abstraction of the mouse software article 102 b of FIGS. 1 and 3A.
- At the center of FIG. 3C is depicted an embodiment of an abstraction of an ExcelTM software article 362 , that comprises an input port on the left side having a concave form indicative of an input direction, and labeled “PORT 1 ” 364 , an output port on the right side having a convex form indicative of an output direction, and labeled “PORT 1 ” 366 , an iconic representation 368 that a user can recognize as an ExcelTM application, and two indicators 370 and 372 , that correspond, respectively, to the indicators 330 and 332 , described above.
- At the upper right of FIG. 3C is depicted an embodiment of an abstraction of a MatLabTM software article 374 , that has four input ports 376 , 378 , 380 and 382 , respectively labeled “X,” “Y,” “XMAX,” and “YMAX.” These ports, respectively, accept input corresponding to values of an x variable, a y variable, the maximum value the x variable can attain, and the maximum value the y variable can attain.
- At the top of the abstracted MatLabTM software article 374 there are two indicators 384 and 386 that correspond, respectively, to the indicators 330 and 332 described above.
- the embodiment of the abstraction of the MatLabTM software article 374 further comprises a body that is an iconic representation 387 that a user can recognize as a MatLabTM application.
- At the lower right of FIG. 3C is depicted an embodiment of an abstraction of an “output” software article 388 , having a single input port labeled “VALUE” 390 and having two indicators 392 , 394 , that correspond, respectively, to the indicators 330 and 332 described above.
- the abstracted “output” software article 388 is useful for displaying a value, for example to a video display, or to a printer, or both.
- the zooming and panning features enable the user to easily navigate the programmed design over many orders of magnitude to grasp both micro and macro operation.
- the zoom and pan are smooth and continuous, with nearly infinite degrees of zoom and a nearly infinitely sized display space.
- a user can, however, “bookmark” different coordinates or use system provided “bookmarks.”
- a “bookmark” is a virtual display position that has been assumed at some time by the virtual camera, and recorded for future use, so as to return the virtual camera to a specific location.
- user interface buttons (not shown) enable a programmer to quickly move a camera to preset distances from a plane upon which the abstractions of software articles appear.
- the bookmarked position can be anywhere in a multi-dimensional display space.
- the virtual camera 402 can move in any dimension in the display space 404 .
- the camera 402 is axially fixed. That is, while a user can freely move the camera 402 along the z-axis, and translate the camera coordinates along the x-axis and the y-axis, the user may not rotate the camera. This restriction on the number of degrees of freedom of the virtual camera eases camera control. However, in other embodiments the user, and thus the camera 402 , can also move rotationally.
- a variety of three-dimensional graphics engines can be used to provide the virtual camera. In one embodiment, the JavaTM JAZZTM zooming library package is used.
- the application under development represented by the interconnected software articles 408 , 410 , 412 and 414 , may appear on a single flat hierarchical plane 406 .
- the system does not impose this constraint.
- Other implementations may feature software article representations, located on multiple hierarchical planes having differing z-axis coordinates.
- a user can elevate important (perhaps high-level) design features.
- a user can encapsulate collections of software articles into a single larger article. Such important or encapsulated articles may also appear elevated.
- In addition to Cartesian space, the articles can also appear in cylindrical, spherical, or other spaces described by different multi-dimensional coordinate systems.
- FIG. 4B is a drawing 416 depicting an embodiment of an encapsulation 418 of a plurality of abstractions of software articles.
- a plurality of software articles have been encapsulated by first connecting the plurality of abstracted software articles as the user desires, leaving at least one port 420 unconnected.
- FIG. 4B depicts the encapsulated plurality of abstracted software articles 418 to the user as one larger abstracted software article having as input ports those unconnected input ports, if any, of the individual abstracted software articles that are part of the encapsulation, and having as output ports those unconnected output ports, if any, of the individual abstracted software articles that are part of the encapsulation.
- the system creates a software article that corresponds to the encapsulated abstraction by combining the corresponding software articles in the corresponding manner to that carried out by the user.
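The rule described above can be sketched briefly: the encapsulated article exposes exactly those member ports left unconnected. The following Python sketch is illustrative only; the names and data structures are assumptions, not the patent's implementation (which the text describes in terms of JavaTM adapters).

```python
def exposed_ports(articles, wires):
    """Derive the ports of an encapsulation from its members.

    articles: {name: {"inputs": [port, ...], "outputs": [port, ...]}}
    wires: list of ((src_article, src_port), (dst_article, dst_port))
    Returns the unconnected (article, port) pairs that become the
    encapsulation's input and output ports, respectively."""
    connected = set()
    for (src, sport), (dst, dport) in wires:
        connected.add((src, sport))
        connected.add((dst, dport))
    inputs, outputs = [], []
    for name, ports in articles.items():
        inputs += [(name, p) for p in ports["inputs"] if (name, p) not in connected]
        outputs += [(name, p) for p in ports["outputs"] if (name, p) not in connected]
    return inputs, outputs
```

For two articles A and B wired A.out1 to B.in1, the encapsulation would expose A.in1 as its input and B.out1 as its output.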
- the system performs this interconnection using JavaTM adapters which involve Active X and JNI components. These adapters are described in greater detail below with regard to FIG. 8.
- the encapsulated plurality of abstractions 420 appears to be a single abstraction 418 of a single software article.
- an encapsulated software article can be a component of a further encapsulation.
- three levels of encapsulation are indicated by encapsulated software articles 422 , 424 and 426 , where 424 contains 426 as a component and 422 contains 424 .
- the user-interface provides full-color graphics and animation to visually express the function and state of system software articles and connections.
- a system can be “live” during development. In such cases, the system animates and updates the system display to reflect the current system state.
- the “wires” 104 a - 104 c or lines used to connect two ports can depict the activity of transmitting information or signals between the two ports.
- the “wire” 104 a - 104 c can change appearance to indicate activity in progress.
- the “wire” 104 a - 104 c can change color during periods of activity.
- the “wire” 104 a - 104 c can change width during periods of activity.
- the “wire” 104 a - 104 c can change appearance from one end to the other, such as simulating an activity meter, the extent of the changed portion indicating the progress of the transmission from 0% to 100% as the transmission occurs.
- the “wire” 104 a - 104 c can flash or blink to indicate activity.
- images of objects such as a person running while carrying an item, a train travelling, a car moving, or the like can “run” along the “wire” 104 a - 104 c to indicate transmission of information.
- the visualization system can access a potentially infinitely large workspace, because it can pan in two dimensions along the plane of a workspace, and because it can zoom closer to and further from a plane of the workspace.
- the system can display an area which represents a portion of the workspace, the area of which depends on the relative virtual distance between the virtual camera and the plane of the workspace, and the location of which depends on the position of the virtual camera with regard to coordinates, such as Cartesian coordinates, defined on the plane of the workspace.
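The dependence of the displayed area on camera position can be modeled with a simple pinhole camera: the visible rectangle on the workspace plane grows linearly with the virtual distance between camera and plane. This sketch is an assumption for illustration; the field-of-view value and aspect ratio are not from the patent.

```python
import math

def visible_region(cam_x, cam_y, cam_z, fov_deg=60.0, aspect=4 / 3):
    """Return (x_min, x_max, y_min, y_max) of the portion of the
    workspace plane visible to a virtual camera at height cam_z
    above the plane, centered over (cam_x, cam_y)."""
    half_h = cam_z * math.tan(math.radians(fov_deg) / 2)  # vertical half-extent
    half_w = half_h * aspect                               # horizontal half-extent
    return (cam_x - half_w, cam_x + half_w, cam_y - half_h, cam_y + half_h)
```

Doubling the camera's distance from the plane doubles the extent of the visible area in each dimension, which is the zooming behavior the text describes.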
- the system can depict abstractions of software articles using a variety of techniques. As shown in FIGS. 5 A- 5 D, abstractions of software articles may be depicted in a variety of ways.
- an abstraction of a software article is depicted as a product icon 500 .
- the embodiment depicted is that of a graphical display that plots mathematical functions. This provides quick identification of the capabilities of the software article underlying a particular abstraction.
- abstractions of software articles may also be depicted with a more functionally descriptive icon 502 (e.g., a sine wave for a sine wave generator), and may additionally carry an alphanumeric label.
- Abstractions of software articles may also include updated graphics 504 and 506 or text reflecting current operation of the software article.
- the abstraction of software article 504 shows the state of a MatLabTM plotting tool by depicting the graph being plotted.
- the abstraction of software article 506 features a real-time visualization of the state (e.g., coordinates and button operation) of an abstraction of a mouse software article.
- the mouse 508 moves around in real-time on the virtual “mouse-pad” 509 synchronously with the user moving a real physical mouse pointing device.
- the frame-rate is preferably 15 frames/sec or greater.
- the mouse can alternatively be represented at a slower frame rate, for example as a display such as that of FIG. 5A, in which motion is not apparent at all, or in which the motion is discontinuous, and the mouse appears to move in a hesitating manner.
- the system eases development by providing, at input ports, a graphic indicator that identifies the source port of a software article, along with an indicator at output ports that identifies a destination.
- an output port of a matrix operation may be a small icon of a grid.
- the interface can also display the state of the software article ports.
- ports can provide visualizations of the data that is being imported or exported (e.g., an image received by a software article may be represented with a “thumbnail,” or iconic image of the image transferred).
- the user interface also features hierarchical radial menus 602 .
- the menu 602 operates like traditional hierarchical menus (e.g., a Windows 98TM menu bar); however, each menu option 604 a - 604 d appears equidistant from a center point 606 .
- the menu 602 allows efficient access to the options 604 a - 604 d by minimizing the amount of mouse movement needed to reach an option.
- the options 604 a - 604 d occur at regular intervals around a circumference about the center point 606 . That is, an option appears at an angular position every [(360)/(number of menu options)] degrees. For example, an embodiment of an eight option menu features options located at North-West, North, North-East, East, South-East, South, South-West, and West.
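The angular placement rule above can be expressed compactly. This is a hypothetical sketch of how an implementation might compute screen coordinates for the option icons; the function name, starting orientation (North) and clockwise ordering are assumptions chosen to match the eight-option example.

```python
import math

def radial_positions(n_options, center, radius):
    """Place n_options menu icons at equal angular intervals
    (360 / n_options degrees apart) around center, starting at
    North and proceeding clockwise."""
    cx, cy = center
    positions = []
    for i in range(n_options):
        # 90 degrees is North in standard math convention; subtract to go clockwise.
        angle = math.radians(90 - i * 360.0 / n_options)
        positions.append((cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle)))
    return positions
```

With four options, the icons land at North, East, South, and West, 90 degrees apart, matching the uppermost-level menu of FIG. 6A.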
- Each option 604 a - 604 d can appear as a circular text-labeled icon that highlights upon being co-located with a cursor (e.g., “mouse-over”).
- a user activating the center option 606 moves the displayed menu up one level of hierarchy.
- a user activating the outer options 604 a - 604 d either causes the display of another radial menu one level lower in the hierarchy, or causes a command to be executed if there is no lower hierarchy of commands corresponding to the selection activated (e.g., the activated menu option is a leaf of a menu tree, as explained further below).
- the radial menu 602 is context-sensitive. For example, given a history of user interaction, application state, and user selected items, the system can determine a menu of relevant options. In one embodiment, the radial menu 602 that appears is also dependent on the location of the cursor when the radial menu 602 is activated.
- the radial menu is defined not to appear on the user interface to conserve screen real-estate.
- a user can call up the menu as needed, for example, by clicking a right-most mouse button.
- An embodiment of a menu system in which the menu is normally hidden from view, and in which the menu appears on command, is called a popup menu system.
- the location of the mouse cursor at the time the mouse button is clicked is used as the center-point 606 of the radial menu 602 .
- the embodiments of the menu system shown in FIGS. 6 A- 6 D are radial popup menus.
- FIGS. 6 A- 6 D are images of a hierarchy of radial popup menus (referred to as “RadPop” menus).
- FIG. 6A shows an embodiment of an uppermost hierarchical level of a RadPop menu 602 .
- the central menu option, labeled “Home” 606 , is located at the position that the cursor occupies when the right mouse button is depressed, activating the RadPop menu system.
- Activation of the centrally located menu option “Home” 606 causes the RadPop menu 602 to close, or to cease being displayed.
- the uppermost hierarchical level menu 602 has four options 604 a , 604 b , 604 c and 604 d located at angular positions 90 degrees apart.
- a first option 604 a is the View option, which, if activated, changes a view of the video display.
- a second option 604 b is the Help option, which, if activated, displays a help screen.
- a third option 604 c is the Insert option, which, if activated, opens a lower level menu of insertion action options.
- a fourth option 604 d is the file option, which, if activated, opens a menu of actions that can be performed on a file.
- the user selects the insert option, 604 c , and the system responds by opening the next lower level of options as another RadPop 620 (see FIG. 6B) whose central menu option overlays the central menu option of the higher hierarchical level RadPop 602 .
- FIG. 6B shows an image of an embodiment of the RadPop 620 .
- RadPop 620 appears at the same position at which RadPop 602 was displayed.
- the central menu option is labeled “Insert” 622 .
- Activation of the centrally located menu option “Insert” 622 causes the Insert RadPop menu 620 to close, or to cease being displayed, and to be replaced by the hierarchically next higher Radpop menu, Home 602 , of FIG. 6A.
- the “Insert” 620 RadPop menu has three options, labeled “Annotation” 624 , “Built-In Components” 626 , and “File” 628 , respectively.
- a user who selects the menu option “Built-In Components” 626 causes the system to move down one additional level in the hierarchy of menus, to FIG. 6C, by closing RadPop 620 and displaying RadPop 630 .
- the menu options “Annotation” 624 , and “File” 628 respectively, when activated can cause an action to be performed or can move the user one level down in the hierarchy, depending on how the system is designed.
- FIG. 6C shows an image of an embodiment of the RadPop 630 .
- RadPop 630 appears at the same position at which RadPop 620 was displayed.
- the central menu option is labeled “Built-In Components” 631 .
- Activation of the centrally located menu option “Built-In Components” 631 causes the Built-In Components RadPop menu 630 to close, or to cease being displayed, and to be replaced by the next higher Radpop menu, “Insert” 620 , of FIG. 6B, which moves the user up one level in the hierarchy.
- the “Built-In Components” 632 RadPop menu has four options, labeled “JavaTM Components” 634 , “MatLabTM Functions” 636 , “ExcelTM sheets” 638 , and “LabViewTM Files” 640 , respectively.
- a user who selects the menu option “JavaTM Components” 634 causes the system to move down one additional level in the hierarchy of menus, to FIG. 6D.
- FIG. 6D shows an image of an embodiment of the RadPop 660 .
- RadPop 660 appears at the same position at which RadPop 630 was displayed.
- the central menu option is labeled “JavaTM Components” 640 .
- Activation of the centrally located menu option “JavaTM Components” 640 causes the JavaTM Components RadPop menu 660 to close, or to cease being displayed, and to be replaced by the next higher Radpop menu, Built-In Components 630 , of FIG. 6C, which moves the user up one level in the hierarchy.
- the “JavaTM Components” RadPop menu 660 has seven radial options, labeled “Email” 642 , “Fireworks” 644 , “FTP” 646 , “Text Input” 648 , “Mouse” 650 , “Text Output” 652 , and “Stock Ticker” 654 , respectively.
- a user can select any one of the seven radial menu options, each of which opens a respective interaction with the user, in which a component is presented for editing and insertion into a design of an application, as an abstracted software article.
- “Stock Ticker” 654 is a component that has the capability to read the statistical information relating to a stock symbol representing a publicly traded stock, mutual fund, bond, government security, option, commodity, or the like, on a regular basis, such as every few seconds.
- “Stock Ticker” 654 reads this information via a connection over the Web or the Internet to a site that records such information, such as a brokerage, or a stock exchange.
- “Stock Ticker” 654 provides this information as an output stream at an output port of an abstraction of a software article, and the output stream can be connected to other abstractions of software articles, such as an abstraction of a display or output software article for viewing the raw data.
- the data stream from the “Stock Ticker” 654 component can be transmitted to one or more software articles, such as an ExcelTM spreadsheet software article that can analyze the data according to some analytical function constructed therein, which analyzed data can then be transmitted to an output software article for display.
- the application example involving transmitting information obtained by a Stock Ticker to an ExcelTM spreadsheet to display is programmed by invoking the Stock Ticker 654 abstraction of a software article, providing a ticker symbol, invoking an ExcelTM spreadsheet, entering in the spreadsheet an analytical function (or invoking a spreadsheet that has already been programmed with a suitable analytical function), and invoking an output software article.
- the software articles appear on the user's computer display as abstractions of software articles previously described.
- the user wires a connection from an output port of the abstraction of the Stock Ticker software article to an input port of the abstraction of the ExcelTM spreadsheet software article, and wires a connection from an output port of the abstraction of the ExcelTM spreadsheet software article to an input port of the abstraction of the display software article.
- the system automatically makes the appropriate connections for data to flow from one software article to the next, as is described in more detail below with regard to FIG. 8.
- the radial menus of FIGS. 6 A- 6 D enable a user to quickly navigate up and down a hierarchy of levels.
- FIG. 7 shows an exemplary hierarchy of menu options 700 in the form of a tree structure 702 .
- the menu of FIG. 6A is the first hierarchical level.
- the tree 702 has as its “root” the node labeled “Home” 704 .
- One level down in the hierarchical tree 702 are four options, “File” 706 , “Help” 708 , “View” 710 , and “Insert” 712 .
- User activation of Insert 712 causes the system to descend an additional level.
- the next lower level has three menu options, “File” 714 , “Annotation” 716 , and “Built-In Components” 718 .
- “Built-In Components” 718 causes the system to descend yet another level.
- the next level has four menu options, “JavaTM Components” 720 , “MatLabTM” functions 722 , “ExcelTM Sheets” 724 , and “LabViewTM Files” 726 .
- When a user triggers the “JavaTM Components” 720 menu option, a still lower hierarchical level is reached, which comprises seven components including “Email” 728 , “Fireworks” 730 , FTP (File Transfer Protocol) 732 , “Text Input” 734 , “Mouse” 736 , “Text Output” 738 , and “Stock Ticker” 740 .
- user selection of one of the seven menu options last enumerated causes a software article to be activated, and the corresponding abstraction of the software article to be visible on the user's computer display, for customization by the user, for example, indicating what file is desired to be moved using the FTP protocol 732 .
- the menu option that connects one hierarchical level with a higher hierarchical level such as the “JavaTM Components” 720 menu option, which connects the two lowest hierarchical levels of the tree 702 , also serves as the central menu option for the next lower level, and causes the system to move up a level if activated by the user.
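The tree structure 702 of FIG. 7 can be modeled directly as a nested mapping, where a node with children opens a lower-level RadPop and a leaf node executes a command. This sketch is illustrative; the patent's implementation is not shown, and the dict-based representation and `descend` helper are assumptions.

```python
# Nested dict mirroring the FIG. 7 hierarchy: a dict value means a
# lower-level RadPop menu exists; None marks a leaf (a command).
MENU_TREE = {
    "Home": {
        "File": None, "Help": None, "View": None,
        "Insert": {
            "File": None, "Annotation": None,
            "Built-In Components": {
                "JavaTM Components": {
                    "Email": None, "Fireworks": None, "FTP": None,
                    "Text Input": None, "Mouse": None,
                    "Text Output": None, "Stock Ticker": None,
                },
                "MatLabTM Functions": None,
                "ExcelTM Sheets": None,
                "LabViewTM Files": None,
            },
        },
    },
}

def descend(tree, path):
    """Follow a sequence of menu selections down the hierarchy; returns
    the submenu dict at that level, or None if the last selection is a
    leaf whose command should be executed."""
    node = tree
    for option in path:
        node = node[option]
    return node
```

Selecting "Stock Ticker" reaches a leaf (a command executes), while selecting "Insert" yields the three-option submenu of FIG. 6B; moving up one level is simply truncating the path by one selection.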
- menus having any desired number of menu options, and any number of desired hierarchical levels can be constructed using the systems and methods described herein.
- FIG. 8 shows one embodiment of a visual programming system that uses software article 802 adapter wrappers 804 to make different articles 802 uniformly accessible to a middle-ware hub 806 software article.
- the hub software article 806 is depicted as having four docking ports, which are shown as concave semicircular ports 806 a - 806 d .
- the hub software article 806 can have a plurality of docking ports, limited in number only by the time delays associated with transmitting information between ports.
- the hub 806 monitors the states of different software articles 802 , and initiates and handles communication between the software articles 802 .
- the software articles 802 and their respective adapters 804 have equal access to system resources.
- the software articles 802 are abstracted as JavaTM code modules, using techniques such as JavaTM Serialization or Extensible Mark-up Language (XMLTM).
- the system has the property of persistence of state, in which a model can be saved (e.g., appear to be shut down) and restored later, appearing to “remember” where it was when it was last saved.
- tags include: <component>, <wire>.
- <component> attributes include: type, id.
- <wire> attributes include: source_component, source_port, target_component, target_port.
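A persisted model using the tags and attributes listed above might be generated as follows. This is a hedged sketch: the enclosing root element, function name, and sample identifiers are assumptions added for well-formedness, not part of the patent's format.

```python
import xml.etree.ElementTree as ET

def save_model(components, wires):
    """Serialize an object model to XML using <component> tags
    (attributes: type, id) and <wire> tags (attributes:
    source_component, source_port, target_component, target_port).
    The <model> root element is an assumption."""
    root = ET.Element("model")
    for comp_type, comp_id in components:
        ET.SubElement(root, "component", type=comp_type, id=comp_id)
    for sc, sp, tc, tp in wires:
        ET.SubElement(root, "wire", source_component=sc, source_port=sp,
                      target_component=tc, target_port=tp)
    return ET.tostring(root, encoding="unicode")
```

Reloading such a document and rebuilding the object model from its `<component>` and `<wire>` elements would restore the application to its saved state, providing the persistence-of-state property described above.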
- the hub 806 maintains an object model (e.g., a JavaTM data structure) of application software articles 802 and software article connections.
- the system correspondingly updates the object model.
- the system represents this in the object model by identifying the desired transfer of data from operator software article A to operator software article B, or the methods or servers to invoke in operator software article B by operator software article A.
- communication between operator software articles 802 may be performed using JavaTM object passing, a remote procedure call (e.g., RMI (Remote Method Invocation)), or a TCP/IP (Transmission Control Protocol/Internet Protocol) based protocol.
- FIG. 8 shows four operator software articles, labeled “Math” 808 , “Image” 810 , “Data” 812 , and “Internet” 814 , respectively.
- each software article accepts input and provides output in a format unique to that software article.
- the Math software article 808 uses formats such as integers and floating point values, m x n arrays of data for matrices and vectors (where m and n are positive integers), series of coefficients and powers for polynomials, series of coefficients for Fourier and digital filters, and the like.
- the Image software article 810 uses one or more of files formatted according to protocols such as the bitmap (.bmp) protocol, the JPEG (.jpg) protocol, or such image file protocols as .tif, .gif, .pcx, .cpt, .psd, .pcd, .wmf, .emf, .wpg, .emx, and .fpx.
- the Data software article 812 uses formats including, but not limited to, a single bit, a byte, an integer, a long integer, a floating point, a double floating point, a character, and the like.
- the Internet software article 814 can use protocols such as TCP/IP, DSL, Component Object Model (COM), and the like. None of the four types of software articles in these exemplary embodiments are capable of communicating directly with any other software article. This incompatibility is denoted graphically by depicting each distinct software article with a terminal that is unique in shape and size, for example, the math software article 808 having an arrow-shaped terminal 808 a , and the Internet software article 814 having a square-shaped terminal 814 a.
- FIG. 8 There are also depicted in FIG. 8 four adapter articles 816 , 818 , 820 , and 822 .
- the adapter article 816 has a triangular terminal at one end, which is the mating triangular shape to that of the triangular terminal of the math software article 808 .
- the opposite end of adapter 816 is a convex semicircular shape, which is designed to mate with any of the plurality of terminals of the hub software article 806 having concave semicircular shape.
- a user can recognize that the adapter article 816 is adapted to communicate bidirectionally with the hub software article 806 on one end and a software article such as the math software article 808 having the appropriate arrow-shaped terminal on the other.
- the adapter 816 is designed to translate between the information flow to and from the hub software article 806 in the native language of the hub software article 806 , and the information flow to and from any software article using the protocols that the Math software article 808 uses in the format or formats that such protocol requires.
- the adapters 818 , 820 and 822 perform similar bidirectional translations between the native language of the hub software article 806 on one end, and the protocols used by a particular kind of software article, such as the Image 810 , Data 812 and Internet 814 software articles, respectively.
- the mating shapes are visual indications to the user as to which adapter functions with which software article. The user need not be aware of, or be troubled by programming the details of the translations that are required. These details are preprogrammed and encompassed in the adapters 816 , 818 , 820 , and 822 .
- the hub software article 806 is assembled with the Math software article 808 , the Image software article 810 , the Data software article 812 , and the Internet software article 814 , using the adapters 816 , 818 , 820 , and 822 , respectively.
- any software article can communicate with any other software article via the common language of the hub software article, the necessary translations being performed by the adapters 816 , 818 , 820 , and 822 .
- the utility of the illustrative system is clear upon considering the following mathematical analysis.
- the system requires at most N adapters such as 816 , 818 , 820 and 822 for N software articles 808 , 810 , 812 , and 814 to communicate, where N is an integer greater than or equal to 2.
- a system that attempted to connect different software articles using adapters that had dissimilar ends to connect two specific types of software articles would require N*(N−1)/2 different types of adapters (e.g., the number of adapters required to connect two things selected from a set of N things, or equivalently, the number of combinations of N things taken two at a time).
- Because the hub software article 806 can interconnect many software articles simultaneously, the advantage is in fact even greater, because the number of connectors necessary to interconnect three software articles simultaneously (e.g., allowing software articles A, B, and C to communicate pairwise) is given by the number N*(N−1)*(N−2)/6.
- the illustrative system of the invention requires only N adapters to connect N software articles in a manner in which any software article can communicate with any other software article where such communication is required.
- a method in which a hub is not employed requires of the order of N² adapters for communication between two software articles, and of the order of N³ adapters for communication between three software articles.
- the prospect of providing N² or N³ adapters, as compared to N adapters, when N is even moderately large (N>5), is daunting.
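The adapter counts above are binomial coefficients: dedicated pairwise adapters number C(N, 2) = N*(N−1)/2 and three-way connectors number C(N, 3) = N*(N−1)*(N−2)/6, while the hub approach needs only N adapters. A short arithmetic check (function names are illustrative):

```python
from math import comb

def adapters_without_hub(n, group=2):
    """Distinct adapter types needed when every group of `group`
    software articles requires its own dedicated adapter: C(n, group)."""
    return comb(n, group)

def adapters_with_hub(n):
    """With a common hub, one adapter per software article suffices."""
    return n
```

For N = 10 software articles, the hub-less approach needs 45 pairwise adapters (or 120 three-way connectors), while the hub needs only 10, illustrating why the N-adapter design scales.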
- a connection may specify process flow or timing relative to the activities of software article A to software article B.
- communication between software articles may occur manually when a user wants to control data transmission, (e.g., when debugging an application). Communication may also occur automatically.
- the system may initiate communication when the state of an output port changes.
- Software article wrappers such as 816 , 818 , 820 , and 822 may be manually programmed or automatically generated.
- many commercial programs such as MatLabTM, provide “header” files describing input and output parameters for different procedures. These header files can be parsed to identify the different input and output ports to use and the type of data that is to be transmitted or received (e.g., bit, byte, integer, long integer, floating point, double floating point, character, and the like).
- an entity creating the wrapper can elect to “hide” different parameters or functions from view. Though this may provide visual programmers with a subset of possible functions, this can also eliminate seldom used features from graphical representation, thereby, “uncluttering” the visual programming environment.
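A wrapper generator of the kind described above might begin by extracting input and output parameters from C-style prototypes in a header file. The following is a hedged, simplified sketch: the regular expression, function names, and returned structure are assumptions for illustration, not the patent's actual mechanism, and a production parser would handle far more of the language.

```python
import re

# Matches prototypes like "double sin_wave(double freq, double phase);"
PROTO = re.compile(r"(\w+)\s+(\w+)\s*\(([^)]*)\)\s*;")

def parse_header(text):
    """Return {function: {"output": return_type,
                          "inputs": [(type, name), ...]}}
    by scanning simple C-style prototypes in a header file."""
    ports = {}
    for ret, name, args in PROTO.findall(text):
        inputs = []
        for arg in filter(None, (a.strip() for a in args.split(","))):
            if arg != "void":
                a_type, _, a_name = arg.rpartition(" ")
                inputs.append((a_type.strip(), a_name))
        ports[name] = {"output": ret, "inputs": inputs}
    return ports
```

The parsed parameter lists would then map to input and output ports of the generated abstraction, and the entity creating the wrapper could simply omit entries for parameters it elects to "hide."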
- a variety of different platforms can provide the visual programming system features described above. These platforms include televisions; personal computers; laptop computers; wearable computers; personal digital assistants; wireless telephones; kiosks; key chain displays; watch displays; touch screens; aircraft, watercraft, and/or automotive displays; video game displays; and the like.
- FIG. 9 is a diagram 900 of an embodiment of a computer network upon which the invention can be practiced.
- a plurality of computers 904 , 906 , 908 , 910 which are depicted as personal computers and a laptop computer.
- Other kinds of computers, such as workstations, servers, minicomputers, and the like can also be used as part of the network.
- the computers 904 , 906 , 908 , 910 are interconnected by a network 902 , which can be a local network, a wide area network, or a world-wide network, such as the Web.
- Each computer preferably has a display 920 , 920 ′, 920 ′′, 920 ′′′ and/or other output devices to display information to a user.
- Each computer preferably has one or more input devices, such as a keyboard 925 , 925 ′, 925 ′′, 925 ′′′ or a mouse 927 , 927 ′, 927 ′′, a trackball, or the like.
- each computer has a recording device, such as a floppy disk drive 930 , 930 ′, 930 ′′, 930 ′′′, a hard disk drive, memory, and/or a CD-ROM, for recording information, instructions and the like as necessary.
- each computer has a communication device, such as a modem, a serial port, a parallel port and/or other communication devices for communicating with other computers on the network.
- the techniques described here are not limited to any particular hardware or software configuration.
- the techniques are applicable in any computing or processing environment.
- the techniques can be implemented in hardware, software, or firmware or a combination of the three.
- the techniques are implemented in computer programs executing on one or more programmable computers that each include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices.
- Program code is applied to data entered using the input device to perform the functions described and to generate output information.
- the output information is applied to one or more output devices.
- Each program is preferably implemented in a high level procedural or object oriented programming language to communicate with a computer system.
- the programs can be implemented in assembly or machine language, if desired.
- the language may be a compiled or interpreted language.
- Each such computer program is preferably stored on a storage medium or device (e.g., CD-ROM, hard disk or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer to perform the procedures described in this document.
- the system can be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner.
- FIG. 10 is a schematic diagram depicting an exemplary viewing system.
- the viewing system 1000 includes an extractor module 1002 , a stylizer module 1004 , a protocolizer 1006 , user controls 1007 , and a display 1008 , which presents data objects to the user in a virtual three dimensional space 1010 .
- the data source or sources 1012 may be external to the viewing system 1000 , or in some embodiments may be internal to the viewing system 1000 .
- the extractor 1002 , stylizer 1004 and protocolizer 1006 operate in conjunction to organize data objects from the data source 1012 and to locate for display those data objects in the virtual three-dimensional space 1010 .
- the virtual three-dimensional space 1010 is depicted to the user as the display space 103 of FIG. 1.
- Exemplary displayed data objects are shown at 1014 a - 1014 h .
- the data objects 1014 a - 1014 h can be.
- the adjustable user viewing perspective is represented by the position of a camera 1016 .
- the user manipulates the controls 1007 to change the viewing perspective, and thus the position of the camera 1016 .
- the user can travel throughout the virtual space 1010 , and view, search through, and interact with, the data objects 1014 a - 1014 g .
- the illustrative viewing system 1000 enables the user to change the viewing perspective of the camera 1016 in an unrestricted fashion to provide the user with the feeling of traveling anywhere within the virtual space 1010 .
- the virtual space 1010 is modeled as a Cartesian, three-dimensional coordinate system. However, other embodiments may include more dimensions.
- the viewing system 1000 may employ other three-dimensional coordinate systems, such as cylindrical and spherical coordinate systems. Further, as discussed below, such as with respect to FIG. 11, the data objects 1014a-1014h may be organized in the virtual space 1010 in a variety of manners.
- the camera 1016 does not rotate, but moves freely along any of the three axes (i.e., i, j, k). By disabling rotation, it becomes easier for the user to remain oriented, and simpler to display the data objects 1014 a - 1014 g . Disabling rotation also reduces the necessary computations and required display information details, which reduces data transfer bandwidths, processor and/or memory performance requirements. In other embodiments the camera 1016 can move rotationally.
- the viewing system 1000 changes the appearance of the data objects 1014a-1014g accordingly. For example, as the user moves the camera 1016 closer to a data object (e.g., 1014a), the viewing system 1000 expands the appearance of the displayed image of the data object (e.g., 1014a). Similarly, as the user moves the camera 1016 farther away from a data object (e.g., 1014a), the viewing system 1000 contracts the image of the data object 1014a. Also, the viewing system 1000 displays the data object closest to the camera 1016 as the largest data object and with the most detail.
- the viewing system 1000 displays data objects that are relatively farther away from the camera 1016 as smaller and with less detail, with size and detail being a function of the virtual distance from the camera 1016. In this way, the viewing system 1000 provides the user with an impression of depth of field.
- the viewing system 1000 calculates the virtual distance from the camera to each data object using conventional mathematical approaches.
- the viewing system 1000 defines a smallest threshold virtual distance; any data object closer than this threshold is treated as being virtually behind the position of the camera 1016. The viewing system 1000 removes from view those data objects determined to be virtually behind the camera 1016.
- data objects can be hidden from view by other data objects determined to be virtually closer to the camera 1016 .
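The distance-based scaling and behind-camera culling described above can be sketched in a few lines. This is a hypothetical Python illustration; the inverse-distance scaling law, the threshold value, and the function names are assumptions, since the patent specifies only “conventional mathematical approaches”:

```python
import math

def virtual_distance(camera, obj):
    # Euclidean distance in the three-axis (i, j, k) coordinate system.
    return math.sqrt(sum((c - o) ** 2 for c, o in zip(camera, obj)))

def apparent_size(base_size, distance, behind_threshold=1.0):
    # Data objects closer than the threshold are treated as virtually
    # behind the camera 1016 and removed from view (size 0); otherwise
    # displayed size shrinks as virtual distance grows.
    if distance < behind_threshold:
        return 0.0
    return base_size / distance
```

Under this sketch, halving the virtual distance to a data object doubles its displayed size, consistent with the expand/contract behavior described above.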
- FIG. 11 provides a diagram that illustrates one way the viewing system 1000 can conceptually organize data objects, such as the data objects 1014a-1014h depicted in FIG. 10.
- the viewing system 1000 conceptually organizes the data objects 1102 a - 1102 e on virtual plates 1104 a - 1104 c in the virtual space 1010 .
- the virtual space 1010 is modeled as a three axis (i.e., i, j, k) coordinate system.
- the position 1106 of a virtual camera 1016 represents the user's viewing perspective.
- the camera 1016 is fixed rotationally and free to move translationally.
- the data objects 1102 a - 1102 e are organized on the virtual plates 1104 a - 1104 c in a hierarchical fashion.
- the data objects represent items of clothing and the template employed relates to a clothing store.
- data objects can also represent, for example, various software abstractions, and templates relating to the display of those software abstractions can also be employed. As the user views information in the virtual space, as indicated by position “a” 1106a of the camera 1016, the viewing system 1000 illustratively presents an icon or graphical representation for “women's clothes” (data object 1102a).
- the viewing system 1000 presents the user increasing detail with regard to specific items sold at the store.
- the system 100 displays less of the information contained on the particular plate to the user, but displays that portion within view of the user in greater detail.
- the system 100 displays more of the information contained on the plate, but with less detail.
- each plate 1104 a - 1104 c has a coordinate along the k-axis, and as the user's virtual position, represented by the position 1106 of the camera 1016 , moves past the k-axis coordinate for a particular plate, the viewing system 1000 determines that the particular plate is located virtually behind the user, and removes the data objects on that plate from the user's view.
- the closest plate for example the plate 1104 a
- the viewing system 1000 “removes the lid” (i.e. plate 1104 a ) to reveal the underlying plates 1104 b and 1104 c .
- the closest plate may contain the continent of Europe.
- the viewing system 1000 may display to the user a plurality of European countries organized on a plurality of smaller plates.
- the viewing system 1000 may display a plurality of European countries organized on a single plate.
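The “removing the lid” behavior described above can be sketched as a simple cull against each plate's k-axis coordinate. This is a hypothetical Python fragment; the data layout (plates as name/coordinate pairs) is an assumption:

```python
def visible_plates(plates, camera_k):
    # plates: (name, k_coordinate) pairs. A plate whose k coordinate the
    # camera has already moved past is virtually behind the user, so the
    # system "removes the lid" to reveal the underlying plates.
    return [name for name, k in plates if k > camera_k]
```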
- FIGS. 12 A- 12 C illustrate such a conceptualization. More particularly, FIG. 12A depicts a node tree 1200 that defines hierarchical relationships between the data nodes 1202 a - 1202 h .
- FIG. 12B depicts a tree structure 1204 that provides potential visual display representations 1206 a - 1206 h for each of the data objects 1202 a - 1202 h .
- FIG. 12C provides a tree structure 1208 illustrative of how the user may navigate a displayed virtual representation 1206a-1206h of the data objects 1202a-1202h.
- the nodes of the node tree are representative of data objects and/or the appearance of those data objects.
- FIG. 12C also illustrates one method by which the viewing system 1000 enables the user to navigate the data objects 1202 a - 1202 h in an unrestricted manner.
- the viewing system 1000 enables the user to virtually pan across any data object on a common hierarchical level.
- the user may virtually navigate into a clothing store, graphically represented by the graphic appearance 1206 a and then navigate to women's clothing represented by the graphic appearance 1206 b .
- the viewing system 1000, based on a template related to a clothing store, has hierarchically organized men's clothing, represented by the graphic appearance 1206c, to be at an equivalent hierarchical location to women's clothing 1206b.
- the viewing system 1000 enables the user to pan visually from the women's clothing graphic appearance 1206 b to the men's clothing graphic appearance 1206 c , via the controls 1007 , to view men's clothing.
- FIG. 12C also illustrates how the viewing system 1000 enables the user to virtually travel through hierarchical levels.
- the user can virtually navigate from any data object, such as the object 1202 a , assigned to a parent node in the tree structures 1200 , 1204 and 1208 , to any data object, such as objects 1202 b and 1202 c assigned to a child node in those tree structures.
- the viewing system 1000 also enables the user to navigate visually, for example, from a hierarchically superior data object, such as the object 1202a, through intermediate data objects, such as the data object 1202c along the paths 1212b and 1212d, to a hierarchically inferior data object, such as the data object 1202e.
- the motion displayed to the user is seemingly continuous, so that while virtually traveling through for example, the data object 1202 c , the viewing system 1000 displays the graphic appearance 1206 c as being larger with more detail and then, as disappearing from view as it moves to a virtual position behind the user's viewing position.
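The parent-to-child traversal described above can be sketched with a small node structure mirroring the tree of FIG. 12A. This is a hypothetical Python illustration; the class and function names are assumptions:

```python
class DataNode:
    # A data object in the hierarchy, mirroring the node tree of FIG. 12A.
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)

def path_down(start, target):
    # Chain of nodes travelled when navigating from an ancestor to a
    # descendant, e.g. 1202a -> 1202c -> 1202e via links 1212b and 1212d.
    chain, node = [], target
    while node is not None:
        chain.append(node.name)
        if node is start:
            return list(reversed(chain))
        node = node.parent
    return []  # target does not lie below start
```

The returned chain is the sequence of graphic appearances the display would pass through during the seemingly continuous motion.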
- FIG. 12C also illustrates how the viewing system 1000 enables the user to navigate between data objects, without regard for hierarchical connections between the data objects 1202a-1202h. More particularly, as indicated by the illustrative paths 1214a and 1214b, the user can navigate directly between the data object 1202a and the data object 1202g. As described in detail below with respect to FIGS. 16-18, the viewing system 1000 provides such unrestricted navigation using a variety of methods, including by use of “wormholing,” “warping,” and search terms.
- In the node tree model of FIGS. 12A-12C, the viewing system 1000 displays a graphical representation of data objects to the user in a similar fashion to the coordinate-based system of FIGS. 10 and 11. More specifically, data objects located at nodes that are hierarchically closer to the user's virtual viewing position are displayed as being larger and with more detail than data objects located at nodes that are hierarchically farther away from the user's virtual viewing position.
- the viewing system 1000 displays the graphic appearance 1206 a to the user with greater detail and at a larger size than, for example, the graphic appearances 1206 b - 1206 h .
- the viewing system 1000 displays the graphic appearances 1206 b and 1206 c to the user with greater detail and at a larger size than it displays the graphic appearances 1206 d - 1206 h .
- the viewing system 1000 employs a variety of methods for determining virtual distance for the purpose of providing a display to the user that is comparable to a physical paradigm, such as for example, the clothing store of FIGS. 12 A- 12 C.
- the viewing system 1000 determines the user's virtual viewing position, indicated at 1216a. Then, the viewing system 1000 determines which data object 1202a-1202h is closest to the user's virtual position and defines a plurality of equidistant concentric radii 1218a-1218c extending from the closest data object, 1202c in the example of FIG. 12A. Since the data node 1202c is closest to the user's virtual position, the viewing system 1000 displays the data object 1202c with the most prominence (e.g., largest and most detailed).
- the data objects 1202a, 1202b, 1202d and 1202e, which are located equidistant from the data node 1202c, are displayed similarly with respect to each other, but smaller and with less detail than the closest data node 1202c.
- the virtual distance calculation between nodes is also based on the hierarchical level of the data node that is closest to the user's virtual position.
- the nodes on the same hierarchical level are displayed as being the same size and with the same detail. Those nodes that are organized, hierarchically lower than the node closest to the user are displayed smaller and with less detail. Even though some nodes may be an equal radial distance with respect to the closest node, they may yet be assigned a greater virtual distance based on their hierarchical position in the tree 1200 .
- the viewing system 1000 includes the number of hierarchical links 1212 a - 1212 g between nodes in the virtual distance calculation.
- for example, the radial distance from the node 1206e (e.g., pants) is the same for the nodes 1206d (e.g., shirts), 1206g (e.g., type of shirt), and 1206h (e.g., type of pants). However, the calculated virtual distance to the node 1206h is less than the calculated virtual distance to the nodes 1206d and 1206g, since the node 1206h (e.g., type of pants) is only one link 1212g from the node 1206e (e.g., pants).
- Nodes separated by a single hierarchical link such as the nodes 1206 e and 1206 h , are said to be directly related.
- the user is still able to freely travel to the less related nodes 1206 d and 1206 g in a straight line, so they are displayed.
- the viewing system 1000 displays those nodes as being smaller and with less detail.
- the user is more likely to want to know about a type of pants 1206 h when at the pants node 1206 e than a type of shirt 1206 g.
- the viewing system 1000 gives equal weight to the direct relationship basis and the same hierarchical level basis in the virtual distance calculation. With this method, the viewing system 1000 considers the nodes 1206 d and 1206 h to be an equal virtual distance from the node 1206 e , and the node 1206 g to be farther away from the node 1206 e . Other embodiments may weight variables such as directness of relationship and hierarchical level differently when calculating virtual distance. Again, discussing in terms of the physical paradigm, the user may be equally interested in shirts 1206 d or a type of pants 1206 h when at the pants node 1206 e .
- the viewing system 1000 assumes that the user is less likely to want to know about a type of shirt 1206 g and thus, the viewing system 1000 sets the virtual distance greater for that node 1206 g than the other two nodes 1206 d , 1206 h , even though the radial distance is equal for all three nodes 1206 d , 1206 g , 1206 h.
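The equal-weight virtual distance calculation described above can be sketched by combining the hierarchical link count with the difference in hierarchical level. This is a hypothetical Python illustration; the weighting scheme and function names are assumptions, chosen so that the sketch reproduces the stated example (1206d and 1206h equidistant from 1206e, with 1206g farther away):

```python
from collections import deque

def link_distance(adj, a, b):
    # Number of hierarchical links 1212a-1212g separating two nodes,
    # found by breadth-first search over the undirected link structure.
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def virtual_node_distance(adj, levels, closest, other, w_links=1.0, w_level=1.0):
    # Equal weighting of directness (link count) and hierarchical level
    # difference; other embodiments may weight these variables differently.
    return (w_links * link_distance(adj, closest, other)
            + w_level * abs(levels[other] - levels[closest]))
```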
- the viewing system 1000 conceptually drapes a grouping sheet over the hierarchically lower data nodes to form a category of data nodes.
- the viewing system 1000 then conceptually drapes larger grouping sheets over the first grouping sheets, thus grouping the data objects into greater categories.
- Such groupings are also evident in the hierarchical node structures of FIGS. 12 A- 12 C.
- FIG. 13 depicts a block diagram 1300 illustrating the use of multiple templates in combination.
- four templates 1303 , 1305 , 1307 and 1309 represent four different transportation services: car rentals 1302 , buses 1304 , taxis 1306 , and subways 1308 .
- the bus 1304 and subway 1308 templates contain map and schedule information, and fares are based on the number of stops between which a rider travels.
- the taxi template 1306 has fare information based on mileage and can contain map information for calculating mileage and/or fares.
- the car rental template 1302 contains model/size information for various vehicles available for rent, and fares are based on time/duration of rental.
- each template 1302 - 1308 is organized in accord with the invention to provide an intuitive virtual experience to the user navigating the information.
- the templates 1302 - 1308 can themselves be hierarchically organized (i.e., a top-level hierarchical relationship) through the use of the super templates 1310 - 1314 .
- the viewing system 1000 organizes the templates 1302 - 1308 using a menu super template 1310 .
- the menu super template 1310 relates the templates 1302 - 1308 on a common hierarchical level, showing that all four transportation services 1302 - 1308 are available.
- the super template 1310 organizes the templates 1302 - 1308 alphabetically.
- the viewing system 1000 organizes the templates 1302 - 1308 using a map super template 1312 .
- the map super template 1312 relates to a geographical location physical paradigm.
- the map super template 1312 relates the four templates 1302 - 1308 in accordance with the geographical relationship between the represented transportation services (i.e. car rental, bus, taxi and subway).
- the map super template 1312 can be used, for example, when the user wants to know which transportation services are available at a particular geographical location. For example, the user may be trying to decide into which airport to fly in a certain state 1316 , and wants to locate information about transportation services available at the different airports within the state.
- the viewing system 1000 organizes the templates 1304 - 1308 using a street super template 1314 .
- the street super template 1314 relates to a street layout physical paradigm.
- the street super template 1314 spatially relates the templates 1304 - 1308 to each other in terms of their street location.
- the super template 1314 can be used, for example, when the user has a street address and wants to know which transportation services are available nearby.
- the user can begin with the map super template 1312 to find a general location and then pan and zoom to the street level using the street super template 1314.
- the viewing system 1000 may additionally employ irregular display shapes for advanced visual recognition.
- the graphic appearance associated with each data node can be defined to have a unique shape such as star, pentagon, square, triangle, or the like.
- display area availability is at a premium, thus rendering it impractical to employ irregular shapes.
- the panning and zooming features of the viewing system 1000 render display space essentially infinite.
- the display of virtually any client can be configured in favor of readability and an overall user experience.
- An aspect of the illustrative viewing system 1000 provides the user with the sense of real-time control of the displayed data objects.
- Rather than a stop-and-go display/interactive experience, the viewing system 1000 provides an information flow, a revealing and folding away of information, as the user requires it. Accordingly, the state of the viewing system 1000 is a function of time. The user adjusts the virtual viewing position over time to go from one data object to the next. Therefore, a command for the virtual viewing position of the user, represented in FIGS. 10 and 11 by the position of the camera 1016, is of the form f(x, y, z), where (x, y, z) is a function of time, f(t). The appearance of data objects that the viewing system 1000 displays to the user is a function of time as well as position.
- the viewing system 1000 changes the appearance of a graphical representation of the data objects in a smooth, continuous, physics-based motion.
- the motion between viewing perspective positions is performed smoothly.
- the viewing system 1000 avoids generating discrete movements between locations. This helps ensure that the user experiences smooth, organic transitions of data object graphical appearances and maintains context of the relationship between proximal data objects in the virtual space, and between the displayed data objects and a particular physical paradigm being mimicked by the viewing system 1000 .
- the viewing system 1000 applies a sine transformation to determine the appropriate display.
- the discrete transition is changed to a smoother, rounded transition.
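The sine transformation described above can be sketched as an easing function applied to the viewing perspective. This is a hypothetical Python illustration; the exact transformation the patent applies is not specified, so the half-cosine form and function name are assumptions:

```python
import math

def sine_ease(start, end, t):
    # Map linear progress t in [0, 1] onto a sine curve so the viewing
    # perspective starts and stops smoothly rather than jumping discretely;
    # the slope is zero at both endpoints.
    s = (1 - math.cos(math.pi * t)) / 2
    return start + (end - start) * s
```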
- One way to model the motion for adjustments of the user viewing perspective is to analogize the user to a driver of a car.
- the car and driver have mass, so that any changes in motion are continuous, as the laws of physics dictate.
- the car can be accelerated with a gas pedal or decelerated with brakes. Shock absorbers keep the ride smooth.
- the user controls 107 of system 100 are analogously equipped with these parts of the car, such as a virtual mass, virtual shocks, virtual pedals and a virtual steering wheel.
- the user's actions can be analogized to the driving of the car.
- the illustrative viewing system 1000 models adjusting of the user viewing perspective as movement of the camera 1016 .
- the system assigns a mass, a position, a velocity and an acceleration to the camera 1016 .
- the viewing system 1000 models the user's virtual position logarithmically, that is, for every virtual step the user takes closer to a data object (e.g., zooms in), the viewing system 1000 displays to the user a power more detail of that data object. Similarly, for every virtual step the user takes farther away from a data object (e.g., zooms out), the viewing system 1000 displays to the user a power less detail for that data object.
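The mass-and-motion camera model and the logarithmic detail rule described above can be sketched as follows. This is a hypothetical Python illustration; the drag term standing in for the “virtual shocks,” the time-step integration, and the base of 2 for “a power more detail” are all assumptions:

```python
class Camera:
    # Viewing perspective with virtual mass: an applied force produces
    # continuous acceleration and deceleration, per the car-driver analogy.
    def __init__(self, mass=1.0):
        self.mass, self.pos, self.vel = mass, 0.0, 0.0

    def step(self, force, dt, drag=0.5):
        # drag plays the role of the virtual shock absorbers
        acc = (force - drag * self.vel) / self.mass
        self.vel += acc * dt
        self.pos += self.vel * dt

def detail_level(zoom_steps, base=2.0):
    # Logarithmic model: each virtual step closer (zoom in) shows a power
    # more detail; each step away (zoom out) shows a power less.
    return base ** zoom_steps
```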
- FIG. 14 provides a simplified flow diagram 1400 depicting operation of the viewing system 1000 when determining how much detail of a particular data object to render for the user.
- This decision process can be performed by a client, such as the client 1014 depicted in FIG. 19 or by the stylizer module 1004 .
- the viewing system 1000 determines the virtual velocity of the change in the user's virtual position, and employs the virtual velocity as a factor in determining how much detail to render for the data objects.
- the viewing system 1000 also considers the display area available on the client to render the appearance of the data objects (e.g., screen size of client 1014 ).
- in response to determining that the virtual velocity is above one or more threshold levels, the viewing system 1000 renders successively less detail. Similarly, as also indicated by steps 1402 and 1404, the viewing system 1000 also renders less detail as the available display area at the client 1014 decreases. Alternatively, as indicated by steps 1402 and 1406, as the virtual velocity decreases and/or as the available display area at the client 1014 increases, the viewing system 1000 renders more detail. Thus, the viewing system 1000 makes efficient use of display area and avoids wasting time rendering unnecessary details for fast-moving data objects that appear to pass by the user quickly.
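The threshold-based detail decision of FIG. 14 can be sketched as a small selection function. This is a hypothetical Python illustration; the threshold values and the three rendering tiers (which anticipate the “full,” “line art,” and box renderings of FIG. 15) are assumptions:

```python
def render_detail(virtual_velocity, display_area,
                  vel_thresholds=(2.0, 5.0),
                  area_thresholds=(100_000, 20_000)):
    # Detail drops one step for each velocity threshold exceeded and for
    # each display-area threshold undercut (threshold values assumed).
    level = sum(virtual_velocity > t for t in vel_thresholds)
    level += sum(display_area < t for t in area_thresholds)
    return ("full", "line art", "box", "box", "box")[level]
```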
- FIG. 15 illustrates various potential appearances 1502 a - 1502 c for a textual data object 1502 , along with various potential appearances 1504 a - 1504 c for an image data object 1504 .
- the axis 1506 indicates that as virtual velocity increases and/or as client display area decreases, the viewing system 1000 decreases the amount of detail in the appearance. At the “full” end of the axis 1506, the virtual velocity is the slowest and/or the client display area is the largest, and thus, the viewing system 1000 renders the textual data object 1502 and the image data object 1504 with relatively more detail, shown at 1502a and 1504a.
- At the opposite end of the axis 1506, the velocity is the greatest and/or the client display area is the smallest, and thus the viewing system 1000 renders the appearance of the same data objects 1502 and 1504 with no detail, 1502c and 1504c, respectively. Instead, the viewing system 1000 renders the data objects 1502 and 1504 simply as boxes to represent to the user that a data object does exist at that point in the virtual space 1010, even though, because of velocity or display area, the user cannot see the details.
- the middle of the axis 1506 is the “line art” portion.
- the viewing system 1000 renders the data objects 1502 and 1504 as line drawings, such as that depicted at 1502 b and 1504 b , respectively.
- the viewing system 1000 transmits and stores images in two formats.
- the two formats are raster graphic appearances and vector graphic appearances.
- the trade-off between the two is that raster graphic appearances provide more detail while vector graphic appearances require less information.
- raster graphic appearances are used to define the appearance of data objects. Raster graphic appearances define graphic appearances bit by bit. Since every bit is definable, raster graphic appearances enable the viewing system 1000 to display increased detail for each data object. However, since every bit is definable, a large amount of information is needed to define data objects that are rendered in a large client display area.
- the raster graphic appearances, which require large data words even when compressed, are omitted; instead, the viewing system 1000 employs vector graphic appearances and text, which require smaller data words than raster graphic appearances, to define the appearance of data objects.
- Vector graphic appearances define the appearance of data objects as coordinates of lines and shapes, using x, y coordinates.
- a rectangle can be defined with four x, y coordinates, instead of the x times y bits necessary to define the rectangle in a raster format.
- the raster graphic appearance of the country of England which is in .gif form, highly compressed, is over three thousand bytes.
- the equivalent vector version is roughly seventy x, y points, where each x, y double is eight bytes for a total of five hundred sixty bytes.
- a delivery of text and vector images creates a real-time experience for users, even on a 14.4 kilobit per second modem connection.
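The byte accounting behind the England example above can be sketched directly. This is a hypothetical Python illustration; the function names are assumptions, and the 14.4 kbit/s figure follows the modem example in the text:

```python
def vector_bytes(n_points, bytes_per_double=8):
    # Each (x, y) double is eight bytes, so roughly seventy points for the
    # outline of England total five hundred sixty bytes.
    return n_points * bytes_per_double

def transfer_seconds(n_bytes, kilobits_per_second=14.4):
    # Time to deliver a payload of n_bytes over a modem connection.
    return n_bytes * 8 / (kilobits_per_second * 1000)
```

At 14.4 kbit/s, the 560-byte vector version transfers in well under half a second, whereas the 3,000-plus-byte compressed raster version takes several times longer.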
- FIG. 16 illustrates various embodiments of visual indicators employed by the viewing system 1000 .
- In addition to displaying data objects to the user, the viewing system 1000 also displays visual indicators to provide the user an indication of the hierarchical path the user has virtually navigated through the virtual space 1010. This is sometimes referred to as a breadcrumb trail.
- the viewing system 1000 provides a text breadcrumb bar 1602 .
- the illustrative text breadcrumb bar 1602 is a line of text that concatenates each hierarchical level visited by the user. For example, referring back to FIGS. 12A-12C, the graphic appearance 1206a is the “home” level, the graphic appearance 1206c is level 1, the graphic appearance 1206e is level 2, and the graphic appearance 1206h is the “leaves” level.
- the associated text breadcrumb trail is thus, “store.mensclothing.menspants.” This represents the selections (e.g., plates, data nodes) that the user virtually traveled through (e.g., by way of zooming and panning) to arrive at the “leaves” level display.
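The text breadcrumb trail above is a simple concatenation of the visited levels. A minimal sketch (the function name is an assumption):

```python
def text_breadcrumb(levels):
    # Concatenate each hierarchical level visited into the line of text
    # shown in the breadcrumb bar 1602.
    return ".".join(levels)
```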
- the viewing system 1000 provides a text and image breadcrumb bar 1604 .
- the text and image breadcrumb trail 1604 is a concatenation of each hierarchical level through which the user virtually travels.
- the trail 1604 also includes thumbnail images 1604 a - 1604 c to give the user a further visual indication of the contents of each hierarchical level.
- the viewing system 1000 provides a trail of nested screens 1606 . Each nested screen 1606 a - 1606 c corresponds to a hierarchical level navigated through by the user.
- the viewing system 1000 provides a series of boxes 1608 in a portion of the display.
- Each box 1608 a - 1608 c represents a hierarchical level that the user has navigated through and can include, for example, mini screen shots (e.g., vector condensation), text and/or icons. According to a further feature, selecting any particular hierarchical level on a breadcrumb trail causes the user to automatically virtually navigate to the selected hierarchical level.
- the viewing system 1000 enables the user to preview data objects without having to zoom to them.
- in response to the user moving a cursor over a region of the display, the viewing system 1000 reveals more detail about the data object(s) over which the cursor resides.
- in response to the user placing the cursor in a particular location, the viewing system 1000 displays data objects on one or more plates behind the plate in current view.
- fisheye refers to a region, illustratively circular, in the display that acts conceptually as a magnified zoom lens.
- the viewing system 1000 expands and shows more detail of the appearance of the data objects within the fisheye region.
- these concepts are used in combination with a breadcrumb trail. For example, in response to the user locating the cursor or moving the “fisheye” on a particular hierarchical level of a breadcrumb trail, the viewing system 1000 displays the contents of that particular hierarchical level. According to one embodiment, the viewing system 1000 displays such contents via a text information bar.
- these functions enable the user to preview a data object on a different hierarchical level, without actually having to change the viewing perspective to that level, and to make enhanced usage of the breadcrumb trails illustrated in FIG. 16.
- FIG. 17 provides a conceptual diagram 1700 illustrating two methods by which the user can virtually navigate to any available data object, or hierarchical level.
- the two illustrative methods are “warping” and search terms.
- An exemplary use of search terms and warping is as follows. Referring also back to FIGS. 12A-12C, from the home graphic appearance 1206a, the user can input a search term, such as “menspants” 1702. In response, the viewing system 1000 automatically changes the user's virtual location (and thus, hierarchical level), and displays the graphic appearance 1206e, whereby the user can zoom in to reveal the available products 1206h.
- the virtual motion of the viewing perspective is a seemingly continuous motion from a starting hierarchical level 1704 a at the data object graphic appearance 1206 a to the hierarchical level 1704 c of the data object graphic appearance 1206 e corresponding to the entered search term 1704 b .
- the viewing system 1000 also renders the data objects that are virtually and/or hierarchically proximate to the intermediate data object 1206c. This provides the user with an experience comparable to traveling through the virtual, multi-dimensional space 1010 in which the data objects are located. However, very little detail is used, as the velocity of the automatic change of location of the viewing perspective is very fast.
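The search-term warp described above amounts to locating the matching node in the hierarchy and traversing the chain of levels between the starting point and the match. A hypothetical Python sketch (the tree encoding and function name are assumptions):

```python
def find_warp_path(tree, root, term, path=None):
    # Depth-first search for the node matching the search term; the
    # returned chain of levels is the path the seemingly continuous
    # warp animation travels through.
    path = (path or []) + [root]
    if root == term:
        return path
    for child in tree.get(root, []):
        found = find_warp_path(tree, child, term, path)
        if found:
            return found
    return None
```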
- the viewing system 1000 enables the user to warp from one data object to another through the use of a visual “wormhole.”
- FIG. 18 illustrates the use of a wormhole 1806 within the graphic appearance 1808 .
- within the graphic appearance 1808 , two data objects are identified: the document 1810 and a reduced version 1812 a of a document 1812 .
- the document 1808 is located virtually far away from the document 1812 .
- the template 1005 provides a connection (e.g., a hyperlink) between the two documents 1810 and 1812 .
- the viewing system 1000 creates a wormhole 1806 .
- the viewing system 1000 displays the reduced version 1812 a (e.g., thumbnail) of the data object graphic appearance associated with the document 1812 within document 1808 to indicate to the user that the wormhole (e.g., hyperlink) exists.
- the viewing system 1000 warps the user to the data object 1812 .
- the viewing system 1000 displays to the user a continuous, virtual motion through all of the existing data objects between the document 1808 and the document 1812 .
- the virtual path is direct and the user does not navigate, the viewing system 1000 automatically changes the user's viewing perspective. Of course, the user is always free to navigate to the document 1812 manually.
- warping is employed to provide the automatic breadcrumb navigation discussed above with respect to FIG. 16.
- FIG. 19 is a schematic view depicting another exemplary implementation of the viewing system 1000 .
- the viewing system 1000 includes an extractor module 1002 , a stylizer module 1004 , a protocolizer module 1006 , one or more templates 1005 , user controls 1007 and a display 1008 .
- FIG. 19 depicts each component 1002 , 1004 , 1005 , 1006 , 1007 and 1008 as an individual component for illustrative clarity. However, the actual physical locations of the components can vary, depending on the software and/or hardware used to implement the viewing system 1000 .
- in one embodiment, the components 1002 , 1004 , 1005 and 1006 reside on a server (not shown) and the components 1007 and 1008 reside on a client 1014 . In another embodiment, for example, all of the components 1002 , 1004 , 1006 , 1007 and 1008 reside on a personal computer.
- the extractor module 1002 is in communication with a data source 1012 (e.g., a database) from which the extractor module 1002 extracts data objects.
- the extractor module 1002 converts, if necessary, the data objects into a W3C standard language format (e.g., the extensible markup language “XMLTM”).
- the extractor module 1002 uses a mapping module 1016 to relate each of the data objects to each of the other data objects.
- the mapping module 1016 is an internal sub-process of the extractor module 1002 .
- the extractor module 1002 is also in communication with the stylizer module 1004 .
- the extractor module 1002 transmits the data objects to the stylizer module 1004 .
- the stylizer module 1004 converts the data objects from their W3C standard language format (e.g., XMLTM) into a virtual space language format (e.g., ZMLTM, SZMLTM, referred to generally as ZMLTM).
- the ZMLTM format enables the user to view the data objects from an adjustable viewing perspective in the multi-dimensional, virtual space 1010 , instead of the two-dimensional viewing perspective of a typical Web page.
- the stylizer module 1004 uses one or more templates 1005 to aid in the conversion.
- the one or more templates 1005 , hereinafter referred to as the template 1005 , include two sub-portions: a spatial layout style portion 1005 a and a contents style portion 1005 b .
- the spatial layout style portion 1005 a relates the data objects in a hierarchical fashion according to a physical paradigm.
- the contents style portion 1005 b defines how the data objects are rendered to the user.
- the stylizer module 1004 is also in communication with the protocolizer module 1006 .
- the stylizer module 1004 transmits the data objects, now in ZMLTM format, to the protocolizer module 1006 .
- the protocolizer module 1006 converts the data objects to established protocols (e.g., WAP, HTML, GIF, Macromedia FLASHTM) to communicate with a plurality of available clients 1014 (e.g., televisions; personal computers; laptop computers; wearable computers; personal digital assistants; wireless telephones; kiosks; key chain displays; watch displays; touch screens; aircraft; watercraft; and/or automotive displays) and browsers 1018 (e.g., Microsoft Internet ExplorerTM, Netscape NavigatorTM) to display the data objects from the user's viewing perspective in a navigable, multi-dimensional virtual space 1010 .
- the browser 1018 is hardware and/or software for navigating, viewing and interacting with local and/or remote information.
- the viewing system 1000 also includes a zoom renderer 1020 .
- the zoom renderer 1020 is software that renders the graphic appearances to the user. This can be, for example, a stand-alone component or a plug-in to the browser 1018 , if the browser 1018 does not have the capability to display the ZMLTM formatted data objects.
- the client 1014 is used to represent both the hardware and the software needed to view information, although the hardware is not necessarily considered part of the viewing system 1000 .
- the protocolizer module 1006 communicates with the client 1014 via a communication channel 1022 .
- the communication channel 1022 can be any channel supporting the protocol to which the protocolizer module 1006 converts the ZMLTM format.
- the communication channel 1022 can be a LAN, WAN, intranet, Internet, cellular telephone network, wireless communication network (including third generation wireless devices), infrared radiation (“IR”) communication channel, PDA cradle, cable television network, satellite television network, and the like.
- the data source 1012 provides the content (i.e., data objects).
- the content of the data source 1012 can be of any type.
- the content can take the form of a legacy database (e.g., OracleTM, SybaseTM, Microsoft ExcelTM, Microsoft AccessTM), a live information feed, a substantially real-time data source and/or an operating system file structure (e.g., MACTM OS , UNIXTM and variations of UNIXTM, MicrosoftTM WindowsTM and variations of WindowsTM).
- the data source 1012 can be a Web server and the content can include, for example, an HTML page, a page written in ColdfusionTM Markup Language (“CFM”) by Allaire, an Active Server Page (“ASP”) and/or a page written for a Macromedia FLASHTM player.
- the content typically is not stored in the ZMLTM format (i.e., “zoom-enabled”). If the content is not stored in a ZMLTM format, the extractor module 1002 and the stylizer module 1004 convert the content into the ZMLTM format.
- the content can be one or more of an algorithm, a simulation, a model, a file, and a storage device.
- the stored content is in the ZMLTM format.
- in some cases, the viewing system 1000 transfers the content from the data source 1012 through the extractor module 1002 , the stylizer module 1004 and the protocolizer module 1006 without any additional processing. For example, if the content of the data source 1012 is already in ZMLTM format, the stylizer module 1004 does not need to take any action and can transmit the content directly to the protocolizer module 1006 .
- the types of transactions processed by the data source 1012 are transactions for obtaining the desired content.
- a representative input can be “get record” and the corresponding output is the requested record itself.
- a representative input can be “get file(dir)” and the corresponding output is the content information of the “file/dir.”
- a representative input can be “get page/part” and the corresponding output is the requested page/part itself.
- the viewing system 1000 transfers the output from the data source 1012 to the extractor module 1002 .
- the extractor module 1002 receives the content from the data source 1012 .
- the extractor module 1002 separates the content into pieces referred to as data objects.
- the extractor module 1002 converts the content into a hierarchical relationship between the data objects within the content.
- the hierarchical data structure is one that follows a common language standard (e.g., XMLTM).
- FIG. 20 is a schematic view 2000 depicting an illustrative conversion of a file system directory tree 2002 to a hierarchical structure 2004 of data objects by the extractor module 1002 .
- the extractor module 1002 relates each of the data objects, consisting of the directories 2006 a - 2006 d and the files 2008 a - 2008 c , to each other in the hierarchical data structure 2004 , illustratively represented as a node tree.
- relationships between the nodes 2006 a - 2006 d and 2008 a - 2008 h of the hierarchical data structure 2004 follow the relationships depicted in the directory tree 2002 .
- the types of transactions processed by the extractor module 1002 are transactions for converting the obtained content to data objects in a hierarchical data structure, for example, XMLTM.
- representative inputs to the extractor module 1002 can be data record numbers and a mapping, if the database already contains a mapping of the data objects.
- a representative command can be, for example, “get_record(name).”
- the corresponding output from the extractor module 1002 is an XMLTM data structure of the data objects.
- a representative input can be filename(s), with representative commands such as “get_file(directory, name)” and “get_file_listing(directory).”
- a representative input can be Web pages/parts, with a representative command such as “get_Web_content(URL, start tag, end tag).”
- the extractor module 1002 analyzes the content to convert it into an exemplary structure such as:
  struct {
      void* data;      /* ... */
      node* parent;
      node* child[];   /* child[ren] */
  } node;
- the illustrative extractor module 1002 uses the mapping module 1016 .
- Operation of the mapping module 1016 depends on the type of content received by the extractor module 1002 .
- the mapping module 1016 traverses the directory tree until it creates a node for each file (i.e., data object) and each directory (i.e., data object) and creates the appropriate parent-child relationship between the nodes (i.e., data objects).
- FIG. 20 illustrates how the mapping module 1016 follows the directory tree 2002 when creating the hierarchical data structure 2004 .
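For illustration only, the directory-tree traversal described above can be sketched as follows. This is a reconstruction, not the patent's code; the `Node` class and `build_tree` function names are assumptions.

```python
import os

class Node:
    """One node per file or directory, with parent-child links."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def build_tree(root_path):
    """Walk the directory tree, creating a node for each directory and
    file and preserving the directory tree's parent-child relationships."""
    root = Node(os.path.basename(root_path) or root_path)
    index = {root_path: root}
    for dirpath, dirnames, filenames in os.walk(root_path):
        parent = index[dirpath]
        for d in sorted(dirnames):
            index[os.path.join(dirpath, d)] = Node(d, parent)
        for f in sorted(filenames):
            Node(f, parent)
    return root
```

The resulting node tree mirrors the directory tree, in the manner the hierarchical data structure 2004 mirrors the directory tree 2002.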
- the mapping module 1016 keeps the hierarchical relationships of data objects as they are in the data source.
- a retail store might organize its contents in, for example, an OracleTM database and into logical categories and sub-categories forming a hierarchical data structure that the mapping module 1016 can copy.
- Another database might be, for example, a list of geographic points.
- the mapping module 1016 can use geographical relationship to create the hierarchical relationship between the points.
- where hierarchical relationships do not already exist in the data source, the mapping module 1016 creates them.
- in some data sources, however, the hierarchical structure may be less evident.
- in such cases, the mapping module 1016 preferably creates the relationships using some predetermined priorities (e.g., parent nodes are state of residence first, then letters of the alphabet).
- the mapping module 1016 extracts the vital information by first determining the flow or order of the Web site. To zoom enable a typical Web site, the mapping module 1016 extracts from the Web site a data hierarchy. HTML pages are a mix of data and formatting instructions for that data. HTML pages also include links to data, which may be on the same page or a different page. In one embodiment, the mapping module 1016 “crawls” a Web site and identifies a “home” data node (for example, on the home page) and the name of the company or service.
- the mapping module 1016 identifies the primary components of the service such as, for example, a table of contents, along with the main features such as “order,” “contact us,” “registration,” “about us,” and the like. Then the mapping module 1016 recursively works through the sub-sections and sub-subsections, until it reaches “leaf nodes” which are products, services, or nuggets of information (i.e., ends of the node tree branches).
- This process determines critical data and pathways, stripping away non-essential data and creating a hierarchical tree to bind the primary content. This stripping down creates a framework suitable for zooming, provides the user with a more meaningful focused experience, and reduces strain on the client/server connection bandwidth.
- FIG. 21 is a flow diagram 2100 illustrating operation of an exemplary embodiment of the extractor module 1002 process for converting a Web page to a hierarchical XMLTM data structure 2102 .
- the extractor module 1002 downloads (step 2104 ) the Web page (e.g., HTML document). From the contents between the Web page's <head></head> tags, the mapping module 1016 obtains (step 2106 ) the title and URL information and uses this information as the home node 2102 a (i.e., the root node). The extractor module 1002 also obtains (step 2108 ) the contents between the Web page's <body></body> tags.
- the mapping module 1016 processes (step 2110 ) the HTML elements (e.g., 2102 b - 2102 c ) to create the hierarchical structure 2102 .
- the first HTML element encountered is a table 2102 b .
- the table 2102 b includes a first row 2102 c .
- the first row 2102 c includes a first cell 2102 d .
- the first cell 2102 d includes a table 2102 e , a link 2102 f and some text 2102 g .
- Any traversal algorithm can be used.
- the mapping module 1016 can proceed, after obtaining all of the contents 2102 e - 2102 g of the first cell 2102 d of the first row 2102 c , to a second cell (not shown) of the first row 2102 c . This traversal is repeated until all of the HTML elements of the Web page have been processed (step 2110 ) and mapped into the hierarchical structure 2102 .
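The depth-first traversal of HTML elements described for step 2110 can be sketched as follows, using Python's standard `html.parser`. The `HTMLTreeMapper` name and the dictionary node shape are illustrative assumptions, not the patent's implementation.

```python
from html.parser import HTMLParser

class HTMLTreeMapper(HTMLParser):
    """Map HTML elements into a hierarchical structure: each element
    becomes a child of the element that contains it."""
    def __init__(self):
        super().__init__()
        self.root = {"tag": "root", "text": "", "children": []}
        self.stack = [self.root]  # current nesting path

    def handle_starttag(self, tag, attrs):
        node = {"tag": tag, "text": "", "children": []}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)

    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()

    def handle_data(self, data):
        # text belongs to the innermost open element
        self.stack[-1]["text"] += data.strip()
```

Feeding a page with a table containing a row, a cell, a link and some text produces a nested structure analogous to elements 2102 b - 2102 g.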
- the extractor module 1002 extracts each displayable element from a Web page. Each element becomes a data object.
- the mapping module 1016 preferably creates a hierarchical relationship between the data objects based on the value of the font size of each element.
- the mapping module 1016 positions those data objects (e.g., HTML elements) with a larger value font size higher in the hierarchical relationship than those data objects with a smaller value font size.
- the mapping module 1016 preferably uses the location of each element in the Web page as a factor in creating the hierarchical relationship. More particularly, the mapping module 1016 locates those elements that are next to each other on the Web page near each other in the hierarchical relationship.
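The font-size heuristic described above can be sketched minimally as follows; the input format (a list of text/font-size pairs) and the function name are assumptions for illustration only.

```python
def rank_by_font_size(elements):
    """Assign a hierarchy level to each (text, font_size) pair:
    larger font sizes are placed higher (level 0 is the top)."""
    sizes = sorted({size for _, size in elements}, reverse=True)
    level_of = {size: level for level, size in enumerate(sizes)}
    return [(text, level_of[size]) for text, size in elements]
```

Elements sharing a font size land on the same level, so two section headings in 18-point type become siblings under a 24-point title.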
- the mapping module 1016 uses techniques such as traversing the hyperlinks, the site index, the most popular paths traveled and/or the site toolbar, and parsing the URL.
- FIG. 22 is a diagram 2200 illustrating two of these techniques: traversing the hyperlinks 2202 and the site index 2204 .
- the mapping module 1016 traverses the hyperlinks 2202 to help create a hierarchy.
- the mapping module 1016 tracks how each page 2206 relates to each link 2208 , and essentially maps a spider web of pages 2206 and links 2208 , from which the mapping module 1016 creates a hierarchy.
- the mapping module 1016 can also use the site map 2204 and tool bars when those constructs reveal the structure of a Web site. As discussed above, the mapping module 1016 can also use the size of the font of the elements of the site map 2204 along with their relative position to each other to create a hierarchy.
- the mapping module 1016 can parse the URL to obtain information about the Web site.
- URLs are in the form http://www.name.com/dir1/dir2/file.html.
- the name.com field generally indicates the name of the organization and the type of the organization (.com for a company, .cr for Costa Rica, .edu for education, and the like).
- the dir1 and dir2 fields provide hierarchical information.
- the file.html field can also reveal some information about the contents of the file, if the file name is descriptive in nature.
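The URL parsing just described can be sketched with Python's standard `urllib.parse`; the `parse_url_hierarchy` helper and the returned field names are hypothetical.

```python
from urllib.parse import urlparse

def parse_url_hierarchy(url):
    """Split a URL of the form http://www.name.com/dir1/dir2/file.html
    into organization name, organization type, directories, and file."""
    parts = urlparse(url)
    host = parts.netloc.split(":")[0]
    labels = host.split(".")
    org = labels[-2] if len(labels) >= 2 else host  # organization name
    tld = labels[-1]                                # .com, .cr, .edu, ...
    path = [p for p in parts.path.split("/") if p]
    if path and "." in path[-1]:
        dirs, file = path[:-1], path[-1]
    else:
        dirs, file = path, None
    return {"org": org, "type": tld, "dirs": dirs, "file": file}
```

The `dirs` list supplies the hierarchical information the mapping module can use; the file name, when descriptive, hints at the content.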
- the mapping module 1016 can also access information from Web sites that track the popularity of URL paths traveled. Such sites track which links and pages are visited most often, and weight paths based on the number of times they are traversed.
- the illustrative mapping module 1016 uses the information obtained from such sites, alone or in combination with other relationship information gained with other techniques, to create the hierarchical relationships between extracted data objects.
- once the mapping module 1016 , working in conjunction with the extractor module 1002 , creates a hierarchical data structure for the extracted data objects, the extractor module 1002 processes the data objects of the content in terms of their relationship in the hierarchy.
- the resulting output is a W3C standard language data structure (e.g., XMLTM).
- the types of transactions processed by the extractor module 1002 are transactions relating to obtaining the hierarchical relationships between data objects. For example, for node information, a representative input can be “get node[x]” and the corresponding output is the requested node[x] itself. For data information, a representative input can be “get data” and the corresponding output is the requested data itself. For parent information, a representative input can be “get parent” and the corresponding output is the requested parent itself. For child information, a representative input can be “get child[x]” and the corresponding output is the requested child[x] itself.
- the extractor module 1002 provides the output (i.e., the XMLTM data structure) to the stylizer module 1004 .
- the stylizer module 1004 converts the data objects from the extractor module 1002 into ZMLTM format.
- the stylizer module uses one or more templates 1005 , which are related to one or more physical paradigms, to aid in the conversion.
- the template 1005 includes two sub-portions, the spatial layout style portion 1005 a and the contents style portion 1005 b .
- the spatial layout style portion 1005 a relates the data objects in a hierarchical fashion according to a physical paradigm.
- the contents style portion 1005 b defines how the data objects are rendered to the user.
- the stylizer module 1004 can be implemented using any of a plurality of languages, including but not limited to JAVATM, C, XMLTM related software, layout algorithms, GUI-based programs, and C and Macromedia FLASHTM compatible programs.
- the stylizer module 1004 receives data objects from the extractor module 1002 and converts the data objects from an XMLTM format to the ZMLTM format.
- the ZMLTM format generated by the stylizer 1004 is analogous to HTML, except designed for the multi-dimensional virtual space 1010 .
- the ZMLTM format employs a markup language that describes one or more of the data objects organized within the virtual space.
- ZMLTM format uses tags to describe the attributes of, for example, the conceptual plates 1104 a - 1104 c discussed above with respect to FIG. 11.
- the stylizer module 1004 uses one or more templates 1005 to generate ZMLTM formatted data objects.
- templates describe how data objects from a data source are arranged in the multi-dimensional virtual space 1010 .
- Templates include a plurality of properties relating to a physical paradigm.
- the following list contains some exemplary properties of a template relating to a financial paradigm. Specifically, the list of properties is for a section of the template 1005 for viewing a stock quote including historical data, news headlines and full text.
- Each property in the list is limited to a few letters to save memory for use in handheld devices and/or other low capacity (e.g. bandwidth, processor and/or memory limited) devices.
- the template properties listed above describe characteristics of the information relating to the exemplary financial paradigm and displayed to the user in the virtual space 1010 .
- Some properties describe visibility.
- the fade properties describe when the appearance of data objects on a hierarchical plate comes within the viewing perspective (e.g., becomes visible to the user).
- Properties can also describe the appearance of included text. For example, some properties describe how the text appears, whether the text is wrapped, how the text is justified and/or whether the text is inverted.
- Properties can also describe dimensions of the data objects on the plate. For example, some properties describe whether the data object of the focus node has any borders and/or how the data objects corresponding to any children nodes are arranged.
- Properties can further describe the appearance of the data object on the hierarchical plate. For example, some properties describe whether the hierarchical plate contains charts and/or maps and/or images.
- Templates also contain a plurality of placeholders for input variables.
- the following list includes illustrative input variables for the exemplary financial template.
- the input variables describe parameters, such as high price, low price, volume, history, plots and labels, and news:
  $q$ (name), $s_td$ (last), $o_td$ (open), $v_td$ (volume), $h_td$ (high), $l_td$ (low), $c_td$ (change), $b_td$ (bid), $a_td$ (ask), $pv_td$ (today's prices), $pmx_3m$ (3 month t0), $pxx_3m$ (3 month t1), $h_3m$ (3 month price high), $l_3m$ (3 month price low), $pv_3m$ (3 month prices), $pmx_3m$ (6 month t0), $pxx_3m$ (6 month t1), $h_3m$ (6 month price high), $l_3m
- the SZMLTM format is similar to ZMLTM format, except instead of plates, the SZMLTM format describes attributes of the appearance in terms of a screen display.
- the SZMLTM format is the ZMLTM format processed and optimized for display on a reduced sized screen.
- One advantage of the SZMLTM format is that when zooming and panning, the user tends to focus on certain screen-size quantities of information, regardless of what level of abstraction the user is viewing. In other words, when the user wants to look at something, the user wants it to be the full screen. For example, in a calendar program the user may want to concentrate on a day, a week, or a year. The user wants the screen to be at the level on which the user wants to concentrate.
- the SZMLTM format is vector based. Vector graphic appearances enable the appearance of data objects to be transmitted and displayed quickly and with little resources. Using the SZMLTM format gives the user a viewing experience like they are looking at a true three dimensional ZMLTM formatted environment, while in reality the user is navigating a graphical presentation optimized for a reduced size, two-dimensional display.
- the SZMLTM format provides the content author ultimate and explicit control of what the appearance user sees on the screen.
- the SZMLTM format is based on ‘Screens’ described by a series of vector graphic appearance elements such as rectangles, text, axes, and polygons.
- the <*all*> tag is not a separate tag, but shows attributes common to each element, regardless of the type of the element. Each element has a name, rectangular bounds, and potentially a ‘zoom to’ attribute which, when clicked, transports the user to another screen.
- the SZMLTM tags can be reduced to one or two characters.
- SZMLTM formatted text may be compressed before transmission and decompressed after reception. Any known compression/decompression algorithm suffices.
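Since any standard algorithm suffices, the compress-before-transmission step can be sketched with zlib (our choice for illustration; the patent does not specify an algorithm, and the helper names are hypothetical):

```python
import zlib

def compress_szml(szml_text):
    """Compress SZML-formatted text for transmission."""
    return zlib.compress(szml_text.encode("utf-8"))

def decompress_szml(blob):
    """Decompress received SZML text after reception."""
    return zlib.decompress(blob).decode("utf-8")
```

Because SZML's short one- and two-character tags repeat heavily, such text compresses well, further reducing demand on a low-bandwidth channel.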
- the SZMLTM format stores and relates data objects as screens, and stores a plurality of full screens in memory.
- each screen has travel regions (e.g. ‘click-regions’) which are ‘zoom-links’ to other screens.
- the viewing perspective zooms from the currently viewed screen to the “zoom_to” screen indicated in the attributes of the screen.
- screens can be thought of as having three states: small (e.g., 25% of normal display), normal (e.g., 100%) and large (e.g., 400% of normal display).
- the focus screen (e.g., the screen currently being displayed) transitions from normal to large (e.g., from 100% of normal display to 400% of normal display).
- the “zoom-to” screen is displayed and transitions from small to normal (e.g., 25% of normal display to 100% of normal display).
- the focus screen is no longer displayed in the appearance. This gives the appearance to the user of zooming into the ‘click region’ (which expands) through the focus screen (which also expands).
- the expansion is linear, but this need not be the case.
- the focus screen transitions from normal to small (e.g., from 100% of normal display to 25% of normal display).
- the parent screen transitions from large to normal (e.g., 400% of normal display to 100% of normal display) and at some point in time, the focus screen is no longer displayed.
- the contraction is also linear.
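The linear zoom transitions described above can be sketched as simple interpolation functions; the parameterization (t running from 0 to 1 over the transition) and the function names are illustrative assumptions.

```python
def zoom_in_scales(t):
    """Zooming in: focus screen grows 100% -> 400% while the
    'zoom_to' screen grows 25% -> 100%. t is in [0, 1]."""
    focus_scale = 1.0 + 3.0 * t     # 100% -> 400%
    target_scale = 0.25 + 0.75 * t  # 25% -> 100%
    return focus_scale, target_scale

def zoom_out_scales(t):
    """Zooming out: focus screen shrinks 100% -> 25% while the
    parent screen shrinks 400% -> 100%. t is in [0, 1]."""
    focus_scale = 1.0 - 0.75 * t    # 100% -> 25%
    parent_scale = 4.0 - 3.0 * t    # 400% -> 100%
    return focus_scale, parent_scale
```

Because only 2D scaling is involved, a two-dimensional display engine can produce the three-dimensional zooming experience, as the text notes.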
- There is no need for a three-dimensional display engine since the graphic appearances can be displayed using a two-dimensional display engine. Yet, the user still receives a three-dimensional viewing experience.
- screens are modeled as a pyramidal structure based on hierarchy level and relative position of parent screens within the pyramid.
- each screen can have a coordinate (x, y, z) location.
- the z coordinate corresponds to the hierarchical level of the screen.
- the x, y coordinates are used to indicate relative position to each other, based on where the parent screen is. For example, refer to the appearances of data objects 2604 and 2610 of FIG. 26.
- the “news” data object element is to the right of the “charts” data object element.
- the user changes the viewing perspective to the hierarchical level corresponding with the appearance 2610 .
- the user can pan at this level.
- the screen to the right is a more detailed screen, at that particular hierarchical level, of the travel region of the “news” data object element.
- One embodiment of the viewing system 1000 addresses low bandwidth, memory and processor limited clients 1014 . With high bandwidth and performance, these features become somewhat less critical, but are still very useful. Described above is an illustrative embodiment of the SZMLTM format, which is essentially the ZMLTM format transformed and optimized for direct screen display. The SZMLTM format defines graphic appearances as vectors. The SZMLTM format is much faster and simpler to render than the ZMLTM format.
- the stylizer module 1004 employs the template 1005 having a spatial layout portion 1005 a and a contents style portion 1005 b .
- FIG. 23 is a block diagram 2300 illustrating how the spatial layout style portion 1005 a and the contents style portion 1005 b of the template 1005 operate to enable the stylizer module 1004 to convert an XMLTM source content data structure extracted from a data source 2304 into ZMLTM formatted data objects.
- the spatial layout style portion 1005 a arranges a plurality of data records 2306 a - 2306 e in the multi-dimensional, virtual space 1010 independent of the details 2305 in each of the records 2306 a - 2306 e .
- the spatial layout style portion 1005 a arranges the records 2306 a - 2306 e , relative to each other, in the virtual space 1010 based on the person's name or some other identifying characteristic.
- the spatial layout style portion 1005 a generally does not deal with how the data 2305 is arranged within each record 2306 a - 2306 e .
- this mapping, in one embodiment, translates to a function wherein the three-dimensional coordinates of the data objects are a function of the one-dimensional textual list of the data objects and the template 1005 .
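Such a function can be sketched minimally as follows: each object in the one-dimensional list receives a three-dimensional coordinate from template-supplied parameters. The grid layout and parameter names (`columns`, `spacing`, `level`) are assumptions for illustration, not the patent's template format.

```python
def layout_coordinates(objects, columns=3, spacing=1.0, level=0):
    """Map a 1-D list of data objects to (x, y, z) coordinates:
    a grid in x/y, with z fixed to the hierarchical level."""
    coords = {}
    for i, obj in enumerate(objects):
        row, col = divmod(i, columns)
        coords[obj] = (col * spacing, row * spacing, float(level))
    return coords
```

A real spatial layout style portion would draw `columns`, `spacing`, and the level from the template 1005 rather than from defaults.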
- the contents style portion 1005 b determines how to render each record detail 2305 individually.
- the contents style portion 1005 b creates the user-friendly style, and arranges the data 2305 within each record 2306 a - 2306 e .
- the contents style portion 1005 b arranges the patient's information within a region 2316 (e.g., a plate), placing the title A 1 on top, the identification number B 1 of the patient over to the left, charts in the middle and other information D 1 in the bottom right corner.
- the viewing system 1000 optionally provides a graphical interface 2312 for enabling the user to easily modify the template 1005 .
- the interface 2312 includes a display screen 2313 .
- the display screen 2313 includes a portion 2314 a that enables the user to modify hierarchical connections.
- the display screen 2313 also includes a portion 2314 b that enables the user to change the content of particular data nodes, and a portion 2314 c that enables the user to change the display layout of particular data nodes.
- once the stylizer module 1004 has arranged all of the data objects spatially using the template 1005 , the data objects are in ZMLTM format and have a location in the multi-dimensional, virtual space 1010 .
- the stylizer module 1004 transfers the data objects in ZMLTM format to the protocolizer module 1006 for further processing.
- the protocolizer module 1006 receives the data objects in the ZMLTM format and transforms the data objects to a commonly supported protocol such as, for example, WAP, HTML, GIF, Macromedia FLASHTM and/or JAVATM.
- the protocolizer module 1006 converts the data objects to established protocols to communicate with a plurality of available clients 1014 and browsers 1018 to display the data objects from an adjustable viewing perspective in the navigable, multi-dimensional, virtual space 1010 .
- a Macromedia FLASHTM player/plug-in is available in many browsers and provides a rich graphical medium.
- the data objects in the spatial hierarchy can be browsed by any browsers with a Macromedia FLASHTM player/plug-in, without any additional software.
- the protocolizer module 1006 is implemented as a servlet utilizing JAVATM, C, WAP and/or ZMLTM formatting.
- the protocolizer module 1006 intelligently delivers ZMLTM formatted data objects as needed to the client 1014 .
- the protocolizer module 1006 preferably receives information regarding the bandwidth of the communication channel 1022 used to communicate with the client 1014 .
- the protocolizer module 1006 delivers those data objects that are virtually closest to the user's virtual position.
- upon request from the zoom renderer 1020 , the protocolizer module 1006 transmits the data objects over the communication channel 1022 to the client 1014 .
- An example illustrating operation of the protocolizer module 1006 involves data objects relating to clothing and a template 1005 relating to the physical paradigm of a clothing store. Due to the number of data objects involved, it is unrealistic to consider delivering all the data objects at once. Instead, the protocolizer module 1006 delivers a virtual representation of each data object in a timely manner, based at least in part on the virtual location and/or viewing perspective of the user. For example, if the user is currently viewing data objects relating to men's clothing, then the protocolizer module 1006 may deliver virtual representations of all of the data objects relating to men's pants and shirts, but not women's shoes and accessories. In a model of the data objects as a node tree, such as depicted in FIGS.
- the focus node 1202 c is the node corresponding to the data object appearance 1206 c displayed from the current viewing perspective shown by the camera 1216 a .
- the protocolizer module 1006 delivers to the client 1014 those data objects that correspond to the nodes virtually closest to the user's focus node 1202 c and progressively delivers data that are virtually further away.
- the viewing system 1000 employs a variety of methods to determine relative nodal proximity.
- the protocolizer module 1006 delivers those nodes that are within a certain radial distance from the focus node 1202 c . If the user is not moving, the protocolizer module 1006 delivers nodes 1202 a , 1202 b , 1202 d and 1202 e , which are all an equal radial distance away. As also discussed with regard to FIG. 12A, calculating virtual distances between nodes can be influenced by the hierarchical level of the nodes and also the directness of the relationship between the nodes. As skilled artisans will appreciate, the importance of prioritizing is based at least in part on the bandwidth of the communication channel 1022 .
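For illustration only, the proximity-ordered delivery described above might be sketched as a breadth-first walk over the node tree; the adjacency below is a hypothetical stand-in for the links of FIG. 12A, and the bandwidth budget is expressed crudely as a node count:

```python
from collections import deque

def nodes_by_proximity(tree, focus):
    """Breadth-first walk from the focus node over parent/child/sibling
    links, yielding nodes in order of increasing hop distance."""
    seen = {focus}
    queue = deque([(focus, 0)])
    order = []
    while queue:
        node, dist = queue.popleft()
        order.append((node, dist))
        for neighbor in tree.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return order

def delivery_batch(tree, focus, budget):
    """Return the nodes to transmit first, limited by a bandwidth budget
    expressed here simply as a node count."""
    return [n for n, _ in nodes_by_proximity(tree, focus)[:budget]]

# Undirected adjacency loosely mirroring the node tree of FIG. 12A
# (hypothetical links, for illustration).
tree = {
    "1202c": ["1202a", "1202b", "1202d", "1202e"],
    "1202e": ["1202c", "1202h"],
    "1202a": ["1202c"], "1202b": ["1202c"],
    "1202d": ["1202c"], "1202h": ["1202e"],
}
print(delivery_batch(tree, "1202c", 5))
```

With the focus at node 1202c and no motion, the four equidistant neighbors are delivered first, and node 1202h only once the budget allows.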
- the zoom renderer 1020 on the client 1014 receives the data transmitted by the protocolizer module 1006 , authenticates the data via checksums and other methods, and caches the data as necessary.
- the zoom renderer 1020 also tracks the location of the user's current viewing perspective and any predefined user actions indicating a desired change to the location of the current viewing perspective, and relays this information back to the protocolizer module 1006 .
- the protocolizer module 1006 provides data objects, virtually located at particular nodes or coordinates, to the client 1014 for display. More particularly, the zoom renderer 1020 tracks the virtual position of the user in the virtual space 1010 . According to one feature, if the user is using a mobile client 1014 , the zoom renderer 1020 orients the user's viewing perspective in relation to the physical space of the user's location (e.g., global positioning system (“GPS”) coordinates).
- the user can influence which data objects the protocolizer module 1006 provides to the client 1014 by operating the user controls 1007 to change virtual position/viewing perspective.
- delivery of data objects is a function of virtual direction (i.e. perspective) and the velocity with which the user is changing virtual position.
- the protocolizer module 1006 receives user position, direction and velocity information from the zoom renderer 1020 , and based on this information, transmits the proximal data node(s). For example, in FIG. 12A, if the user is at node 1202 c and virtually traveling toward nodes 1202 e and 1202 h , the protocolizer module 1006 delivers those nodes first.
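A rough sketch of this direction- and velocity-aware prioritization, not the patent's actual implementation: nodes ahead of the user's travel direction are ranked cheaper to fetch. The node coordinates are hypothetical placements for the nodes of FIG. 12A.

```python
import math

def prefetch_order(nodes, position, velocity):
    """Rank nodes for delivery: nodes that lie ahead of the user's
    direction of travel score better, weighted by speed."""
    speed = math.hypot(*velocity)
    def score(coord):
        dx, dy = coord[0] - position[0], coord[1] - position[1]
        dist = math.hypot(dx, dy)
        # Projection of the node's offset onto the travel direction.
        ahead = (dx * velocity[0] + dy * velocity[1]) / speed if speed else 0.0
        return dist - ahead  # traveling toward a node shrinks its cost
    return sorted(nodes, key=lambda name: score(nodes[name]))

# Hypothetical virtual coordinates for three nodes of FIG. 12A.
nodes = {"1202e": (1.0, 0.0), "1202h": (2.0, 0.0), "1202a": (-1.0, 0.0)}
# User at the origin, traveling toward nodes 1202e and 1202h.
print(prefetch_order(nodes, (0.0, 0.0), (1.0, 0.0)))
```

Traveling toward 1202e and 1202h ranks those nodes ahead of 1202a, matching the example in the text.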
- the client 1014 can be any device with a display including, for example, televisions, personal computers, laptop computers, wearable computers, personal digital assistants, wireless telephones, kiosks, key chain displays, watch displays, touch screens, aircraft, watercraft, or automotive displays, handheld video games and/or video game systems.
- the kiosk does not contain a display.
- the kiosk only includes a transmitter (e.g., an IR transmitter) that sends targeted information to a user's client as the user travels within a close vicinity of the kiosk transmitter, whether or not the user requests data.
- the viewing system 1000 can accommodate any screen size.
- clients 1014 such as personal digital assistants, wireless telephones, key chain displays, watch displays, handheld video games, and wearable computers typically have display screens which are smaller and more bandwidth limited than, for example, typical personal or laptop computers.
- the stylizer module 1004 addresses these limitations by relating data objects in the essentially infinite virtual space 1010 .
- the essentially infinite virtual space 1010 enables the user to view information at a macro level in the restricted physical display area, to pan through data objects at the same hierarchical level, and to zoom into data objects to view more detail when the desired data object(s) has been found.
- Bandwidth constraints are also less significant since the protocolizer module 1006 transfers data objects to the client 1014 according to the current location and viewing perspective of the user.
- the zoom renderer 1020 processes user input commands from the user controls 1007 to calculate how data objects are displayed and how to change the user's position and viewing perspective.
- Commands from the user controls 1007 can include, for example, mouse movement, button presses, keyboard input, voice commands, touch screen inputs, and joystick commands.
- the user can enter commands to pan (dx, dy), to zoom (dz), and in some embodiments to rotate.
- the user can also directly select items or various types of warping links to data objects, whereupon the user automatically virtually travels to the selected destination.
- the zoom renderer 1020 and the browser 1018 can be implemented in a variety of ways, depending on the client platform.
- JAVATM can be used with, for example, graphic appearance libraries or a custom library with or without the JAVA GraphicsTM API to create the zoom renderer 1020 and/or the browser 1018 for displaying the ZMLTM formatted data objects in the virtual viewing space 1010 .
- a custom C library can be used to create a stand-alone browser or plug-in.
- Macromedia FLASHTM compatible code can be employed.
- the zoom renderer 1020 and/or the browser 1018 can be implemented in the language of the telephone manufacturer.
- the zoom renderer 1020 and/or the browser 1018 can be implemented within a cable receiver or an equivalent service.
- the zoom renderer 1020 may reside on devices that are limited in capacity such as vehicle computers, key chains, and PDAs with limited memory and processing capabilities. Such devices often have limited and strained network bandwidth and are not designed for complicated graphic appearances. They may not have a typical browser 1018 that a personal computer would have.
- the following techniques help provide a high bandwidth experience over a low bandwidth connection (i.e., expensive experience over inexpensive capabilities).
- One goal of the following techniques is to keep the size of the code small, including a small stack and a small heap, favoring the heap over the stack.
- Another goal is to provide rapid graphic appearances with simple routines and small memory requirements.
- the following techniques can be variously combined to achieve desired goals.
- One technique is for use with the ZMLTM format. This technique uses parent-child relationships to minimize the need to specify explicit coordinates. It can be accomplished using a recursive table-like layout propagated over three-dimensional space.
- the table-like layout can contain properties such as n children per row, outside cell border percentages, intra-cell border percentages, zoom-to targets, and number of screens.
- a table layout may be employed within the ZMLTM properties, such as children per row and outside and inside border percentages. Tables may be nested within tables. This method is analogous to HTML table layouts. The goal is to provide, at any given zoom level, a reasonable number of choices and a coherent display of information. Even though data objects are related to each other using a recursive, table-like layout, the coordinate system placement is not replaced entirely. This preserves the ability to place plates explicitly, independent of any parent or child.
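As a minimal sketch of the recursive table-like layout (assumed parameter names; the actual ZMLTM properties are not specified here), a parent plate's rectangle can be divided into child cells using children-per-row and outside/inside border percentages:

```python
def layout_children(parent_rect, n_children, per_row, outer_pct=0.05, inner_pct=0.02):
    """Divide a parent plate's rectangle (x, y, w, h) into child cells,
    per_row cells per row, with outer and intra-cell border margins
    expressed as fractions of the parent's size."""
    x, y, w, h = parent_rect
    rows = -(-n_children // per_row)  # ceiling division
    ox, oy = w * outer_pct, h * outer_pct
    gx, gy = w * inner_pct, h * inner_pct
    cell_w = (w - 2 * ox - (per_row - 1) * gx) / per_row
    cell_h = (h - 2 * oy - (rows - 1) * gy) / rows
    cells = []
    for i in range(n_children):
        r, c = divmod(i, per_row)
        cells.append((x + ox + c * (cell_w + gx),
                      y + oy + r * (cell_h + gy),
                      cell_w, cell_h))
    return cells

# Four children laid out two per row inside a 100x100 plate.
cells = layout_children((0.0, 0.0, 100.0, 100.0), 4, 2)
for cell in cells:
    print(cell)
```

Applying the same function recursively to each child cell yields nested tables, while explicitly placed plates can simply bypass it.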
- Another technique is to get as much as possible off of the end device (e.g., thin client) by performing the conversion steps on another, more powerful CPU, starting with the system storing, in ZMLTM format, a collection of one or more data objects. Then the viewing system 1000 takes the ZMLTM format (ASCII) as an input and generates virtual plate structures from the ZMLTM formatted data objects. The system 100 generates screen structures from the hierarchical plates. The viewing system 1000 generates, from screens, SZMLTM formatted data objects (ASCII form) as output. The end result is a text file in SZMLTM format that can be pasted into a PDA. The resulting PDA application does not need plate structures, screen structures, plate conversion functions from ZMLTM format, plate conversion functions to screens, or screen conversion functions to SZMLTM format. Without these functions, the software is cheaper and faster.
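The ZMLTM-to-plates-to-screens-to-SZMLTM pipeline run on the powerful CPU could be sketched as below; the record fields and the SZMLTM text form are invented placeholders, since the actual formats are proprietary:

```python
def zml_to_plates(zml_objects):
    """Toy stand-in for parsing ZML-style records into virtual plate
    structures (here, dicts with a name and a hierarchical depth)."""
    return [{"name": o["name"], "depth": o.get("depth", 0)} for o in zml_objects]

def plates_to_screens(plates):
    """Group plates into screens, one screen per hierarchical depth."""
    screens = {}
    for plate in plates:
        screens.setdefault(plate["depth"], []).append(plate["name"])
    return screens

def screens_to_szml(screens):
    """Serialize the screens into a flat, renderable SZML-style text
    that a thin client can display without any conversion code."""
    lines = []
    for depth in sorted(screens):
        lines.append("screen %d: %s" % (depth, ",".join(screens[depth])))
    return "\n".join(lines)

zml = [{"name": "store"}, {"name": "pants", "depth": 1}, {"name": "shirts", "depth": 1}]
szml_text = screens_to_szml(plates_to_screens(zml_to_plates(zml)))
print(szml_text)
```

Only the final flat text crosses to the PDA; the three conversion functions never need to exist on the thin client.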
- Another technique is to compress the ZMLTM/SZMLTM format on the more powerful CPU and uncompress on the less powerful CPU.
- the system 100 uses a compression algorithm to compress ZMLTM or SZMLTM into a CZMLTM or CSZMLTM format.
- the system 100 decompresses to ZMLTM or SZMLTM format on the less powerful CPU.
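The patent does not name a specific compression algorithm; as one plausible sketch, a standard deflate-style codec (Python's zlib here) could serve as the CZMLTM/CSZMLTM compression on the powerful CPU and decompression on the weaker one:

```python
import zlib

def to_czml(zml_text):
    """Compress ZML/SZML text (ASCII) into a compact CZML-style payload
    on the powerful CPU before transmission."""
    return zlib.compress(zml_text.encode("ascii"))

def from_czml(payload):
    """Decompress back to the original text on the less powerful CPU."""
    return zlib.decompress(payload).decode("ascii")

szml = "rect 10 20 50 60\n" * 200  # repetitive renderable elements
payload = to_czml(szml)
assert from_czml(payload) == szml
print(len(szml), "->", len(payload))
```

Because SZMLTM text is highly repetitive (many similar renderable elements), such generic compression already yields large savings on the channel.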
- SZMLTM formatted data objects have more characters because the SZMLTM format is essentially the ZMLTM format expanded into its actual renderable elements, and thus it is larger. For example, it is one thing to describe the shape of a tea pot of size a, b, c and position x, y, z (i.e., ZMLTM format) and it is another to describe every polygon in the tea pot (i.e., SZMLTM format).
- SZMLTM format explicitly commands the zoom renderer 1020 to draw the rectangle at screen coordinates (10, 20, 50, 60).
- the viewing system 1000 summarizes ZMLTM formatted information into screens, that is, a collection of M×N displays on which the user would typically focus. Each screen is a list of vector graphic appearance objects.
- the viewing system 1000 then smoothly transitions between source and destination screens by linearly scaling the current view, followed by the destination view, as described above. This creates the effect of a true three-dimensional camera and graphic appearances engine (typically expensive) using primitive, inexpensive two-dimensional graphic appearances techniques.
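The linear scaling transition might be sketched as two ramps of scale factors, a sketch under the assumption of a fixed frame count and maximum magnification (both hypothetical parameters): the source screen grows linearly until it fills the view, then the destination screen grows from a fraction of its size up to its normal state.

```python
def transition_frames(steps=4, max_scale=4.0):
    """Scale factors for a zoom-in transition: the source screen grows
    linearly from 1x to max_scale, then the destination screen grows
    from 1/max_scale up to its normal 1x size."""
    out = [1.0 + (max_scale - 1.0) * i / steps for i in range(steps + 1)]
    into = [(1.0 / max_scale) + (1.0 - 1.0 / max_scale) * i / steps
            for i in range(steps + 1)]
    return out, into

source_scales, dest_scales = transition_frames()
print(source_scales)
print(dest_scales)
```

Drawing each frame with these factors gives the illusion of a continuous three-dimensional camera while using only inexpensive two-dimensional scaling.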
- the viewing system 1000 does not download everything at once. Instead, the viewing system 1000 downloads the template(s) once and then subsequently only downloads irreproducible data. For example, if an appearance is defined by the example list of input variables for the exemplary financial template above, only the data for each data object has to be transmitted for the zoom renderer 1020 to display the data object. The layout of the appearance, the template, remains the same and the zoom renderer 1020 only changes the displayed values associated with each data object.
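This template-once, data-only scheme can be illustrated with a toy financial template; the field names and format string are hypothetical, not the patent's actual template:

```python
# A hypothetical financial template, downloaded once by the client.
TEMPLATE = "{symbol}: last {last:.2f} change {change:+.2f}"

def render(template, data_object):
    """The zoom renderer keeps the template and redraws only the
    changing values carried by each data object."""
    return template.format(**data_object)

updates = [
    {"symbol": "ABC", "last": 41.25, "change": 0.75},
    {"symbol": "ABC", "last": 40.90, "change": -0.35},
]
for update in updates:  # only these small records cross the channel
    print(render(TEMPLATE, update))
```

Each subsequent update costs only a few dozen bytes, since the layout itself never retransmits.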
- the viewing system 1000 also includes an alteration program with a graphical user interface 2312 to enable the user to edit the definitions of data objects defined in ZMLTM or SZMLTM format, without the user needing to understand those formats.
- the interface 2312 enables the user to manually change the zoomed layout of data objects like a paint program. The user selects graphic appearance tools and then edits the ZMLTM or SZMLTM formatted information manually using the interface 2312 .
- the user manually selects the node corresponding to the data object representing the special winter jackets and, using the tools 2314 a - 2314 c , makes modifications such as scaling, shrinking, growing, moving, adding, deleting, or otherwise modifying the data object.
- the user can use the interface 2312 to go beyond the layout algorithm and design the look and feel of the virtual space with greater control.
- the graphical alteration module operates in combination with the automated layout.
- a PDA is an applicable client device 1014 for the viewing system 1000 .
- FIG. 24 is a conceptual block diagram 2400 depicting a database server 2402 in communication with a PDA 2404 which is zoom enabled in accord with an illustrative embodiment of the invention.
- the database server 2402 contains the data objects 2406 a - 2406 f stored in the SZMLTM format.
- the database server 2402 first transmits the data objects 2406 a - 2406 f for the home screen 2412 via the communication channel 2408 .
- the data objects 2406 a - 2406 f that are in the closest vicinity of the home screen in the spatial hierarchical relationship are downloaded next.
- the PDA 2404 has a small memory 2410 that can hold, for example, fifty kilobytes of information.
- FIG. 25 depicts various hierarchically related graphic appearances 2502 - 2510 and 2510 a rendered on the PDA 2512 . As depicted, the user navigates down through hierarchical levels of data objects from the graphic appearance 2502 of a retail store to the graphic appearance 2510 a of a particular product.
- FIG. 26 illustrates the telephony devices 2602 a - 2602 c displaying the SZMLTM data objects using a financial template and the linear expansion and contraction algorithm described above.
- the telephony device 2602 a displays a graphic appearance 2604 of financial information for ABC Corp.
- the screen 2604 has a ‘click-region’ 2606 to expand the displayed chart to reveal to the user more detail of the chart.
- the telephony device 2602 b employs the above discussed linear expansion technique to provide the user with the graphic appearance 2608 and the feeling of zooming through the home graphic appearance 2604 to the zoom_to screen 2610 .
- the telephony device 2602 c depicts the zoom_to screen 2610 at its normal state (i.e., 100%).
- the user can virtually zoom through the data objects using the keypad 2612 of the telephony devices 2602 .
- the user uses a CellZoomPadTM (“CZPTM”).
- the CZPTM device is a clip-on device for cellular telephony devices.
- the CZPTM device turns the cellular telephony device screen into a touch pad, similar to those found on portable PCs. Moving around the pad performs the zooming.
- the cell phone attachment covers the screen and keys of the cell phone.
- the user activates the touch pad by touching portions of the cell phone attachment, which activates certain of the cell phone keys in a manner that is read and understood by the zooming software on the cell phone, for example by using combinations of keys pressed simultaneously.
- a wire or custom plug-in interfaces directly with the cell phone.
- another device that can be used as a user control 1007 in conjunction with the viewing system 1000 is a handheld navigation device 2700 .
- the navigation device 2700 is wireless.
- the device 2700 is a handheld joystick-like device that is custom tailored for browsing in virtual space 1010 .
- the MANOTM can be used across platforms and clients, for example a personal computer or a television.
- the device 2700 has an analog three-dimensional joystick 2702 , with a loop 2704 on the top. In response to the user actuating the joystick north, south, east or west, the viewing system 1000 pans. In response to the user pushing in or pulling out on the loop 2704 the viewing system 1000 zooms.
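A sketch of how the joystick and loop actuations described above could map to pan and zoom commands; the gain constants are illustrative assumptions, not values from the patent:

```python
def joystick_to_motion(jx, jy, loop, pan_gain=10.0, zoom_gain=0.1):
    """Map analog joystick deflection (jx, jy in -1..1) to pan deltas,
    and the loop's push/pull (loop in -1..1) to a multiplicative
    zoom factor."""
    dx = pan_gain * jx             # east/west deflection pans horizontally
    dy = pan_gain * jy             # north/south deflection pans vertically
    zoom = 1.0 + zoom_gain * loop  # push in (>0) zooms in, pull out zooms out
    return dx, dy, zoom

print(joystick_to_motion(0.5, 0.0, 1.0))
print(joystick_to_motion(0.0, -1.0, -1.0))
```

Applying the returned zoom factor multiplicatively each frame gives smooth, velocity-like zooming while the loop is held.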
- buttons 2706 - 2712 can provide standard mouse functions, custom functions and/or redundant zooming functions.
- the functions of the buttons can be to cause the system to take a snapshot of the virtual location of the viewing perspective, or a snapshot of the history (e.g., breadcrumb trail) of the user's trajectory.
- Other examples of functions can include purchasing an item, sending an email, synchronizing data to or from the client, transmitting information to/from a client device, recording music, and signaling an alarm (e.g., causing a system to dial 911).
- An Infrared Sensor 2714 option replaces a wired connection.
- the device 2700 can be configured to vibrate in relation to user virtual movement, to provide tactile feedback to the user. This feedback can be in synchronization with the user's virtual movements through the multi-dimensional zoom space 1010 to give the user an improved sensory enriching experience.
- the device 2700 has a speaker and/or microphone to give and/or receive audio signals for interaction with the system.
Abstract
Description
- This application claims the benefit of and priority to U.S. provisional patent application Serial No. 60/182,326, filed Feb. 14, 2000, U.S. provisional patent application Serial No. 60/182,368, filed Feb. 14, 2000, U.S. provisional patent application Serial No. 60/240,287, our attorney docket number GPH-003PR2, filed Oct. 13, 2000, U.S. provisional patent application Serial No.______ not yet assigned, filed Nov. 16, 2000, and U.S. patent application Ser. No.______, Attorney Docket No. GPH-003, entitled “Method And Apparatus For Viewing Information,” filed on even date herewith, which applications are hereby incorporated in their entirety by reference.
- This invention relates generally to visual programming. More particularly, in one embodiment, the invention relates to visual programming in which graphically represented software articles are manipulated to create custom software applications.
- Programming environments have come a very long way since the days of punched paper cards. For example, visual programming environments such as Microsoft Visual C++, Visual Basic, and Java provide developers with the ability to quickly develop application prototypes by dragging-and-dropping user interface controls onto WYSIWYG (What you see is what you get) displays. These systems also provide dialogs that enable users to enter values for objects, methods, and data instead of having to code software line-by-line. Some integrated applications allow users to select information from one application, such as a graph created in a spreadsheet, and to use such selected information in a different application, such as embedding a graph in a word processing document.
- However, in general, users cannot interconnect a capability provided by one application sold by a first vendor with a second capability provided by a second application sold by a second, unrelated, vendor without significant expertise, programming skill, and effort. In particular, users cannot interface input devices, such as a mouse or a data source at a remote location, such as a World Wide Web (hereinafter “Web”) site with one or more application programs to create a custom application.
- In the discussion that follows, the term “software article” will be understood to comprise software ranging from a single line of software code written in any programming language (including machine language, assembly language, and higher level languages), through blocks of software code comprising lines of software code, software objects (as that term is commonly used in the software arts), programs, interpreted, compiled, or assembled code, and including entire software application programs, as well as applets, data files, hardware drivers, web servers, servlets, and clients. A software article can be abstracted, and represented visually, using a specific visual format that will be explained in detail below. The visual representation of a software article can be referred to as an abstracted software article. The term “browser” will be understood to comprise software and/or hardware for navigating, viewing, and interacting with local and/or remote information. Examples of browsers include, but are not limited to Netscape Navigator™ and Communicator™, Internet Explorer™, and Mosaic™.
- The invention, in one embodiment, provides systems and methods for a user, having little or no programming skill or experience, to use visual programming to create custom applications that can employ user input, information obtained from remote devices, such as information obtained on the web, and applications programs. The systems and methods of the invention involve the use of one or more computers. In embodiments that involve a plurality of components, the computers are interconnected in a network. The systems and methods of the invention provide abstractions of software articles which include inputs such as a mouse or a keyboard, and outputs, such as a video display or a printer. An abstraction of a software article is an analog of an electronic circuit which provides functionality such as logic, memory, computational capability, and the like, and which includes inputs and outputs for interconnection to allow construction of a specific application circuit.
- The user can select software articles from a repository, such as a software library, and can place an abstracted software article on a computer display. The user can interconnect an output of one abstracted software article to an input of another abstracted software article using “wires.” “Wires” are linear graphical structures that are drawn on the computer display by the user. The user can draw “wires” using a pointing device such as a mouse. The user can construct a software application that performs a customized function by the selection and interconnection of abstracted operator software articles (also referred to as “operators”). The operator software articles represented by the abstractions communicate using a common language, with connections via a central hub software article (also referred to as a “hub”). A bidirectional software adapter (also referred to as an “adapter”) for each software article provides translation between the “native” communication language of the article and the common language of the system. The bidirectional software adapter is transparent to the user. The systems and methods of the invention provide a readily understood, essentially intuitive, graphical environment for program development. The systems and methods of the invention provide feedback that eases program development and debugging. The systems and methods of the invention reduce the technological expertise needed to develop sophisticated applications.
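The hub-and-adapter arrangement described above can be illustrated with a minimal sketch; the message shape (plain dicts with a "value" key as the common language) and all names here are assumptions for illustration, not the invention's actual protocol:

```python
class Hub:
    """Central hub: routes messages between abstracted software articles
    over user-drawn wires, in a single common language."""
    def __init__(self):
        self.wires = {}  # (source article, port) -> list of (dest article, port)

    def wire(self, src, dst):
        self.wires.setdefault(src, []).append(dst)

    def publish(self, src, message, registry):
        for dst_article, dst_port in self.wires.get(src, []):
            registry[dst_article].receive(dst_port, message)

class Adapter:
    """Bidirectional adapter: translates between an article's native
    representation and the hub's common language (here, plain dicts)."""
    def __init__(self, name, native_fn):
        self.name = name
        self.native_fn = native_fn
        self.last_output = None

    def receive(self, port, message):
        native_input = message["value"]        # common language -> native
        result = self.native_fn(native_input)  # run the wrapped article
        self.last_output = {"value": result}   # native -> common language

hub = Hub()
registry = {"doubler": Adapter("doubler", lambda v: v * 2)}
# A wire from a mouse input article's output port to the doubler's input.
hub.wire(("mouse", "out"), ("doubler", "in"))
hub.publish(("mouse", "out"), {"value": 21}, registry)
print(registry["doubler"].last_output)
```

The adapters keep each article's native interface hidden, so the user only ever wires common-language ports together.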
- The systems and methods of the invention employ techniques of viewing material on a display that use panning (e.g. two-dimensional motion parallel to the plane of the display) and zooming (e.g. motion perpendicular to the plane of the display). The zooming and panning features enable the user to easily navigate the programmed design over many orders of magnitude to grasp both micro and macro operation. The zoom and pan are smooth and analog-like, with nearly infinite degrees of zoom and a nearly infinitely sized display space.
- In one aspect, the invention relates to a method of receiving user input. The method includes receiving user input identifying a location on a graphical user interface, displaying menu options, a first menu option appearing substantially at the identified location, the remaining menu options appearing at locations proximate to the identified location, and receiving user selection of one of the displayed menu options. In one embodiment, the remaining menu options appear at locations equidistant from the identified location. In one embodiment, receiving user input identifying a location involves determining the location of a cursor. In one embodiment, the remaining menu options appear at regular radial intervals around the identified location.
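One way the displayed layout could be computed, offered only as a sketch (the radius and angular convention are assumptions): the first option sits substantially at the identified location, and the remaining options are placed equidistant from it at regular radial intervals.

```python
import math

def radial_menu_positions(cx, cy, n_options, radius=40.0):
    """Place the first menu option at the identified location (cx, cy)
    and the remaining options equidistant from it, at regular radial
    intervals around it."""
    positions = [(cx, cy)]  # first option substantially at the cursor
    for i in range(n_options - 1):
        angle = 2.0 * math.pi * i / (n_options - 1)
        positions.append((cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle)))
    return positions

# A five-option menu popped up at cursor position (100, 100).
for pos in radial_menu_positions(100.0, 100.0, 5):
    print(pos)
```

With five options, the four ring options land at 90-degree intervals, all the same distance from the cursor.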
- In one embodiment, the method further includes providing hierarchical levels of menu options. In this embodiment, receiving user selection of at least one of the menu options causes display of menu options at a different hierarchical level. In one embodiment, the menu option located substantially at the identified location includes a menu option that causes display of menu options at a hierarchical level higher than the current level.
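The hierarchical behavior, with the center option moving one level up and other options descending or activating, might be modeled as follows; the menu tree and option names are hypothetical:

```python
# Hypothetical menu hierarchy; each level maps a name to its suboptions.
MENU = {
    "root": ["File", "Edit", "View"],
    "File": ["Open", "Save"],
}

class RadialMenu:
    """Tracks the current hierarchical level. The option drawn at the
    identified location moves one level up; selecting any other option
    descends into its suboptions, or activates it if it is a leaf."""
    def __init__(self, tree):
        self.tree = tree
        self.path = ["root"]

    def options(self):
        level = self.path[-1]
        up = self.path[-2] if len(self.path) > 1 else None
        return {"center": up, "ring": self.tree.get(level, [])}

    def select(self, option):
        if option == "center" and len(self.path) > 1:
            self.path.pop()              # go up one hierarchical level
        elif option in self.tree:
            self.path.append(option)     # descend into suboptions
        else:
            return "activate:" + option  # leaf option runs its function
        return None

menu = RadialMenu(MENU)
menu.select("File")
print(menu.options())
print(menu.select("Open"))
```

Selecting "File" redraws the ring with its suboptions; the center slot then offers the way back up, matching the embodiment described above.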
- In one embodiment, the method further includes enabling a user to select menu options to present. In one embodiment, the method further includes selecting menu options to present based at least in part on an application context.
- In another aspect, the invention features a method of receiving user input. The method includes providing hierarchical levels of menu options, receiving user input identifying a location on a graphical user interface, the user input including a location of a cursor, displaying menu options from one hierarchical level, a first menu option appearing substantially at the identified location, the remaining menu options appearing at locations proximate to the identified location and being positioned at regular radial intervals around the identified location, the menu option located substantially at the identified location including a menu option that when activated causes a display of menu options at a hierarchical level one level higher than the current level, and receiving user selection of one of the displayed menu options. In one embodiment, the remaining menu options appear at locations equidistant from the identified location.
- In one embodiment, selecting one of the remaining menu options activates a predetermined function. In one embodiment, selecting one of the remaining menu options causes display of menu options at a hierarchical level one level lower than the current level. In one embodiment, the display of menu options at a hierarchical level one level lower than the level of the selected option involves the display of the selected option substantially at the identified location, and the display of one or more suboptions of the selected option, the suboptions being located proximate to the identified location and being positioned at regular radial intervals around the identified location. In one embodiment, the one or more suboptions of the selected option are displayed based on application context. In one embodiment, the remaining menu options appear at locations equidistant from the identified location.
- In still another aspect, the invention relates to a computer program, recorded on a computer readable medium, for receiving user input. The program includes instructions for causing a processor to receive user input identifying a location on a graphical user interface, to display menu options, a first menu option appearing about the identified location, the remaining menu options appearing at locations proximate to the identified location, and to receive user selection of one of the displayed menu options. In one embodiment, the remaining menu options appear at locations equidistant from the identified location.
- In one embodiment, the instructions that receive user input identifying a location includes instructions that identify the location of a cursor. In one embodiment, the remaining menu options are displayed at regular radial intervals around the identified location.
- In one embodiment, the program further includes instructions that provide hierarchical levels of menu options, and the instructions that receive user selection of at least one of the menu options cause display of different menu options at a different hierarchical level. In one embodiment, the menu option located substantially at the identified location comprises a menu option that causes display of menu options at a hierarchical level one level higher than the current level.
- In one embodiment, the program further includes instructions that select menu options to present. In one embodiment, selecting menu options to present is based at least in part on an application context.
- The foregoing and other objects, aspects, features, and advantages of the invention will become more apparent from the following description and from the claims.
- The objects and features of the invention can be better understood with reference to the drawings described below, and the claims. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the drawings, like numerals are used to indicate like parts throughout the various views.
- FIG. 1A is an image of a screenshot of a graphical user interface for a visual programming system, according to an embodiment of the invention;
- FIG. 1B shows an example of a connection of a source software article to a destination software article, according to an embodiment of the invention;
- FIG. 1C shows a connector having a repeated indication of a source, according to an embodiment of the invention;
- FIG. 2 is an image of a screenshot of a graphical user interface for a visual programming system, according to an embodiment of the invention;
- FIG. 3A is an image of a screenshot of a graphical user interface for a visual programming system, according to an embodiment of the invention;
- FIG. 3B is an image of a schematic of an unconnected abstracted software article, according to an embodiment of the invention;
- FIG. 3C is an image of a screenshot depicting several unconnected abstracted software articles, according to an embodiment of the invention;
- FIG. 4A is a diagram illustrating a “virtual” camera that a user can maneuver to zoom-in and out of a graphic representation of an application as the application is being developed, according to an embodiment of the invention;
- FIG. 4B is a drawing depicting an embodiment of an encapsulation of a plurality of abstractions of software articles, according to principles of the invention;
- FIGS. 5A-5D are embodiments of graphic representations of software articles at varying levels of generality, according to principles of the invention;
- FIGS. 6A-6D are images of a hierarchy of radial popup menus, according to an embodiment of the invention;
- FIG. 7 is a flow diagram of a menu hierarchy, according to an embodiment of the invention;
- FIG. 8 is a diagram of a sample architecture for the visual programming system, according to an embodiment of the invention;
- FIG. 9 is a diagram of an embodiment of a computer network upon which the invention can be practiced;
- FIG. 10 is a conceptual diagram illustrating generation of a virtual display space in accord with an embodiment of the invention;
- FIG. 11 is a schematic view depicting multiple viewing perspectives in accordance with an embodiment of the invention;
- FIGS. 12A-12C are schematic views depicting data objects modeled as a node tree;
- FIG. 13 is a conceptual diagram illustrating use of a plurality of templates in accordance with the invention;
- FIG. 14 is a flowchart depicting a method of rendering detail in accordance with an embodiment of the invention;
- FIG. 15 is an illustrative example of rendering detail in accordance with an embodiment of the invention;
- FIG. 16 depicts illustrative embodiments of breadcrumb trails in accordance with the invention;
- FIG. 17 illustrates use of search terms in accordance with an embodiment of the invention;
- FIG. 18 illustrates operation of a visual wormhole, in accordance with an embodiment of the invention;
- FIG. 19 is a schematic view depicting a viewing system architecture in accordance with an embodiment of the invention;
- FIG. 20 is a schematic view depicting the conversion of a file system directory tree into a hierarchical structure of data objects in accordance with an embodiment of the invention;
- FIG. 21 is a schematic view depicting the conversion of a Web page to a hierarchical structure of data objects in accordance with an embodiment of the invention;
- FIG. 22 is a schematic view depicting the conversion of a Web page to a hierarchical structure of data objects in accordance with an embodiment of the invention;
- FIG. 23 is a schematic diagram depicting the conversion of an XML™ hierarchical structure of data objects to the ZML™ format in accordance with an embodiment of the invention;
- FIG. 24 depicts a method of downloading data from/to a server to/from a PDA client, respectively, in accordance with an embodiment of the invention;
- FIG. 25 depicts illustrative display images of user viewing perspectives as rendered by a PDA in accordance with an embodiment of the invention;
- FIG. 26 depicts illustrative display images of user viewing perspectives as rendered by a wireless telephone in accordance with an embodiment of the invention; and
- FIG. 27 is an embodiment of a handheld wireless navigation device that can be used as a user control in conjunction with the viewing system, according to principles of the invention.
- FIG. 1A shows a screenshot of a visual programming
system user interface 100. In one embodiment, the interface 100 enables a programmer to quickly assemble an application by assembling and interconnecting different abstractions of software articles 102a-102d. This enables the user to focus on problem-solving using existing abstractions of software articles as the programmer's toolset, instead of spending time writing code to connect the different pieces. The interface 100 provides rich graphic feedback, making application development more intuitive, faster, and more enjoyable. - In greater detail, FIG. 1A shows graphic representations of different abstractions of software articles 102a-102d. Each abstraction of a software article 102a-102d provides one or more software services. Examples of software articles include components (e.g., COM/DCOM Objects (Component Object Model/Distributed Component Object Model), ActiveX components, and JavaBeans™), software routines (e.g., C++, Pascal, and Java™ routines), functions provided by commercial products (e.g., Microsoft Excel™ and Word™, MatLab™, and LabView™), and access to a database (e.g., using ODBC (Open Database Connectivity)). A software article may also handle HTTP calls needed to access Internet sites (e.g., retrieving stock prices from a URL (Uniform Resource Locator)), e-mail, and FTP (File Transfer Protocol) services. In implementations that support distributed programming, the actual instructions for the software article need not reside on the device displaying the graphical representations.
- The user interface presents abstractions of software articles 102a-102d as "black boxes" by representing the software article as having simple input and/or output ports corresponding to software article input and output parameters. In the electronic arts, the term "black box" is used to denote a circuit element having one or more inputs and one or more outputs, but whose detailed internal structure and contents remain hidden from view, or need not be known to the application engineer in detail. In the electronic arts, a "black box" is typically characterized by a transfer function, which relates an output response of the "black box" to an input excitation, thereby providing an engineer with the necessary information to use the "black box" in an application. As shown in FIG. 1A, the boxes are not in fact "black," but can instead present pictures and text that indicate their function and current state. The boxes are "black boxes" to the extent that the programmer does not need to know or understand the precise manner in which the particular box accomplishes the task that it is designed to perform.
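The "black box" notion above can be made concrete as a small data structure: an article exposes named input and output ports and a hidden transfer function. This is only an illustrative sketch; the patent does not prescribe any API, and every name here is invented.

```python
class SoftwareArticleAbstraction:
    """Illustrative "black box": named ports plus a hidden transfer function.

    Callers see only the port names; how `transfer` computes its outputs
    stays opaque, matching the description in the text.
    """
    def __init__(self, name, inputs, outputs, transfer):
        self.name = name
        self.inputs = list(inputs)     # input port names
        self.outputs = list(outputs)   # output port names
        self._transfer = transfer      # hidden internal behavior

    def apply(self, values):
        """Map a dict of input-port values to a dict of output-port values."""
        return self._transfer(values)

# A hypothetical doubling article with one input and one output.
doubler = SoftwareArticleAbstraction(
    "doubler", ["IN"], ["OUT"],
    lambda v: {"OUT": v["IN"] * 2},
)
```

A user of the abstraction needs only the port lists and `apply`; the lambda could equally wrap a spreadsheet, a MatLab™ call, or an HTTP fetch.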
- To create an application, a user simply connects an
output port 103 of an abstraction of a source software article 102a to the input ports 105 of one or more destination abstractions of software articles 102b, as shown in FIG. 1B. The connection is performed by drawing a line or a "wire" 104a, using a pointing device such as a mouse, from an output port 103 on one abstraction of a software article to an input port 105 of an abstraction of a software article. The wire 104a can indicate a direction of information flow. In one embodiment, the drawing is performed by locating a cursor at a desired end of the line, depressing a mouse button, moving the mouse cursor by manipulating the mouse from the beginning of the line to the desired end location of the line, and releasing the mouse button. In alternative embodiments, the user can use a trackball, a keyboard, voice commands, or any other suitable input device. In one embodiment, a keyboard can be used by activating particular keys for the functions of moving the cursor (e.g., arrow keys), starting the line (e.g., the combination <Control>-D to denote "pen down"), and ending the line (e.g., the combination <Control>-U to denote "pen up"). Other systems and methods of drawing lines will become apparent to those of ordinary skill in the software arts. In some embodiments, an application can involve connecting a plurality of abstractions of software articles. In some embodiments, an application can involve connecting an output of an abstraction of a software article to an input of the same abstraction of a software article, as when a recursive action is required. According to this embodiment, multiple functional aspects of a single software article can be interconnected to create a program. A more detailed description of the components of abstractions of software articles is given below. - In some embodiments, the system can indicate to the user whether a proposed connection is allowable.
In one embodiment, the wire used for a connection is green to indicate an acceptable connection (for example, from an output port to an input port), and the wire turns red to indicate an unacceptable connection (such as from an output port to another output port).
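The green/red feedback rule just described reduces to a check on the kinds of ports at the two ends of a proposed wire. A minimal sketch; the port-kind strings and function name are assumptions:

```python
def wire_color(source_kind, target_kind):
    """Return the feedback color for a proposed connection.

    Acceptable wires run from an output port to an input port and are
    green; anything else (output->output, input->input, input->output)
    is red, as described in the text.
    """
    if source_kind == "output" and target_kind == "input":
        return "green"
    return "red"
```

The same predicate could gate whether the release of the mouse button actually records the wire.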
- In one embodiment, as the user designates operators, operator functions, and operator function variables, the CommonFace system builds up a connection table which defines the inputs and outputs of all the wires. The system uses the connection table to determine how data, commands, and the like are translated and transmitted from software article to software article to perform the programmed operations.
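A minimal sketch of such a connection table, assuming a simple list-of-records structure (the patent does not fix a format; the field names mirror the CFML attributes listed later in the text):

```python
def build_connection_table(wires):
    """Record each wire's endpoints so the system can route data later.

    `wires` is an iterable of (source_article, source_port,
    target_article, target_port) tuples.
    """
    return [
        {
            "source_component": sa, "source_port": sp,
            "target_component": ta, "target_port": tp,
        }
        for (sa, sp, ta, tp) in wires
    ]

def destinations(table, article, port):
    """Look up every (article, port) fed by the given output port."""
    return [
        (row["target_component"], row["target_port"])
        for row in table
        if row["source_component"] == article and row["source_port"] == port
    ]
```

With the table in hand, delivering a value is a lookup followed by a dispatch to each destination.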
- Once a connection has been established, the destination abstraction of
software article 102b continually receives data or calls from the abstraction of the source software article 102a. For example, as shown in FIG. 1A, a user draws a connection 104b from the output port 103 of an abstraction of a "mouse" software article 102b to the input port 105 of an abstraction of an "output" software article 102c. After establishing this connection 104b, the abstraction of the "mouse" software article 102b appears to continually feed the abstraction of the "output" software article 102c with data describing user manipulation of a mouse. The abstraction of the "output" software article 102c appears to display these values in real-time. In fact, the functions described are performed transparently to the user by the associated software articles. Another connection 104c from an output port 103 of the abstraction of the "mouse" software article 102b carries data to an input port 105 of the abstraction of the "fireworks" software article 102d. As depicted in FIG. 1C, in one embodiment, an identifier 103′ of a connection 104a or wire corresponding to an output port 103 can be repeated at either end of the connection 104a, so that the user can see which source is connected to which destination, without having to retrace the entire connection 104a from input port 105 back to output port 103. In one embodiment, the user can select a connection, and the system automatically traverses the connection to display the opposite end of the connection and any associated abstractions. - In some embodiments, selecting a representation of a software article instantiates the underlying software article in addition to the system displaying the graphical representation of the software article. For example, selecting an Excel™ spreadsheet causes both the display of an abstraction of the software article (that is, the display of a graphical representation of the software article) and the instantiation of an Excel™ spreadsheet itself.
In some embodiments, the graphical representation of the software article serves both to identify the related software article and to indicate its state. For example, an Excel™ spreadsheet comprising two columns and two rows can be represented by a graphical representation indicating an Excel™ spreadsheet having two inputs and two outputs, while a spreadsheet with three rows and two columns can be represented by a graphical representation indicating an Excel™ spreadsheet having three inputs and two outputs.
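The row/column example above implies a simple mapping from spreadsheet shape to port counts (rows become inputs, columns become outputs). A sketch, with the mapping inferred from the example rather than stated as a general rule in the text:

```python
def spreadsheet_port_counts(rows, cols):
    """Infer port counts for an Excel-style article from its shape.

    A rows x cols sheet is drawn with `rows` inputs and `cols` outputs,
    matching the two-by-two and three-by-two examples in the text.
    """
    return {"inputs": rows, "outputs": cols}
```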
- In one embodiment, a graphical representation generates output on its own. In another embodiment, a graphical representation generates output based on input that it receives. In yet another embodiment, a graphical representation performs a function without inputs and without outputs.
- Features referred to as panning and zooming, the operation of which is described in greater detail below with respect to FIGS. 10-25, are shown in FIGS. 1A, 2 and 3A. The
user interface 100 provides a "virtual camera" that enables a user to smoothly pan over, and zoom in and out of, a work space. The "virtual camera" is described in more detail below. The detail shown for each abstraction of an article and connection image is a function of the virtual distance between the abstraction and a virtual viewing position of the user, represented by the virtual camera. For example, in FIG. 1A a user has moved the virtual camera far away from the work space. This view displays all abstractions of the application articles 102a-102d and connections 104a-104c. However, in this view, details, such as the names of the software article ports, are difficult to discern. After zooming in (e.g., moving the virtual camera closer to the work space) in FIG. 2, the interface displays greater detail of the portion of the system in view. While zooming in from the virtual camera position of FIG. 1A moves some of the application features out of view, a user can see greater detail, such as the names of the ports 106a-106c, and article configuration information 108, such as a control that determines whether transmission of data between articles is "automatic" or "manual". - FIG. 3A, which depicts the details of an abstraction of a "mouse"
software article 102b, further illustrates the results of zooming in. In this embodiment, the abstraction of the "mouse" software article 102b includes, on the left, input ports, respectively labeled "MIN X" 302, "MAX X" 304, "MIN Y" 306 and "MAX Y" 308. These inputs define the range of motion of an object, such as a cursor, in Cartesian coordinates, such as columns and rows, respectively, of a display. An input depicted by the line or wire 310 is shown going to port "MIN X" 302 from a source not shown. On the right side of the embodiment are three output ports, respectively labeled "HORIZONTAL" 320, "VERTICAL" 322, and "CLICK" 324. The output ports indicate the horizontal position of a mouse, the vertical position of the mouse, and the state of a mouse button, respectively. In this embodiment, there are two indicators 330 and 332, which allow the user to choose whether the article transmits data automatically (indicator 330 active) or transmits data only upon command of the user, who activates indicator 332, for example with a pointing device such as a mouse. Finally, in this embodiment, the abstraction further includes a simulated mouse pad 340 and a simulated mouse 342, which moves about the mouse pad in conformity with the input signals obtained from an associated input device, such as a real mouse pointing device operated by the user.
abstracted software article 344. At the left of FIG. 3B is an input port 346, which can accept input information from an abstracted software article. The input port 346 has a concave form indicative of the ability to accept input information. At the right of FIG. 3B is an output port 348, which can transmit information to an abstracted software article. The output port 348 has a convex form indicative of the ability to transmit output information. At the center of FIG. 3B is a body 350, which represents the processing and control functions of the abstracted software article 344. The body 350 can also be used to express visually, for the benefit of a user, information about the capabilities, functions and/or features of the abstracted software article 344, and the software article that it represents. In some embodiments, the body 350 can be augmented with text indicative of some feature or description that informs the user of the purpose and capabilities of the abstracted software article 344, and the software article that it represents. - FIG. 3C is an image of a screenshot 352 depicting several unconnected embodiments of abstracted software articles. In the upper left of FIG. 3C, there is shown an embodiment of a generic abstraction of an "input"
software article 354, having a single output port labeled "VALUE" 356 and having two indicators that allow the user to choose between automatic and manual transmission of data. The "input" software article 354 is useful for accepting input of a value from, for example, a keyboard, a touchscreen, or a digital input such as an analog-to-digital converter. In the lower left is another depiction 360 of an embodiment of the abstraction of the mouse software article 102b of FIGS. 1 and 3A. At the center of FIG. 3C is depicted an embodiment of an abstraction of an Excel™ software article 362, that comprises an input port on the left side having a concave form indicative of an input direction, and labeled "PORT 1" 364; an output port on the right side having a convex form indicative of an output direction, and labeled "PORT 1" 366; an iconic representation 368 that a user can recognize as an Excel™ application; and two indicators that allow the user to choose between automatic and manual transmission of data. - At the upper right of FIG. 3C is depicted an embodiment of an abstraction of a MatLab
™ software article 374, that has four input ports. On the abstraction of the MatLab™ software article 374, there are two indicators that allow the user to choose between automatic and manual transmission of data. The abstraction of the MatLab™ software article 374 further comprises a body that is an iconic representation 387 that a user can recognize as a MatLab™ application. At the lower right of FIG. 3C is depicted an embodiment of an abstraction of an "output" software article 388, having a single input port labeled "VALUE" 390 and having two indicators that allow the user to choose between automatic and manual transmission of data. The "output" software article 388 is useful for displaying a value, for example to a video display, or to a printer, or both. - The zooming and panning features enable the user to easily navigate the programmed design over many orders of magnitude to grasp both micro and macro operation. The zoom and pan are smooth and analog-like, with nearly infinite degrees of zoom and a nearly infinitely sized display space. A user can, however, "bookmark" different coordinates or use system-provided "bookmarks." A "bookmark" is a virtual display position that has been assumed at some time by the virtual camera, and recorded for future use, so as to return the virtual camera to a specific location. For example, user interface buttons (not shown) enable a programmer to quickly move a camera to preset distances from a plane upon which the abstractions of software articles appear. As discussed in further detail below with respect to FIGS. 9-24, the bookmarked position can be anywhere in a multi-dimensional display space.
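The bookmark behavior described above amounts to saving and restoring camera coordinates by name. A sketch, assuming a three-coordinate camera position (the patent allows any multi-dimensional position) and an invented API:

```python
class CameraBookmarks:
    """Save named virtual-camera positions and return them on demand."""
    def __init__(self):
        self._marks = {}

    def save(self, name, x, y, z):
        """Record the current camera position under `name`."""
        self._marks[name] = (x, y, z)

    def recall(self, name):
        """Return the stored position; the caller moves the camera there."""
        return self._marks[name]

marks = CameraBookmarks()
marks.save("overview", 0.0, 0.0, 100.0)  # a far-away, whole-design view
```

System-provided bookmarks (the preset-distance buttons mentioned above) would simply be entries pre-populated in the same store.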
- As shown in FIG. 4A, the
virtual camera 402 can move in any dimension in the display space 404. Preferably, the camera 402 is axially fixed. That is, while a user can freely move the camera 402 along the z-axis, and translate the camera coordinates along the x-axis and the y-axis, the user may not rotate the camera. This restriction on the number of degrees of freedom of the virtual camera eases camera control. However, in other embodiments the user, and thus the camera 402, can also move rotationally. A variety of three-dimensional graphics engines can be used to provide the virtual camera. In one embodiment, the Java™ JAZZ™ zooming library package is used. - As shown in FIG. 4A, the application under development, represented by the
interconnected software articles, resides on a single hierarchical plane 406. However, the system does not impose this constraint. Other implementations may feature software article representations located on multiple hierarchical planes having differing z-axis coordinates. For example, a user can elevate important (perhaps high-level) design features. Additionally, a user can encapsulate collections of software articles into a single larger article. Such important or encapsulated articles may also appear elevated. Although shown in Cartesian space, the articles can also appear in cylindrical, spherical, or other spaces described by different multi-dimensional coordinate systems. - FIG. 4B is a drawing 416 depicting an embodiment of an
encapsulation 418 of a plurality of abstractions of software articles. In FIG. 4B, a plurality of software articles have been encapsulated by first connecting the plurality of abstracted software articles as the user desires, leaving at least one port 420 unconnected. FIG. 4B depicts the encapsulated plurality of abstracted software articles 418 to the user as one larger abstracted software article, having as input ports those unconnected input ports, if any, of the individual abstracted software articles that are part of the encapsulation, and having as output ports those unconnected output ports, if any, of the individual abstracted software articles that are part of the encapsulation. The system creates a software article that corresponds to the encapsulated abstraction by combining the corresponding software articles in a manner corresponding to that carried out by the user. In a preferred embodiment, the system performs this interconnection using Java™ adapters which involve ActiveX and JNI components. These adapters are described in greater detail below with regard to FIG. 8. When viewed from a large virtual distance, the encapsulated plurality of abstractions 420 appears to be a single abstraction 418 of a single software article. However, as the view of the encapsulation is increased in size by viewing the encapsulated plurality of abstractions from a virtually close distance, the system displays the internal components of the encapsulation to the user to enable the user to recognize that the encapsulation comprises a plurality of interconnected abstractions of software articles. As depicted in FIG. 4B, an encapsulated software article can be a component of a further encapsulation. In FIG. 4B, three levels of encapsulation are indicated by nested encapsulated software articles. - The user-interface provides full-color graphics and animation to visually express the function and state of system software articles and connections.
Additionally, in some embodiments, a system can be "live" during development. In such cases, the system animates and updates the system display to reflect the current system state. The "wires" 104a-104c or lines used to connect two ports can depict the activity of transmitting information or signals between the two ports. In some embodiments, the "wire" 104a-104c can change appearance to indicate activity in progress. For example, the "wire" 104a-104c can change color during periods of activity. Alternatively, or in addition, the "wire" 104a-104c can change width during periods of activity. In some embodiments, the "wire" 104a-104c can change appearance from one end to the other, such as by simulating an activity meter, the extent of the changed portion indicating the progress of the transmission from 0% to 100% as the transmission occurs. In some embodiments, the "wire" 104a-104c can flash or blink to indicate activity. In other embodiments, images of objects such as a person running while carrying an item, a train travelling, a car moving, or the like can "run" along the "wire" 104a-104c to indicate transmission of information.
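The port-exposure rule for encapsulation described above with regard to FIG. 4B — the composite article exposes exactly those member ports left unconnected by internal wires — can be sketched as follows. The data shapes are assumptions for illustration only.

```python
def exposed_ports(articles, internal_wires):
    """Compute the composite article's ports after encapsulation.

    `articles` is a list of dicts with "inputs"/"outputs" port-name lists;
    `internal_wires` is a list of (source_port, target_port) pairs lying
    wholly inside the encapsulation. Any port touched by no internal wire
    remains exposed on the larger abstracted software article.
    """
    connected = {port for wire in internal_wires for port in wire}
    return {
        "inputs": [p for a in articles for p in a["inputs"] if p not in connected],
        "outputs": [p for a in articles for p in a["outputs"] if p not in connected],
    }
```

Because the result has the same shape as a member article, the rule applies unchanged to nested encapsulations like the three levels shown in FIG. 4B.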
- The visualization system can access a potentially infinitely large workspace, because it can pan in two dimensions along the plane of a workspace, and because it can zoom closer to and further from a plane of the workspace. The system can display an area which represents a portion of the workspace, the area of which depends on the relative virtual distance between the virtual camera and the plane of the workspace, and the location of which depends on the position of the virtual camera with regard to coordinates, such as Cartesian coordinates, defined on the plane of the workspace.
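The relationship just described — visible area set by the camera's distance from the plane, location set by the camera's coordinates — can be sketched for a square view. The field-of-view constant is an assumed parameter, not from the patent.

```python
def visible_region(cam_x, cam_y, cam_z, fov_scale=1.0):
    """Return (left, bottom, right, top) of the workspace rectangle in view.

    The half-width grows linearly with the camera's distance cam_z from
    the workspace plane, so zooming out (larger cam_z) reveals more of
    the potentially infinite workspace; panning shifts the center.
    """
    half = cam_z * fov_scale
    return (cam_x - half, cam_y - half, cam_x + half, cam_y + half)
```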
- The system can depict abstractions of software articles using a variety of techniques. As shown in FIGS. 5A-5D, abstractions of software articles may be depicted in a variety of ways. In FIG. 5A, an abstraction of a software article is depicted as a
product icon 500. The embodiment depicted is that of a graphical display that plots mathematical functions. This provides quick identification of the capabilities of a software article underlying a particular abstraction. As shown in FIG. 5B, abstractions of software articles may also be depicted with a more functionally descriptive icon 502 (e.g., a sine wave for a sine wave generator), and may additionally carry an alphanumeric label. - Abstractions of software articles may also include updated
graphics that reflect their state. For example, in FIG. 5C, the abstraction of software article 504 shows the state of a MatLab™ plotting tool by depicting the graph being plotted. As another example, in FIG. 5D, the abstraction of software article 506 features a real-time visualization of the state (e.g., coordinates and button operation) of an abstraction of a mouse software article. The mouse 508 moves around in real-time on the virtual "mouse-pad" 509 synchronously with the user moving a real physical mouse pointing device. To give the illusion of continuous feedback, the frame-rate is preferably 15 frames/sec or greater. By comparison, to save transmission bandwidth, the mouse can alternatively be represented at a slower frame rate, for example as a display such as that of FIG. 5A, in which motion is not apparent at all, or in which the motion is discontinuous, and the mouse appears to move in a hesitating manner. - The system eases development by providing, at input ports, a graphic indicator that identifies the source port of a software article, along with an indicator at output ports that identifies a destination. For example, an output port of a matrix operation may be a small icon of a grid. Thus, a user can identify the input source without tracing the wire back. The interface can also display the state of the software article ports. For example, ports can provide visualizations of the data that is being imported or exported (e.g., an image received by a software article may be represented with a "thumbnail," or iconic version of the image transferred).
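The two update regimes described above (15 frames/sec or greater for the illusion of continuous feedback, a slower rate to save bandwidth) suggest a simple throttle that drops state updates arriving faster than a target frame interval. A sketch; the class and its API are invented for illustration.

```python
class FrameThrottle:
    """Pass at most one display update per frame interval; drop the rest.

    At 15 frames/sec the interval is 1/15 s, enough for apparently
    continuous motion; a longer interval trades smoothness for bandwidth,
    producing the hesitating motion described in the text.
    """
    def __init__(self, frames_per_sec):
        self.interval = 1.0 / frames_per_sec
        self._last = None   # timestamp of the last rendered update

    def accept(self, timestamp):
        """Return True if an update at `timestamp` should be rendered."""
        if self._last is None or timestamp - self._last >= self.interval:
            self._last = timestamp
            return True
        return False
```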
- As shown in FIG. 6A, the user interface also features hierarchical
radial menus 602. The menu 602 operates like traditional hierarchical menus (e.g., a Windows 98™ menu bar); however, each menu option 604a-604d appears equidistant from a center point 606. The menu 602 allows efficient access to the options 604a-604d by minimizing the amount of mouse movement needed to reach an option. - As shown, the options 604a-604d occur at regular intervals around a circumference about the
center point 606. That is, an option appears at an angular position every [(360)/(number of menu options)] degrees. For example, an embodiment of an eight-option menu features options located at North-West, North, North-East, East, South-East, South, South-West, and West. Each option 604a-604d can appear as a circular text-labeled icon that highlights upon being co-located with a cursor (e.g., "mouse-over"). A user activating the center option 606 moves the displayed menu up one level of hierarchy. A user activating the outer options 604a-604d either causes the display of another radial menu one level lower in the hierarchy, or causes a command to be executed if there is no lower hierarchy of commands corresponding to the selection activated (e.g., the activated menu option is a leaf of a menu tree, as explained further below). - Preferably, the
radial menu 602 is context-sensitive. For example, given a history of user interaction, application state, and user-selected items, the system can determine a menu of relevant options. In one embodiment, the radial menu 602 that appears is also dependent on the location of the cursor when the radial menu 602 is activated. - By default, the radial menu is defined not to appear on the user interface to conserve screen real-estate. A user can call up the menu as needed, for example, by clicking a right-most mouse button. An embodiment of a menu system in which the menu is normally hidden from view, and in which the menu appears on command, is called a popup menu system. Preferably, the location of the mouse cursor at the time the mouse button is clicked is used as the center-
point 606 of the radial menu 602. The embodiments of the menu system shown in FIGS. 6A-6D are radial popup menus. - FIGS. 6A-6D are images of a hierarchy of radial popup menus (referred to as "RadPop" menus). FIG. 6A shows an embodiment of an uppermost hierarchical level of a
RadPop menu 602. In this embodiment, the central menu option, labeled "Home" 606, is located at the position that the cursor occupies when the right mouse button is depressed, activating the RadPop menu system. Activation of the centrally located menu option "Home" 606 causes the RadPop menu 602 to close, or to cease being displayed. The uppermost hierarchical level menu 602 has four options 604a-604d. A first option 604a is the View option, which, if activated, changes a view of the video display. A second option 604b is the Help option, which, if activated, displays a help screen. A third option 604c is the Insert option, which, if activated, opens a lower-level menu of insertion action options. A fourth option 604d is the File option, which, if activated, opens a menu of actions that can be performed on a file. In this embodiment, the user selects the Insert option 604c, and the system responds by opening the next lower level of options as another RadPop 620 (see FIG. 6B) whose central menu option overlays the central menu option of the higher hierarchical level RadPop 602. - FIG. 6B shows an image of an embodiment of the
RadPop 620. RadPop 620 appears at the same position at which RadPop 602 was displayed. The central menu option is labeled "Insert" 622. Activation of the centrally located menu option "Insert" 622 causes the Insert RadPop menu 620 to close, or to cease being displayed, and to be replaced by the hierarchically next higher RadPop menu, Home 602, of FIG. 6A. The "Insert" 620 RadPop menu has three options, labeled "Annotation" 624, "Built-In Components" 626, and "File" 628, respectively. A user who selects the menu option "Built-In Components" 626 causes the system to move down one additional level in the hierarchy of menus, to FIG. 6C, by closing RadPop 620 and displaying RadPop 630. The menu options "Annotation" 624 and "File" 628, respectively, when activated can cause an action to be performed or can move the user one level down in the hierarchy, depending on how the system is designed.
RadPop 630. RadPop 630 appears at the same position at which RadPop 620 was displayed. The central menu option is labeled "Built-In Components" 631. Activation of the centrally located menu option "Built-In Components" 631 causes the Built-In Components RadPop menu 630 to close, or to cease being displayed, and to be replaced by the next higher RadPop menu, "Insert" 620, of FIG. 6B, which moves the user up one level in the hierarchy. The "Built-In Components" RadPop menu 630 has four options, labeled "Java™ Components" 634, "MatLab™ Functions" 636, "Excel™ Sheets" 638, and "LabView™ Files" 640, respectively. A user who selects the menu option "Java™ Components" 634 causes the system to move down one additional level in the hierarchy of menus, to FIG. 6D.
RadPop 660. RadPop 660 appears at the same position at which RadPop 630 was displayed. The central menu option is labeled "Java™ Components" 640. Activation of the centrally located menu option "Java™ Components" 640 causes the Java™ Components RadPop menu 660 to close, or to cease being displayed, and to be replaced by the next higher RadPop menu, Built-In Components 630, of FIG. 6C, which moves the user up one level in the hierarchy. The "Java™ Components" RadPop menu 660 has seven radial options, labeled "Email" 642, "Fireworks" 644, "FTP" 646, "Text Input" 648, "Mouse" 650, "Text Output" 652, and "Stock Ticker" 654, respectively. A user can select any one of the seven radial menu options, each of which opens a respective interaction with the user, in which a component is presented for editing and insertion into a design of an application, as an abstracted software article. For example, in one embodiment, "Stock Ticker" 654 is a component that has the capability to read the statistical information relating to a stock symbol representing a publicly traded stock, mutual fund, bond, government security, option, commodity, or the like, on a regular basis, such as every few seconds. In this embodiment, "Stock Ticker" 654 reads this information via a connection over the Web or the Internet to a site that records such information, such as a brokerage, or a stock exchange. In this embodiment, "Stock Ticker" 654 provides this information as an output stream at an output port of an abstraction of a software article, and the output stream can be connected to other abstractions of software articles, such as an abstraction of a display or output software article for viewing the raw data.
In another embodiment, the data stream from the "Stock Ticker" 654 component can be transmitted to one or more software articles, such as an Excel™ spreadsheet software article that can analyze the data according to some analytical function constructed therein, which analyzed data can then be transmitted to an output software article for display. The application example involving transmitting information obtained by a Stock Ticker to an Excel™ spreadsheet for display is programmed by invoking the Stock Ticker 654 abstraction of a software article, providing a ticker symbol, invoking an Excel™ spreadsheet, entering in the spreadsheet an analytical function (or invoking a spreadsheet that has already been programmed with a suitable analytical function), and invoking an output software article. The software articles appear on the user's computer display as the abstractions of software articles previously described. The user wires a connection from an output port of the abstraction of the Stock Ticker software article to an input port of the abstraction of the Excel™ spreadsheet software article, and wires a connection from an output port of the abstraction of the Excel™ spreadsheet software article to an input port of the abstraction of the display software article. The system automatically makes the appropriate connections for data to flow from one software article to the next, as is described in more detail below with regard to FIG. 8. The radial menus of FIGS. 6A-6D enable a user to quickly navigate up and down a hierarchy of levels. - FIG. 7 shows an exemplary hierarchy of
menu options 700 in the form of a tree structure 702. As shown, the menu of FIG. 6A is the first hierarchical level. In this embodiment, the tree 702 has as its "root" the node labeled "Home" 704. One level down in the hierarchical tree 702 are four options, "File" 706, "Help" 708, "View" 710, and "Insert" 712. User activation of Insert 712 causes the system to descend an additional level. The next lower level has three menu options, "File" 714, "Annotation" 716, and "Built-In Components" 718. User selection of "Built-In Components" 718 causes the system to descend yet another level. The next level has four menu options, "Java™ Components" 720, "MatLab™ Functions" 722, "Excel™ Sheets" 724, and "LabView™ Files" 726. When a user triggers the "Java™ Components" 720 menu option, a still lower hierarchical level is reached, which comprises seven components including "Email" 728, "Fireworks" 730, "FTP" (File Transfer Protocol) 732, "Text Input" 734, "Mouse" 736, "Text Output" 738, and "Stock Ticker" 740. In this embodiment, user selection of one of the seven menu options last enumerated causes a software article to be activated, and the corresponding abstraction of the software article to be visible on the user's computer display, for customization by the user, for example, indicating what file is desired to be moved using the FTP protocol 732. As previously indicated with regard to FIGS. 6A-6D, the menu option that connects one hierarchical level with a higher hierarchical level, such as the "Java™ Components" 720 menu option, which connects the two lowest hierarchical levels of the tree 702, also serves as the central menu option for the next lower level, and causes the system to move up a level if activated by the user. Those of ordinary skill in the programming arts will understand that menus having any desired number of menu options, and any number of desired hierarchical levels, can be constructed using the systems and methods described herein.
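The hierarchy of FIG. 7 and the radial placement rule of FIG. 6A (one option every 360/(number of options) degrees) can be sketched together. The nested-dict encoding and function names are illustrative assumptions, not the patent's data model.

```python
# The menu tree of FIG. 7; leaves map to None (activating them runs a command).
MENU_TREE = {
    "Home": {
        "File": None, "Help": None, "View": None,
        "Insert": {
            "File": None, "Annotation": None,
            "Built-In Components": {
                "Java Components": {
                    "Email": None, "Fireworks": None, "FTP": None,
                    "Text Input": None, "Mouse": None,
                    "Text Output": None, "Stock Ticker": None,
                },
                "MatLab Functions": None, "Excel Sheets": None,
                "LabView Files": None,
            },
        },
    },
}

def descend(tree, path):
    """Follow a path of menu selections; None means a leaf was reached."""
    node = tree
    for label in path:
        node = node[label]
    return node

def option_angles(n):
    """Angular positions in degrees of n radial options, one every 360/n."""
    step = 360.0 / n
    return [i * step for i in range(n)]
```

Activating the central option simply drops the last element of the path, moving the user up one level, as described for the RadPop menus above.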
- The underlying visual programming system can be implemented using a wide variety of very different architectures. As an example, FIG. 8 shows one embodiment of a visual programming system that uses adapter wrappers 804 to make different software articles 802 uniformly accessible to a middleware hub 806 software article. In one embodiment, the hub software article 806 is depicted as having four docking ports, which are shown as concave semicircular ports 806 a-806 d. The hub software article 806 can have a plurality of docking ports, limited in number only by the time delays associated with transmitting information between ports. The hub 806 monitors the states of the different software articles 802, and initiates and handles communication between the software articles 802. - The software articles 802 and their respective adapters 804 have equal access to system resources. In a preferred embodiment, the software articles 802 are abstracted as Java™ code modules, using technologies such as Java™ Serialization or the Extensible Markup Language (XML). The system has the property of persistence of state, in which a model can be saved (e.g., appear to be shut down) and restored later, appearing to “remember” where it was when it was last saved.
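The port-to-port data flow that the hub mediates (for example, the Stock Ticker to spreadsheet to display chain described earlier) can be sketched roughly as follows; the `Article` class, its method names, and the spreadsheet's transform are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of wiring software articles' output ports to input
# ports; data pushed into one article flows along each wire downstream.

class Article:
    def __init__(self, name, transform=lambda x: x):
        self.name = name
        self.transform = transform   # analysis applied to incoming data
        self.wires = []              # downstream input ports
        self.received = []

    def wire_to(self, target):
        """Connect this article's output port to target's input port."""
        self.wires.append(target)

    def send(self, value):
        result = self.transform(value)
        self.received.append(result)
        for target in self.wires:    # data flows along each wire
            target.send(result)

ticker = Article("Stock Ticker")
sheet = Article("Excel", transform=lambda price: round(price * 1.05, 2))
display = Article("Display")

ticker.wire_to(sheet)
sheet.wire_to(display)
ticker.send(100.0)  # a quote flows ticker -> spreadsheet -> display
```

In the patent's system the equivalent connections are made automatically by the hub once the user draws the wires on screen.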
- CommonFace systems may be saved, shared, loaded, and created using a mark-up language similar to HTML, XML, and other tag-based languages. In one embodiment, tags include <component> and <wire>. In one embodiment, <component> attributes include: type, id. In one embodiment, <wire> attributes include: source_component, source_port, target_component, target_port.
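These <component> and <wire> tags map naturally onto an in-memory object model; a minimal sketch, with field names taken from the attribute lists above and the container classes assumed for illustration:

```python
# Sketch of an object model for components and wires, as the hub might
# maintain it; the dataclass layout is an assumption, not the patent's code.

from dataclasses import dataclass, field

@dataclass
class Component:
    type: str
    id: int

@dataclass
class Wire:
    source_component: int
    source_port: str
    target_component: int
    target_port: str

@dataclass
class ObjectModel:
    components: dict = field(default_factory=dict)
    wires: list = field(default_factory=list)

    def add_component(self, c):
        self.components[c.id] = c

    def remove_component(self, cid):
        self.components.pop(cid, None)
        # deleting a component also removes any wires touching it
        self.wires = [w for w in self.wires
                      if cid not in (w.source_component, w.target_component)]

    def add_wire(self, w):
        self.wires.append(w)

model = ObjectModel()
model.add_component(Component("foo", 1))
model.add_component(Component("bar", 2))
model.add_wire(Wire(1, "port_1", 2, "port_x"))
```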
- An example of CFML (CommonFace MarkUp Language) is presented below:
<component type=foo id=1></component>
<component type=bar id=2></component>
<wire source_component=1 source_port=port_1 target_component=2 target_port=port_x></wire>
- In greater detail, the
hub 806 maintains an object model (e.g., a Java™ data structure) of the application software articles 802 and software article connections. When a user adds or deletes operator software articles 802, the system correspondingly updates the object model. Similarly, when a user specifies an operator software article connection, the system represents this in the object model by identifying the desired transfer of data from operator software article A to operator software article B, or the methods or services to invoke in operator software article B by operator software article A. In one embodiment, communication between operator software articles 802 may be performed using Java™ object passing, a remote procedure call (e.g., RMI (Remote Method Invocation)), or a TCP/IP (Transmission Control Protocol/Internet Protocol) based protocol. - FIG. 8 shows four operator software articles, labeled “Math” 808, “Image” 810, “Data” 812, and “Internet” 814, respectively. In some embodiments, each software article accepts input and provides output in a format unique to that software article. For example, in one embodiment, the Math software article 808 uses formats such as integers and floating point values, m x n arrays of data for matrices and vectors (where m and n are positive integers), series of coefficients and powers for polynomials, series of coefficients for Fourier and digital filters, and the like. In one embodiment, the Image software article 810 uses one or more of files formatted according to protocols such as the bitmap (.bmp) protocol, the JPEG (.jpg) protocol, or such image file protocols as .tif, .gif, .pcx, .cpt, .psd, .pcd, .wmf, .emf, .wpg, .emx, and .fpx. In one embodiment, the Data software article 812 uses formats including, but not limited to, a single bit, a byte, an integer, a long integer, a floating point, a double floating point, a character, and the like. In one embodiment, the
Internet software article 814 can use protocols such as TCP/IP, DSL, Component Object Model (COM), and the like. None of the four types of software articles in these exemplary embodiments is capable of communicating directly with any other software article. This incompatibility is denoted graphically by depicting each distinct software article with a terminal that is unique in shape and size, for example, the Math software article 808 having an arrow-shaped terminal 808 a, and the Internet software article 814 having a square-shaped terminal 814 a. - There are also depicted in FIG. 8 four
adapter articles 816, 818, 820, and 822. The adapter article 816 has a triangular terminal at one end, which is the mating triangular shape to that of the terminal of the Math software article 808. The opposite end of the adapter 816 is a convex semicircular shape, which is designed to mate with any of the plurality of terminals of the hub software article 806 having a concave semicircular shape. A user can recognize that the adapter article 816 is adapted to communicate bidirectionally with the hub software article 806 on one end and, on the other, with a software article such as the Math software article 808 having the appropriate mating terminal. In terms of connections which take place in the system, and which are transparent to the user, the adapter 816 is designed to translate between the information flow to and from the hub software article 806 in the native language of the hub software article 806, and the information flow to and from any software article using the protocols that the Math software article 808 uses, in the format or formats that such protocol requires. In similar fashion, the adapters 818, 820, and 822 perform similar bidirectional translations between the native language of the hub software article 806 on one end, and the protocols used by a particular kind of software article, such as the Image 810, Data 812, and Internet 814 software articles, respectively. The mating shapes are visual indications to the user as to which adapter functions with which software article. The user need not be aware of, or be troubled by programming, the details of the translations that are required. These details are preprogrammed and encompassed in the adapters 816, 818, 820, and 822. - In an exemplary embodiment 824, the
hub software article 806 is assembled with the Math software article 808, the Image software article 810, the Data software article 812, and the Internet software article 814, using the adapters 816, 818, 820, and 822, respectively. In this embodiment, any software article can communicate with any other software article via the common language of the hub software article, the necessary translations being performed by the adapters 816, 818, 820, and 822. - The utility of the illustrative system is clear upon considering the following mathematical analysis. The system requires at most N adapters, such as 816, 818, 820 and 822, for
N software articles 802, compared with the N*(N−1)/2 connectors needed for direct pairwise communication. Because the hub software article 806 can interconnect many software articles simultaneously, the advantage is in fact even greater, because the number of connectors necessary to interconnect three software articles simultaneously (e.g., allowing software articles A, B, and C to communicate pairwise) is given by the number N*(N−1)*(N−2)/6. The illustrative system of the invention requires only N adapters to connect N software articles in a manner in which any software article can communicate with any other software article where such communication is required. By comparison, a method in which a hub is not employed requires on the order of N² adapters for communication between two software articles, and on the order of N³ adapters for communication among three software articles. The amount of additional programming that would be required for N² or N³ adapters, as compared to N adapters, when N is even moderately large (N>5), is daunting. Furthermore, the additional number of unique adapters required for accommodating a new software article incompatible with any existing adapter in the system of the invention is only one, independent of how many adapters already are in use in the system (e.g., N−(N−1)=1 for any N). In a system of direct two-way or three-way communication, by contrast, going from 5 accommodated software articles to 6 accommodated software articles requires 6*5/2 − 5*4/2 = 15 − 10 = 5 new adapters and 6*5*4/6 − 5*4*3/6 = 20 − 10 = 10 new adapters, respectively. When N is larger, the situation favors the system of the invention even more strongly as to accommodating new formats of software articles. A connection may specify process flow or timing relating the activities of software article A to those of software article B. For example, communication between software articles may occur manually when a user wants to control data transmission (e.g., when debugging an application). Communication may also occur automatically.
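The adapter-count arithmetic above can be checked numerically; the helper names below are illustrative, and the binomial counts come from Python's standard library:

```python
# Hub-based wiring needs one adapter per article (N); direct pairwise
# wiring needs C(N, 2) = N*(N-1)/2 connectors; direct three-way wiring
# needs C(N, 3) = N*(N-1)*(N-2)/6 connectors.
from math import comb

def hub_adapters(n):
    return n                   # one adapter per software article

def pairwise_adapters(n):
    return comb(n, 2)          # N*(N-1)/2 direct two-way connectors

def threeway_adapters(n):
    return comb(n, 3)          # N*(N-1)*(N-2)/6 three-way connectors

# Growing from 5 to 6 accommodated software articles:
print(hub_adapters(6) - hub_adapters(5))            # 1 new adapter
print(pairwise_adapters(6) - pairwise_adapters(5))  # 15 - 10 = 5
print(threeway_adapters(6) - threeway_adapters(5))  # 20 - 10 = 10
```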
For example, the system may initiate communication when the state of an output port changes. - Software article wrappers such as 816, 818, 820, and 822 may be manually programmed or automatically generated. For example, many commercial programs, such as MatLab™, provide “header” files describing input and output parameters for different procedures. These header files can be parsed to identify the different input and output ports to use and the type of data that is to be transmitted or received (e.g., bit, byte, integer, long integer, floating point, double floating point, character, and the like). Additionally, an entity creating the wrapper can elect to “hide” different parameters or functions from view. Though this provides visual programmers with only a subset of possible functions, it can also eliminate seldom-used features from graphical representation, thereby “uncluttering” the visual programming environment.
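Automatic wrapper generation from a header file might look roughly like the following; the header format, the port syntax, and the hiding mechanism are hypothetical illustrations, not MatLab™'s actual header format:

```python
# Sketch of parsing a header-like description of a procedure's ports to
# auto-generate a wrapper's port list, with optional hiding of parameters.
import re

HEADER = """
    input  matrix_a : double
    input  matrix_b : double
    output product  : double
    output _debug   : int
"""

def parse_ports(header, hide=()):
    """Return (inputs, outputs) as {name: type}, omitting hidden names."""
    inputs, outputs = {}, {}
    for line in header.splitlines():
        m = re.match(r"\s*(input|output)\s+(\w+)\s*:\s*(\w+)", line)
        if m and m.group(2) not in hide:
            target = inputs if m.group(1) == "input" else outputs
            target[m.group(2)] = m.group(3)
    return inputs, outputs

# Hiding "_debug" unclutters the graphical representation of the wrapper.
ins, outs = parse_ports(HEADER, hide={"_debug"})
```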
- A variety of different platforms can provide the visual programming system features described above. These platforms include televisions; personal computers; laptop computers; wearable computers; personal digital assistants; wireless telephones; kiosks; key chain displays; watch displays; touch screens; aircraft, watercraft, and/or automotive displays; video game displays; and the like.
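Returning to FIG. 8, the hub-and-adapter translation described above is essentially the adapter pattern; a minimal sketch, assuming a simple dictionary-based common format and invented adapter classes (none of these names come from the patent):

```python
# Each adapter converts between an article's native format and a common
# hub format, so N adapters suffice for N mutually incompatible articles.

class Hub:
    """Routes messages expressed in a common dictionary format."""
    def __init__(self):
        self.adapters = {}

    def dock(self, name, adapter):
        self.adapters[name] = adapter

    def route(self, source, target, payload):
        common = self.adapters[source].to_common(payload)
        return self.adapters[target].from_common(common)

class MathAdapter:
    # the Math article natively speaks floats
    def to_common(self, value):
        return {"kind": "number", "value": float(value)}
    def from_common(self, msg):
        return float(msg["value"])

class DataAdapter:
    # the Data article natively speaks strings
    def to_common(self, text):
        return {"kind": "number", "value": float(text)}
    def from_common(self, msg):
        return str(msg["value"])

hub = Hub()
hub.dock("math", MathAdapter())
hub.dock("data", DataAdapter())
result = hub.route("math", "data", 3.5)  # float -> common -> string
```

Adding a new, incompatible article then requires only one new adapter to be docked, regardless of how many articles are already connected.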
- FIG. 9 is a diagram 900 of an embodiment of a computer network upon which the invention can be practiced. In FIG. 9, there is shown a plurality of
computers. The computers communicate over a network 902, which can be a local network, a wide area network, or a world-wide network, such as the Web. Each computer preferably has a display, a keyboard, a mouse, and a floppy disk drive. - The techniques described here are not limited to any particular hardware or software configuration. The techniques are applicable in any computing or processing environment. In different embodiments, the techniques can be implemented in hardware, software, or firmware, or a combination of the three. Preferably, the techniques are implemented in computer programs executing on one or more programmable computers that each include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code is applied to data entered using the input device to perform the functions described and to generate output information. The output information is applied to one or more output devices.
- Each program is preferably implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the programs can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language.
- Each such computer program is preferably stored on a storage medium or device (e.g., CD-ROM, hard disk or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer to perform the procedures described in this document. The system can be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner.
- We now describe the technology of a system for implementing the zooming and panning, and the various hierarchical features relating to both the visualization systems and methods and the hierarchical nature of the RadPop menus, data, and the like. Although described below in terms of various products and/or services, skilled artisans will appreciate that the operations described below are equally applicable to RadPops, software abstractions, and the user's viewing of and interaction with them.
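The zooming and panning behavior described in this section scales an object's displayed size and detail with its virtual distance from the camera; a minimal sketch, assuming an inverse-distance scaling law and a behind-the-camera threshold (both illustrative assumptions, not the patent's formulas):

```python
# Sketch of distance-based display scaling: objects nearer the camera are
# drawn larger; objects closer than a threshold count as behind the camera.
import math

def virtual_distance(camera, obj):
    return math.dist(camera, obj)   # Euclidean distance in (i, j, k) space

def display_scale(camera, obj, behind_threshold=0.5):
    """Larger when closer; None when the object is behind the camera."""
    d = virtual_distance(camera, obj)
    if d < behind_threshold:
        return None                 # removed from view
    return 1.0 / d                  # assumed inverse-distance scaling

camera = (0.0, 0.0, 0.0)
near, far = (0.0, 0.0, 2.0), (0.0, 0.0, 8.0)
assert display_scale(camera, near) > display_scale(camera, far)
```

Moving the camera toward an object thus continuously expands its image, and moving past it removes it from view, as described for FIGS. 10 and 11 below.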
- FIG. 10 is a schematic diagram depicting an exemplary viewing system. The
viewing system 1000 includes an extractor module 1002, a stylizer module 1004, a protocolizer 1006, user controls 1007, and a display 1008, which presents data objects to the user in a virtual three-dimensional space 1010. The data source or sources 1012 may be external to the viewing system 1000, or in some embodiments may be internal to the viewing system 1000. The extractor 1002, stylizer 1004, and protocolizer 1006 operate in conjunction to organize data objects from the data source 1012 and to locate for display those data objects in the virtual three-dimensional space 1010. The virtual three-dimensional space 1010 is depicted to the user as the display space 103 of FIG. 1. Exemplary displayed data objects are shown at 1014 a-1014 h. The data objects 1014 a-1014 h can be, for example, the software articles displayed in FIG. 1. Described first below is an illustrative embodiment of the invention from the point of view of the user viewing the data objects 1014 a-1014 h from an adjustable viewing perspective. Following that description, and beginning with FIG. 19, is an illustrative description of the operation of the extractor module 1002, the stylizer module 1004, the protocolizer 1006, the user controls 1007, and the display 1008. - In the
virtual space 1010, the adjustable user viewing perspective is represented by the position of a camera 1016. The user manipulates the controls 1007 to change the viewing perspective, and thus the position of the camera 1016. Through such manipulations, the user can travel throughout the virtual space 1010, and view, search through, and interact with the data objects 1014 a-1014 h. The illustrative viewing system 1000 enables the user to change the viewing perspective of the camera 1016 in an unrestricted fashion to provide the user with the feeling of traveling anywhere within the virtual space 1010. In the embodiment of FIG. 10, the virtual space 1010 is modeled as a Cartesian, three-dimensional coordinate system. However, other embodiments may include more dimensions. Additionally, the viewing system 1000 may employ other three-dimensional coordinate systems, such as cylindrical and spherical coordinate systems. Further, as discussed below with respect to FIG. 11, the data objects 1014 a-1014 h may be organized in the virtual space 1010 in a variety of manners. - In one embodiment, the
camera 1016 does not rotate, but moves freely along any of the three axes (i.e., i, j, k). By disabling rotation, it becomes easier for the user to remain oriented, and simpler to display the data objects 1014 a-1014 h. Disabling rotation also reduces the necessary computations and required display information details, which reduces data transfer bandwidth and processor and/or memory performance requirements. In other embodiments the camera 1016 can move rotationally. - As the user adjusts the viewing perspective of the
camera 1016, the viewing system 1000 changes the appearance of the data objects 1014 a-1014 h accordingly. For example, as the user moves the camera 1016 closer to a data object (e.g., 1014 a), the viewing system 1000 expands the appearance of the displayed image of the data object (e.g., 1014 a). Similarly, as the user moves the camera 1016 farther away from a data object (e.g., 1014 a), the viewing system 1000 contracts the image of the data object 1014 a. Also, the viewing system 1000 displays the data object closest to the camera 1016 as the largest data object and with the most detail. Conversely, the viewing system 1000 displays data objects that are relatively farther away from the camera 1016 as smaller and with less detail, with size and detail being a function of the virtual distance from the camera 1016. In this way, the viewing system 1000 provides the user with an impression of depth of field. In the Cartesian, three-dimensional coordinate system model of the virtual space 1010, the viewing system 1000 calculates the virtual distance from the camera to each data object using conventional mathematical approaches. In a further embodiment discussed in more detail below, the viewing system 1000 defines a smallest threshold virtual distance; data objects at less than this distance are treated as being behind the position of the camera 1016. The viewing system 1000 removes from view those data objects determined to be virtually behind the camera 1016. According to another feature, data objects can be hidden from view by other data objects determined to be virtually closer to the camera 1016. - FIG. 11 provides a diagram that illustrates one way the
viewing system 1000 can conceptually organize data objects, such as the data objects 1014 a-1014 h depicted in FIG. 10. As depicted in FIG. 11, the viewing system 1000 conceptually organizes the data objects 1102 a-1102 e on virtual plates 1104 a-1104 c in the virtual space 1010. As in FIG. 10, the virtual space 1010 is modeled as a three-axis (i.e., i, j, k) coordinate system. Again, the position 1106 of a virtual camera 1016 represents the user's viewing perspective. Although not required, to simplify the example, the camera 1016 is fixed rotationally and free to move translationally. The data objects 1102 a-1102 e are organized on the virtual plates 1104 a-1104 c in a hierarchical fashion. In the example of FIG. 11, the data objects represent items of clothing and the template employed relates to a clothing store. However, data objects can also represent, for example, various software abstractions, and templates relating to the display of those software abstractions can also be employed. As the user views information in the virtual space, as indicated by position “a” 1106 a of the camera 1016, the viewing system 1000 illustratively presents an icon or graphical representation for “women's clothes” (data object 1102 a). However, as the user visually zooms into the displayed clothing store, as represented by positions “b”-“d” 1106 b-1106 d of the camera 1016, the viewing system 1000 presents the user increasing detail with regard to specific items sold at the store. As the user virtually navigates closer to a particular plate, the system 100 displays less of the information contained on the particular plate to the user, but displays that portion within view of the user in greater detail. As the user virtually navigates farther away, the system 100 displays more of the information contained on the plate, but with less detail. - As described below, and as shown on the
plate 1104 c, the same plate may contain multiple data objects, thus enabling the user to pan across data objects on the same plate and zoom in and out to view data objects on other plates. In other embodiments, plates can be various sizes and shapes. Conceptually, each plate 1104 a-1104 c has a coordinate along the k-axis, and as the user's virtual position, represented by the position 1106 of the camera 1016, moves past the k-axis coordinate for a particular plate, the viewing system 1000 determines that the particular plate is located virtually behind the user, and removes the data objects on that plate from the user's view. Another way to model this is to represent the closest plate, for example the plate 1104 a, as a lid; as the user's viewing position 1106 moves past the plate 1104 a, the viewing system 1000 “removes the lid” (i.e., the plate 1104 a) to reveal the underlying plates. By way of example, the viewing system 1000 may display to the user a plurality of European countries organized on a plurality of smaller plates. Alternatively, the viewing system 1000 may display a plurality of European countries organized on a single plate. - As an alternative to the Cartesian coordinate system of FIGS. 10 and 11, the
virtual space 1010, in which the viewing system 1000 hierarchically organizes the data objects so that they spatially relate to each other based on a physical paradigm, can also be conceptualized as a node tree. FIGS. 12A-12C illustrate such a conceptualization. More particularly, FIG. 12A depicts a node tree 1200 that defines hierarchical relationships between the data nodes 1202 a-1202 h. FIG. 12B depicts a tree structure 1204 that provides potential visual display representations 1206 a-1206 h for each of the data objects 1202 a-1202 h. FIG. 12C provides a tree structure 1208 illustrative of how the user may navigate a displayed virtual representation 1206 a-1206 h of the data objects 1202 a-1202 h. The nodes of the node tree are representative of data objects and/or the appearance of those data objects. - FIG. 12C also illustrates one method by which the
viewing system 1000 enables the user to navigate the data objects 1202 a-1202 h in an unrestricted manner. As indicated by the dashed connections 1210 a-1210 d, the viewing system 1000 enables the user to virtually pan across any data object on a common hierarchical level. By way of example, the user may virtually navigate into a clothing store, graphically represented by the graphic appearance 1206 a, and then navigate to women's clothing, represented by the graphic appearance 1206 b. However, the viewing system 1000, based on a template related to a clothing store, has hierarchically organized men's clothing, represented by the graphic appearance 1206 c, to be at an equivalent hierarchical location to women's clothing 1206 b. Thus, the viewing system 1000 enables the user to pan visually from the women's clothing graphic appearance 1206 b to the men's clothing graphic appearance 1206 c, via the controls 1007, to view men's clothing. - FIG. 12C also illustrates how the
viewing system 1000 enables the user to virtually travel through hierarchical levels. By way of example, and as indicated by the links 1212 a and 1212 b, the user can virtually navigate from any data object, such as the object 1202 a, assigned to a parent node in the tree structures, to the data objects assigned to its child nodes. The viewing system 1000 also enables the user to navigate visually, for example, from a hierarchically superior data object, such as the object 1202 a, through intermediate data objects, such as the data object 1202 c, along the connecting paths. During such navigation, the viewing system 1000 displays the graphic appearance 1206 c as being larger and with more detail, and then as disappearing from view as it moves to a virtual position behind the user's viewing position. - FIG. 12C also illustrates how the
viewing system 1000 enables the user to navigate between data objects, without regard for hierarchical connections between the data objects 1202 a-1202 h. More particularly, as indicated by the illustrative paths 1214 a and 1214 b, the user can navigate directly between the data object 1202 a and the data object 1202 g. As described in detail below with respect to FIGS. 16-18, the viewing system 1000 provides such unrestricted navigation using a variety of methods, including by use of “wormholing,” “warping,” and search terms. In the node tree model of FIGS. 12A-12C, the viewing system 1000 displays a graphical representation of data objects to the user in a similar fashion to the coordinate-based system of FIGS. 10 and 11. More specifically, data objects located at nodes that are hierarchically closer to the user's virtual viewing position are displayed as being larger and with more detail than data objects located at nodes that are hierarchically farther away from the user's virtual viewing position. By way of example, in response to the user having a virtual viewing position indicated by the camera 1216 b, the viewing system 1000 displays the graphic appearance 1206 a to the user with greater detail and at a larger size than, for example, the graphic appearances 1206 b-1206 h. Similarly, the viewing system 1000 displays the hierarchically closer graphic appearances larger and with more detail than the graphic appearances 1206 d-1206 h. The viewing system 1000 employs a variety of methods for determining virtual distance for the purpose of providing a display to the user that is comparable to a physical paradigm, such as, for example, the clothing store of FIGS. 12A-12C. - In the embodiment of FIG. 12A, the
viewing system 1000 determines the user's virtual viewing position, indicated at 1216 a. Then, the viewing system 1000 determines which data object 1202 a-1202 h is closest to the user's virtual position and defines a plurality of equidistant concentric radii 1218 a-1218 c extending from the closest data object, 1202 c in the example of FIG. 12A. Since the data node 1202 c is closest to the user's virtual position, the viewing system 1000 displays the data object 1202 c with the most prominence (e.g., largest and most detailed). Meanwhile, the data objects 1202 a, 1202 b, 1202 d, and 1202 e, which are located equidistant from the data node 1202 c, are displayed similarly with respect to each other, but smaller and with less detail than the closest data node 1202 c. - In another embodiment, the virtual distance calculation between nodes is also based on the hierarchical level of the data node that is closest to the user's virtual position. The nodes on the same hierarchical level are displayed as being the same size and with the same detail. Those nodes that are organized hierarchically lower than the node closest to the user are displayed smaller and with less detail. Even though some nodes may be an equal radial distance with respect to the closest node, they may yet be assigned a greater virtual distance based on their hierarchical position in the
tree 1200. - In a physical paradigm, such as the retail clothing store of FIGS. 12A-12C, the user is less likely to be interested in data objects located at nodes on the same hierarchical level. By way of example, the user browsing men's clothing at the
node 1206 c is more likely to navigate to men's pants at the node 1206 e than to navigate to women's clothing at the node 1206 b. Thus, in another embodiment, the viewing system 1000 includes the number of hierarchical links 1212 a-1212 g between nodes in the virtual distance calculation. For example, if the user's virtual location is at node 1206 e (e.g., pants), the radial distance for nodes 1206 d (e.g., shirts), 1206 g (e.g., type of shirt), and 1206 h (e.g., type of pants) may be equal. However, the calculated virtual distance to node 1206 h (e.g., type of pants) is less than the calculated virtual distance to nodes 1206 d (e.g., shirts) and 1206 g (e.g., type of shirt), since the node 1206 h (e.g., type of pants) is only one link 1212 g from the node 1206 e (e.g., pants). Nodes separated by a single hierarchical link, such as the nodes 1206 e and 1206 h, are treated as virtually closer to each other than less directly related nodes, which the viewing system 1000 displays as being smaller and with less detail. When discussed in terms of the physical paradigm, the user is more likely to want to know about a type of pants 1206 h when at the pants node 1206 e than a type of shirt 1206 g. - In another embodiment, the
viewing system 1000 gives equal weight to the direct relationship basis and the same hierarchical level basis in the virtual distance calculation. With this method, the viewing system 1000 considers the nodes 1206 d and 1206 h to be equally distant from the node 1206 e, and the node 1206 g to be farther away from the node 1206 e. Other embodiments may weight variables such as directness of relationship and hierarchical level differently when calculating virtual distance. Again, discussing in terms of the physical paradigm, the user may be equally interested in shirts 1206 d or a type of pants 1206 h when at the pants node 1206 e. The viewing system 1000 assumes that the user is less likely to want to know about a type of shirt 1206 g, and thus the viewing system 1000 sets the virtual distance greater for that node 1206 g than for the other two nodes 1206 d and 1206 h.
viewing system 1000 conceptually drapes a grouping sheet over the hierarchically lower data nodes to form a category of data nodes. Theviewing system 1000 then conceptually drapes larger grouping sheets over the first grouping sheets, thus grouping the data objects into greater categories. Such groupings are also evident in the hierarchical node structures of FIGS. 12A-12C. - FIG. 13 depicts a block diagram1300 illustrating the use of multiple templates in combination. In this illustration, four templates 1303, 1305, 1307 and 1309 represent four different transportation services;
car rentals 1302, buses 1304, taxis 1306, and subways 1308. Illustratively, the bus 1304 and subway 1308 templates contain map and schedule information, and fares are based on the number of stops between which a rider travels. The taxi template 1306 has fare information based on mileage and can contain map information for calculating mileage and/or fares. The car rental template 1302 contains model/size information for various vehicles available for rent, and fares are based on time/duration of rental. The hierarchical layout for each template 1302-1308 is organized in accord with the invention to provide an intuitive virtual experience to the user navigating the information. As depicted in FIG. 13, the templates 1302-1308 can themselves be hierarchically organized (i.e., a top-level hierarchical relationship) through the use of the super templates 1310-1314. More specifically, in one example, the viewing system 1000 organizes the templates 1302-1308 using a menu super template 1310. The menu super template 1310 relates the templates 1302-1308 on a common hierarchical level, showing that all four transportation services 1302-1308 are available. Illustratively, the super template 1310 organizes the templates 1302-1308 alphabetically. - In another example, the
viewing system 1000 organizes the templates 1302-1308 using a map super template 1312. The map super template 1312 relates to a geographical-location physical paradigm. The map super template 1312 relates the four templates 1302-1308 in accordance with the geographical relationship between the represented transportation services (i.e., car rental, bus, taxi, and subway). The map super template 1312 can be used, for example, when the user wants to know which transportation services are available at a particular geographical location. For example, the user may be trying to decide into which airport to fly in a certain state 1316, and wants to locate information about transportation services available at the different airports within the state. - In a further example, the
viewing system 1000 organizes the templates 1304-1308 using a street super template 1314. The street super template 1314 relates to a street-layout physical paradigm. The street super template 1314 spatially relates the templates 1304-1308 to each other in terms of their street location. The super template 1314 can be used, for example, when the user has a street address and wants to know which transportation services are available nearby. In a further embodiment, the user can begin with the map super template 1312 to find a general location and then pan and zoom to the street level using the street super template 1314. - The
viewing system 1000 may additionally employ irregular display shapes for advanced visual recognition. For example, the graphic appearance associated with each data node can be defined to have a unique shape, such as a star, pentagon, square, triangle, or the like. With a conventional desktop metaphor, display area availability is at a premium, thus rendering it impractical to employ irregular shapes. However, the panning and zooming features of the viewing system 1000 render display space essentially infinite. Thus, the display of virtually any client can be configured in favor of readability and an overall user experience. An aspect of the illustrative viewing system 1000 provides the user with the sense of real-time control of the displayed data objects. Rather than a stop-and-go display/interactive experience, the viewing system 1000 provides an information flow, a revealing and folding away of information, as the user requires information. Accordingly, the state of the viewing system 1000 is a function of time. The user adjusts the virtual viewing position over time to go from one data object to the next. Therefore, a command for the virtual viewing position of the user, represented in FIGS. 10 and 11 by the position of the camera 1016, is of the form (x, y, z), where the position (x, y, z) is a function of time t. The appearance of data objects that the viewing system 1000 displays to the user is a function of time as well as position. - According to the illustrative embodiment, as the user changes viewing perspective, the
viewing system 1000 changes the appearance of a graphical representation of the data objects in a smooth, continuous, physics-based motion. According to one embodiment, the motion between viewing perspective positions, whether panning (e.g., translational motion along the i and j axes of FIGS. 10 and 11) or zooming (e.g., motion along the k axis, with increasing detail of the closest data objects), is performed smoothly. Preferably, the viewing system 1000 avoids generating discrete movements between locations. This helps ensure that the user experiences smooth, organic transitions of data object graphical appearances and maintains context of the relationship between proximal data objects in the virtual space, and between the displayed data objects and a particular physical paradigm being mimicked by the viewing system 1000. - In one embodiment, the
viewing system 1000 applies a sine transformation to determine the appropriate display. For example, the virtual motion of the user can be described as a linear change from t=0 to t=1. The viewing system 1000 applies a sine transform function to the discrete change, for example t_smooth=sin(t*pi/2), where t changes from 0 to 1. The discrete transition is thereby changed to a smoother, rounded transition. - One way to model the motion for adjustments of the user viewing perspective is to analogize the user to a driver of a car. The car and driver have mass, so that any changes in motion are continuous, as the laws of physics dictate. The car can be accelerated with a gas pedal or decelerated with brakes. Shock absorbers keep the ride smooth. In terms of this model, the user controls 107 of
system 100 are analogously equipped with these parts of the car, such as a virtual mass, virtual shocks, virtual pedals and a virtual steering wheel. The user's actions can be analogized to the driving of the car. When the user is actuating a control, such as a key, a joystick, a touch-screen button, a voice command or a mouse button, this is analogous to actuating the car's accelerator. When the user deactuates the control and/or actuates an alternative control, this is analogous to releasing the accelerator and/or actuating the car's brakes. Thus, the illustrative viewing system 1000 models adjusting of the user viewing perspective as movement of the camera 1016. The system assigns a mass, a position, a velocity and an acceleration to the camera 1016. - In another embodiment, the
viewing system 1000 models the user's virtual position logarithmically, that is, for every virtual step the user takes closer to a data object (e.g., zooms in), the viewing system 1000 displays to the user a power more detail of that data object. Similarly, for every virtual step the user takes farther away from a data object (e.g., zooms out), the viewing system 1000 displays to the user a power less detail for that data object. For example, the following illustrative code shows how exemplary exp() and log() functions are used:

// returns the conversion factor of world width to screen width
static double world_screen_cfactor(double camera_z)
{
    return exp(camera_z);
}

static double world_width_and_screen_width_to_camera_z(double world_dx, int screen_dx)
{
    if (world_dx == 0)
        return 1;
    return log(((double)screen_dx) / world_dx);
}

- FIG. 14 provides a simplified flow diagram 1400 depicting operation of the
viewing system 1000 when determining how much detail of a particular data object to render for the user. This decision process can be performed by a client, such as the client 1014 depicted in FIG. 19, or by the stylizer module 1004. As the decision block 1402 illustrates, the viewing system 1000 determines the virtual velocity of the change in the user's virtual position, and employs the virtual velocity as a factor in determining how much detail to render for the data objects. The viewing system 1000 also considers the display area available on the client to render the appearance of the data objects (e.g., the screen size of the client 1014). As the flow diagram indicates, as the virtual velocity increases, the viewing system 1000 renders successively less detail. Similarly, the system 100 also renders less detail as the available display area at the client 1014 decreases. Conversely, as the available display area at the client 1014 increases, the viewing system 1000 renders more detail. Thus, the viewing system 1000 makes efficient use of display area and avoids wasting time rendering unnecessary details for fast-moving data objects that appear to pass by the user quickly. - FIG. 15 illustrates various
potential appearances 1502a-1502c for a textual data object 1502, along with various potential appearances 1504a-1504c for an image data object 1504. The axis 1506 indicates that as virtual velocity increases and/or as client display area decreases, the viewing system 1000 decreases the amount of detail in the appearance. At the "full" end of the axis 1506, the virtual velocity is the slowest and/or the client display area is the largest, and thus the viewing system 1000 renders the textual data object 1502 and the image data object 1504 with relatively more detail, shown at 1502a and 1504a. At the "box outline" end of the axis 1506, the velocity is the greatest and/or the client display area is the smallest, and thus the viewing system 1000 renders the appearance of the same data objects 1502 and 1504 with no detail. The viewing system 1000 renders the data objects 1502 and 1504 simply as boxes to represent to the user that a data object does exist at that point in the virtual space, even though, because of velocity or display area, the user cannot see the details. In the middle of the axis 1506 is the "line art" portion. In response to the virtual velocity and/or the available client display area being within particular parameters, the viewing system 1000 renders the data objects 1502 and 1504 as line drawings, such as those depicted at 1502b and 1504b, respectively. - In the illustrative embodiment, the
viewing system 1000 transmits and stores images in two formats. The two formats are raster graphic appearances and vector graphic appearances. The trade-off between the two is that raster graphic appearances provide more detail while vector graphic appearances require less information. In one embodiment, raster graphic appearances are used to define the appearance of data objects. Raster graphic appearances define graphic appearances bit by bit. Since every bit is definable, raster graphic appearances enable the viewing system 1000 to display increased detail for each data object. However, since every bit is definable, a large amount of information is needed to define data objects that are rendered in a large client display area. - In another embodiment, the raster graphic appearances, which require large size data words even when compressed, are omitted and instead the
viewing system 1000 employs vector graphic appearances and text, which require a smaller size data word than raster graphic appearances, to define the appearance of data objects. Vector graphic appearances define the appearance of data objects as coordinates of lines and shapes, using x, y coordinates. A rectangle can be defined with four x, y coordinates, instead of the x times y bits necessary to define the rectangle in a raster format. For example, the raster graphic appearance of the country of England, which is in .gif form, highly compressed, is over three thousand bytes. However, the equivalent vector version is roughly seventy x, y points, where each x, y double is eight bytes, for a total of five hundred sixty bytes. A delivery of text and vector images creates a real-time experience for users, even on a 14.4 kilobit per second modem connection. - FIG. 16 illustrates various embodiments of visual indicators employed by the
viewing system 1000. In addition to displaying data objects to the user, the viewing system 1000 also displays visual indicators to provide the user an indication of the hierarchical path the user has virtually navigated through in the virtual space 1008. This is sometimes referred to as a breadcrumb trail. In one embodiment, the viewing system 1000 provides a text breadcrumb bar 1602. The illustrative text breadcrumb bar 1602 is a line of text that concatenates each hierarchical level visited by the user. For example, referring back to FIG. 12, the graphic appearance 1206a is the "home" level, the graphic appearance 1206c is level 1, the graphic appearance 1206e is level 2 and the graphic appearance 1206h is the "leaves" level. The associated text breadcrumb trail is thus "store.mensclothing.menspants." This represents the selections (e.g., plates, data nodes) that the user virtually traveled through (e.g., by way of zooming and panning) to arrive at the "leaves" level display. - According to another embodiment, the
viewing system 1000 provides a text and image breadcrumb bar 1604. Like the text breadcrumb trail 1602, the text and image breadcrumb trail 1604 is a concatenation of each hierarchical level through which the user virtually travels. However, in addition to text, the trail 1604 also includes thumbnail images 1604a-1604c to give the user a further visual indication of the contents of each hierarchical level. In another embodiment, the viewing system 1000 provides a trail of nested screens 1606. Each nested screen 1606a-1606c corresponds to a hierarchical level navigated through by the user. In another embodiment, the viewing system 1000 provides a series of boxes 1608 in a portion of the display. Each box 1608a-1608c represents a hierarchical level that the user has navigated through and can include, for example, mini screen shots (e.g., vector condensation), text and/or icons. According to a further feature, selecting any particular hierarchical level on a breadcrumb trail causes the user to automatically virtually navigate to the selected hierarchical level. - According to another feature, the
viewing system 1000 enables the user to preview data objects without having to zoom to them. According to one embodiment, in response to the user moving a cursor over a region of the display, the viewing system 1000 reveals more detail about the data object(s) over which the cursor resides. By way of example, referring to the plates 1104a-1104c of FIG. 11, in response to the user placing the cursor in a particular location, the viewing system 1000 displays data objects on one or more plates behind the plate in current view. The term "fisheye" refers to a region, illustratively circular, in the display that acts conceptually as a magnifying zoom lens. According to a fisheye feature, the viewing system 1000 expands and shows more detail of the appearance of the data objects within the fisheye region. In one embodiment, these concepts are used in combination with a breadcrumb trail. For example, in response to the user locating the cursor or moving the "fisheye" on a particular hierarchical level of a breadcrumb trail, the viewing system 1000 displays the contents of that particular hierarchical level. According to one embodiment, the viewing system 1000 displays such contents via a text information bar. Thus, these functions enable the user to preview a data object on a different hierarchical level, without actually having to change the viewing perspective to that level, and to make enhanced usage of the breadcrumb trails illustrated in FIG. 16. - FIG. 17 provides a conceptual diagram 1700 illustrating two methods by which the user can virtually navigate to any available data object, or hierarchical level. The two illustrative methods are "warping" and search terms. An exemplary use of search terms and warping is as follows. Referring also back to FIGS. 12A-12C, from the home
graphic appearance 1206a, the user can input a search term, such as "menspants" 1702. In response, the viewing system 1000 automatically changes the user's virtual location (and thus, hierarchical level), and displays the graphic appearance 1206e, whereby the user can zoom into the graphic appearance 1206h to reveal the available products. As illustrated by the user 'flight' path 1706, the virtual motion of the viewing perspective is a seemingly continuous motion from a starting hierarchical level 1704a at the data object graphic appearance 1206a to the hierarchical level 1704c of the data object graphic appearance 1206e corresponding to the entered search term 1704b. As the user virtually travels through the intermediate hierarchical levels 1704b associated with the data object 1206c, the viewing system 1000 also renders the data objects that are virtually and/or hierarchically proximate to the intermediate data object 1206c. This provides the user with an experience comparable to traveling through the virtual, multi-dimensional space 1010 in which the data objects are located. However, very little detail is used, as the velocity of the automatic change of location of the viewing perspective is very fast. - According to another embodiment, the
viewing system 1000 enables the user to warp from one data object to another through the use of a visual "wormhole." FIG. 18 illustrates the use of a wormhole 1806 within the graphic appearance 1808. In the graphic appearance 1808, there are two data objects identified, the document 1810 and a reduced version 1812a of a document 1812. In the spatial hierarchical relationship in the virtual space 1010, the document 1808 is located virtually far away from the document 1812. However, the template 1005 provides a connection (e.g., a hyperlink) between the two documents, and the viewing system 1000 creates a wormhole 1806. Since a wormhole exists, the viewing system 1000 displays the reduced version 1812a (e.g., a thumbnail) of the data object graphic appearance associated with the document 1812 within the document 1808 to indicate to the user that the wormhole (e.g., hyperlink) exists. In response to the user selecting the data object 1812a, the viewing system 1000 warps the user to the data object 1812. As described above with respect to FIG. 17, when warping, the viewing system 1000 displays to the user a continuous, virtual motion through all of the existing data objects between the document 1808 and the document 1812. However, the virtual path is direct and the user does not navigate; the viewing system 1000 automatically changes the user's viewing perspective. Of course, the user is always free to navigate to the document 1812 manually. Illustratively, warping is employed to provide the automatic breadcrumb navigation discussed above with respect to FIG. 16. - FIG. 19 is a schematic view depicting another exemplary implementation of the
viewing system 1000. As discussed with respect to FIG. 10, the viewing system 1000 includes an extractor module 1002, a stylizer module 1004, a protocolizer module 1006, one or more templates 1005, user controls 1007 and a display 1008. FIG. 19 depicts each component of the viewing system 1000. In one embodiment, for example, some of the components reside on a server while other components reside on the client 1014. In another embodiment, for example, all of the components reside on the client 1014. - The
extractor module 1002 is in communication with a data source 1012 (e.g., a database) from which the extractor module 1002 extracts data objects. The extractor module 1002 converts, if necessary, the data objects into a W3C standard language format (e.g., Extensible Markup Language, "XML™"). The extractor module 1002 uses a mapping module 1016 to relate each of the data objects to each of the other data objects. In one embodiment, the mapping module 1016 is an internal sub-process of the extractor module 1002. The extractor module 1002 is also in communication with the stylizer module 1004. The extractor module 1002 transmits the data objects to the stylizer module 1004. - The
stylizer module 1004 converts the data objects from their W3C standard language format (e.g., XML™) into a virtual space language format (e.g., ZML™, SZML™, referred to generally as ZML™). The ZML™ format enables the user to view the data objects from an adjustable viewing perspective in the multi-dimensional, virtual space 1010, instead of the two-dimensional viewing perspective of a typical Web page. The stylizer module 1004 uses one or more templates 1005 to aid in the conversion. The one or more templates 1005, hereinafter referred to as the template 1005, include two sub-portions, a spatial layout style portion 1005a and a content style portion 1005b. The spatial layout style portion 1005a relates the data objects in a hierarchical fashion according to a physical paradigm. The content style portion 1005b defines how the data objects are rendered to the user. The stylizer module 1004 is also in communication with the protocolizer module 1006. The stylizer module 1004 transmits the data objects, now in ZML™ format, to the protocolizer module 1006. - The
protocolizer module 1006 converts the data objects to established protocols (e.g., WAP, HTML, GIF, Macromedia FLASH™) to communicate with a plurality of available clients 1014 (e.g., televisions; personal computers; laptop computers; wearable computers; personal digital assistants; wireless telephones; kiosks; key chain displays; watch displays; touch screens; aircraft; watercraft; and/or automotive displays) and browsers 1018 (e.g., Microsoft Internet Explorer™, Netscape Navigator™) to display the data objects from the user's viewing perspective in a navigable, multi-dimensional virtual space 1010. The browser 1018 is hardware and/or software for navigating, viewing and interacting with local and/or remote information. The viewing system 1000 also includes a zoom renderer 1020. The zoom renderer 1020 is software that renders the graphic appearances to the user. This can be, for example, a stand-alone component or a plug-in to the browser 1018, if the browser 1018 does not have the capability to display the ZML™ formatted data objects. Throughout the specification, the term "client" 1014 is used to represent both the hardware and the software needed to view information, although the hardware is not necessarily considered part of the viewing system 1000. The protocolizer module 1006 communicates with the client 1014 via a communication channel 1022. Since the protocolizer module 1006 converts the ZML™ format into an established protocol, the communication channel 1022 can be any channel supporting the protocol to which the protocolizer module 1006 converts the ZML™ format. For example, the communication channel 1022 can be a LAN, WAN, intranet, Internet, cellular telephone network, wireless communication network (including third generation wireless devices), infrared radiation ("IR") communication channel, PDA cradle, cable television network, satellite television network, and the like.
- The data source 1012, at the beginning of the process, provides the content (i.e., data objects). The content of the data source 1012 can be of any type. For example, the content can take the form of a legacy database (e.g., Oracle™, Sybase™, Microsoft Excel™, Microsoft Access™), a live information feed, a substantially real-time data source and/or an operating system file structure (e.g., MAC™ OS, UNIX™ and variations of UNIX™, Microsoft™ Windows™ and variations of Windows™). In another embodiment, the data source 1012 can be a Web server and the content can include, for example, an HTML page, a page written in Coldfusion™ Markup Language ("CFM") by Allaire, an Active Server Page ("ASP") and/or a page written for a Macromedia FLASH™ player. In these cases, the content typically is not stored in the ZML™ format (i.e., "zoom-enabled"). If the content is not stored in a ZML™ format, the
extractor module 1002 and stylizer module 1004 convert the content into the ZML™ format. In another embodiment, the content can be one or more of an algorithm, a simulation, a model, a file, and a storage device. - In other embodiments, the stored content is in the ZML™ format. In these embodiments, the
viewing system 1000 transfers the content from the data source 1012 to the extractor module 1002, the stylizer module 1004 and the protocolizer module 1006, without any additional processing. For example, if the content of the data source 1012 is already in ZML™ format, the stylizer module 1004 does not need to take any action and can transmit the content directly to the protocolizer module 1006. - The types of transactions processed by the data source 1012 are transactions for obtaining the desired content. For example, for a legacy database, a representative input can be "get record" and the corresponding output is the requested record itself. For a file system, a representative input can be "get file(dir)" and the corresponding output is the content information of the "file/dir." For a Web site, a representative input can be "get page/part" and the corresponding output is the requested page/part itself. The
viewing system 1000 transfers the output from the data source 1012 to the extractor module 1002. - As briefly mentioned above, the
extractor module 1002 receives the content from the data source 1012. The extractor module 1002 separates the content into pieces referred to as data objects. The extractor module 1002 converts the content into a hierarchical relationship between the data objects within the content. In one embodiment, the hierarchical data structure is one that follows a common language standard (e.g., XML™). - FIG. 20 is a
schematic view 2000 depicting an illustrative conversion of a file system directory tree 2002 to a hierarchical structure 2004 of data objects by the extractor module 1002. The extractor module 1002 relates each of the data objects, consisting of the directories 2006a-2006d and the files 2008a-2008c, to each other in the hierarchical data structure 2004, illustratively represented as a node tree. In this embodiment, relationships between the nodes 2006a-2006d and 2008a-2008h of the hierarchical data structure 2004 follow the relationships depicted in the directory tree 2002. - The types of transactions processed by the
extractor module 1002 are transactions for converting the obtained content to data objects in a hierarchical data structure, for example, XML™. For example, for a legacy database, representative inputs to the extractor module 1002 can be data record numbers and mapping, if the database already contains a mapping of the data objects. A representative command can be, for example, "get_record(name)|get_record(index)." The corresponding output from the extractor module 1002 is an XML™ data structure of the data objects. For a file system, for example, a representative input can be filename(s), with representative commands such as "get_file(directory, name)" and "get_file_listing(directory)." For a Web site, a representative input can be Web pages/parts, with a representative command such as "get_Web_content(URL, start tag, end tag)." - By way of further example, the
extractor module 1002 analyzes the content to create an exemplary structure such as:

typedef struct node {
    void*         data;     /* the data object's content ... */
    struct node*  parent;   /* parent node in the hierarchy */
    struct node** children; /* child node(s) */
} node;

- As mentioned above, to create the exemplary structure, the
illustrative extractor module 1002 uses the mapping module 1016. Operation of the mapping module 1016 depends on the type of content received by the extractor module 1002. For example, for a file structure, the mapping module 1016 traverses the directory tree until it creates a node for each file (i.e., data object) and each directory (i.e., data object) and creates the appropriate parent-child relationship between the nodes (i.e., data objects). FIG. 20 illustrates how the mapping module 1016 follows the directory tree 2002 when creating the hierarchical data structure 2004. For some databases, the mapping module 1016 keeps the hierarchical relationships of data objects as they are in the data source. For example, a retail store might organize its contents in, for example, an Oracle™ database and into logical categories and sub-categories forming a hierarchical data structure that the mapping module 1016 can copy. Another database might be, for example, a list of geographic points. The mapping module 1016 can use geographical relationships to create the hierarchical relationship between the points. - In other databases, there are no hierarchical relationships between data objects. In that case, the
mapping module 1016 creates them. In other databases, such as, for example, a flat list of names and personal information, the hierarchical structure may be less evident. In that case, the mapping module 1016 preferably creates the relationships using predetermined priorities (e.g., parent nodes are state of residence first, then letters of the alphabet). - If the content is Web-related content, the
mapping module 1016 extracts the vital information by first determining the flow or order of the Web site. To zoom-enable a typical Web site, the mapping module 1016 extracts from the Web site a data hierarchy. HTML pages are a mix of data and formatting instructions for that data. HTML pages also include links to data, which may be on the same page or a different page. In one embodiment, the mapping module 1016 "crawls" a Web site and identifies a "home" data node (for example, on the home page) and the name of the company or service. Next, the mapping module 1016 identifies the primary components of the service such as, for example, a table of contents, along with the main features such as "order," "contact us," "registration," "about us," and the like. Then the mapping module 1016 recursively works through the sub-sections and sub-subsections, until it reaches "leaf nodes," which are products, services, or nuggets of information (i.e., ends of the node tree branches). - This process determines critical data and pathways, stripping away non-essential data and creating a hierarchical tree to bind the primary content. This stripping down creates a framework suitable for zooming, provides the user with a more meaningful, focused experience, and reduces strain on the client/server connection bandwidth.
- FIG. 21 is a flow diagram 2100 illustrating operation of an exemplary embodiment of the
extractor module 1002 process for converting a Web page to a hierarchical XML™ data structure 2102. The extractor module 1002 downloads (step 2104) the Web page (e.g., an HTML document). From the contents between the Web page <head></head> tags, the mapping module 1016 obtains (step 2106) the title and URL information and uses this information as the home node 2102a (i.e., the root node). The extractor module 1002 also obtains (step 2108) the contents between the Web page <body></body> tags. The mapping module 1016 processes (step 2110) the HTML elements (e.g., 2102b-2102c) to create the hierarchical structure 2102. For example, as shown, the first HTML element encountered is a table 2102b. The table 2102b includes a first row 2102c. The first row 2102c includes a first cell 2102d. The first cell 2102d includes a table 2102e, a link 2102f and some text 2102g. As the mapping module 1016 traverses (step 2110) the HTML elements, it forms the hierarchical structure 2102. Any traversal algorithm can be used. For example, the mapping module 1016 can proceed, after obtaining all of the contents 2102e-2102g of the first cell 2102d of the first row 2102c, to a second cell (not shown) of the first row 2102c. This traversal is repeated until all of the HTML elements of the Web page have been processed (step 2110) and mapped into the hierarchical structure 2102. - In another embodiment, the
extractor module 1002 extracts each displayable element from a Web page. Each element becomes a data object. The mapping module 1016 preferably creates a hierarchical relationship between the data objects based on the value of the font size of each element. The mapping module 1016 positions those data objects (e.g., HTML elements) with a larger font size value higher in the hierarchical relationship than those data objects with a smaller font size value. Additionally, the mapping module 1016 preferably uses the location of each element in the Web page as a factor in creating the hierarchical relationship. More particularly, the mapping module 1016 places those elements that are next to each other on the Web page near each other in the hierarchical relationship. - In another embodiment, to further help extract the vital information from Web sites, the
mapping module 1016 uses techniques such as traversing the hyperlinks, the site index, the most popular paths traveled and/or the site toolbar, and parsing the URL. FIG. 22 is a diagram 2200 illustrating two of these techniques: traversing the hyperlinks 2202 and the site index 2204. In the illustrative example, the mapping module 1016 traverses the hyperlinks 2202 to help create a hierarchy. During this process, the mapping module 1016 tracks how each page 2206 relates to each link 2208, and essentially maps a spider web of pages 2206 and links 2208, from which the mapping module 1016 creates a hierarchy. The mapping module 1016 can also use the site map 2204 and tool bars when those constructs reveal the structure of a Web site. As discussed above, the mapping module 1016 can also use the size of the font of the elements of the site map 2204, along with their relative position to each other, to create a hierarchy. - Additionally, the
mapping module 1016 can parse the URL to obtain information about the Web site. Typically, URLs are in the form http://www.name.com/dir1/dir2/file.html. The name.com field generally indicates the name of the organization and the type of the organization (.com for a company, .cr for Costa Rica, .edu for education, and the like). The dir1 and dir2 fields provide hierarchical information. The file.html field can also reveal some information about the contents of the file, if the file name is descriptive in nature. - The
mapping module 1016 can also access information from Web sites that track the popularity of URL paths traveled. Such sites track which links and pages are visited most often, and weight paths based on the number of times they are traversed. The illustrative mapping module 1016 uses the information obtained from such sites, alone or in combination with relationship information gained through other techniques, to create the hierarchical relationships between extracted data objects. - Once the
mapping module 1016, working in conjunction with the extractor module 1002, creates a hierarchical data structure for the extracted data objects, the extractor module 1002 processes the data objects of the content in terms of their relationship in the hierarchy. In one embodiment, a W3C standard language data structure (e.g., XML™) is used to create a platform- and vendor-independent data warehouse, so that the rest of the system 100 can read the source content and relate the data objects of the content in the virtual space 1010. - The types of transactions processed by the
extractor module 1002 are transactions relating to obtaining the hierarchical relationships between data objects. For example, for node information, a representative input can be "get node[x]" and the corresponding output is the requested node[x] itself. For data information, a representative input can be "get data" and the corresponding output is the requested data itself. For parent information, a representative input can be "get parent" and the corresponding output is the requested parent itself. For child information, a representative input can be "get child[x]" and the corresponding output is the requested child[x] itself. The extractor module 1002 provides the output (i.e., the XML™ data structure) to the stylizer module 1004. - As mentioned briefly above, the
stylizer module 1004 converts the data objects from the extractor module 1002 into ZML™ format. The stylizer module uses one or more templates 1005, which are related to one or more physical paradigms, to aid in the conversion. The template 1005 includes two sub-portions, the spatial layout style portion 1005a and the content style portion 1005b. The spatial layout style portion 1005a relates the data objects in a hierarchical fashion according to a physical paradigm. The content style portion 1005b defines how the data objects are rendered to the user. - The
stylizer module 1004 can be implemented using any of a plurality of languages, including but not limited to JAVA™, C, XML™-related software, layout algorithms, GUI-based programs, and Macromedia FLASH™-compatible programs. The stylizer module 1004 receives data objects from the extractor module 1002 and converts them from the XML™ format to the ZML™ format. The ZML™ format generated by the stylizer 1004 is analogous to HTML, except that it is designed for the multi-dimensional virtual space 1010. The ZML™ format employs a mark-up language that describes one or more of the data objects organized within the virtual space. Like HTML, the ZML™ format uses tags to describe the attributes of, for example, the conceptual plates 1104a-1104c discussed above with respect to FIG. 11. Illustratively:

Tag | Attributes
---|---
<plate> | x, y, z, width, height, depth
<raster> | URL
<vector> | URL
<text> | font, size, justify
<link> | URL

- The
stylizer module 1004 uses one or more templates 1005 to generate ZML™ formatted data objects. As discussed above, templates describe how data objects from a data source are arranged in the multi-dimensional virtual space 1010. Templates include a plurality of properties relating to a physical paradigm. - The following list contains some exemplary properties of a template relating to a financial paradigm. Specifically, the list of properties is for a section of the
template 1005 for viewing a stock quote including historical data, news headlines and full text.

p=parent
j=justify
n=name
ab=all_borders
cx=children_x
bb=bottom_border
tb=top_border
lb=left_border
rb=right_border
cb=cell_border
fow=fade_out_on_width
fiw=fade_in_on_width
zt=zoom_to
bt=border_thickness
t=title
lbi=left_border_internal
rbi=right_border_internal
w=wrap
pv=plot_val
pyl=plot_y_label
pmx=plot_min_x
pxx=plot_max_x
pmy=plot_min_y
pxy=plot_max_y

- Each property in the list is limited to a few letters to save memory for use in handheld devices and/or other low-capacity (e.g., bandwidth-, processor- and/or memory-limited) devices.
- The template properties listed above describe characteristics of the information relating to the exemplary financial paradigm and displayed to the user in the
virtual space 1010. Some properties describe visibility. For example, the fade properties describe when the appearance of data objects on a hierarchical plate comes within the viewing perspective (e.g., becomes visible to the user). Properties can also describe the appearance of included text. For example, some properties describe how the text appears, whether the text is wrapped, how the text is justified and/or whether the text is inverted. Properties can also describe dimensions of the data objects on the plate. For example, some properties describe whether the data object of the focus node has any borders and/or how the data objects corresponding to any children nodes are arranged. Properties can further describe the appearance of the data object on the hierarchical plate. For example, some properties describe whether the hierarchical plate contains charts and/or maps and/or images. - Templates also contain a plurality of placeholders for input variables. The following list includes illustrative input variables for the exemplary financial template. The input variables describe parameters such as high price, low price, volume, history, plots and labels, and news
$q$ (name)
$s_td$ (last)
$o_td$ (open)
$v_td$ (volume)
$h_td$ (high)
$l_td$ (low)
$c_td$ (change)
$b_td$ (bid)
$a_td$ (ask)
$pv_td$ (today's prices)
$pmx_3m$ (3 month t0)
$pxx_3m$ (3 month t1)
$h_3m$ (3 month price high)
$l_3m$ (3 month price low)
$pv_3m$ (3 month prices)
$pmx_6m$ (6 month t0)
$pxx_6m$ (6 month t1)
$h_6m$ (6 month price high)
$l_6m$ (6 month price low)
$pv_6m$ (6 month prices)
$pmx_1y$ (1 year t0)
$pxx_1y$ (1 year t1)
$h_1y$ (1 year price high)
$l_1y$ (1 year price low)
$pv_1y$ (1 year prices)
$pmx_5y$ (5 year t0)
$pxx_5y$ (5 year t1)
$h_5y$ (5 year price high)
$l_5y$ (5 year price low)
$pv_5y$ (5 year prices)
$nzh1$ (news headline 1)
$nzh2$ (news headline 2)
$nzh3$ (news headline 3)
$nzd1$ (news detail 1)
$nzd2$ (news detail 2)
$nzd3$ (news detail 3)
- The SZML™ format is similar to the ZML™ format, except that instead of plates, the SZML™ format describes attributes of the appearance in terms of a screen display. The SZML™ format is the ZML™ format processed and optimized for display on a reduced-size screen. One advantage of the SZML™ format is that when zooming and panning, the user tends to focus on certain screen-size quantities of information, regardless of what level of abstraction the user is viewing. In other words, when the user wants to look at something, the user wants it to be the full screen. For example, in a calendar program the user may want to concentrate on a day, a week, or a year. The user wants the screen to be at the level on which the user wants to concentrate.
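- By way of a hedged illustration only (Python is not part of the described system), the placeholder mechanism of such a template can be sketched as follows. The template fragment, variable names and values below are hypothetical, chosen to match the financial input variables listed above; the substitution helper is an assumption, not the actual implementation:

```python
import re

# Hypothetical template fragment using $...$ input variables from the
# financial list above; it is retained unchanged between updates.
TEMPLATE = "quote: $q$ last=$s_td$ high=$h_td$ low=$l_td$"

def render(template, values):
    # Replace each $name$ placeholder with its supplied value.
    return re.sub(r"\$(\w+)\$", lambda m: str(values[m.group(1)]), template)

# Only these values need to change per update; the layout (the
# template itself) stays the same.
update = {"q": "ABC Corp", "s_td": 12.5, "h_td": 13.0, "l_td": 12.1}
screen = render(TEMPLATE, update)
```

Under this sketch, refreshing a quote supplies only a new values dictionary, while the template text defines the unchanging layout.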
- The SZML™ format is vector based. Vector graphic appearances enable the appearance of data objects to be transmitted and displayed quickly and with few resources. Using the SZML™ format gives the user a viewing experience as if they were looking at a true three-dimensional ZML™ formatted environment, while in reality the user is navigating a graphical presentation optimized for a reduced-size, two-dimensional display. In the illustrative embodiment, the SZML™ format provides the content author ultimate and explicit control over the appearance the user sees on the screen. In the illustrative embodiment, the SZML™ format is based on ‘screens’ described by a series of vector graphic appearance elements such as rectangles, text, axes, and polygons. The SZML™ elements are described by a mark-up language and, as such, use tags to describe attributes. For example:
<text> title=$str$ justify=int wrap=int format=int
<axes> x_label=$str$ x_low=$str$ x_high=$str$ y_label=$str$ y_low=$str$ y_high=$str$
<polygon> points=$int$ values=‘$int$,$int$ $int$,$int$ $int$,$int$...’ //$int$=0...99
<void>
<rect> coordinates=‘$int$,$int$ $int$,$int$ $int$,$int$ $int$,$int$’
<*all*> name=$str$ zoom_to=$str$ coordinates=‘$int$,$int$ $int$,$int$ $int$,$int$ $int$,$int$’
- To increase the speed at which the data is transmitted, to decrease the bandwidth requirements, and to decrease the storage capacity needed, the SZML™ tags can be reduced to one or two characters. The attributes listed above, for example, can be reduced as follows:
T = text: t = title, j = justify, f = format, w = wrap mode
A = axes: x = x_label, xl = x_low, xh = x_high, y = y_label, yl = y_low, yh = y_high
P = polygon: s = points, v = values
R = rect: c = coordinates
All: n = name, z = zoom_to, c = coordinates
- To further improve data transmission, SZML™ formatted text may be compressed before transmission and decompressed after reception. Any known compression/decompression algorithm suffices.
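- As a non-authoritative sketch of this abbreviation step (the mapping follows the table above, but the encode/decode helpers are assumptions for illustration), long attribute names can be shortened before transmission and expanded again on reception:

```python
# Abbreviation table from the text: long attribute names map to
# one- or two-character codes to shrink transmitted SZML text.
ABBREV = {
    "title": "t", "justify": "j", "format": "f", "wrap": "w",
    "x_label": "x", "x_low": "xl", "x_high": "xh",
    "y_label": "y", "y_low": "yl", "y_high": "yh",
    "points": "s", "values": "v", "coordinates": "c",
    "name": "n", "zoom_to": "z",
}
EXPAND = {short: long for long, short in ABBREV.items()}

def encode(attrs):
    # Replace each long attribute name with its short code.
    return {ABBREV.get(k, k): v for k, v in attrs.items()}

def decode(attrs):
    # Restore the long attribute names on the receiving side.
    return {EXPAND.get(k, k): v for k, v in attrs.items()}
```

The round trip is lossless because the mapping is one-to-one in this sketch.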
- The SZML™ format stores and relates data objects as screens, and stores a plurality of full screens in memory. In SZML™ formatted information, each screen has travel regions (e.g., ‘click regions’) which are ‘zoom links’ to other screens. When the user clicks on a ‘click region’, the viewing perspective zooms from the currently viewed screen to the “zoom_to” screen indicated in the attributes of the screen. For zooming, in one embodiment, screens can be thought of as having three states: small (e.g., 25% of normal display), normal (e.g., 100%) and large (e.g., 400% of normal display).
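- Illustratively, and only as a hedged sketch (the element records and hit-test helper below are hypothetical, not the patented data layout), a screen can be modeled as a flat list of vector elements whose ‘click regions’ carry zoom_to links:

```python
# A screen as a list of vector graphic elements; each element has a
# name, rectangular bounds (x0, y0, x1, y1), and optionally a zoom_to
# attribute naming the destination screen, per the <*all*> attributes.
screen = [
    {"type": "rect", "name": "frame", "coords": (0, 0, 160, 160)},
    {"type": "text", "name": "title", "coords": (10, 5, 150, 20),
     "title": "ABC Corp"},
    {"type": "rect", "name": "chart", "coords": (10, 30, 150, 120),
     "zoom_to": "chart_detail"},
]

def click(elements, x, y):
    # Return the zoom_to target of the topmost element under (x, y),
    # i.e., the screen the viewing perspective should zoom to.
    for el in reversed(elements):
        x0, y0, x1, y1 = el["coords"]
        if x0 <= x <= x1 and y0 <= y <= y1 and "zoom_to" in el:
            return el["zoom_to"]
    return None
```

Clicking inside the chart's click region yields its zoom_to screen; clicks elsewhere yield no transition.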
- When zooming in, the focus screen (e.g., the screen currently being displayed) transitions from normal to large (e.g., from 100% of normal display to 400% of normal display).
- Subsequently, approximately when the ‘click region’ reaches its small state (e.g., 25% of normal display), the “zoom-to” screen is displayed and transitions from small to normal (e.g., 25% of normal display to 100% of normal display). Subsequently, approximately when the focus screen reaches its large state and prior to the clicked screen reaching its normal state, the focus screen is no longer displayed in the appearance. This gives the appearance to the user of zooming into the ‘click region’ (which expands) through the focus screen (which also expands). Illustratively, the expansion is linear, but this need not be the case.
- When zooming out, the focus screen (e.g., the screen currently being displayed) transitions from normal to small (e.g., from 100% of normal display to 25% of normal display). Subsequently, the parent screen transitions from large to normal (e.g., from 400% of normal display to 100% of normal display) and, at some point in time, the focus screen is no longer displayed. This gives the appearance to the user of zooming out of the focus screen (which contracts) to the parent screen (which also contracts). Illustratively, the contraction is also linear, although this too need not be the case. There is no need for a three-dimensional display engine, since the graphic appearances can be displayed using a two-dimensional display engine. Yet, the user still receives a three-dimensional viewing experience.
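- The linear transitions just described can be sketched numerically (a hedged illustration; the function names and the parameter t are assumptions, not the claimed implementation). With t running from 0.0 (start of the transition) to 1.0 (end):

```python
def zoom_in_scales(t):
    # Zooming in: focus screen grows 100% -> 400% while the clicked
    # 'zoom_to' screen grows 25% -> 100%, both linearly in t.
    focus = 100.0 + (400.0 - 100.0) * t
    zoom_to = 25.0 + (100.0 - 25.0) * t
    return focus, zoom_to

def zoom_out_scales(t):
    # Zooming out: focus screen shrinks 100% -> 25% while the parent
    # screen shrinks 400% -> 100%, both linearly in t.
    focus = 100.0 + (25.0 - 100.0) * t
    parent = 400.0 + (100.0 - 400.0) * t
    return focus, parent
```

Because both scalings are simple linear interpolations, a two-dimensional display engine suffices, matching the text's observation.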
- When panning, screens are modeled as a pyramidal structure based on hierarchy level and relative position of parent screens within the pyramid. For example, each screen can have a coordinate (x, y, z) location. The z coordinate corresponds to the hierarchical level of the screen.
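- A hedged sketch of this pyramidal model (the coordinates and screen names below are hypothetical): each screen is keyed by an (x, y, z) coordinate, with z the hierarchy level, and panning looks up the neighboring screen at the same level:

```python
# Screens keyed by (x, y, z); z is the hierarchical level, x and y
# give position relative to sibling screens within that level.
screens = {
    (0, 0, 0): "home",
    (0, 0, 1): "charts",   # child-level screen
    (1, 0, 1): "news",     # to the right of "charts" at the same level
}

def pan(pos, dx=0, dy=0):
    # Move within the current hierarchical level (z is unchanged);
    # returns None when no screen exists in that direction.
    x, y, z = pos
    return screens.get((x + dx, y + dy, z))
```

Panning right from the "charts" screen reaches "news", while panning off the edge of the level finds nothing.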
- The x, y coordinates are used to indicate relative position to each other, based on where the parent screen is. For example, referring to the appearances of
data objects in parent screen 2604, the “news” data object element is to the right of the “charts” data object element. The user changes the viewing perspective to the hierarchical level corresponding with the appearance 2610. The user can pan at this level. When panning right at this lower hierarchical level, the screen to the right is a more detailed screen, at that particular hierarchical level, of the travel region of the “news” data object element. - One embodiment of the
viewing system 1000 addresses low-bandwidth, memory- and processor-limited clients 1014. With high bandwidth and performance, these features become somewhat less critical, but are still very useful. Described above is an illustrative embodiment of the SZML™ format, which is essentially the ZML™ format transformed and optimized for direct screen display. The SZML™ format defines graphic appearances as vectors. The SZML™ format is much faster and simpler to render than the ZML™ format. - As mentioned above, the
stylizer module 1004 employs the template 1005 having a spatial layout style portion 1005a and a contents style portion 1005b. FIG. 23 is a block diagram 2300 illustrating how the spatial layout style portion 1005a and the contents style portion 1005b of the template 1005 operate to enable the stylizer module 1004 to convert an XML™ source content data structure extracted from a data source 2304 into ZML™ formatted data objects. The spatial layout style portion 1005a arranges a plurality of data records 2306a-2306e in the multi-dimensional virtual space 1010 independent of the details 2305 in each of the records 2306a-2306e. For example, if the source content 2304 is a list of a doctor's patients, the spatial layout style portion 1005a arranges the records 2306a-2306e, relative to each other, in the virtual space 1010 based on the person's name or some other identifying characteristic. The spatial layout style portion 1005a generally does not deal with how the data 2305 is arranged within each record 2306a-2306e. As previously discussed, the nature of the arrangement (e.g., mapping) is variable, and relates to a particular physical paradigm. This mapping, in one embodiment, translates to a function wherein the three-dimensional coordinates of the data objects are a function of the one-dimensional textual list of the data objects and the template 1005. - After the spatial
layout style portion 1005a assigns the records 2306a-2306e to locations in the virtual space 1010, the contents style portion 1005b determines how to render each record detail 2305 individually. A shoe store, a Web search engine, and a hotel travel site, for example, typically would all display their individual products and/or services, and thus record detail 2305, differently. The contents style portion 1005b creates the user-friendly style, and arranges the data 2305 within each record 2306a-2306e. Referring back to the patient example, the contents style portion 1005b arranges the patient's information within a region 2316 (e.g., a plate), placing the title A1 on top, the identification number B1 of the patient over to the left, charts in the middle and other information D1 in the bottom right corner. - The
viewing system 1000, optionally, provides a graphical interface 2312 for enabling the user to easily modify the template 1005. As depicted, the interface 2312 includes a display screen 2313. The display screen 2313 includes a portion 2314a that enables the user to modify hierarchical connections. The display screen 2313 also includes a portion 2314b that enables the user to change the content of particular data nodes, and a portion 2314c that enables the user to change the display layout of particular data nodes. - Once the
stylizer module 1004 has arranged all of the data objects spatially using the template 1005, the data objects are in ZML™ format and have a location in the multi-dimensional virtual space 1010. The stylizer module 1004 transfers the data objects in ZML™ format to the protocolizer module 1006 for further processing. - The
protocolizer module 1006 receives the data objects in the ZML™ format and transforms the data objects to a commonly supported protocol such as, for example, WAP, HTML, GIF, Macromedia FLASH™ and/or JAVA™. The protocolizer module 1006 converts the data objects to established protocols to communicate with a plurality of available clients 1014 and browsers 1018 to display the data objects from an adjustable viewing perspective in the navigable, multi-dimensional virtual space 1010. For example, a Macromedia FLASH™ player/plug-in is available in many browsers and provides a rich graphical medium. By translating the ZML™ formatted data objects to Macromedia FLASH™ compatible code, the data objects in the spatial hierarchy (i.e., ZML™ format) can be browsed by any browser with a Macromedia FLASH™ player/plug-in, without any additional software. - In one embodiment, the
protocolizer module 1006 is implemented as a servlet utilizing JAVA™, C, WAP and/or ZML™ formatting. The protocolizer module 1006 intelligently delivers ZML™ formatted data objects as needed to the client 1014. The protocolizer module 1006 preferably receives information regarding the bandwidth of the communication channel 1022 used to communicate with the client 1014. In the illustrative embodiment, the protocolizer module 1006 delivers those data objects that are virtually closest to the user's virtual position. Upon request from the zoom renderer 1020, the protocolizer module 1006 transmits the data objects over the communication channel 1022 to the client 1014. - An example illustrating operation of the
protocolizer module 1006 involves data objects relating to clothing and a template 1005 relating to the physical paradigm of a clothing store. Due to the number of data objects involved, it is unrealistic to consider delivering all the data objects at once. Instead, the protocolizer module 1006 delivers a virtual representation of each data object in a timely manner, based at least in part on the virtual location and/or viewing perspective of the user. For example, if the user is currently viewing data objects relating to men's clothing, then the protocolizer module 1006 may deliver virtual representations of all of the data objects relating to men's pants and shirts, but not women's shoes and accessories. In a model of the data objects as a node tree, such as depicted in FIGS. 12A-12C, the focus node 1202c is the node corresponding to the data object appearance 1206c displayed from the current viewing perspective shown by the camera 1216a. The protocolizer module 1006 delivers to the client 1014 those data objects that correspond to the nodes virtually closest to the user's focus node 1202c and progressively delivers data that are virtually further away. As discussed with regard to FIGS. 12A-12C, the viewing system 1000 employs a variety of methods to determine relative nodal proximity. - For example, referring once again to FIG. 12A, while the user is viewing the data object of node 1202c, the
protocolizer module 1006 delivers those nodes that are within a certain radial distance from the focus node 1202c. If the user is not moving, the protocolizer module 1006 delivers progressively more distant nodes over the communication channel 1022. - The zoom renderer 1020 on the
client 1014 receives the data transmitted by the protocolizer module 1006, authenticates the data via checksums and other methods, and caches the data as necessary. The zoom renderer 1020 also tracks the location of the user's current viewing perspective and any predefined user actions indicating a desired change to the location of the current viewing perspective, and relays this information back to the protocolizer module 1006. In response to the viewing perspective location information and user actions from the zoom renderer 1020, the protocolizer module 1006 provides data objects, virtually located at particular nodes or coordinates, to the client 1014 for display. More particularly, the zoom renderer 1020 tracks the virtual position of the user in the virtual space 1010. According to one feature, if the user is using a mobile client 1014, the zoom renderer 1020 orients the user's viewing perspective in relation to the physical space of the user's location (e.g., global positioning system (“GPS”) coordinates). - The user can influence which data objects the
protocolizer module 1006 provides to the client 1014 by operating the user controls 1007 to change virtual position/viewing perspective. As discussed above, delivery of data objects is a function of virtual direction (i.e., perspective) and the velocity with which the user is changing virtual position. The protocolizer module 1006 receives user position, direction and velocity information from the zoom renderer 1020 and, based on this information, transmits the proximal data node(s). For example, in FIG. 12A, if the user is at node 1202c and virtually traveling toward particular nodes, the protocolizer module 1006 delivers those nodes first. - As previously mentioned, the
client 1014 can be any device with a display including, for example, televisions, personal computers, laptop computers, wearable computers, personal digital assistants, wireless telephones, kiosks, key chain displays, watch displays, touch screens, aircraft, watercraft or automotive displays, handheld video games and/or video game systems. In another embodiment, the kiosk does not contain a display. The kiosk only includes a transmitter (e.g., an IR transmitter) that sends targeted information to a user's client as the user travels within a close vicinity of the kiosk transmitter, whether or not the user requests data. The viewing system 1000 can accommodate any screen size. For example, clients 1014 such as personal digital assistants, wireless telephones, key chain displays, watch displays, handheld video games, and wearable computers typically have display screens which are smaller and more bandwidth limited than those of, for example, typical personal or laptop computers. However, the stylizer module 1004 addresses these limitations by relating data objects in the essentially infinite virtual space 1010. The essentially infinite virtual space 1010 enables the user to view information at a macro level in the restricted physical display areas, to pan through data objects at the same hierarchical level, and to zoom into data objects to view more detail when the desired data object(s) has been found. Bandwidth constraints are also less significant since the protocolizer module 1006 transfers data objects to the client 1014 according to the current location and viewing perspective of the user. - The zoom renderer 1020 processes user input commands from the
user controls 1007 to calculate how data objects are displayed and how to change the user's position and viewing perspective. Commands from the user controls 1007 can include, for example, mouse movement, button presses, keyboard input, voice commands, touch screen inputs, and joystick commands. The user can enter commands to pan (dx, dy), to zoom (dz), and in some embodiments to rotate. The user can also directly select items or various types of warping links to data objects, whereupon the user automatically virtually travels to the selected destination. - The zoom renderer 1020 and the
browser 1018 can be implemented in a variety of ways, depending on the client platform. By way of example, for PCs and kiosks, JAVA™ can be used with, for example, graphic appearance libraries or a custom library, with or without the JAVA Graphics™ API, to create the zoom renderer 1020 and/or the browser 1018 for displaying the ZML™ formatted data objects in the virtual viewing space 1010. Alternatively, a custom C library can be used to create a stand-alone browser or plug-in. In another embodiment, Macromedia FLASH™ compatible code can be employed. For the PALM™ handheld, C software, the PALM™ Development Environment and PALM OS™ software can be employed. For wireless telephones, the zoom renderer 1020 and/or the browser 1018 can be implemented in the language of the telephone manufacturer. For televisions, the zoom renderer 1020 and/or the browser 1018 can be implemented within a cable receiver or an equivalent service. - The zoom renderer 1020 may reside on devices that are limited in capacity, such as vehicle computers, key chains, and PDAs with limited memory and processing capabilities. Such devices often have limited and strained network bandwidth and are not designed for complicated graphic appearances. They may not have a
typical browser 1018 that a personal computer would have. The following techniques help provide a high-bandwidth experience over a low-bandwidth connection (i.e., an expensive experience over inexpensive capabilities). One goal of the following techniques is to keep the size of the code small, including a small stack and a small heap, using the heap over the stack. Another goal is to provide rapid graphic appearances with simple routines and small memory requirements. The following techniques can be variously combined to achieve desired goals.
- Another technique is to get as much as possible off of the end device (e.g., thin client) by performing these conversion steps on another, more powerful CPU, starting with the system storing, in ZML™ format, a collection of one or more data objects. Then the
viewing system 1000 takes the ZML™ format (ASCII) as an input and generates virtual plate structures from the ZML™ formatted data objects. The system 100 generates screen structures from the hierarchical plates. The viewing system 1000 generates, from the screens, SZML™ formatted data objects (ASCII form) as output. The end result is a text file in SZML™ format that can be pasted into a PDA. This end result is a PDA application that does not need plate structures, screen structures, plate conversion functions from ZML™ format, plate conversion functions to screens, or screen conversion functions to SZML™ format. Without these functions, the software is cheaper and faster.
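- The recursive, table-like layout technique discussed above can be sketched as follows (a non-authoritative illustration; the parameter names, border handling and example tree are assumptions): each node's children are laid out n per row inside the parent's cell, shrunk by an outside border percentage, and the rule recurses, analogous to nested HTML tables:

```python
def table_layout(node, rect, per_row=2, border=0.05, out=None):
    # node: (name, [children]); rect: (x, y, w, h) cell of this node.
    if out is None:
        out = {}
    name, children = node
    out[name] = rect
    if not children:
        return out
    x, y, w, h = rect
    # Shrink by the outside border percentage on every side.
    bx, by = w * border, h * border
    x, y, w, h = x + bx, y + by, w - 2 * bx, h - 2 * by
    rows = -(-len(children) // per_row)          # ceiling division
    cw, ch = w / per_row, h / rows
    for i, child in enumerate(children):
        cell = (x + (i % per_row) * cw, y + (i // per_row) * ch, cw, ch)
        table_layout(child, cell, per_row, border, out)
    return out

# Hypothetical node tree for a store catalog.
tree = ("store", [("men", [("pants", []), ("shirts", [])]), ("women", [])])
rects = table_layout(tree, (0.0, 0.0, 100.0, 100.0))
```

No child needs explicit coordinates; its cell follows from its parent's cell and the per-row/border properties, while explicit placement could still override this default, as the text notes.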
- bt=border thickness
- n=name
- p=parent
- Another technique is to compress the ZML™/SZML™ format on the more powerful CPU and uncompress on the less powerful CPU. The
system 100 uses a compression algorithm to compress ZML™ or SZML™ into a CZML™ or CSZML™ format. Thesystem 100 decompresses to ZML™ or SZML™ format on the less powerful CPU. - Another technique is to preprocess the ZML™ to SZML™ format on another CPU format or store data objects in SZML™ format. However, there is a tradeoff. SZML™ formatted data objects have more characters because is the SZML™ format is essentially the ZML™ format expanded into its actual renderable elements, and thus it is larger. For example, it is one thing to describe the shape of a tea pot of size a, b, c and position x, y, z (i.e., ZML™ format) and it is another to describe every polygon in the tea pot (i.e., SZML™ format). The advantage is that SZML™ format is immediately ready for display. For example, where ZML™ format defines the existence of a rectangle in three-dimensional space and it is titled, located at this angle, and the like, SZML™ format explicitly commands the zoom renderer1020 to draw the rectangle at screen coordinates (10, 20, 50, 60). Thus, investment up front, has perpetual payoff.
- According to another technique, the
viewing system 1000 summarizes ZML™ formatted information into screens, that is, a collection of M×N displays on which the user would typically focus. Each screen is a list of vector graphic appearance objects. The viewing system 1000 then smoothly transitions between source and destination screens by linearly scaling the current view, followed by the destination view, as described above. This creates the effect of a true three-dimensional camera and graphic appearance engine (typically expensive) using primitive, inexpensive two-dimensional graphic appearance techniques. - According to another technique, the
viewing system 1000 does not download everything at once. Instead, the viewing system 1000 downloads the template(s) once and then subsequently downloads only irreproducible data. For example, if an appearance is defined by the example list of input variables for the exemplary financial template above, only the data for each data object has to be transmitted for the zoom renderer 1020 to display the data object. The layout of the appearance, the template, remains the same, and the zoom renderer 1020 only changes the displayed values associated with each data object. - In one embodiment, as described above with respect to FIG. 23, the
viewing system 1000 also includes an alteration program with a graphical user interface 2312 to enable the user to edit the definitions of data objects defined in ZML™ or SZML™ format, without the user needing to understand those formats. The interface 2312 enables the user to manually change the zoomed layout of data objects, much like a paint program. The user selects graphic appearance tools and then edits the ZML™ or SZML™ formatted information manually using the interface 2312. For example, if there were a special of the week, the user would manually select the node corresponding to the data object representing the special winter jackets and, using the tools 2314a-2314c, make modifications such as scaling, shrinking, growing, moving, adding, deleting, or otherwise modifying the data object. Also, the user can use the interface 2312 to go beyond the layout algorithm and design the look and feel of the virtual space with greater control. The graphical alteration module operates in combination with the automated layout. - In addition to the software being ideal for many platforms, other hardware devices can augment the user experience. Since ZML™ and SZML™ formatted data can be lightweight (e.g., quick transmission and low memory requirements) using some of the compression/conversion algorithms, a PDA is an
applicable client device 1014 for the viewing system 1000. - FIG. 24 is a conceptual block diagram 2400 depicting a
database server 2402 in communication with a PDA 2404 which is zoom enabled in accord with an illustrative embodiment of the invention. The database server 2402 contains the data objects 2406a-2406f stored in the SZML™ format. The database server 2402 first transmits the data objects 2406a-2406f for the home screen 2412 via the communication channel 2408. As described above with regard to downloading and compression, the data objects 2406a-2406f that are in the closest vicinity of the home screen in the spatial hierarchical relationship are downloaded next. The PDA 2404 has a small memory 2410 that can hold, for example, fifty kilobytes of information. Since the SZML™ formatted data objects 2406a-2406f are compact, the small memory cache 2410 can hold, for example, about one hundred SZML™ data objects 2406a-2406f. Illustratively, FIG. 25 depicts various hierarchically related graphic appearances 2502-2510 and 2510a rendered on the PDA 2512. As depicted, the user navigates down through hierarchical levels of data objects from the graphic appearance 2502 of a retail store to the graphic appearance 2510a of a particular product. - Since the ZML™ and SZML™ data can be lightweight (e.g., quick transmission and low memory requirements) using some of the compression/conversion algorithms, wireless telephony devices are applicable clients for the
viewing system 1000. FIG. 26 illustrates the telephony devices 2602a-2602c displaying the SZML™ data objects using a financial template and the linear expansion and contraction algorithm described above. The telephony device 2602a displays a graphic appearance 2604 of financial information for ABC Corp. The screen 2604 has a ‘click region’ 2606 to expand the displayed chart to reveal to the user more detail of the chart. As described with regard to the SZML™ format, the telephony device 2602b employs the above-discussed linear expansion technique to provide the user with the graphic appearance 2608 and the feeling of zooming through the home graphic appearance 2604 to the zoom_to screen 2610. The telephony device 2602c depicts the zoom_to screen 2610 at its normal state (i.e., 100%). - The user can virtually zoom through the data objects using the
keypad 2612 of the telephony devices 2602. In another embodiment, the user uses a CellZoomPad™ (“CZP™”). The CZP™ device is a clip-on device for cellular telephony devices. The CZP™ device turns the cellular telephony device screen into a touch pad, similar to those found on portable PCs. Moving around the pad performs the zooming. The cell phone attachment covers the screen and keys of the cell phone. The user activates the touch pad by touching portions of the cell phone attachment, which activates certain of the cell phone keys in a manner that is read and understood by the zooming software on the cell phone, for example by using combinations of keys pressed simultaneously. Alternatively, a wire or custom plug-in interfaces directly with the cell phone. - Referring to FIG. 27, another device that can be used as a
user control 1007 in conjunction with the viewing system 1000 is a handheld navigation device 2700. In one embodiment, the navigation device 2700 is wireless. The device 2700 is a handheld joystick-like device that is custom tailored for browsing in the virtual space 1010. The device 2700, illustratively the MANO™, can be used across platforms and clients, for example a personal computer or a television. The device 2700 has an analog three-dimensional joystick 2702, with a loop 2704 on the top. In response to the user actuating the joystick north, south, east or west, the viewing system 1000 pans. In response to the user pushing in or pulling out on the loop 2704, the viewing system 1000 zooms. Optionally, the user can rotate the joystick 2702 to effectuate virtual rotational movement. Four buttons 2706-2712 can provide standard mouse functions, custom functions and/or redundant zooming functions. For example, the functions of the buttons can be to cause the system to take a snapshot of the virtual location of the viewing perspective, or a snapshot of the history (e.g., breadcrumb trail) of the user's trajectory. Other examples of functions include purchasing an item, sending an email, synchronizing data to or from the client, transmitting information to/from a client device, recording music, and signaling an alarm (e.g., causing a system to dial 911). An infrared sensor 2714 option replaces a wired connection. Additionally, the device 2700 can be configured to vibrate in relation to the user's virtual movement, to provide tactile feedback to the user. This feedback can be in synchronization with the user's virtual movements through the multi-dimensional zoom space 1010 to give the user an improved, sensory-enriching experience. In another embodiment, the device 2700 has a speaker and/or microphone to give and/or receive audio signals for interaction with the system.
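- The control mapping just described can be sketched as follows (a hedged illustration; the event names, command tuples and the zoom sign convention are hypothetical, not the device 2700's actual protocol): joystick deflection maps to panning, pushing/pulling the loop maps to zooming, and twisting the joystick maps to virtual rotation:

```python
# Map raw navigation-device inputs to viewing-perspective commands.
def map_input(event, amount=1):
    # event: one of the illustrative gesture names below.
    if event in ("north", "south"):
        return ("pan", 0, amount if event == "north" else -amount)
    if event in ("east", "west"):
        return ("pan", amount if event == "east" else -amount, 0)
    if event in ("push", "pull"):
        # Sign convention is an assumption: push in = zoom in (dz > 0).
        return ("zoom", amount if event == "push" else -amount)
    if event == "rotate":
        return ("rotate", amount)
    raise ValueError(f"unknown input event: {event}")
```

A client's event loop could feed these tuples to the zoom renderer as the pan (dx, dy), zoom (dz) and rotate commands described earlier in this section.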
- While the invention has been particularly shown and described with reference to specific preferred embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (23)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/784,808 US20010045965A1 (en) | 2000-02-14 | 2001-02-14 | Method and system for receiving user input |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18232600P | 2000-02-14 | 2000-02-14 | |
US18236800P | 2000-02-14 | 2000-02-14 | |
US24028700P | 2000-10-13 | 2000-10-13 | |
US24941700P | 2000-11-16 | 2000-11-16 | |
US09/784,808 US20010045965A1 (en) | 2000-02-14 | 2001-02-14 | Method and system for receiving user input |
Publications (1)
Publication Number | Publication Date |
---|---|
US20010045965A1 true US20010045965A1 (en) | 2001-11-29 |
Family
ID=27497493
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/782,965 Expired - Lifetime US6751620B2 (en) | 2000-02-14 | 2001-02-14 | Apparatus for viewing information in virtual space using multiple templates |
US09/784,808 Abandoned US20010045965A1 (en) | 2000-02-14 | 2001-02-14 | Method and system for receiving user input |
US09/783,717 Expired - Lifetime US6785667B2 (en) | 2000-02-14 | 2001-02-14 | Method and apparatus for extracting data objects and locating them in virtual space |
US09/782,968 Abandoned US20020109680A1 (en) | 2000-02-14 | 2001-02-14 | Method for viewing information in virtual space |
US09/783,715 Abandoned US20020075331A1 (en) | 2000-02-14 | 2001-02-14 | Method and apparatus for addressing data objects in virtual space |
US09/782,967 Abandoned US20020105537A1 (en) | 2000-02-14 | 2001-02-14 | Method and apparatus for organizing hierarchical plates in virtual space |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/782,965 Expired - Lifetime US6751620B2 (en) | 2000-02-14 | 2001-02-14 | Apparatus for viewing information in virtual space using multiple templates |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/783,717 Expired - Lifetime US6785667B2 (en) | 2000-02-14 | 2001-02-14 | Method and apparatus for extracting data objects and locating them in virtual space |
US09/782,968 Abandoned US20020109680A1 (en) | 2000-02-14 | 2001-02-14 | Method for viewing information in virtual space |
US09/783,715 Abandoned US20020075331A1 (en) | 2000-02-14 | 2001-02-14 | Method and apparatus for addressing data objects in virtual space |
US09/782,967 Abandoned US20020105537A1 (en) | 2000-02-14 | 2001-02-14 | Method and apparatus for organizing hierarchical plates in virtual space |
Country Status (6)
Country | Link |
---|---|
US (6) | US6751620B2 (en) |
EP (2) | EP1287431A2 (en) |
JP (2) | JP2003529825A (en) |
AU (2) | AU2001238274A1 (en) |
CA (2) | CA2400330A1 (en) |
WO (2) | WO2001061483A2 (en) |
US20050166180A1 (en) * | 2003-08-15 | 2005-07-28 | Lemon Scott C. | Web services enablement and deployment substrate |
US7236982B2 (en) * | 2003-09-15 | 2007-06-26 | Pic Web Services, Inc. | Computer systems and methods for platform independent presentation design |
US7516139B2 (en) * | 2003-09-19 | 2009-04-07 | Jp Morgan Chase Bank | Processing of tree data structures |
US20050125437A1 (en) * | 2003-12-08 | 2005-06-09 | Cardno Andrew J. | Data analysis system and method |
US7975240B2 (en) * | 2004-01-16 | 2011-07-05 | Microsoft Corporation | Systems and methods for controlling a visible results set |
JP4110105B2 (en) * | 2004-01-30 | 2008-07-02 | キヤノン株式会社 | Document processing apparatus, document processing method, and document processing program |
US8863277B2 (en) | 2004-04-07 | 2014-10-14 | Fortinet, Inc. | Systems and methods for passing network traffic content |
DE502004006663D1 (en) * | 2004-05-03 | 2008-05-08 | Siemens Ag | Graphical user interface for representing multiple hierarchically structured sets |
GB2414369B (en) * | 2004-05-21 | 2007-08-01 | Hewlett Packard Development Co | Processing audio data |
US8477331B2 (en) * | 2004-05-27 | 2013-07-02 | Property Publications Pte Ltd. | Apparatus and method for creating an electronic version of printed matter |
US7283848B1 (en) * | 2004-05-27 | 2007-10-16 | Autocell Laboratories, Inc. | System and method for generating display objects representing areas of coverage, available bandwidth and channel selection for wireless devices in a wireless communication network |
US20050289466A1 (en) * | 2004-06-24 | 2005-12-29 | Kaihu Chen | Multimedia authoring method and system using bi-level theme templates |
US20050289031A1 (en) * | 2004-06-28 | 2005-12-29 | Campbell David H | Computerized method of processing investment data and associated system |
US20060020561A1 (en) * | 2004-07-20 | 2006-01-26 | Toshiba Corporation | System for generating a user interface and service cost display for mobile document processing services |
US8763157B2 (en) | 2004-08-23 | 2014-06-24 | Sony Computer Entertainment America Llc | Statutory license restricted digital media playback on portable devices |
US7366974B2 (en) * | 2004-09-03 | 2008-04-29 | Jp Morgan Chase Bank | System and method for managing template attributes |
US20060059210A1 (en) * | 2004-09-16 | 2006-03-16 | Macdonald Glynne | Generic database structure and related systems and methods for storing data independent of data type |
US7483880B2 (en) * | 2004-09-30 | 2009-01-27 | Microsoft Corporation | User interface for database display |
US20060084502A1 (en) * | 2004-10-01 | 2006-04-20 | Shuffle Master, Inc. | Thin client user interface for gaming systems |
US20060074897A1 (en) * | 2004-10-04 | 2006-04-06 | Fergusson Iain W | System and method for dynamic data masking |
US20090132466A1 (en) * | 2004-10-13 | 2009-05-21 | Jp Morgan Chase Bank | System and method for archiving data |
US7747970B2 (en) * | 2004-12-03 | 2010-06-29 | Microsoft Corporation | Previews of information for selected download on auxiliary display |
US20060232578A1 (en) * | 2004-12-21 | 2006-10-19 | Silviu Reinhorn | Collapsible portable display |
US20060234784A1 (en) * | 2004-12-21 | 2006-10-19 | Silviu Reinhorn | Collapsible portable display |
US20060168561A1 (en) * | 2005-01-24 | 2006-07-27 | Integrated Marketing Technologies Inc. | Method and apparatus for enabling live selection of content for print on demand output |
JP4674311B2 (en) | 2005-02-21 | 2011-04-20 | 株式会社リコー | Content browsing system, content browsing method and program |
US20070234232A1 (en) * | 2006-03-29 | 2007-10-04 | Gheorghe Adrian Citu | Dynamic image display |
US7725837B2 (en) * | 2005-03-31 | 2010-05-25 | Microsoft Corporation | Digital image browser |
US7657579B2 (en) * | 2005-04-14 | 2010-02-02 | Emc Corporation | Traversing data in a repeatable manner |
US20080065637A1 (en) * | 2005-04-14 | 2008-03-13 | Emc Corporation | Locating last processed data |
US20080065663A1 (en) * | 2005-04-14 | 2008-03-13 | Emc Corporation | Reestablishing process context |
US7577671B2 (en) * | 2005-04-15 | 2009-08-18 | Sap Ag | Using attribute inheritance to identify crawl paths |
WO2006119632A1 (en) * | 2005-05-13 | 2006-11-16 | Imbibo Incorporated | Method for customizing cover for electronic device |
WO2006126205A2 (en) * | 2005-05-26 | 2006-11-30 | Vircomzone Ltd. | Systems and uses and methods for graphic display |
JP4732029B2 (en) * | 2005-06-29 | 2011-07-27 | キヤノン株式会社 | Layout determining method, information processing apparatus, and layout determining program |
JP4669912B2 (en) * | 2005-07-08 | 2011-04-13 | 株式会社リコー | Content browsing system, program, and content browsing method |
DE102005037841B4 (en) * | 2005-08-04 | 2010-08-12 | Gesellschaft zur Förderung angewandter Informatik e.V. | Method and arrangement for determining the relative position of a first object with respect to a second object, and a corresponding computer program and a corresponding computer-readable storage medium |
US7916157B1 (en) * | 2005-08-16 | 2011-03-29 | Adobe Systems Incorporated | System and methods for selective zoom response behavior |
US7295719B2 (en) * | 2005-08-26 | 2007-11-13 | United Space Alliance, Llc | Image and information management system |
US8065606B1 (en) | 2005-09-16 | 2011-11-22 | Jpmorgan Chase Bank, N.A. | System and method for automating document generation |
US8626584B2 (en) | 2005-09-30 | 2014-01-07 | Sony Computer Entertainment America Llc | Population of an advertisement reference list |
US20070088735A1 (en) * | 2005-10-17 | 2007-04-19 | International Business Machines Corporation | Optimization-based visual context management |
US10657538B2 (en) | 2005-10-25 | 2020-05-19 | Sony Interactive Entertainment LLC | Resolution of advertising rules |
US11004089B2 (en) | 2005-10-25 | 2021-05-11 | Sony Interactive Entertainment LLC | Associating media content files with advertisements |
US8676900B2 (en) | 2005-10-25 | 2014-03-18 | Sony Computer Entertainment America Llc | Asynchronous advertising placement based on metadata |
US20070118425A1 (en) | 2005-10-25 | 2007-05-24 | Podbridge, Inc. | User device agent for asynchronous advertising in time and space shifted media network |
AU2005239672B2 (en) * | 2005-11-30 | 2009-06-11 | Canon Kabushiki Kaisha | Sortable collection browser |
US9817831B2 (en) * | 2005-12-30 | 2017-11-14 | Microsoft Technology Licensing, Llc | Monetization of multimedia queries |
US20070162848A1 (en) * | 2006-01-09 | 2007-07-12 | Apple Computer, Inc. | Predictive styling |
JP4650635B2 (en) * | 2006-02-13 | 2011-03-16 | 株式会社ソニー・コンピュータエンタテインメント | Content and / or service guidance device, guidance method, and program |
US8081827B2 (en) * | 2006-02-28 | 2011-12-20 | Ricoh Co., Ltd. | Compressed data image object feature extraction, ordering, and delivery |
US20090100339A1 (en) * | 2006-03-09 | 2009-04-16 | Hassan Hamid Wharton-Ali | Content Access Tree |
JP2007265221A (en) * | 2006-03-29 | 2007-10-11 | Sanyo Electric Co Ltd | Multiple image display device and onboard navigation system |
US20110014981A1 (en) * | 2006-05-08 | 2011-01-20 | Sony Computer Entertainment Inc. | Tracking device with sound emitter for use in obtaining information for controlling game program execution |
JP2007328510A (en) * | 2006-06-07 | 2007-12-20 | Ricoh Co Ltd | Content conversion device, content display device, content browsing device, content conversion method, content browsing method and program |
JP4960024B2 (en) * | 2006-06-07 | 2012-06-27 | オリンパスメディカルシステムズ株式会社 | Medical image management method and medical image management apparatus using the same |
US8260689B2 (en) * | 2006-07-07 | 2012-09-04 | Dollens Joseph R | Method and system for managing and displaying product images |
US11049175B2 (en) | 2006-07-07 | 2021-06-29 | Joseph R. Dollens | Method and system for managing and displaying product images with progressive resolution display with audio commands and responses |
US8554639B2 (en) * | 2006-07-07 | 2013-10-08 | Joseph R. Dollens | Method and system for managing and displaying product images |
US11481834B2 (en) | 2006-07-07 | 2022-10-25 | Joseph R. Dollens | Method and system for managing and displaying product images with progressive resolution display with artificial realities |
US10614513B2 (en) | 2006-07-07 | 2020-04-07 | Joseph R. Dollens | Method and system for managing and displaying product images with progressive resolution display |
US9691098B2 (en) | 2006-07-07 | 2017-06-27 | Joseph R. Dollens | Method and system for managing and displaying product images with cloud computing |
US20080016023A1 (en) * | 2006-07-17 | 2008-01-17 | The Mathworks, Inc. | Storing and loading data in an array-based computing environment |
US8172577B2 (en) * | 2006-07-27 | 2012-05-08 | Northeastern University | System and method for knowledge transfer with a game |
US8259132B2 (en) * | 2006-08-29 | 2012-09-04 | Buchheit Brian K | Rotationally dependent information in a three dimensional graphical user interface |
WO2008039866A2 (en) * | 2006-09-26 | 2008-04-03 | Accoona Corp. | Apparatuses, methods and systems for an information comparator interface |
WO2008047246A2 (en) * | 2006-09-29 | 2008-04-24 | Peter Salemink | Systems and methods for managing information |
US8660635B2 (en) * | 2006-09-29 | 2014-02-25 | Medtronic, Inc. | Method and apparatus for optimizing a computer assisted surgical procedure |
US8104076B1 (en) | 2006-11-13 | 2012-01-24 | Jpmorgan Chase Bank, N.A. | Application access control system |
US20080120115A1 (en) * | 2006-11-16 | 2008-05-22 | Xiao Dong Mao | Methods and apparatuses for dynamically adjusting an audio signal based on a parameter |
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
JP4296521B2 (en) * | 2007-02-13 | 2009-07-15 | ソニー株式会社 | Display control apparatus, display control method, and program |
JP4561766B2 (en) * | 2007-04-06 | 2010-10-13 | 株式会社デンソー | Sound data search support device, sound data playback device, program |
US8832557B2 (en) * | 2007-05-04 | 2014-09-09 | Apple Inc. | Adjusting media display in a personal display system based on perspective |
US8605008B1 (en) | 2007-05-04 | 2013-12-10 | Apple Inc. | Head-mounted display |
US8090046B2 (en) * | 2007-06-01 | 2012-01-03 | Research In Motion Limited | Interactive compression with multiple units of compression state information |
JP4929061B2 (en) * | 2007-06-04 | 2012-05-09 | 株式会社コナミデジタルエンタテインメント | GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM |
US8589874B2 (en) | 2007-06-11 | 2013-11-19 | Microsoft Corporation | Visual interface to represent scripted behaviors |
US7760405B2 (en) * | 2007-08-30 | 2010-07-20 | Business Objects Software Ltd | Apparatus and method for integrating print preview with data modeling document editing |
US8010910B2 (en) * | 2007-09-04 | 2011-08-30 | Microsoft Corporation | Breadcrumb list supplementing for hierarchical data sets |
US8026933B2 (en) * | 2007-09-27 | 2011-09-27 | Rockwell Automation Technologies, Inc. | Visualization system(s) and method(s) for preserving or augmenting resolution and data associated with zooming or panning in an industrial automation environment |
US20090089402A1 (en) * | 2007-09-28 | 2009-04-02 | Bruce Gordon Fuller | Graphical display system for a human-machine interface |
WO2009046342A1 (en) * | 2007-10-04 | 2009-04-09 | Playspan, Inc. | Apparatus and method for virtual world item searching |
US8416247B2 (en) | 2007-10-09 | 2013-04-09 | Sony Computer Entertainment America Inc. | Increasing the number of advertising impressions in an interactive environment |
JP5148967B2 (en) * | 2007-10-22 | 2013-02-20 | 株式会社ソニー・コンピュータエンタテインメント | Data management apparatus and method |
US9015633B2 (en) * | 2007-10-22 | 2015-04-21 | Sony Corporation | Data management apparatus and method for organizing data elements into multiple categories for display |
JP5059545B2 (en) * | 2007-10-23 | 2012-10-24 | 株式会社リコー | Image processing apparatus and image processing method |
US8397168B2 (en) | 2008-04-05 | 2013-03-12 | Social Communications Company | Interfacing with a spatial virtual communication environment |
EP2223239A4 (en) * | 2007-11-07 | 2012-08-22 | Skinit Inc | Customizing print content |
US8081186B2 (en) * | 2007-11-16 | 2011-12-20 | Microsoft Corporation | Spatial exploration field of view preview mechanism |
US20090132967A1 (en) * | 2007-11-16 | 2009-05-21 | Microsoft Corporation | Linked-media narrative learning system |
US8584044B2 (en) * | 2007-11-16 | 2013-11-12 | Microsoft Corporation | Localized thumbnail preview of related content during spatial browsing |
US20090128581A1 (en) * | 2007-11-20 | 2009-05-21 | Microsoft Corporation | Custom transition framework for application state transitions |
US20090177538A1 (en) * | 2008-01-08 | 2009-07-09 | Microsoft Corporation | Zoomable advertisements with targeted content |
US20090183068A1 (en) * | 2008-01-14 | 2009-07-16 | Sony Ericsson Mobile Communications Ab | Adaptive column rendering |
US8769558B2 (en) | 2008-02-12 | 2014-07-01 | Sony Computer Entertainment America Llc | Discovery and analytics for episodic downloaded media |
US7506259B1 (en) | 2008-02-14 | 2009-03-17 | International Business Machines Corporation | System and method for dynamic mapping of abstract user interface to a mobile device at run time |
US8775960B1 (en) | 2008-03-10 | 2014-07-08 | United Services Automobile Association (Usaa) | Systems and methods for geographic mapping and review |
US20090251459A1 (en) * | 2008-04-02 | 2009-10-08 | Virtual Expo Dynamics S.L. | Method to Create, Edit and Display Virtual Dynamic Interactive Ambients and Environments in Three Dimensions |
US8756304B2 (en) * | 2010-09-11 | 2014-06-17 | Social Communications Company | Relationship based presence indicating in virtual area contexts |
DE102008021183B4 (en) * | 2008-04-28 | 2021-12-30 | Volkswagen Ag | Method and device for displaying information in a vehicle |
US8949233B2 (en) * | 2008-04-28 | 2015-02-03 | Alexandria Investment Research and Technology, Inc. | Adaptive knowledge platform |
US9117007B2 (en) * | 2008-05-14 | 2015-08-25 | Microsoft Technology Licensing, Llc | Visualization of streaming real-time data |
US20090327969A1 (en) * | 2008-06-27 | 2009-12-31 | Microsoft Corporation | Semantic zoom in a virtual three-dimensional graphical user interface |
US9110970B2 (en) * | 2008-07-25 | 2015-08-18 | International Business Machines Corporation | Destructuring and restructuring relational data |
US8943087B2 (en) * | 2008-07-25 | 2015-01-27 | International Business Machines Corporation | Processing data from diverse databases |
US8972463B2 (en) * | 2008-07-25 | 2015-03-03 | International Business Machines Corporation | Method and apparatus for functional integration of metadata |
JP2010033296A (en) * | 2008-07-28 | 2010-02-12 | Namco Bandai Games Inc | Program, information storage medium, and image generation system |
US8170968B2 (en) * | 2008-08-15 | 2012-05-01 | Honeywell International Inc. | Recursive structure for diagnostic model |
US8495007B2 (en) * | 2008-08-28 | 2013-07-23 | Red Hat, Inc. | Systems and methods for hierarchical aggregation of multi-dimensional data sources |
US8463739B2 (en) * | 2008-08-28 | 2013-06-11 | Red Hat, Inc. | Systems and methods for generating multi-population statistical measures using middleware |
US20100064253A1 (en) * | 2008-09-11 | 2010-03-11 | International Business Machines Corporation | Providing Users With Location Information Within a Virtual World |
US9898173B2 (en) * | 2008-10-07 | 2018-02-20 | Adobe Systems Incorporated | User selection history |
WO2010056427A1 (en) * | 2008-11-14 | 2010-05-20 | Exxonmobil Upstream Research Company | Forming a model of a subsurface region |
US9020882B2 (en) | 2008-11-26 | 2015-04-28 | Red Hat, Inc. | Database hosting middleware dimensional transforms |
JP5232619B2 (en) * | 2008-12-18 | 2013-07-10 | 株式会社ソニー・コンピュータエンタテインメント | Inspection apparatus and inspection method |
JP4848001B2 (en) * | 2008-12-18 | 2011-12-28 | 株式会社ソニー・コンピュータエンタテインメント | Image processing apparatus and image processing method |
WO2010073616A1 (en) * | 2008-12-25 | 2010-07-01 | パナソニック株式会社 | Information displaying apparatus and information displaying method |
US8176097B2 (en) * | 2009-01-02 | 2012-05-08 | International Business Machines Corporation | Maintaining data coherency within related multi-perspective user interfaces via session-less queries |
FR2942091A1 (en) * | 2009-02-10 | 2010-08-13 | Alcatel Lucent | MULTIMEDIA COMMUNICATION IN A VIRTUAL ENVIRONMENT |
CN102414649B (en) | 2009-04-30 | 2015-05-20 | 辛纳普蒂克斯公司 | Operating a touch screen control system according to a plurality of rule sets |
EP2251823A1 (en) * | 2009-05-11 | 2010-11-17 | Hasso-Plattner-Institut für Softwaresystemtechnik GmbH | Business object based navigation |
CN101887444B (en) * | 2009-05-15 | 2012-12-19 | 国际商业机器公司 | Navigation method and system for webpage |
US8417739B2 (en) * | 2009-05-29 | 2013-04-09 | Red Hat, Inc. | Systems and methods for object-based modeling using hierarchical model objects |
US8930487B2 (en) * | 2009-05-29 | 2015-01-06 | Red Hat, Inc. | Object-based modeling using model objects exportable to external modeling tools |
US9009006B2 (en) | 2009-05-29 | 2015-04-14 | Red Hat, Inc. | Generating active links between model objects |
US8606827B2 (en) * | 2009-05-29 | 2013-12-10 | Red Hat, Inc. | Systems and methods for extracting database dimensions as data modeling object |
US9292592B2 (en) * | 2009-05-29 | 2016-03-22 | Red Hat, Inc. | Object-based modeling using composite model object having independently updatable component objects |
US9292485B2 (en) | 2009-05-29 | 2016-03-22 | Red Hat, Inc. | Extracting data cell transformable to model object |
US9105006B2 (en) | 2009-05-29 | 2015-08-11 | Red Hat, Inc. | Generating floating desktop representation of extracted model object |
WO2010150545A1 (en) * | 2009-06-24 | 2010-12-29 | パナソニック株式会社 | Graphics drawing device, graphics drawing method, graphics drawing program, storage medium having graphics drawing program stored, and integrated circuit |
KR101809884B1 (en) * | 2009-06-30 | 2017-12-15 | 나이키 이노베이트 씨.브이. | Design of consumer products |
US8763090B2 (en) | 2009-08-11 | 2014-06-24 | Sony Computer Entertainment America Llc | Management of ancillary content delivery and presentation |
JP5744869B2 (en) * | 2009-08-11 | 2015-07-08 | Someones Group Intellectual Property Holdings Pty Ltd. | Choice network navigation |
US9152944B2 (en) | 2009-08-31 | 2015-10-06 | Red Hat, Inc. | Generating rapidly rotatable dimensional view of data objects |
US9152435B2 (en) * | 2009-08-31 | 2015-10-06 | Red Hat, Inc. | Generating a set of linked rotational views of model objects |
US20110054854A1 (en) * | 2009-08-31 | 2011-03-03 | Eric Williamson | Systems and methods for generating dimensionally altered model objects |
US8417734B2 (en) * | 2009-08-31 | 2013-04-09 | Red Hat, Inc. | Systems and methods for managing sets of model objects via unified management interface |
US8365195B2 (en) * | 2009-08-31 | 2013-01-29 | Red Hat, Inc. | Systems and methods for generating sets of model objects having data messaging pipes |
US8909678B2 (en) * | 2009-09-30 | 2014-12-09 | Red Hat, Inc. | Conditioned distribution of data in a lattice-based database using spreading rules |
US9031987B2 (en) * | 2009-09-30 | 2015-05-12 | Red Hat, Inc. | Propagation of data changes in distribution operations in hierarchical database |
US8996453B2 (en) * | 2009-09-30 | 2015-03-31 | Red Hat, Inc. | Distribution of data in a lattice-based database via placeholder nodes |
US8970669B2 (en) | 2009-09-30 | 2015-03-03 | Rovi Guides, Inc. | Systems and methods for generating a three-dimensional media guidance application |
US20110078199A1 (en) * | 2009-09-30 | 2011-03-31 | Eric Williamson | Systems and methods for the distribution of data in a hierarchical database via placeholder nodes |
US8984013B2 (en) * | 2009-09-30 | 2015-03-17 | Red Hat, Inc. | Conditioning the distribution of data in a hierarchical database |
FR2952775B1 (en) * | 2009-11-19 | 2012-01-13 | Infovista Sa | PERFORMANCE MEASURING SERVER AND QUALITY OF SERVICE MONITORING USING A CONTROL LINE INTERFACE. |
US8589344B2 (en) * | 2009-11-30 | 2013-11-19 | Red Hat, Inc. | Systems and methods for generating iterated distributions of data in a hierarchical database |
US8396880B2 (en) * | 2009-11-30 | 2013-03-12 | Red Hat, Inc. | Systems and methods for generating an optimized output range for a data distribution in a hierarchical database |
US8315174B2 (en) * | 2009-12-31 | 2012-11-20 | Red Hat, Inc. | Systems and methods for generating a push-up alert of fault conditions in the distribution of data in a hierarchical database |
US9977472B2 (en) * | 2010-03-19 | 2018-05-22 | Nokia Technologies Oy | Method and apparatus for displaying relative motion of objects on graphical user interface |
US8957920B2 (en) | 2010-06-25 | 2015-02-17 | Microsoft Corporation | Alternative semantics for zoom operations in a zoomable scene |
US8319772B2 (en) * | 2010-07-23 | 2012-11-27 | Microsoft Corporation | 3D layering of map metadata |
US9483175B2 (en) * | 2010-07-26 | 2016-11-01 | Apple Inc. | Device, method, and graphical user interface for navigating through a hierarchy |
US8959454B2 (en) * | 2010-08-09 | 2015-02-17 | International Business Machines Corporation | Table management |
US20120042282A1 (en) * | 2010-08-12 | 2012-02-16 | Microsoft Corporation | Presenting Suggested Items for Use in Navigating within a Virtual Space |
CN103154982A (en) | 2010-08-16 | 2013-06-12 | 社会传播公司 | Promoting communicant interactions in network communications environment |
US9342793B2 (en) | 2010-08-31 | 2016-05-17 | Red Hat, Inc. | Training a self-learning network using interpolated input sets based on a target output |
US10353891B2 (en) | 2010-08-31 | 2019-07-16 | Red Hat, Inc. | Interpolating conformal input sets based on a target output |
US9355383B2 (en) | 2010-11-22 | 2016-05-31 | Red Hat, Inc. | Tracking differential changes in conformal data input sets |
US8364687B2 (en) | 2010-11-29 | 2013-01-29 | Red Hat, Inc. | Systems and methods for binding multiple interpolated data objects |
US10366464B2 (en) | 2010-11-29 | 2019-07-30 | Red Hat, Inc. | Generating interpolated input data sets using reduced input source objects |
US8346817B2 (en) | 2010-11-29 | 2013-01-01 | Red Hat, Inc. | Systems and methods for embedding interpolated data object in application data file |
US9038177B1 (en) | 2010-11-30 | 2015-05-19 | Jpmorgan Chase Bank, N.A. | Method and system for implementing multi-level data fusion |
US20120150633A1 (en) * | 2010-12-08 | 2012-06-14 | Microsoft Corporation | Generating advertisements during interactive advertising sessions |
US8589822B2 (en) | 2010-12-10 | 2013-11-19 | International Business Machines Corporation | Controlling three-dimensional views of selected portions of content |
US10783008B2 (en) | 2017-05-26 | 2020-09-22 | Sony Interactive Entertainment Inc. | Selective acceleration of emulation |
WO2018217377A1 (en) * | 2010-12-16 | 2018-11-29 | Sony Computer Entertainment Inc. | Selective acceleration of emulation |
US8290969B2 (en) | 2011-02-28 | 2012-10-16 | Red Hat, Inc. | Systems and methods for validating interpolation results using monte carlo simulations on interpolated data inputs |
US9489439B2 (en) | 2011-02-28 | 2016-11-08 | Red Hat, Inc. | Generating portable interpolated data using object-based encoding of interpolation results |
US8768942B2 (en) | 2011-02-28 | 2014-07-01 | Red Hat, Inc. | Systems and methods for generating interpolated data sets converging to optimized results using iterative overlapping inputs |
US8862638B2 (en) | 2011-02-28 | 2014-10-14 | Red Hat, Inc. | Interpolation data template to normalize analytic runs |
US9128587B1 (en) * | 2011-03-15 | 2015-09-08 | Amdocs Software Systems Limited | System, method, and computer program for presenting service options to a user utilizing a three-dimensional structure |
US8810598B2 (en) | 2011-04-08 | 2014-08-19 | Nant Holdings Ip, Llc | Interference based augmented reality hosting platforms |
KR20120123198A (en) * | 2011-04-19 | 2012-11-08 | 삼성전자주식회사 | Apparatus and method for editing a virtual space in a terminal |
US20120290612A1 (en) * | 2011-05-10 | 2012-11-15 | Ritoe Rajan V | N-dimensional data searching and display |
US8788203B2 (en) | 2011-05-23 | 2014-07-22 | Microsoft Corporation | User-driven navigation in a map navigation tool |
US9323871B2 (en) | 2011-06-27 | 2016-04-26 | Trimble Navigation Limited | Collaborative development of a model on a network |
US9280273B2 (en) * | 2011-06-30 | 2016-03-08 | Nokia Technologies Oy | Method, apparatus, and computer program for displaying content items in display regions |
AU2012281160B2 (en) | 2011-07-11 | 2017-09-21 | Paper Software LLC | System and method for processing document |
US10592593B2 (en) | 2011-07-11 | 2020-03-17 | Paper Software LLC | System and method for processing document |
AU2012281151B2 (en) | 2011-07-11 | 2017-08-10 | Paper Software LLC | System and method for searching a document |
AU2012282688B2 (en) * | 2011-07-11 | 2017-08-17 | Paper Software LLC | System and method for processing document |
US9292588B1 (en) | 2011-07-20 | 2016-03-22 | Jpmorgan Chase Bank, N.A. | Safe storing data for disaster recovery |
US8930385B2 (en) * | 2011-11-02 | 2015-01-06 | Alexander I. Poltorak | Relevance estimation and actions based thereon |
US9886495B2 (en) * | 2011-11-02 | 2018-02-06 | Alexander I. Poltorak | Relevance estimation and actions based thereon |
US9218692B2 (en) | 2011-11-15 | 2015-12-22 | Trimble Navigation Limited | Controlling rights to a drawing in a three-dimensional modeling environment |
EP2780816B1 (en) * | 2011-11-15 | 2018-03-21 | Trimble Inc. | Providing a real-time shared viewing experience in a three-dimensional modeling environment |
WO2013074568A1 (en) | 2011-11-15 | 2013-05-23 | Trimble Navigation Limited | Browser-based collaborative development of a 3d model |
WO2013078041A1 (en) | 2011-11-22 | 2013-05-30 | Trimble Navigation Limited | 3d modeling system distributed between a client device web browser and a server |
US9430119B2 (en) | 2011-11-26 | 2016-08-30 | Douzen, Inc. | Systems and methods for organizing and displaying hierarchical data structures in computing devices |
US9197925B2 (en) | 2011-12-13 | 2015-11-24 | Google Technology Holdings LLC | Populating a user interface display with information |
EP2605129B1 (en) * | 2011-12-16 | 2019-03-13 | BlackBerry Limited | Method of rendering a user interface |
US9799136B2 (en) * | 2011-12-21 | 2017-10-24 | Twentieth Century Fox Film Corporation | System, method and apparatus for rapid film pre-visualization |
US11232626B2 (en) | 2011-12-21 | 2022-01-25 | Twentieth Century Fox Film Corporation | System, method and apparatus for media pre-visualization |
US20130246037A1 (en) * | 2012-03-15 | 2013-09-19 | Kenneth Paul Ceglia | Methods and apparatus for monitoring operation of a system asset |
US9607424B2 (en) * | 2012-06-26 | 2017-03-28 | Lytro, Inc. | Depth-assigned content for depth-enhanced pictures |
US10129524B2 (en) | 2012-06-26 | 2018-11-13 | Google Llc | Depth-assigned content for depth-enhanced virtual reality images |
US9858649B2 (en) | 2015-09-30 | 2018-01-02 | Lytro, Inc. | Depth-based image blurring |
RU124014U1 (en) * | 2012-09-12 | 2013-01-10 | Арташес Валерьевич Икономов | PERSONALIZED INFORMATION SEARCH SYSTEM |
US20140101608A1 (en) | 2012-10-05 | 2014-04-10 | Google Inc. | User Interfaces for Head-Mountable Devices |
EP2911581B1 (en) * | 2012-10-26 | 2022-10-05 | Koninklijke Philips N.V. | Diagnostic representation and interpretation of ecg leads on a digital display |
US8655970B1 (en) * | 2013-01-29 | 2014-02-18 | Google Inc. | Automatic entertainment caching for impending travel |
US10540373B1 (en) | 2013-03-04 | 2020-01-21 | Jpmorgan Chase Bank, N.A. | Clause library manager |
US9465513B2 (en) * | 2013-04-11 | 2016-10-11 | General Electric Company | Visual representation of map navigation history |
US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
US9786075B2 (en) * | 2013-06-07 | 2017-10-10 | Microsoft Technology Licensing, Llc | Image extraction and image-based rendering for manifolds of terrestrial and aerial visualizations |
US20150007031A1 (en) * | 2013-06-26 | 2015-01-01 | Lucid Global, Llc. | Medical Environment Simulation and Presentation System |
US9704192B2 (en) | 2013-09-30 | 2017-07-11 | Comenity Llc | Method for displaying items on a 3-D shape |
US9710841B2 (en) | 2013-09-30 | 2017-07-18 | Comenity Llc | Method and medium for recommending a personalized ensemble |
US9582516B2 (en) | 2013-10-17 | 2017-02-28 | Nant Holdings Ip, Llc | Wide area augmented reality location-based services |
US20150127624A1 (en) * | 2013-11-01 | 2015-05-07 | Google Inc. | Framework for removing non-authored content documents from an authored-content database |
US20150220984A1 (en) * | 2014-02-06 | 2015-08-06 | Microsoft Corporation | Customer engagement accelerator |
KR101929372B1 (en) | 2014-05-30 | 2018-12-17 | 애플 인크. | Transition from use of one device to another |
US9753620B2 (en) | 2014-08-01 | 2017-09-05 | Axure Software Solutions, Inc. | Method, system and computer program product for facilitating the prototyping and previewing of dynamic interactive graphical design widget state transitions in an interactive documentation environment |
KR20160037508A (en) * | 2014-09-29 | 2016-04-06 | 삼성전자주식회사 | Display apparatus and displaying method of thereof |
US10354311B2 (en) | 2014-10-07 | 2019-07-16 | Comenity Llc | Determining preferences of an ensemble of items |
US9953357B2 (en) | 2014-10-07 | 2018-04-24 | Comenity Llc | Sharing an ensemble of items |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc | Spatial random access enabled video system with a three-dimensional viewing volume |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
US10085005B2 (en) | 2015-04-15 | 2018-09-25 | Lytro, Inc. | Capturing light-field volume image and video data using tiled light-field cameras |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US20160335343A1 (en) * | 2015-05-12 | 2016-11-17 | Culios Holding B.V. | Method and apparatus for utilizing agro-food product hierarchical taxonomy |
US9979909B2 (en) | 2015-07-24 | 2018-05-22 | Lytro, Inc. | Automatic lens flare detection and correction for light-field images |
KR102214319B1 (en) * | 2015-08-05 | 2021-02-09 | 한국전자통신연구원 | Device for converting ship information and method thereof |
US9639945B2 (en) | 2015-08-27 | 2017-05-02 | Lytro, Inc. | Depth-based application of image effects |
US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
US10637986B2 (en) | 2016-06-10 | 2020-04-28 | Apple Inc. | Displaying and updating a set of application views |
WO2018017071A1 (en) * | 2016-07-20 | 2018-01-25 | Hitachi, Ltd. | Data visualization device and method for big data analytics |
US20180068329A1 (en) * | 2016-09-02 | 2018-03-08 | International Business Machines Corporation | Predicting real property prices using a convolutional neural network |
US10846779B2 (en) | 2016-11-23 | 2020-11-24 | Sony Interactive Entertainment LLC | Custom product categorization of digital media content |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
US10860987B2 (en) | 2016-12-19 | 2020-12-08 | Sony Interactive Entertainment LLC | Personalized calendar for digital media content-related events |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
CN107053863B (en) * | 2017-04-17 | 2019-07-05 | 京东方科技集团股份有限公司 | Labeling method and device |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
US10521106B2 (en) * | 2017-06-27 | 2019-12-31 | International Business Machines Corporation | Smart element filtering method via gestures |
US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
US10997541B2 (en) * | 2017-09-22 | 2021-05-04 | 1Nteger, Llc | Systems and methods for investigating and evaluating financial crime and sanctions-related risks |
US11948116B2 (en) * | 2017-09-22 | 2024-04-02 | 1Nteger, Llc | Systems and methods for risk data navigation |
US20190163777A1 (en) * | 2017-11-26 | 2019-05-30 | International Business Machines Corporation | Enforcement of governance policies through automatic detection of profile refresh and confidence |
US10931991B2 (en) | 2018-01-04 | 2021-02-23 | Sony Interactive Entertainment LLC | Methods and systems for selectively skipping through media content |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
US11061919B1 (en) * | 2018-07-13 | 2021-07-13 | Dhirj Gupta | Computer-implemented apparatus and method for interactive visualization of a first set of objects in relation to a second set of objects in a data collection |
JP2020126571A (en) * | 2019-02-05 | 2020-08-20 | 利晃 山内 | Graphical programming system in which commands, processing instructions, or script-language statements of program or script software, originally expressed as characters, are represented as graphics and made computer-executable |
US11907605B2 (en) | 2021-05-15 | 2024-02-20 | Apple Inc. | Shared-content session user interfaces |
US20220368548A1 (en) | 2021-05-15 | 2022-11-17 | Apple Inc. | Shared-content session user interfaces |
CN113505137B (en) * | 2021-07-27 | 2022-07-08 | 重庆市规划和自然资源信息中心 | Real estate space graph updating method |
US20230127460A1 (en) * | 2021-10-22 | 2023-04-27 | Ebay Inc. | Digital Content View Control System |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5081592A (en) * | 1987-08-05 | 1992-01-14 | Tektronix, Inc. | Test system for acquiring, calculating and displaying representations of data sequences |
US5204947A (en) * | 1990-10-31 | 1993-04-20 | International Business Machines Corporation | Application independent (open) hypermedia enablement services |
US5220675A (en) * | 1990-01-08 | 1993-06-15 | Microsoft Corporation | Method and system for customizing a user interface in an integrated environment |
US5596699A (en) * | 1994-02-02 | 1997-01-21 | Driskell; Stanley W. | Linear-viewing/radial-selection graphic for menu display |
US5625783A (en) * | 1994-12-13 | 1997-04-29 | Microsoft Corporation | Automated system and method for dynamic menu construction in a graphical user interface |
US5689667A (en) * | 1995-06-06 | 1997-11-18 | Silicon Graphics, Inc. | Methods and system of controlling menus with radial and linear portions |
US5745717A (en) * | 1995-06-07 | 1998-04-28 | Vayda; Mark | Graphical menu providing simultaneous multiple command selection |
US5874954A (en) * | 1996-04-23 | 1999-02-23 | Roku Technologies, L.L.C. | Centricity-based interface and method |
Family Cites Families (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5262761A (en) | 1987-09-08 | 1993-11-16 | Intelligent Micro Systems, Inc. | Displaying hierarchical tree-like designs in windows |
JPH01250129A (en) | 1988-03-02 | 1989-10-05 | Hitachi Ltd | Display screen operating system |
US5321800A (en) | 1989-11-24 | 1994-06-14 | Lesser Michael F | Graphical language methodology for information display |
US5341466A (en) | 1991-05-09 | 1994-08-23 | New York University | Fractal computer user centerface with zooming capability |
JP3586472B2 (en) | 1991-06-25 | 2004-11-10 | 富士ゼロックス株式会社 | Information display method and information display device |
US5555388A (en) | 1992-08-20 | 1996-09-10 | Borland International, Inc. | Multi-user system and methods providing improved file management by reading |
US5623588A (en) | 1992-12-14 | 1997-04-22 | New York University | Computer user interface with non-salience deemphasis |
US5555354A (en) * | 1993-03-23 | 1996-09-10 | Silicon Graphics Inc. | Method and apparatus for navigation within three-dimensional information landscape |
US5583984A (en) | 1993-06-11 | 1996-12-10 | Apple Computer, Inc. | Computer system with graphical user interface including automated enclosures |
JP2813728B2 (en) | 1993-11-01 | 1998-10-22 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Personal communication device with zoom / pan function |
DE69514820T2 (en) | 1994-10-26 | 2000-05-31 | Boeing Co | METHOD FOR CONTROLLING THE LEVEL OF DETAILS IN A COMPUTER-GENERATED SCREEN DISPLAY OF A COMPLEX STRUCTURE |
US5623589A (en) | 1995-03-31 | 1997-04-22 | Intel Corporation | Method and apparatus for incrementally browsing levels of stories |
US5761656A (en) * | 1995-06-26 | 1998-06-02 | Netdynamics, Inc. | Interaction between databases and graphical user interfaces |
US5838317A (en) | 1995-06-30 | 1998-11-17 | Microsoft Corporation | Method and apparatus for arranging displayed graphical representations on a computer interface |
US6037939A (en) | 1995-09-27 | 2000-03-14 | Sharp Kabushiki Kaisha | Method for enabling interactive manipulation of data retained in computer system, and a computer system for implementing the method |
US5889518A (en) | 1995-10-10 | 1999-03-30 | Anysoft Ltd. | Apparatus for and method of acquiring, processing and routing data contained in a GUI window |
US5903269A (en) | 1995-10-10 | 1999-05-11 | Anysoft Ltd. | Apparatus for and method of acquiring processing and routing data contained in a GUI window |
JP3176541B2 (en) | 1995-10-16 | 2001-06-18 | シャープ株式会社 | Information retrieval device and information retrieval method |
US5889951A (en) | 1996-05-13 | 1999-03-30 | Viewpoint Corporation | Systems, methods, and computer program products for accessing, leasing, relocating, constructing and modifying internet sites within a multi-dimensional virtual reality environment |
US5838326A (en) | 1996-09-26 | 1998-11-17 | Xerox Corporation | System for moving document objects in a 3-D workspace |
US5716546A (en) | 1996-10-23 | 1998-02-10 | Osram Sylvania Inc. | Reduction of lag in yttrium tantalate x-ray phosphors |
US6037944A (en) | 1996-11-07 | 2000-03-14 | Natrificial Llc | Method and apparatus for displaying a thought network from a thought's perspective |
US6166739A (en) | 1996-11-07 | 2000-12-26 | Natrificial, Llc | Method and apparatus for organizing and processing information using a digital computer |
US6122634A (en) | 1996-11-12 | 2000-09-19 | International Business Machines Corporation | Fractal nested layout for hierarchical system |
US6222547B1 (en) * | 1997-02-07 | 2001-04-24 | California Institute Of Technology | Monitoring and analysis of data in cyberspace |
US6137499A (en) * | 1997-03-07 | 2000-10-24 | Silicon Graphics, Inc. | Method, system, and computer program product for visualizing data using partial hierarchies |
US6034661A (en) | 1997-05-14 | 2000-03-07 | Sony Corporation | Apparatus and method for advertising in zoomable content |
US5912668A (en) | 1997-05-30 | 1999-06-15 | Sony Corporation | Controlling a screen display of a group of images represented by a graphical object |
US6025844A (en) | 1997-06-12 | 2000-02-15 | Netscape Communications Corporation | Method and system for creating dynamic link views |
US5999177A (en) | 1997-07-07 | 1999-12-07 | International Business Machines Corporation | Method and system for controlling content on a display screen in a computer system |
JPH1165430A (en) | 1997-08-19 | 1999-03-05 | Matsushita Electric Ind Co Ltd | Map zooming display method and its map zooming display device, and computer for map zooming display device |
US6278991B1 (en) | 1997-08-22 | 2001-08-21 | Sap Aktiengesellschaft | Browser for hierarchical structures |
US6133914A (en) | 1998-01-07 | 2000-10-17 | Rogers; David W. | Interactive graphical user interface |
US6192393B1 (en) * | 1998-04-07 | 2001-02-20 | Mgi Software Corporation | Method and system for panorama viewing |
US6052110A (en) | 1998-05-11 | 2000-04-18 | Sony Corporation | Dynamic control of zoom operation in computer graphics |
US6505202B1 (en) | 1998-08-04 | 2003-01-07 | Linda Allan Mosquera | Apparatus and methods for finding information that satisfies a profile and producing output therefrom |
US6289354B1 (en) | 1998-10-07 | 2001-09-11 | International Business Machines Corporation | System and method for similarity searching in high-dimensional data space |
US6449639B1 (en) * | 1998-12-23 | 2002-09-10 | Doxio, Inc. | Method and system for client-less viewing of scalable documents displayed using internet imaging protocol commands |
JP2000276474A (en) | 1999-03-24 | 2000-10-06 | Fuji Photo Film Co Ltd | Device and method for database retrieval |
GB9907490D0 (en) * | 1999-03-31 | 1999-05-26 | British Telecomm | Computer system |
US20020080177A1 (en) | 2000-02-14 | 2002-06-27 | Julian Orbanes | Method and apparatus for converting data objects to a custom format for viewing information in virtual space |
US20020089550A1 (en) | 2000-02-14 | 2002-07-11 | Julian Orbanes | Method and apparatus for organizing hierarchical screens in virtual space |
US20020085035A1 (en) | 2000-02-14 | 2002-07-04 | Julian Orbanes | Method and apparatus for creating custom formats for viewing information in virtual space |
JP2003529825A (en) | 2000-02-14 | 2003-10-07 | ジオフェニックス, インコーポレイテッド | Method and system for graphical programming |
US20020075311A1 (en) | 2000-02-14 | 2002-06-20 | Julian Orbanes | Method for viewing information in virtual space |
2001
- 2001-02-14 JP JP2001560805A patent/JP2003529825A/en not_active Withdrawn
- 2001-02-14 CA CA002400330A patent/CA2400330A1/en not_active Abandoned
- 2001-02-14 EP EP01910732A patent/EP1287431A2/en not_active Withdrawn
- 2001-02-14 US US09/782,965 patent/US6751620B2/en not_active Expired - Lifetime
- 2001-02-14 WO PCT/US2001/004847 patent/WO2001061483A2/en not_active Application Discontinuation
- 2001-02-14 WO PCT/US2001/004772 patent/WO2001061456A2/en not_active Application Discontinuation
- 2001-02-14 CA CA002400037A patent/CA2400037A1/en not_active Abandoned
- 2001-02-14 EP EP01910691A patent/EP1256046A2/en not_active Withdrawn
- 2001-02-14 US US09/784,808 patent/US20010045965A1/en not_active Abandoned
- 2001-02-14 AU AU2001238274A patent/AU2001238274A1/en not_active Abandoned
- 2001-02-14 US US09/783,717 patent/US6785667B2/en not_active Expired - Lifetime
- 2001-02-14 US US09/782,968 patent/US20020109680A1/en not_active Abandoned
- 2001-02-14 US US09/783,715 patent/US20020075331A1/en not_active Abandoned
- 2001-02-14 JP JP2001560783A patent/JP2004503839A/en not_active Withdrawn
- 2001-02-14 US US09/782,967 patent/US20020105537A1/en not_active Abandoned
- 2001-02-14 AU AU2001238311A patent/AU2001238311A1/en not_active Abandoned
Cited By (215)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050278647A1 (en) * | 2000-11-09 | 2005-12-15 | Change Tools, Inc. | User definable interface system and method |
US7895530B2 (en) | 2000-11-09 | 2011-02-22 | Change Tools, Inc. | User definable interface system, method, support tools, and computer program product |
US9256356B2 (en) * | 2001-03-29 | 2016-02-09 | International Business Machines Corporation | Method and system for providing feedback for docking a content pane in a host window |
US20070266336A1 (en) * | 2001-03-29 | 2007-11-15 | International Business Machines Corporation | Method and system for providing feedback for docking a content pane in a host window |
US6966028B1 (en) * | 2001-04-18 | 2005-11-15 | Charles Schwab & Co., Inc. | System and method for a uniform website platform that can be targeted to individual users and environments |
US7010744B1 (en) * | 2001-05-14 | 2006-03-07 | The Mathworks, Inc. | System and method of navigating and creating electronic hierarchical documents |
US20070180361A1 (en) * | 2001-07-11 | 2007-08-02 | International Business Machines Corporation | Method and system for dynamic web page breadcrumbing using javascript |
US8539330B2 (en) * | 2001-07-11 | 2013-09-17 | International Business Machines Corporation | Method and system for dynamic web page breadcrumbing using javascript |
US7093201B2 (en) * | 2001-09-06 | 2006-08-15 | Danger, Inc. | Loop menu navigation apparatus and method |
US20030043206A1 (en) * | 2001-09-06 | 2003-03-06 | Matias Duarte | Loop menu navigation apparatus and method |
US20030065440A1 (en) * | 2001-09-28 | 2003-04-03 | Pioneer Corporation | Navigation system, mobile navigation apparatus, communication navigation apparatus and information server apparatus, navigation method, mobile navigation method, communication navigation method and server processing method, navigation program, mobile navigation program, communication navigation program and server processing program, and information recording medium |
US6856892B2 (en) * | 2001-09-28 | 2005-02-15 | Pioneer Corporation | Navigation apparatus and information server |
US20050125147A1 (en) * | 2001-12-07 | 2005-06-09 | Guido Mueller | Method for displaying a hierarchically structure list and associated display unit |
US7716582B2 (en) * | 2001-12-07 | 2010-05-11 | Robert Bosch Gmbh | Method for displaying a hierarchically structure list and associated display unit |
US9135659B1 (en) | 2002-03-18 | 2015-09-15 | Cary D. Perttunen | Graphical representation of financial information |
US7928982B1 (en) | 2002-03-18 | 2011-04-19 | Perttunen Cary D | Visible representation of stock market indices |
US8228332B1 (en) | 2002-03-18 | 2012-07-24 | Perttunen Cary D | Visible representation of a user's watch list of stocks and stock market indices |
US8456473B1 (en) | 2002-03-18 | 2013-06-04 | Cary D. Perttunen | Graphical representation of financial information |
US8659605B1 (en) | 2002-03-18 | 2014-02-25 | Cary D. Perttunen | Graphical representation of financial information |
US7830383B1 (en) | 2002-03-18 | 2010-11-09 | Perttunen Cary D | Determining related stocks based on postings of messages |
US7046248B1 (en) * | 2002-03-18 | 2006-05-16 | Perttunen Cary D | Graphical representation of financial information |
US20030231145A1 (en) * | 2002-06-18 | 2003-12-18 | Hitachi, Ltd | Display apparatus for displaying property of electronic appliance |
US7945652B2 (en) * | 2002-08-06 | 2011-05-17 | Sheng (Ted) Tai Tsao | Display multi-layers list item in web-browser with supporting of concurrent multi-users |
US10484455B2 (en) | 2002-08-06 | 2019-11-19 | Sheng Tai (Ted) Tsao | Method and apparatus for information exchange over a web based environment |
US20070198713A1 (en) * | 2002-08-06 | 2007-08-23 | Tsao Sheng T | Display multi-layers list item in web-browser with supporting of concurrent multi-users |
US20040157193A1 (en) * | 2003-02-10 | 2004-08-12 | Mejias Ulises Ali | Computer-aided design and production of an online learning course |
US8453175B2 (en) | 2003-05-29 | 2013-05-28 | Eat.Tv, Llc | System for presentation of multimedia content |
US20040268413A1 (en) * | 2003-05-29 | 2004-12-30 | Reid Duane M. | System for presentation of multimedia content |
US20070243905A1 (en) * | 2004-06-12 | 2007-10-18 | Mobisol Inc. | Method and Apparatus for Operating user Interface of Mobile Terminal Having Pointing Device |
US20060173594A1 (en) * | 2004-08-26 | 2006-08-03 | Johannes Kolletzki | Vehicle multimedia system |
US8161404B2 (en) | 2004-08-26 | 2012-04-17 | Harman Becker Automotive Systems Gmbh | Vehicle multimedia system |
US20100106367A1 (en) * | 2004-08-26 | 2010-04-29 | Harman Becker Automotive Systems Gmbh | Vehicle multimedia system |
US8694201B2 (en) | 2004-08-26 | 2014-04-08 | Harman International Industries, Incorporated | Vehicle multimedia system |
US20060171675A1 (en) * | 2004-08-26 | 2006-08-03 | Johannes Kolletzki | Vehicle multimedia system |
US7643917B2 (en) * | 2004-08-26 | 2010-01-05 | Harman Becker Automotive Systems Gmbh | Vehicle multimedia system |
US9031742B2 (en) | 2004-08-26 | 2015-05-12 | Harman Becker Automotive Systems Gmbh | Vehicle multimedia system |
US8763052B2 (en) | 2004-10-29 | 2014-06-24 | Eat.Tv, Inc. | System for enabling video-based interactive applications |
US7849395B2 (en) * | 2004-12-15 | 2010-12-07 | Microsoft Corporation | Filter and sort by color |
US9507496B2 (en) | 2004-12-15 | 2016-11-29 | Microsoft Technology Licensing, Llc | Filter and sort by format |
US20060129914A1 (en) * | 2004-12-15 | 2006-06-15 | Microsoft Corporation | Filter and sort by color |
US8745482B2 (en) | 2004-12-15 | 2014-06-03 | Microsoft Corporation | Sorting spreadsheet data by format |
US10169318B2 (en) | 2004-12-15 | 2019-01-01 | Microsoft Technology Licensing, Llc | Filter and sort by format |
US20100325526A1 (en) * | 2004-12-15 | 2010-12-23 | Microsoft Corporation | Filter and sort by format |
US10963317B2 (en) | 2004-12-16 | 2021-03-30 | Pegasystems Inc. | System and method for non-programmatically constructing software solutions |
US20060150148A1 (en) * | 2004-12-16 | 2006-07-06 | Openspan, Inc. | System and method for non-programmatically constructing software solutions |
US9766953B2 (en) * | 2004-12-16 | 2017-09-19 | Openspan, Inc. | System and method for non-programmatically constructing software solutions |
US10268525B2 (en) | 2004-12-16 | 2019-04-23 | Pegasystems Inc. | System and method for non-programmatically constructing software solutions |
US20060190833A1 (en) * | 2005-02-18 | 2006-08-24 | Microsoft Corporation | Single-handed approach for navigation of application tiles using panning and zooming |
US10282080B2 (en) | 2005-02-18 | 2019-05-07 | Apple Inc. | Single-handed approach for navigation of application tiles using panning and zooming |
US8819569B2 (en) * | 2005-02-18 | 2014-08-26 | Zumobi, Inc | Single-handed approach for navigation of application tiles using panning and zooming |
US9411505B2 (en) | 2005-02-18 | 2016-08-09 | Apple Inc. | Single-handed approach for navigation of application tiles using panning and zooming |
US20100077354A1 (en) * | 2006-01-27 | 2010-03-25 | Microsoft Corporation | Area Selectable Menus |
US8533629B2 (en) | 2006-01-27 | 2013-09-10 | Microsoft Corporation | Area selectable menus |
US7644372B2 (en) | 2006-01-27 | 2010-01-05 | Microsoft Corporation | Area frequency radial menus |
US20070180392A1 (en) * | 2006-01-27 | 2007-08-02 | Microsoft Corporation | Area frequency radial menus |
US20080126963A1 (en) * | 2006-06-30 | 2008-05-29 | Samsung Electronics Co., Ltd. | User terminal to manage driver and network port and method of controlling the same |
US20080086464A1 (en) * | 2006-10-04 | 2008-04-10 | David Enga | Efficient method of location-based content management and delivery |
US11341202B2 (en) * | 2006-10-04 | 2022-05-24 | Craxel, Inc. | Efficient method of location-based content management and delivery |
US20100064260A1 (en) * | 2007-02-05 | 2010-03-11 | Brother Kogyo Kabushiki Kaisha | Image Display Device |
US8296662B2 (en) * | 2007-02-05 | 2012-10-23 | Brother Kogyo Kabushiki Kaisha | Image display device |
US9495144B2 (en) | 2007-03-23 | 2016-11-15 | Apple Inc. | Systems and methods for controlling application updates across a wireless interface |
US10042898B2 (en) | 2007-05-09 | 2018-08-07 | Illinois Institutre Of Technology | Weighted metalabels for enhanced search in hierarchical abstract data organization systems |
US20160103848A1 (en) * | 2007-05-09 | 2016-04-14 | Illinois Institute Of Technology | Collaborative and personalized storage and search in hierarchical abstract data organization systems |
US9633028B2 (en) * | 2007-05-09 | 2017-04-25 | Illinois Institute Of Technology | Collaborative and personalized storage and search in hierarchical abstract data organization systems |
US20090063517A1 (en) * | 2007-08-30 | 2009-03-05 | Microsoft Corporation | User interfaces for scoped hierarchical data sets |
FR2924506A1 (en) * | 2007-12-03 | 2009-06-05 | Bosch Gmbh Robert | METHOD FOR ORGANIZING PRESSURE-SENSITIVE AREAS ON A PRESSURE-SENSITIVE DISPLAY DEVICE |
US20090172549A1 (en) * | 2007-12-28 | 2009-07-02 | Motorola, Inc. | Method and apparatus for transitioning between screen presentations on a display of an electronic device |
US20100127978A1 (en) * | 2008-11-24 | 2010-05-27 | Peterson Michael L | Pointing device housed in a writing device |
US8310447B2 (en) * | 2008-11-24 | 2012-11-13 | Lsi Corporation | Pointing device housed in a writing device |
US20100162319A1 (en) * | 2008-12-23 | 2010-06-24 | At&T Intellectual Property I, L.P. | Navigation Method and System to Provide a Navigation Interface |
US8973051B2 (en) | 2008-12-23 | 2015-03-03 | At&T Intellectual Property I, Lp | Navigation method and system to provide a navigation interface |
US8196174B2 (en) | 2008-12-23 | 2012-06-05 | At&T Intellectual Property I, L.P. | Navigation method and system to provide a navigation interface |
US20120089903A1 (en) * | 2009-06-30 | 2012-04-12 | Hewlett-Packard Development Company, L.P. | Selective content extraction |
US9032285B2 (en) * | 2009-06-30 | 2015-05-12 | Hewlett-Packard Development Company, L.P. | Selective content extraction |
US9348507B2 (en) * | 2010-10-29 | 2016-05-24 | International Business Machines Corporation | Controlling electronic equipment with a touching-type signal input device |
US20120110495A1 (en) * | 2010-10-29 | 2012-05-03 | International Business Machines Corporation | Controlling electronic equipment with a touching-type signal input device |
US10838581B2 (en) | 2010-10-29 | 2020-11-17 | International Business Machines Corporation | Controlling electronic equipment navigation among multiple open applications |
US9967605B2 (en) * | 2011-03-03 | 2018-05-08 | Sony Corporation | Method and apparatus for providing customized menus |
US20230153347A1 (en) * | 2011-07-05 | 2023-05-18 | Michael Stewart Shunock | System and method for annotating images |
US10338736B1 (en) | 2011-08-05 | 2019-07-02 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10386960B1 (en) | 2011-08-05 | 2019-08-20 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10365758B1 (en) | 2011-08-05 | 2019-07-30 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10345961B1 (en) | 2011-08-05 | 2019-07-09 | P4tents1, LLC | Devices and methods for navigating between user interfaces |
US10664097B1 (en) | 2011-08-05 | 2020-05-26 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10649571B1 (en) | 2011-08-05 | 2020-05-12 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10275087B1 (en) | 2011-08-05 | 2019-04-30 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10656752B1 (en) | 2011-08-05 | 2020-05-19 | P4tents1, LLC | Gesture-equipped touch screen system, method, and computer program product |
US10540039B1 (en) | 2011-08-05 | 2020-01-21 | P4tents1, LLC | Devices and methods for navigating between user interface |
EP2621157A3 (en) * | 2012-01-26 | 2014-05-07 | Kyocera Document Solutions Inc. | Operation device, image forming apparatus and image forming apparatus system |
CN103227879A (en) * | 2012-01-26 | 2013-07-31 | 京瓷办公信息系统株式会社 | Operation device, image forming apparatus and image forming apparatus system |
US9477377B2 (en) | 2012-01-26 | 2016-10-25 | Kyocera Document Solutions Inc. | Operation device, image forming apparatus and image forming apparatus system |
US10481690B2 (en) | 2012-05-09 | 2019-11-19 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for media adjustment operations performed in a user interface |
US10942570B2 (en) | 2012-05-09 | 2021-03-09 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface |
US11947724B2 (en) | 2012-05-09 | 2024-04-02 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface |
US11354033B2 (en) | 2012-05-09 | 2022-06-07 | Apple Inc. | Device, method, and graphical user interface for managing icons in a user interface region |
US9823839B2 (en) | 2012-05-09 | 2017-11-21 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US11314407B2 (en) | 2012-05-09 | 2022-04-26 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US11221675B2 (en) | 2012-05-09 | 2022-01-11 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface |
US11068153B2 (en) | 2012-05-09 | 2021-07-20 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US11023116B2 (en) | 2012-05-09 | 2021-06-01 | Apple Inc. | Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input |
US9886184B2 (en) | 2012-05-09 | 2018-02-06 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US11010027B2 (en) | 2012-05-09 | 2021-05-18 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US10996788B2 (en) | 2012-05-09 | 2021-05-04 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US10969945B2 (en) | 2012-05-09 | 2021-04-06 | Apple Inc. | Device, method, and graphical user interface for selecting user interface objects |
US9753639B2 (en) | 2012-05-09 | 2017-09-05 | Apple Inc. | Device, method, and graphical user interface for displaying content associated with a corresponding affordance |
US10908808B2 (en) | 2012-05-09 | 2021-02-02 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US9990121B2 (en) | 2012-05-09 | 2018-06-05 | Apple Inc. | Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input |
US10884591B2 (en) | 2012-05-09 | 2021-01-05 | Apple Inc. | Device, method, and graphical user interface for selecting object within a group of objects |
US9996231B2 (en) | 2012-05-09 | 2018-06-12 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US10782871B2 (en) | 2012-05-09 | 2020-09-22 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US10775999B2 (en) | 2012-05-09 | 2020-09-15 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10042542B2 (en) | 2012-05-09 | 2018-08-07 | Apple Inc. | Device, method, and graphical user interface for moving and dropping a user interface object |
US10775994B2 (en) | 2012-05-09 | 2020-09-15 | Apple Inc. | Device, method, and graphical user interface for moving and dropping a user interface object |
US10592041B2 (en) | 2012-05-09 | 2020-03-17 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US10496260B2 (en) | 2012-05-09 | 2019-12-03 | Apple Inc. | Device, method, and graphical user interface for pressure-based alteration of controls in a user interface |
US9612741B2 (en) | 2012-05-09 | 2017-04-04 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US9619076B2 (en) | 2012-05-09 | 2017-04-11 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US10073615B2 (en) | 2012-05-09 | 2018-09-11 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10191627B2 (en) | 2012-05-09 | 2019-01-29 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US10095391B2 (en) | 2012-05-09 | 2018-10-09 | Apple Inc. | Device, method, and graphical user interface for selecting user interface objects |
US10175757B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for touch-based operations performed and reversed in a user interface |
US10175864B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for selecting object within a group of objects in accordance with contact intensity |
US10114546B2 (en) | 2012-05-09 | 2018-10-30 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10126930B2 (en) | 2012-05-09 | 2018-11-13 | Apple Inc. | Device, method, and graphical user interface for scrolling nested regions |
US10168826B2 (en) | 2012-05-09 | 2019-01-01 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US11449202B1 (en) * | 2012-06-01 | 2022-09-20 | Ansys, Inc. | User interface and method of data navigation in the user interface of engineering analysis applications |
US10073609B2 (en) * | 2012-10-23 | 2018-09-11 | Nintendo Co., Ltd. | Information-processing device, storage medium, information-processing method and information-processing system for controlling movement of a display area |
US20140115532A1 (en) * | 2012-10-23 | 2014-04-24 | Nintendo Co., Ltd. | Information-processing device, storage medium, information-processing method, and information-processing system |
US9411899B2 (en) | 2012-12-21 | 2016-08-09 | Paypal, Inc. | Contextual breadcrumbs during navigation |
EP2936290A4 (en) * | 2012-12-21 | 2016-07-06 | Paypal Inc | Contextual breadcrumbs during navigation |
CN105144150A (en) * | 2012-12-21 | 2015-12-09 | 电子湾有限公司 | Contextual breadcrumbs during navigation |
US9959025B2 (en) | 2012-12-29 | 2018-05-01 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US10037138B2 (en) | 2012-12-29 | 2018-07-31 | Apple Inc. | Device, method, and graphical user interface for switching between user interfaces |
US10185491B2 (en) | 2012-12-29 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for determining whether to scroll or enlarge content |
US10078442B2 (en) | 2012-12-29 | 2018-09-18 | Apple Inc. | Device, method, and graphical user interface for determining whether to scroll or select content based on an intensity threshold |
US10437333B2 (en) | 2012-12-29 | 2019-10-08 | Apple Inc. | Device, method, and graphical user interface for forgoing generation of tactile output for a multi-contact gesture |
US10620781B2 (en) | 2012-12-29 | 2020-04-14 | Apple Inc. | Device, method, and graphical user interface for moving a cursor according to a change in an appearance of a control icon with simulated three-dimensional characteristics |
US10915243B2 (en) | 2012-12-29 | 2021-02-09 | Apple Inc. | Device, method, and graphical user interface for adjusting content selection |
US9965074B2 (en) | 2012-12-29 | 2018-05-08 | Apple Inc. | Device, method, and graphical user interface for transitioning between touch input to display output relationships |
US10101887B2 (en) | 2012-12-29 | 2018-10-16 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US9996233B2 (en) | 2012-12-29 | 2018-06-12 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US9857897B2 (en) | 2012-12-29 | 2018-01-02 | Apple Inc. | Device and method for assigning respective portions of an aggregate intensity to a plurality of contacts |
US10175879B2 (en) | 2012-12-29 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for zooming a user interface while performing a drag operation |
US9778771B2 (en) | 2012-12-29 | 2017-10-03 | Apple Inc. | Device, method, and graphical user interface for transitioning between touch input to display output relationships |
US20150040075A1 (en) * | 2013-08-05 | 2015-02-05 | Samsung Electronics Co., Ltd. | Display apparatus and control method thereof |
US9606776B2 (en) | 2014-03-07 | 2017-03-28 | Mitsubishi Electric Corporation | Programming device |
US20150286391A1 (en) * | 2014-04-08 | 2015-10-08 | Olio Devices, Inc. | System and method for smart watch navigation |
US10338772B2 (en) | 2015-03-08 | 2019-07-02 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9632664B2 (en) | 2015-03-08 | 2017-04-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9645732B2 (en) | 2015-03-08 | 2017-05-09 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US10067645B2 (en) | 2015-03-08 | 2018-09-04 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10095396B2 (en) | 2015-03-08 | 2018-10-09 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US10387029B2 (en) | 2015-03-08 | 2019-08-20 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US10180772B2 (en) | 2015-03-08 | 2019-01-15 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10402073B2 (en) | 2015-03-08 | 2019-09-03 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US11112957B2 (en) | 2015-03-08 | 2021-09-07 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US9645709B2 (en) | 2015-03-08 | 2017-05-09 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9990107B2 (en) | 2015-03-08 | 2018-06-05 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US10613634B2 (en) | 2015-03-08 | 2020-04-07 | Apple Inc. | Devices and methods for controlling media presentation |
DK201500581A1 (en) * | 2015-03-08 | 2017-01-16 | Apple Inc | Devices, Methods, and Graphical User Interfaces for Displaying and Using Menus |
US10860177B2 (en) | 2015-03-08 | 2020-12-08 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10268341B2 (en) | 2015-03-08 | 2019-04-23 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
DK201500575A1 (en) * | 2015-03-08 | 2016-09-26 | Apple Inc | Devices, Methods, and Graphical User Interfaces for Displaying and Using Menus |
US10048757B2 (en) | 2015-03-08 | 2018-08-14 | Apple Inc. | Devices and methods for controlling media presentation |
US10268342B2 (en) | 2015-03-08 | 2019-04-23 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10452755B2 (en) * | 2015-03-10 | 2019-10-22 | Microsoft Technology Licensing, Llc | Hierarchical navigation control |
US20160267063A1 (en) * | 2015-03-10 | 2016-09-15 | Microsoft Technology Licensing, Llc | Hierarchical navigation control |
US10222980B2 (en) | 2015-03-19 | 2019-03-05 | Apple Inc. | Touch input cursor manipulation |
US11550471B2 (en) | 2015-03-19 | 2023-01-10 | Apple Inc. | Touch input cursor manipulation |
US10599331B2 (en) | 2015-03-19 | 2020-03-24 | Apple Inc. | Touch input cursor manipulation |
US11054990B2 (en) | 2015-03-19 | 2021-07-06 | Apple Inc. | Touch input cursor manipulation |
US9639184B2 (en) | 2015-03-19 | 2017-05-02 | Apple Inc. | Touch input cursor manipulation |
US9785305B2 (en) | 2015-03-19 | 2017-10-10 | Apple Inc. | Touch input cursor manipulation |
US10067653B2 (en) | 2015-04-01 | 2018-09-04 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10152208B2 (en) | 2015-04-01 | 2018-12-11 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US9602729B2 (en) | 2015-06-07 | 2017-03-21 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US11240424B2 (en) | 2015-06-07 | 2022-02-01 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US9706127B2 (en) | 2015-06-07 | 2017-07-11 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US9860451B2 (en) | 2015-06-07 | 2018-01-02 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10455146B2 (en) | 2015-06-07 | 2019-10-22 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US9674426B2 (en) | 2015-06-07 | 2017-06-06 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10303354B2 (en) | 2015-06-07 | 2019-05-28 | Apple Inc. | Devices and methods for navigating between user interfaces |
US11835985B2 (en) | 2015-06-07 | 2023-12-05 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10841484B2 (en) | 2015-06-07 | 2020-11-17 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US11681429B2 (en) | 2015-06-07 | 2023-06-20 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10200598B2 (en) | 2015-06-07 | 2019-02-05 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US9830048B2 (en) | 2015-06-07 | 2017-11-28 | Apple Inc. | Devices and methods for processing touch inputs with instructions in a web page |
US9916080B2 (en) | 2015-06-07 | 2018-03-13 | Apple Inc. | Devices and methods for navigating between user interfaces |
US10346030B2 (en) | 2015-06-07 | 2019-07-09 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9891811B2 (en) | 2015-06-07 | 2018-02-13 | Apple Inc. | Devices and methods for navigating between user interfaces |
US10705718B2 (en) | 2015-06-07 | 2020-07-07 | Apple Inc. | Devices and methods for navigating between user interfaces |
US11231831B2 (en) | 2015-06-07 | 2022-01-25 | Apple Inc. | Devices and methods for content preview based on touch input intensity |
US10185480B1 (en) * | 2015-06-15 | 2019-01-22 | Symantec Corporation | Systems and methods for automatically making selections in user interfaces |
USD769314S1 (en) * | 2015-06-30 | 2016-10-18 | Your Voice Usa Corp. | Display screen with icons |
US9880735B2 (en) | 2015-08-10 | 2018-01-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10884608B2 (en) | 2015-08-10 | 2021-01-05 | Apple Inc. | Devices, methods, and graphical user interfaces for content navigation and manipulation |
US11182017B2 (en) | 2015-08-10 | 2021-11-23 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10698598B2 (en) | 2015-08-10 | 2020-06-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10754542B2 (en) | 2015-08-10 | 2020-08-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US11740785B2 (en) | 2015-08-10 | 2023-08-29 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10963158B2 (en) | 2015-08-10 | 2021-03-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US11327648B2 (en) | 2015-08-10 | 2022-05-10 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10203868B2 (en) | 2015-08-10 | 2019-02-12 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10209884B2 (en) | 2015-08-10 | 2019-02-19 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10162452B2 (en) | 2015-08-10 | 2018-12-25 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10416800B2 (en) | 2015-08-10 | 2019-09-17 | Apple Inc. | Devices, methods, and graphical user interfaces for adjusting user interface objects |
US10248308B2 (en) | 2015-08-10 | 2019-04-02 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interfaces with physical gestures |
US10235035B2 (en) | 2015-08-10 | 2019-03-19 | Apple Inc. | Devices, methods, and graphical user interfaces for content navigation and manipulation |
US11637689B2 (en) | 2016-02-29 | 2023-04-25 | Craxel, Inc. | Efficient encrypted data management system and method |
US11853690B1 (en) | 2016-05-31 | 2023-12-26 | The Mathworks, Inc. | Systems and methods for highlighting graphical models |
US11100075B2 (en) * | 2019-03-19 | 2021-08-24 | Servicenow, Inc. | Graphical user interfaces for incorporating complex data objects into a workflow |
US11003422B2 (en) | 2019-05-10 | 2021-05-11 | Fasility Llc | Methods and systems for visual programming using polymorphic, dynamic multi-dimensional structures |
US20210191585A1 (en) * | 2019-12-23 | 2021-06-24 | Fuji Xerox Co., Ltd. | Information processing apparatus and non-transitory computer readable medium storing computer program |
US11740788B2 (en) | 2022-01-18 | 2023-08-29 | Craxel, Inc. | Composite operations using multiple hierarchical data spaces |
US11880608B2 (en) | 2022-01-18 | 2024-01-23 | Craxel, Inc. | Organizing information using hierarchical data spaces |
Also Published As
Publication number | Publication date |
---|---|
WO2001061456A2 (en) | 2001-08-23 |
AU2001238274A1 (en) | 2001-08-27 |
US6751620B2 (en) | 2004-06-15 |
WO2001061483A2 (en) | 2001-08-23 |
US20020109680A1 (en) | 2002-08-15 |
EP1287431A2 (en) | 2003-03-05 |
US20020069215A1 (en) | 2002-06-06 |
US20020075331A1 (en) | 2002-06-20 |
JP2004503839A (en) | 2004-02-05 |
JP2003529825A (en) | 2003-10-07 |
WO2001061456A9 (en) | 2002-10-31 |
US6785667B2 (en) | 2004-08-31 |
CA2400037A1 (en) | 2001-08-23 |
AU2001238311A1 (en) | 2001-08-27 |
CA2400330A1 (en) | 2001-08-23 |
WO2001061483A3 (en) | 2002-12-05 |
EP1256046A2 (en) | 2002-11-13 |
WO2001061456A3 (en) | 2002-05-02 |
US20020083034A1 (en) | 2002-06-27 |
US20020105537A1 (en) | 2002-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20010045965A1 (en) | Method and system for receiving user input | |
US20020089541A1 (en) | System for graphically interconnecting operators | |
US20010052110A1 (en) | System and method for graphically programming operators | |
US20020089550A1 (en) | Method and apparatus for organizing hierarchical screens in virtual space | |
US20020075311A1 (en) | Method for viewing information in virtual space | |
US20020085035A1 (en) | Method and apparatus for creating custom formats for viewing information in virtual space | |
CN101300621B (en) | System and method for providing three-dimensional graphical user interface | |
Fairbairn et al. | Representation and its relationship with cartographic visualization | |
JP3646582B2 (en) | Electronic information display method, electronic information browsing apparatus, and electronic information browsing program storage medium | |
US20070226314A1 (en) | Server-based systems and methods for enabling interactive, collaborative thin- and no-client image-based applications | |
WO1996030846A1 (en) | An integrated development platform for distributed publishing and management of hypermedia over wide area networks | |
US20110099463A1 (en) | Structured documents and systems, methods and computer programs for creating, producing and displaying three dimensional objects and other related information in structured documents | |
US20020080177A1 (en) | Method and apparatus for converting data objects to a custom format for viewing information in virtual space | |
JP5185793B2 (en) | Map search system | |
Jern | Information drill-down using web tools | |
Hardie | The development and present state of web-GIS | |
CN102110166B (en) | Browser-based body 3D (3-demensional) visualizing and editing system and method | |
Brown | A 3D user interface for visualisation of Web-based data-sets | |
Loudon | Geoscience after IT Part E. Familiarization with IT background | |
Elson et al. | An example of a network-based approach to data access, visualization, interactive analysis, and distribution | |
Soetebier et al. | A VRML and Java-Based Interface for Retrieving VRML Content in Object-Oriented Databases | |
Maddirala | Developing a GIS-based geo-portal with scalable vector graphics (SVG) for accessing environmental information of baden-württemberg | |
Soetebier et al. | Seamless integration of databases in VR for constructing virtual environments | |
Torma et al. | Don't walk like an Egyptian: Coping with shared attention in a mobile 3D system | |
Jern | AVS/UNIRAS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TESTA HURWITZ & THIBEAULT LLP, MASSACHUSETTS Free format text: SECURITY AGREEMENT;ASSIGNORS:GEOPHOENIX, INC.;GUZMAN, ADRIANA T.;ORBANES, JULIAN E.;REEL/FRAME:013874/0786 Effective date: 20011022 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
AS | Assignment |
Owner name: KAPERNELLY ASSETS AG, LLC, DELAWARE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GEOPHOENIX, INC.;REEL/FRAME:022548/0067 Effective date: 20081009 |
AS | Assignment |
Owner name: BUFFALO PATENTS, LLC, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTELLECTUAL VENTURES ASSETS 90 LLC;REEL/FRAME:056979/0643 Effective date: 20210617 |