US20040233222A1 - Method and system for scaling control in 3D displays ("zoom slider") - Google Patents

Method and system for scaling control in 3D displays ("zoom slider")

Info

Publication number
US20040233222A1
US20040233222A1 (Application US 10/725,773)
Authority
US
United States
Prior art keywords
point
model
zoom
user
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/725,773
Inventor
Jerome Lee
Luis Serra
Ralf Kockro
Timothy Poston
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Volume Interactions Pte Ltd
Original Assignee
Volume Interactions Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Volume Interactions Pte Ltd filed Critical Volume Interactions Pte Ltd
Priority to US10/725,773
Publication of US20040233222A1
Assigned to VOLUME INTERACTIONS PTE. LTD. Assignors: POSTON, TIMOTHY; LEE, JEROME CHAN; KOCKRO, RALF ALFONS; SERRA, LUIS (assignment of assignors' interest; see document for details)
Status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04812: Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06F 2203/00: Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048: Indexing scheme relating to G06F 3/048
    • G06F 2203/04806: Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • G06T 2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20: Indexing scheme for editing of 3D models
    • G06T 2219/2016: Rotation, translation, scaling

Definitions

  • the present invention relates to the field of computer graphics, and more particularly to user interaction with computer-generated displays of three-dimensional (3D) data structures.
  • Image viewing software may also support a directed magnification function whereby a user can specify a point in the original image which is used by the system as the center of a magnified or enlarged image. Sometimes this center is set by the position of a mouse controlled cursor. In such contexts, clicking on the mouse causes the view to jump to an enlarged one with its center at the selected point, or “jump to zoom.”
  • a desirable feature in image viewing is smooth zooming. Unlike the jump to zoom function described above, in smooth zooming a point in the image stays fixed in the display and other points in the image move outwards from it. However, this is not supported in conventional image viewing software. Thus, users simply tolerate the need to manually slide the view vertically and horizontally after sizing jumps.
  • a 3D display generally includes more empty space than a 2D image.
  • a 2D image can contain image content or detail at every point in the image. Since a 3D display must be looked at from a particular point in space, any detail between that spatial viewpoint and the object of interest obscures the view. As a result, empty space may be required in 3D displays. When a 3D image is enlarged, however, this otherwise useful empty space tends to fill the display volume with such vast expanses of empty space that a user may have no clue whether to slide left or right, up or down, or forward or back to orient herself and find a particular area of interest.
  • Additionally, specifying a point in 3D presents user interfacing complexities.
  • In what is termed a fully functional 3D interface, a user can move a stylus, pointer or other selector in three directions—horizontally across the display, vertically up and down the display, as well as along the direction into and out of the screen—and select a point. While this facilitates the “zoom from here” or close up mode, it is tedious to have to continually switch between overview and close up modes.
  • In the more common mouse or other 2D interface, only two factors can be changed at a time.
  • the interface can be set such that sideways motion of the interface produces a sideways motion of the cursor, and a vertical interface motion moves the cursor vertically, or, to adapt to 3D display control, the interface can be set (for example by depressing a mouse button) such that a sideways or vertical motion can be associated with the direction into/out of the screen (i.e., the depth dimension of a 3D display), or some fixed combination of these.
  • There is, however, no way that a two-dimensional interface can control all three independent directions without added mode switching.
  • a further complexity of 3D displays is that it is common (in order to see past features not currently of interest) to set a crop box outside which nothing is shown. This is effectively a smaller display box within the volume of space visible in the display window. A user must therefore be able to switch between moving the displayed data—and with it the crop box—and moving the crop box across it. Distinct from the crop box, which is defined relative to the displayed model, is a clipping box which may exist in the same interface, and which typically has its size and location defined directly with reference to the display region, which, analogously to defining a subwindow in a 2D interface (usually done with its sides parallel to those of the main window) defines a subvolume within the viewing box.
  • A system and method for controlling the scaling of a 3D computer model in a 3D display system are presented, including activating a zoom mode, selecting a model zoom point and setting a zoom scale factor.
  • A system, in response to the selected model zoom point and the set scale factor, can implement a zoom operation and automatically move the model zoom point from its original position towards an optimum viewing point.
  • Upon a user's activating a zoom mode, selecting a model zoom point and setting a zoom scale factor, a system can simultaneously move the model zoom point to an optimum viewing point.
  • a system can automatically identify a model zoom point by applying defined rules to visible points of a displayed model that lie in a central viewing area. If no such visible points are available the system can prompt a user to move the model until such points become available, or can select a model and a zoom point on that model by an automatic scheme.
  • FIG. 1 depicts an exemplary system of coordinates used in describing a three dimensional display space according to an exemplary embodiment of the present invention
  • FIGS. 2A-2B illustrate the effects of scaling an exemplary 3D object from different points according to an exemplary embodiment of the present invention
  • FIG. 3 illustrates the exemplary use of a crop box to display only a selected part of an object according to an exemplary embodiment of the present invention
  • FIGS. 4A-4B illustrate the exemplary effects of scaling a 3D object from points near to and distant from the boundary of a current display region or clipping box according to an exemplary embodiment of the present invention
  • FIG. 5 illustrates various exemplary options for the selection of a Model Zoom Point with reference to a current crop box as opposed to with reference to a model according to an exemplary embodiment of the present invention
  • FIG. 6 illustrates an exemplary Magnification Region defined by a planar Context Structure according to an exemplary embodiment of the present invention
  • FIGS. 7A-7D depict exemplary icons for a Model Zoom Point indicator according to an exemplary embodiment of the present invention
  • FIG. 8 depicts an exemplary slider control object used for zoom control according to an exemplary embodiment of the present invention
  • FIG. 9 illustrates an exemplary coordination of scaling with movement of the Model Zoom Point toward the Optimum Viewing Point according to an exemplary embodiment of the present invention
  • FIG. 10 depicts an exemplary process flow according to an exemplary embodiment of the present invention
  • FIG. 11 is an exemplary modular software diagram according to an exemplary embodiment of the present invention.
  • FIGS. 12-18 depict an exemplary zooming in on an aneurysm according to an exemplary embodiment of the present invention.
  • the present invention comprises a user-controlled visual control interface that allows a user to manipulate scaling (either by jumps or smooth expansion) with an easily-learned sense of what will happen when particular adjustments are made, and without becoming lost in data regions where nothing is displayed.
  • an Optimum Viewing Point is fixed near the center of the screen, at a central depth in the display.
  • When zooming control is active, a visual display of a cross or other icon around the zoom center marks this point.
  • there is a larger contextual structure around the Optimum Viewing Point indicating to a user a Magnification Region in which a Model Zoom Point will be selected.
  • User controlled motion of the visible part of the model(s) in the display brings such model(s) into contact with the z-axis or the Magnification Region, and triggers the selection of a Model Zoom Point.
  • If this fixed zoom center is not in line (from the user's viewpoint) with any point of the current crop box, the user is prompted to move the box together with its contents toward the center of the field of view.
  • the model space moves in the display such that the Model Zoom Point approaches the Optimum Viewing Point.
  • the system searches for and selects the model to be scaled and a candidate Model Zoom Point. This requires less effort from the user but correspondingly offers less detailed control.
  • 3D Data Display: a system capable of displaying images of 3D objects which present one or more cues as to the depth (distance from the viewer) of points, such as but not restricted to perspective, occlusion of one element by another, parallax, stereopsis, and focus.
  • a preferred exemplary embodiment uses a computer monitor with shutter glasses that enable stereo depth perception, but the invention is equally applicable in the case of other display systems, such as, for example, a monitor without stereo, a head mounted display providing a screen for each eye, or a display screen emitting different views in different directions by means of prisms, aligned filters, holography or otherwise, as may be known in the art.
  • Display Space: the 3D space whose origin is at the center of the display screen, used to orient points in the display screen. Points in display space are denoted by co-ordinates (x, y, z).
  • Magnification Region: a preferred 3D region displayed to a user by the system once zoom functionality is activated. Used by the system to select a center-of-scaling point.
  • Model Space: the 3D space used to describe the model or models displayed by the 3D display system. Points in model space are denoted by co-ordinates (u, v, w), which relate to display space by a co-ordinate transformation of the form specified in Equation 1. Model space is fixed relative to the model; display space is fixed relative to the display device.
  • Model Zoom Point: the point on a model that remains fixed in a zoom operation, about which all other points are scaled.
  • Optimum Viewing Point: the center or near-center point of a display screen at the apparent depth of the display screen. For simplicity of discussion we assume that this point is also chosen as the origin (0, 0, 0) of the display coordinates (x, y, z), though this may be changed with trivial modifications to the algebra by one skilled in the art.
  • Scaling: multiplying the size of an object by a given number. A number greater than one effects a “zoom in” or magnification operation, while a number less than one effects a “zoom out” or reduction operation.
  • Stereoscopic: relating to a display system used to impart a three-dimensional effect by projecting two versions of a displayed scene or image from slightly different angles. There is a preferred viewing position relative to the display screen from where the stereoscopic effect is most correct, where the eye locations assumed in generating the separate visual signals coincide with actual locations of the user's eyes. (At other positions the stereoscopic effect is equally strong, but the perceived form is distorted relative to the intended form.)
  • Volume Rendering System: a system that allows for the visualization of volumetric data.
  • Volumetric Data: digitized data obtained from some process or application, such as MR and CT scanners, ultrasound machines, seismic acquisition devices, high energy industrial CT scanners, radar and sonar systems, and other types of data input sources.
  • 3D Display System: what is referred to herein as a fully functional 3D display environment (such as, e.g., that of the Dextroscope™ system of Volume Interactions Pte Ltd of Singapore, the assignee of the present application).
  • Such systems allow for three-dimensional interactivity with the display.
  • a user generally holds in one hand, or in each hand, a device whose position is sensed by a computer or other data processing device.
  • the computer monitors the status of at least one control input, such as, e.g., a button, which the user may click, hold down, or release, etc.
  • Such devices may not be directly visible to the user, being hidden by a mirror; rather, in such exemplary systems, the user sees a virtual tool (a computer generated image drawn according to the needs of the application) co-located with the sensed device.
  • FIG. 1 depicts an exemplary co-ordinate system for 3D displays.
  • A plane 103 represents the apparent physical display window at a preferred central depth in the display.
  • Display window 103 is generally a computer screen, which may be moved to a different apparent position by means of lenses or mirrors.
  • the referent of plane 103 is a pair of screens occupying similar apparent positions relative to the user's left and right eyes, via a lens or lenses that allow comfortable focus of the eyes.
  • the preferred central depth is at or near the distance at which the user sees the physical surface of the monitor, in some cases via a mirror. Around this distance the two depth cues of stereopsis and eye accommodation are most in agreement, thus leading to greater comfort in viewing.
  • Also shown is an origin 102 of an orthogonal co-ordinate system having axes 107, 108 and 109, respectively.
  • a schematic head and eye 115 are shown, representing the point from which the display is viewed by a user.
  • the following conventions will be used given the location of such user 115 : the x-axis, or horizontal axis, is 107 , the y-axis, or vertical axis, is 108 , and the z-axis, or depth axis, is 109 .
  • Positive directions along these axes are designated as rightward, upward and toward the user, respectively. ‘Greater depth’ thus refers to a greater value of (−z).
  • the origin 102 as so defined has display space around it on all sides, rather than being near a boundary of the display device. Furthermore, for many users the agreement of optical accommodation with depth cues such as stereopsis (and in certain exemplary display systems parallax) causes an object at this depth to be most comfortably examined. Such an origin, therefore, is termed the Optimum Viewing Point.
  • Control interfaces, such as buttons, sliders, etc., that control system behavior typically retain their apparent size and position unless separately moved by the user.
  • Actual 3D models are generated from some external process or application, such as, e.g., computerized tomography, magnetic resonance, seismography, or other sensing modalities.
  • The 3D models are equipped with attributes, such as, for example, colors, as well as other necessary data, such as, for example, normal vectors, specularity, and transparency, as may be required or desirable to enable the system to render them visually in ways that have positions and orientations in a shared model space.
  • FIGS. 2A and 2B show two choices 201 of such an unmoving or fixed point denoted by a “+” icon, with the different effects of scaling an object 202 to be three times larger 203 along all three axes (and hence in all directions).
  • In FIG. 2A the Optimum Viewing Point 201 is chosen to remain fixed. Thus, all points on the expanded object 203 remain centered about that point.
  • In FIG. 2B a point somewhat translated from (0, 0, 0) was chosen as the center of scaling.
  • As a result, the center of the expanded object 203 has moved within the display space.
  • a model center of scaling is also known as a “zoom point” or “Model Zoom Point.”
  • The correspondence between display space coordinates and model space coordinates may also be modified by rotation, reflection and other geometric operations by multiplying the matrix [a_ij] by other appropriate matrices, as may be known in the art.
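  • Equation 1 itself is not reproduced in this extract, so the short C++ sketch below assumes the usual affine form, in which display co-ordinates (x, y, z) are obtained from model co-ordinates (u, v, w) by multiplying by the matrix [a_ij] and adding a translation. Under that assumption, scaling the model by a factor about a chosen Model Zoom Point amounts to updating the transformation so that the zoom point maps to the same display position before and after. All names (Vec3, Mat3, ModelTransform, scaleAboutZoomPoint) are illustrative and not taken from the patent.

      #include <array>

      struct Vec3 { double x, y, z; };                       // a display- or model-space point
      Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
      Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
      Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

      struct Mat3 {                                          // the matrix [a_ij] (assumed form of Equation 1)
          std::array<std::array<double, 3>, 3> a;
          Vec3 mul(Vec3 p) const {
              return {a[0][0]*p.x + a[0][1]*p.y + a[0][2]*p.z,
                      a[1][0]*p.x + a[1][1]*p.y + a[1][2]*p.z,
                      a[2][0]*p.x + a[2][1]*p.y + a[2][2]*p.z};
          }
      };

      struct ModelTransform {                                // assumed: display = A * model + t
          Mat3 A;
          Vec3 t;
          Vec3 toDisplay(Vec3 modelPoint) const { return A.mul(modelPoint) + t; }
      };

      // Scale the whole model space by 'factor' about a Model Zoom Point given in display
      // co-ordinates.  The zoom point stays fixed in the display; every other point moves
      // away from it (factor > 1) or toward it (factor < 1), as in FIGS. 2A-2B.
      void scaleAboutZoomPoint(ModelTransform& xform, Vec3 zoomPointDisplay, double factor) {
          for (auto& row : xform.A.a)
              for (auto& v : row) v *= factor;               // scale the linear part
          // new_display = factor * (old_display - zoomPoint) + zoomPoint, hence:
          xform.t = factor * (xform.t - zoomPointDisplay) + zoomPointDisplay;
      }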
  • the positions of models within the model space can be modifiable in an application.
  • While the present description of the invention addresses primarily the common scaling of objects in a shared model space, the extension to the case of one or more such objects, each in a separate model space that is itself related to the main model space, will be evident to one skilled in the art.
  • FIGS. 4A-4B illustrate the effect of a zoom point near the boundary of the available display region (or current clipping box).
  • In FIG. 4A, zoom point 401 is chosen.
  • Point 401, being centered with respect to the crop box boundary 450, causes minimal loss of the enlarged object 405 from view.
  • In FIG. 4B, when zoom point 412, located near the left boundary of the crop box 450, is chosen, large portions of the object 411 are lost from view upon its magnification to the enlarged object 413.
  • This choice of zoom point makes more of model 411 move out of view and become undisplayable than occurs for the same model 410 using model zoom point 401, which is in a more central location.
  • Sometimes, however, a user desires to zoom an object using a model zoom point that is near a boundary (either of the crop box or of the viewing box).
  • The reduction of the resulting inconvenience to the user is a major aim of the present invention. (Note that a similar effect occurs if the model zoom point is near the surface of the current crop box, but since the user frequently manipulates the crop box this is less often a problem.)
  • the present invention may be implemented in various display system environments. For illustration purposes its implementation is described herein in two such exemplary environments.
  • An exemplary preferred embodiment of the invention is in a Dextroscope™-like environment, but the invention is understood to be fully capable of implementation using, for example, a standard mouse-equipped computer, an interface using hardware not otherwise mentioned here such as a joystick or trackball, a head-mounted display, or any functional equivalent.
  • Sections 1-4 below describe the successive steps of user interaction with a display according to the present invention.
  • a user signals that the system should enter a state (‘zoom mode’) in which signals are interpreted by the system as directed to the control of magnification rather than other features.
  • This may be done through, for example, a voice interface (speaking a command such as ‘engage zoom mode’ which is recognized by the system), by clicking on a 2D button with the mouse or on a 2D button with a Dextroscope™-type stylus, by touching a button, or, in a preferred exemplary embodiment, by merely touching (as opposed to clicking) a zoom slider interface object as described below (in connection with FIG. 8).
  • Next, the user or the system selects a Model Zoom Point, a point in the model space.
  • Examples of such a Model Zoom Point are the points 201 shown in FIGS. 2A-2B.
  • This selection may be done in a number of ways. For example, in a mouse interface, the user may click on a point of the screen, and the Model Zoom Point selected will be the nearest visible (non-transparent) point on the model that is in line with that point from the user's viewpoint.
  • a moving “currently selected” point can be displayed that the user may select with some input interaction, such as, e.g., an additional click.
  • In a 3D interface, the user may click on a three-dimensional point (on or off of the visible surface of the displayed object), which then becomes the Model Zoom Point.
  • Other such selection means may be utilized as may be known in the art.
  • the system uses a centering method for assisting a user in selecting the Model Zoom Point.
  • the system examines the z-axis of the display coordinate system (x, y, z) to find the nearest point in display space (0, 0, Z 0 ) at which the current display includes a visible point of some model in model space, in the current position of model space relative to the display coordinates. If such a point exists, by being visible it is necessarily inside the current crop box (if such a crop box is available and enabled, as is typical for, e.g., volume imaging but less so for, e.g., rendering a complex machine design or virtual film set.).
  • If no such point exists, the user is then prompted to move the crop box (with its contents, so that the change is of the transformation type quantified in Equation (1), as discussed above) until the crop box does meet the z-axis.
  • This may be accomplished, for example, in a Dextroscope™-like display system by the user grasping the box with the workpiece hand and moving the sensor device, the tool (visible or logical) attached to the sensor device, and the box in coordination with the tool, until the box is sufficiently centered in the display for such a passing through of the z-axis to occur.
  • Alternatively, the screen position of the box may be dragged across the display in a standard ‘drag and drop’ action, since the z-component of the motion is not impacted in such an operation, and the object may be maintained through this step at constant z.
  • Alternatively, the system may determine Z0 by a default rule involving box geometry, as is illustrated in FIG. 5. For example, it may define (0, 0, Z0) as (a) the point 501 nearest the user (in FIG. 5 the user 500 views from the far left of the Figure) at which the z-axis 510 meets the crop box 520; (b) the point 502 farthest from the user at which the z-axis meets the box; (c) the mid-point 503 of the latter two points 501, 502; (d) the point on the z-axis nearest the centroid of the box; (e) the z-value of the (x, y, z) position of the centroid of the box; or (f) such other rules as may be desirable or useful in a given design context. Alternatively, it may determine Z0 by a default rule involving the crop box contents.
  • It may, as above, set Z0 at the z-value of the (x, y, z) position of the point nearest the z-axis at which a visible point on a model exists, or it may define Z0 as the z-value of the (x, y, z) position of the centroid of the points in the box that are currently treated as visible rather than transparent.
  • Numerous other alternatives as may be known in the art may be implemented in various alternative exemplary embodiments.
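  • As a concrete illustration of default rules (a)-(c) above, the following sketch assumes the crop box is available as an axis-aligned box in display co-ordinates; the names CropBox, Z0Rule and pickZ0 are illustrative only and not taken from the patent.

      #include <optional>

      struct CropBox {                                       // axis-aligned box in display co-ordinates (illustrative)
          double xMin, xMax, yMin, yMax, zMin, zMax;
          bool containsXY(double x, double y) const {
              return x >= xMin && x <= xMax && y >= yMin && y <= yMax;
          }
      };

      enum class Z0Rule { NearestFace, FarthestFace, MidPoint };

      // Return Z0, the depth at which (0, 0, Z0) is taken as the candidate zoom location,
      // or nothing if the z-axis misses the crop box (the user would then be prompted to
      // re-centre the box, as described above).  Recall that +z points toward the user.
      std::optional<double> pickZ0(const CropBox& box, Z0Rule rule) {
          if (!box.containsXY(0.0, 0.0))
              return std::nullopt;                           // z-axis does not meet the box
          switch (rule) {
              case Z0Rule::NearestFace:  return box.zMax;                      // rule (a)
              case Z0Rule::FarthestFace: return box.zMin;                      // rule (b)
              case Z0Rule::MidPoint:     return 0.5 * (box.zMin + box.zMax);   // rule (c)
          }
          return std::nullopt;
      }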
  • the system may, in an exemplary embodiment, set the Model Zoom Point to be the center of the currently selected model, the origin of internal model coordinates (distinct from general model space coordinates (u, v, w)) which may or may not coincide with such a center, the center of the bounding box of the current model, the Optimum Viewing Point or the origin (0, 0, 0).
  • a second centering method for selecting the Model Zoom Point can be utilized, as illustrated in FIG. 6.
  • this method utilizes the concept of a Magnification Region 603 .
  • a Magnification Region is a central region (fixed by the system or adjusted by the user, and made visible to the user) of display space, within which a visible point of the model or its crop box may be selected.
  • this region is shown by displaying a translucent Context Plane 602 .
  • A Context Plane covers much of the screen, leaving a hole 601 (circular in the example depicted in FIG. 6, but other shapes, such as, for example, square, rectangle, ellipse, hexagon, etc., may alternatively be used).
  • Such a Context Plane 602 may preferably be drawn after all other rendering and with the depth buffer of the graphics rendering system turned off, so that the colors of images at all apparent depths are modified by it to thus highlight the hole.
  • Such color modification may, for example, comprise blending all pixels with gray, so that the modified parts are de-emphasized, and the parts within the hole 601 are emphasized.
  • the plane is physically rendered for each eye, so that it has an apparent depth. If this depth is set at that at which the user perceives the display screen to be (for example, through mirrors, lenses or other devices, such that it need not be the physical location of the display surface), it is rendered identically to each eye.
  • the apparent depth of the display screen is often most preferred for detailed examination, but other depths may be used as well.
  • a structure rendered translucently without reference to the depth buffer after other stereo elements have been rendered can be of use for any 3D-located feature of a display, not only as an icon marking a Model Zoom Point.
  • such translucency is utilized only for a context structure, such as the Context Plane 602 , to call attention to an opaquely rendered point marker.
  • the location of an object rendered in such translucent manner may appear very definite to the user, making this a useful technique for placing one object within another. It permits the user's visual system to construct (perceive) a consistent model of what is being seen.
  • Suppose two opaque objects are rendered, the first of which geometrically occludes the second by having parts which block lines of sight to parts of the second, but the second is rendered after the first, opaquely replacing it in the display. The user is then faced with conflicting depth cues.
  • Stereopsis (plus parallax and perspective if available) indicate the second as more distant, while occlusion indicates that it is nearer.
  • the visual system can resolve the conflict by perceiving that the second is visible through the first, as though the first were translucent to light from (and only from) the second object. While this is not common with physical, non-virtual objects, the mental transfer of transparency to an object that was in fact opaquely rendered appears to be automatic and comfortable to users. Such technique is thus referred to as apparent transferred translucency.
  • the hole 601 in the Context Plane 602 defines the Magnification Region 603 (or, more precisely, the cross-section of the Magnification Region at the z-value of the Context Plane).
  • this region is the half-cone (or analogous geometric descriptor for a non-circular shape used for the hole 601 ) consisting of all points lying on straight lines that begin at the viewpoint of a user 610 (element 610 is a stylized circular “face” with an embedded eye, intended to schematically denote a user's viewpoint) and that pass through the hole 601 in the Context Plane 602 .
  • the Magnification Region may be defined in a variety of ways, such as, for example, the corresponding cone for the right eye's viewpoint, the cone for the left eye's viewpoint, the set of points that are in the cones for both eyes' viewpoints (i.e., the intersection of the two cones), or as the set of points that are in the cone for either eye's viewpoint (i.e., the union of the two cones), or any contextually reasonable alternative.
  • such regions are further truncated by a near and a far plane, such that points respectively nearer to or farther from the user are not included.
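  • For the single-viewpoint, circular-hole case just described, membership in the Magnification Region can be tested as in the following sketch: a point belongs to the region if the line from the viewpoint toward it passes through the hole in the Context Plane and the point is not cut off by the near and far truncation planes. The function and parameter names are illustrative, and a circular hole centred on the z-axis is assumed.

      struct P3 { double x, y, z; };

      // Viewpoint of the user in display co-ordinates; circular hole of radius holeRadius
      // centred on the z-axis in the Context Plane at depth planeZ; points nearer than
      // zNear or farther than zFar are excluded (+z is toward the user).
      bool inMagnificationRegion(P3 p, P3 viewpoint, double planeZ, double holeRadius,
                                 double zNear, double zFar) {
          if (p.z > zNear || p.z < zFar) return false;       // outside the truncation planes
          double denom = p.z - viewpoint.z;
          if (denom == 0.0) return false;                    // degenerate: point at the viewpoint's depth
          double s = (planeZ - viewpoint.z) / denom;         // where the viewpoint->p line crosses the plane
          if (s <= 0.0) return false;                        // plane is behind the viewpoint for this direction
          double hx = viewpoint.x + s * (p.x - viewpoint.x);
          double hy = viewpoint.y + s * (p.y - viewpoint.y);
          return hx * hx + hy * hy <= holeRadius * holeRadius;   // crossing lies inside the hole
      }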
  • the system selects the nearest such point as a Model Zoom Point.
  • the system may select the nearest visible point of the model (or alternatively, of the crop box) within the Magnification Region which lies at a greater depth than the Context Structure. (In most 3D display environments it is unnecessary to examine all of the points in the model to find this point, inasmuch as a depth buffer generally records the depth of the nearest visible point along each line of sight).
  • the system may select the nearest such point, which lies on the viewer's line of sight through the center of the Context Structure (and this choice may also be subject to the condition that the point be at a greater depth than the Context Structure).
  • the system may select a point by minimizing the sum of squared depth beyond the Context Structure and squared distance from the viewer's line of sight through the center of the Context Structure, multiplied by some chosen coefficients to emphasize depth proximity or centrality respectively.
  • Many alternative selection rules can be implemented as desired in various embodiments of the invention.
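  • The last of these alternatives can be written down directly. The sketch below assumes that the candidate visible points inside the Magnification Region have already been gathered (e.g. from the depth buffer), that the central line of sight is the z-axis, and that contextZ is the depth of the Context Structure; the weights and names are illustrative only.

      #include <limits>
      #include <optional>
      #include <vector>

      struct Point3 { double x, y, z; };

      // Pick the candidate minimizing
      //   wDepth * (depth beyond the Context Structure)^2 + wCentral * (squared distance from the z-axis),
      // considering only points lying at greater depth than the Context Structure.
      std::optional<Point3> selectModelZoomPoint(const std::vector<Point3>& visiblePoints,
                                                 double contextZ, double wDepth, double wCentral) {
          std::optional<Point3> best;
          double bestCost = std::numeric_limits<double>::infinity();
          for (const Point3& p : visiblePoints) {
              if (p.z > contextZ) continue;                  // not beyond the Context Structure (+z toward user)
              double depthBeyond = contextZ - p.z;
              double offAxisSq = p.x * p.x + p.y * p.y;
              double cost = wDepth * depthBeyond * depthBeyond + wCentral * offAxisSq;
              if (cost < bestCost) { bestCost = cost; best = p; }
          }
          return best;                                       // empty if no candidate qualified
      }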
  • the system may, in an exemplary embodiment, thus optionally provide for the user to identify one or more of the displayed models (by, for example, clicking on their images with a 3D stylus or 2D mouse, by calling out their names, or otherwise), and may then optionally select the center of a single or first-chosen or last-chosen model, the centroid of all chosen models, or other point selections as may be known in the art.
  • the system may prompt the user to select a Model Zoom Point by whatever means of 3D point selection is standard in the application, or may offer the opportunity to select such a point in replacement of the automatic selection just described.
  • These selections of model and Model Zoom Point can be, for example, automated and integrated, as described in the following pseudocode. It is noted that the system begins by testing whether the user has aligned a model with the z-axis or Magnification Region as described above; if she has not, the preferred exemplary embodiment defaults to an automatic scheme rather than prompting her to do so. In what follows, the convention is used that text following a // is a comment describing the active code, though the ‘code’ itself is simplified for clarity. “Widget” refers to some interactive display object.
  • class ScalingControl {                       // defines the Control Widget for Scaling: typically a slider
    public:                                      // functions callable by other parts of the program, described more fully below
        bool Contain (Point);                    // test for containing a given point
        void Update ( );                         // modify graphics according to the scale value read from the widget
        void Active ( );                         // use the fact that the user touches the widget (without depressing
                                                 //   a button to drag and change its scale value) to engage zoom mode
        bool Update_Model_Zoom_Point ( );        // modify Model Zoom Point according to the state of the widget
        void Render_Context_Plane ( );           // add Context Plane, appropriately placed, to display
        void Render_Model_Zoom_Point ( );        // add Zoom Point icon to display
    private:
        Point mModelZoomPoint;                   // data accessible only to the functions of ScalingControl
    };
    // The following pseudo-definitions describe the functions performed by the functions introduced above.
    // (Only a fragment survives in this extract; the enclosing signature of the first body is assumed.)
  • void ScalingControl::Update ( )
    {
        model.size = Get_Scale_Factor ( );       // read the scale value from the widget
        if (model.size > size_threshold)
            Show_Clipping_Box ( );               // large models: activate the clipping box to preserve performance
        else
            Hide_Clipping_Box ( );
        Draw_Model_Zoom_Point ( );
    }
    bool ScalingControl::Update_Model_Zoom_Point ( )
    {
        // Search for a Model Zoom Point using four methods:
        // Method 1 - select the user-nearest visible point (if any) along the z-axis.
        // (The remaining methods and the rest of this function are omitted in this extract.)
    }
  • The Program Entry Point pseudocode below illustrates the way in which the above exemplary functionality can, for example, be called by a functioning application.
  • void main ( )
    {
        // Set up variables and states, create objects
        Initialization ( );
        Tool tool;                               // Create one 3D tool
        ScalingControl scalingControl;           // Create one control widget
        while (true)
        {
            Render_Model ( );
            if (scalingControl.Contain (tool.GetPosition ( )))   // If 3D tool is inside control widget
            {
                if (tool.IsButtonPressed ( ))    // If the 3D tool button is pressed
                    scalingControl.Active ( );   // Bring Model Zoom Point to Optimum Viewing Point
                else
                    scalingControl.Update ( );   // Update value of Model Zoom Point
            }
            UpdateSystem ( );                    // Update all display and system variables
        }
    }
  • the Model Zoom Point is automatically selected, according to defined rules, as described above.
  • the user's control over this process consists of what she places within the Magnification Box, or what part of what model she was viewing prior to activating the zoom function, or what model is nearest the Optimum Viewing Point.
  • Such automatic selection of the Model Zoom Point relieves the user of ‘logistical’ concerns, thus allowing her to focus on the work at hand.
  • an exemplary embodiment may allow the user at any point to invoke the procedure described above, at the beginning of the current Section 2, ‘Selection Of Model Zoom Point’, to choose a different point and override the automatic selection.
  • the system displays an icon, such as for example, a cross, at the Model Zoom Point.
  • In a preferred exemplary embodiment, an everted cross 700 of four triangles 710, each pointing inward toward the Model Zoom Point 711, is utilized.
  • Any desirable icon form or design may be substituted, such as, for example, other flat patterns of polygons 715, a flat cross 705, or a three-dimensional surface structure 720, as depicted in FIGS. 7C, 7A and 7D, respectively.
  • the icon is drawn at the depth of the Model Zoom Point, rather than as simply a marker on the display screen.
  • In a stereo display it is drawn such that a user whose visual system is capable of stereo fusion will perceive it to be at the intended depth, and it is drawn as an opaque object capable of hiding objects behind it and of being hidden by objects between it and the user's eye or eyes.
  • These different depth cues strengthen the user's sense of its three-dimensional location.
  • the displayed Context Plane is moved to lie at the same depth, with its hole centered on the Model Zoom Point.
  • the icon can be rendered with apparent transferred translucency, as discussed above.
  • Alternative exemplary implementations of the invention could function without context cues in locating the Model Zoom Point, or could use other cues such as, for example, a system of axes or other lines through it, with or without apparent transferred translucency.
  • Once the Model Zoom Point has been selected and indicated, zooming is enabled.
  • Zooming can be controlled in a variety of ways, such as, for example, (1) voice control (with the system recognizing commands such as (a) “Larger” or “Smaller” and responding with a step change in size, (b) “Reset” and restoring the previous size, (c) “Quit zoom mode”, etc.); or (2) step response to key strokes, mouse or tool clicks on an icon, or clicks on a particular sensor button, etc.
  • a middle mouse button click might automatically mean “Larger” while a right click is interpreted as “Smaller.”
  • In a preferred exemplary embodiment, a slider such as that depicted in FIG. 8 is utilized.
  • Such a slider may also be used as a zoom mode trigger, when, for example, a user uses a stylus to touch its body 801 without clicking.
  • When a user places the point of the 3D stylus in or near the slider bead 802 and holds down the sensor button while moving the sensor, this moves the stylus and drags the slider bead 802 along the slider bar 801.
  • the distance moved is mapped by the system to a magnification factor, by an appropriate algorithm.
  • In an exemplary embodiment, the algorithm assigns the minimum allowed value for the zoom factor to the left end of the slider 810, the maximum allowed value to the right end of the slider 811, and interpolates linearly between such assignments.
  • Alternative exemplary embodiments include an exponential or logarithmic relation, or a function defined by assigning zoom-factor values at certain positions of the slider bead 802 and interpolating in a piecewise linear, polynomial or rational B-spline manner between them, or a variety of other options as may be known in the art.
  • Other exemplary alternatives for the control of the zoom factor include the use of a standard mouse-controlled slider, a control wheel dragged around by a Dextroscope™-like system stylus, a physical scrolling wheel such as is known in certain mouse designs, etc.
  • The value range of the zoom factor may run from some minimum value less than unity (maximum reduction factor) to some maximum value greater than unity (maximum magnification factor). The range need not be symmetric about unity, however, inasmuch as some embodiments utilize magnification to a greater extent than reduction, and vice versa.
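  • Two of the mappings mentioned above, linear interpolation and an exponential relation, can be sketched as follows, taking the bead's normalized position t in [0, 1] along the bar to the zoom factor; alphaMin, alphaMax and the function names are illustrative assumptions, not values from the patent.

      #include <algorithm>
      #include <cmath>

      // Linear mapping: the minimum allowed factor at the left end of the slider,
      // the maximum at the right end, interpolated linearly in between.
      double zoomFactorLinear(double t, double alphaMin, double alphaMax) {
          t = std::clamp(t, 0.0, 1.0);
          return alphaMin + t * (alphaMax - alphaMin);
      }

      // Exponential alternative: equal bead motions multiply the factor by equal amounts,
      // which is often convenient when the range is not symmetric about unity.
      double zoomFactorExponential(double t, double alphaMin, double alphaMax) {
          t = std::clamp(t, 0.0, 1.0);
          return alphaMin * std::pow(alphaMax / alphaMin, t);
      }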
  • As the zoom proceeds, the model space is moved in a manner intended to add to a user's comfort and convenience in examining a model, while avoiding the perceptive disjunction that would follow from large shifts. If the position in display coordinates (x, y, z) of the Model Zoom Point is (x0, y0, z0) before zooming begins, depicted as the point 901 in FIG. 9, the Model Zoom Point is translated toward the Optimum Viewing Point as the zoom factor increases.
  • This will typically produce the most comfortable location of the zoomed model for detailed viewing and manipulation, while not losing the original layout of the larger context.
  • As the zoom factor is reduced again, the Model Zoom Point moves back toward its original location 901.
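  • The extract above breaks off before giving the exact coordination used in FIG. 9, so the following sketch only assumes a simple monotone scheme: the Model Zoom Point is carried from its original display position (x0, y0, z0) toward the Optimum Viewing Point at the origin by a fraction that grows with the zoom factor, and returns toward (x0, y0, z0) as the factor is reduced. The fraction used here is an assumption, not the patent's formula.

      #include <algorithm>
      #include <cmath>

      struct Vec3d { double x, y, z; };

      // Display position to which the Model Zoom Point is translated for a given zoom
      // factor alpha.  No movement at alpha <= 1; fully centred on the Optimum Viewing
      // Point (the origin) once alpha reaches alphaFull.  Both the form and the default
      // alphaFull are illustrative assumptions.
      Vec3d zoomPointDisplayPosition(Vec3d original, double alpha, double alphaFull = 4.0) {
          double f = 0.0;
          if (alpha > 1.0)
              f = std::min(1.0, std::log(alpha) / std::log(alphaFull));   // fraction of the way to the origin
          return { (1.0 - f) * original.x, (1.0 - f) * original.y, (1.0 - f) * original.z };
      }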
  • In an exemplary embodiment, a load estimation function is attached to the crop box (such as, for example, a computation of the volume it currently occupies in the display space, which can be multiplied or otherwise modified by one or more factors specifying (a) the density of the rendering used; (b) the spacing of 3D texture slices; (c) the spacing of sample points within a 3D texture slice, where permitted by the hardware; (d) the spacing of the rays used (in embodiments utilizing a ray-casting rendering system); or such other quantities that can modify the quality and speed of rendering in a given exemplary system).
  • When the estimated load becomes too large, the system automatically activates the clipping box at default or user-specified size and position values, without requiring any affirmative user intervention.
  • Alternatively or additionally, factors such as (a) to (d) just described may be automatically modified to reduce the load.
  • Conversely, when the load decreases, the system, in an exemplary embodiment, may enlarge or remove the clipping box, or so modify the factors such as (a) to (d) as to improve the quality of the rendering while increasing the load within supportable limits.
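  • A load estimate of the kind described above might, purely as an illustration, combine the crop box's display-space volume with factors (a)-(d); the structure, the division by the spacings (smaller spacing meaning more work) and the threshold test are assumptions of this sketch rather than details given in the patent.

      struct RenderFactors {
          double density;        // (a) density of the rendering used
          double sliceSpacing;   // (b) spacing of 3D texture slices
          double sampleSpacing;  // (c) spacing of sample points within a slice
          double raySpacing;     // (d) spacing of the rays used (ray-casting embodiments)
      };

      double estimateLoad(double cropBoxDisplayVolume, const RenderFactors& f) {
          // Smaller spacings mean more samples and hence more work, so they divide.
          return cropBoxDisplayVolume * f.density / (f.sliceSpacing * f.sampleSpacing * f.raySpacing);
      }

      bool shouldActivateClippingBox(double cropBoxDisplayVolume, const RenderFactors& f, double maxLoad) {
          return estimateLoad(cropBoxDisplayVolume, f) > maxLoad;
      }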
  • FIG. 10 is a flowchart depicting process flow in an exemplary preferred embodiment of the invention.
  • process flow begins when a user indicates to the system that she desires to scale an object in a model.
  • the user may indicate such directive by, for example, moving an input device such that the tip of a displayed virtual tool is inside the bead 802 of a displayed control object, such as, for example, the zoom slider 801 depicted in FIG. 8.
  • numerous alternative embodiments may have numerous alternative methods of signaling the system that a zoom or scaling function is desired.
  • the system determines whether a visible object is inside the Magnification Region. If there is such a visible object, process flow moves to 1004 and selects the center of magnification/reduction as in the First or Second Method above. If there is no such object, then according to the method chosen from those described above the system shall, in an exemplary preferred embodiment, enter an automatic selection process such as that illustrated by the pseudocode above. Alternatively, as shown in this diagram 1003 , it prompts the user to move the object until the determination gives a positive result, upon which it can proceed to 1004 .
  • As described above, either these or numerous alternative methods support the selection of a Model Zoom Point, depending upon the type of display environment used in a particular embodiment of the invention, as well as whether a crop box and/or Magnification Region is utilized, etc.
  • Process flow then moves to 1005 where the system, given a user input as to the zoom factor, as described above, magnifies or reduces the object or objects, optionally changes the level of detail as described above in the Section entitled “Automatic Activation Of Measures to Preserve Performance”, and automatically moves the objects closer to or farther from the center of the viewing area, as described above.
  • Process flow then passes to 1006.
  • If the magnification is large, the system activates the clipping box so as to preserve a high level of display performance.
  • Once the magnification is again reduced, the system will deactivate the clipping box and allow for full viewing of the model by the user.
  • other methods of modifying the load on the system may be applied, as described, for example, in Section 6 above.
  • the system determines whether the user wishes to terminate the zoom operation. If “YES,” process flow moves to 1008 , and zoom operation stops. If “NO,” then process flow returns to 1005 and further magnifications and/or reductions, with the appropriate translations of objects, are implemented.
  • FIG. 11 depicts an exemplary modular software program of instructions which may be executed by an appropriate data processor, as is known in the art, to implement a preferred exemplary embodiment of the present invention.
  • the exemplary software program may be stored, for example, on a hard drive, flash memory, memory stick, optical storage medium, or other data storage devices as are known or may be known in the art.
  • the program When the program is accessed by the CPU of an appropriate data processor and run, it performs, according to a preferred exemplary embodiment of the present invention, a method for controlling the scaling of a 3D computer model in a 3D display system.
  • the exemplary software program has four modules, corresponding to four functionalities associated with a preferred exemplary embodiment of the present invention.
  • The first module is, for example, an Input Data Access Module 1101, which can accept user inputs via a user interface as may be known in the art, such as, for example, a zoom function activation signal, a zoom scaling factor, and current crop box and clipping box settings, all as described above.
  • a second module is, for example, a Magnification Region Generation Module 1102 , which, once signalled by the Input Data Access Module that a zoom function has been activated, displays a Magnification Region around the Optimum Viewing Point in the display. If no model(s) are visible within the Magnification Region the module prompts a user to move model(s) within the Magnification Region.
  • A third module 1103 receives inputs from the Input Data Access Module 1101 and the Magnification Region Generation Module 1102 regarding what model(s) are currently located in the Magnification Region, and applies the defined rules, as described above, to select a Model Zoom Point to be used as the center of scaling.
  • a fourth module is, for example, a Scaling and Translation Module 1104 , which takes data inputs from, for example, the three other modules 1101 , 1102 , and 1103 , and implements the scaling operation and translates the Model Zoom Point towards or away from the Optimum Viewing Point as determined by defined rules and the value of the scaling factor chosen by a user.
  • To illustrate the functionalities available in exemplary embodiments of the present invention, an exemplary zoom operation to view an aneurysm in a brain will next be described with reference to FIGS. 12-18.
  • The screen shots were acquired using an exemplary implementation of the present invention on a Dextroscope™ 3D data set display system, from Volume Interactions Pte Ltd of Singapore. Exemplary embodiments of the present invention can be implemented on this device. Visible in the figures are a 3D object and a virtual control palette which appears below it.
  • FIG. 12 depicts an exemplary original object from a CT data set, positioned somewhere in 3D space.
  • a user intends to zoom into a large aneurysm in the CT data set, in this example a bubble-like object (aneurysm) in the vascular system of a brain pointed to by the arrow.
  • FIG. 13 depicts an activation of zoom mode wherein a user can, for example, move a virtual pen to a virtual “zoom slider” bead.
  • A system can then, for example, automatically select a Zoom Point, here indicated by the four-triangle cross, which appears buried inside the data.
  • The Zoom Point is selected in this exemplary case at the nearest point in the data that lies in line with the Optimum Viewing Point.
  • A Contextual Structure (the circular area surrounding the four-triangle cross) is also displayed to focus a user's attention on the Zoom Point.
  • FIG. 15 depicts how, once the Zoom Point coincides with the aneurysm (as here; an exemplary system can adjust the depth of the Zoom Point as a user moves a 3D object towards it), a user can press a button on the zoom slider.
  • The Contextual Structure, being no longer needed, thus disappears.
  • A Zoom Box can, for example, be activated, which can crop the 3D data set to obtain an optimal rendering time. In systems with very high rendering speeds this functionality can, for example, be implemented only at higher magnifications, or not at all.
  • A zoom slider can display the amount of magnification, which in these figures appears behind the virtual pen but is nonetheless partially visible.

Abstract

A system and method for controlling the scaling of a 3D computer model in a 3D display system are presented, including activating a zoom mode, selecting a model zoom point and setting a zoom scale factor. In exemplary embodiments according to the present invention, a system, in response to the selected model zoom point and the set scale factor, can implement a zoom operation and automatically move the model zoom point from its original position towards an optimum viewing point. In exemplary embodiments according to the present invention, upon a user's activating a zoom mode, selecting a model zoom point and setting a zoom scale factor, a system can simultaneously move the model zoom point to an optimum viewing point. In preferred exemplary embodiments according to the present invention, a system can automatically identify a model zoom point by applying defined rules to visible points of a displayed model that lie in a central viewing area. If no such visible points are available, the system can prompt a user to move the model until such points become available, or can select a model and a zoom point on that model by an automatic scheme.

Description

    CROSS REFERENCE TO OTHER APPLICATIONS
  • This application claims the benefit of U.S. Provisional Patent Applications 60/505,345, 60/505,346 and 60/505,344, each filed on Nov. 29, 2002, and all under common assignment herewith. [0001]
  • FIELD OF THE INVENTION
  • The present invention relates to the field of computer graphics, and more particularly to user interaction with computer-generated displays of three-dimensional (3D) data structures. [0002]
  • BACKGROUND OF THE INVENTION
  • When viewing an image on a computer or other electronically generated display, such as that of a photograph, a diagram or an X-ray, it is often necessary to examine one region in closer detail than is provided by the original resolution. As a result, most conventional image-viewing software has some type of scale-change controls, such as, e.g., a scale menu, magnify and shrink controls, or the like. Commonly, when a magnified image is larger than the available display window, the region being displayed is not the region of current interest, and a user must re-center the region of interest in the display. [0003]
  • Image viewing software may also support a directed magnification function whereby a user can specify a point in the original image which is used by the system as the center of a magnified or enlarged image. Sometimes this center is set by the position of a mouse controlled cursor. In such contexts, clicking on the mouse causes the view to jump to an enlarged one with its center at the selected point, or “jump to zoom.”[0004]
  • A desirable feature in image viewing is smooth zooming. Unlike the jump to zoom function described above, in smooth zooming a point in the image stays fixed in the display and other points in the image move outwards from it. However, this is not supported in conventional image viewing software. Thus, users simply tolerate the need to manually slide the view vertically and horizontally after sizing jumps. [0005]
  • In the viewing and manipulation of 3D displays, the problems of magnification management become more acute for several reasons. First, in dealing with volumes, there is considerably more space to contend with. For example, a two-dimensional, or 2D, image of an object that has four times the width and four times the height of what fits in a window at a given resolution requires a user to look through sixteen window-size regions to recover a given point of interest. However, a 3D image of the same object, similarly scaled to four times the width, height and depth of a viewing box, actually encompasses a volume sixty-four times as large as the viewing box. [0006]
  • Second, a 3D display generally includes more empty space than a 2D image. A 2D image can contain image content or detail at every point in the image. Since a 3D display must be looked at from a particular point in space, any detail between that spatial viewpoint and the object of interest obscures the view. As a result, empty space may be required in 3D displays. When a 3D image is enlarged, however, this otherwise useful empty space tends to fill the display volume with such vast expanses of empty space that a user may have no clue whether to slide left or right, up or down, or forward or back to orient herself and find a particular area of interest. [0007]
  • Additionally, specifying a point in 3D presents user interfacing complexities. In what is termed a fully functional 3D interface, a user can move a stylus, pointer or other selector in three directions—horizontally across the display, vertically up and down the display, as well as along the direction into and out of the screen—and select a point. While this facilitates the “zoom from here” or close up mode, it is tedious to have to continually switch between overview and close up modes. In the more common mouse or other 2D interface, only two factors can be changed at a time. Thus, the interface can be set such that sideways motion of the interface produces a sideways motion of the cursor, and a vertical interface motion moves the cursor vertically, or, to adapt to 3D display control, the interface can be set (for example by depressing a mouse button) such that a sideways or vertical motion can be associated with the direction into/out of the screen (i.e., the depth dimension of a 3D display), or some fixed combination of these. However, there is no way that a two dimensional interface can control all three independent directions without added mode switching. [0008]
  • A further complexity of 3D displays is that it is common (in order to see past features not currently of interest) to set a crop box outside which nothing is shown. This is effectively a smaller display box within the volume of space visible in the display window. A user must therefore be able to switch between moving the displayed data—and with it the crop box—and moving the crop box across it. Distinct from the crop box, which is defined relative to the displayed model, is a clipping box which may exist in the same interface, and which typically has its size and location defined directly with reference to the display region, which, analogously to defining a subwindow in a 2D interface (usually done with its sides parallel to those of the main window) defines a subvolume within the viewing box. Thus, no part of the model that would be rendered outside the clipping box is shown, which can be useful to limit the data displayed to an amount that can be handled at interactive speeds. While a user may shrink a crop box for similar reasons of performance, its primary use is to pare away parts of the model for the sake of visibility. It moves with the model, and represents a choice of which part of the model to look at. [0009]
  • For general applications of the present invention it is important to distinguish the crop box from the bounding box, which also moves with the model but typically serves different functions, such as checking quickly for collisions. If the bounding boxes of two objects do not overlap, neither do the objects, though if the objects do not fill their bounding boxes the collision of the boxes only means that collision of the objects must be checked in more detail. It is often useful to trigger selection or highlighting when a user-controlled cursor enters the bounding box of an object. In many applications (such as in Computer Aided Design, or CAD) there may be a multiplicity of models, each with its own bounding box, but in such applications it is rare for the user to be able to adjust the bounding box, or for the bounding box to be coupled to the graphics by cropping—causing not to be rendered—parts of the model which lie outside it. Indeed, normally no point of the model does lie outside it. In a display of the parts of an automobile, for example, the graphics functioning often requires that each model have a bounding box that acts within the code, though not visible to or modifiable by the user, but it is rare to give each wheel, pipe, washer, etc., an individual crop box by which part of it may be excluded from display. In certain applications of volume display, concerned with the rendering of rectangular blocks of 3D scan data, it can be useful to combine the functions of crop box and bounding box. In general, however, they are distinguished. [0010]
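  • Purely by way of illustration, the quick collision check described above can be expressed as an axis-aligned box overlap test, as in the following C++ sketch (the Box type and function name are hypothetical and are not part of the disclosed embodiments):

    struct Box { double min[3], max[3]; };  // axis-aligned bounding box in model space

    // Boxes overlap only if their intervals overlap on all three axes.
    // If this returns false, the enclosed objects cannot collide; if it
    // returns true, a more detailed object-level test is still required.
    bool boxesOverlap (const Box& a, const Box& b)
    {
     for (int i = 0; i < 3; ++i)
      if (a.max[i] < b.min[i] || b.max[i] < a.min[i])
       return false;
     return true;
    }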
  • The effects of these clip boxes and crop boxes can interact disadvantageously in regard to zoom functionality. With a small crop box (and thus no problem as regards display rendering speed or exceeding memory capacity) it can be disconcerting to have the box disappear when it passes out through the wall of a small dimensioned clipping box that was set to handle earlier performance problems. With a large crop box (including, for example, the result of zooming a smaller one)—enlarged with respect to display coordinates though not with respect to the model portion(s) it crops—the use of a clipping box may be essential for adequate performance. This interaction requires continual user attention to the logistics of viewing the model, and thus distracts from her ability to actually view the regions of interest in the actual model. [0011]
  • Within the objects of the invention is the provision of new techniques that simplify, automate, and optimize user interaction when scaling, navigating, observing and zooming such 2D and 3D images and models. [0012]
  • SUMMARY OF THE INVENTION
  • A system and method for controlling the scaling of a 3D computer model in a 3D display system, including activating a zoom mode, selecting a model zoom point and setting a zoom scale factor, are presented. In exemplary embodiments according to the present invention, a system, in response to the selected model zoom point and the set scale factor, can implement a zoom operation and automatically move a model zoom point from its original position towards an optimum viewing point. In exemplary embodiments according to the present invention, upon a user's activating a zoom mode, selecting a model zoom point and setting a zoom scale factor, a system can simultaneously move a model zoom point to an optimum viewing point. In preferred exemplary embodiments according to the present invention, a system can automatically identify a model zoom point by applying defined rules to visible points of a displayed model that lie in a central viewing area. If no such visible points are available the system can prompt a user to move the model until such points become available, or can select a model and a zoom point on that model by an automatic scheme. [0013]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts an exemplary system of coordinates used in describing a three dimensional display space according to an exemplary embodiment of the present invention; [0014]
  • FIGS. 2A-2B illustrate the effects of scaling an exemplary 3D object from different points according to an exemplary embodiment of the present invention; [0015]
  • FIG. 3 illustrates the exemplary use of a crop box to display only a selected part of an object according to an exemplary embodiment of the present invention; [0016]
  • FIGS. 4A-4B illustrate the exemplary effects of scaling a 3D object from points near to and distant from the boundary of a current display region or clipping box according to an exemplary embodiment of the present invention; [0017]
  • FIG. 5 illustrates various exemplary options for the selection of a Model Zoom Point with reference to a current crop box as opposed to with reference to a model according to an exemplary embodiment of the present invention; [0018]
  • FIG. 6 illustrates an exemplary Magnification Region defined by a planar Context Structure according to an exemplary embodiment of the present invention; [0019]
  • FIGS. 7A-7D depict exemplary icons for a Model Zoom Point indicator according to an exemplary embodiment of the present invention; [0020]
  • FIG. 8 depicts an exemplary slider control object used for zoom control according to an exemplary embodiment of the present invention; [0021]
  • FIG. 9 illustrates an exemplary coordination of scaling with movement of the Model Zoom Point toward the Optimum Viewing Point according to an exemplary embodiment of the present invention; [0022]
  • FIG. 10 depicts an exemplary process flow according to an exemplary embodiment of the present invention; [0023]
  • FIG. 11 is an exemplary modular software diagram according to an exemplary embodiment of the present invention; and [0024]
  • FIGS. 12-18 depict an exemplary zooming in on an aneurysm according to an exemplary embodiment of the present invention. [0025]
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention comprises a user-controlled visual control interface that allows a user to manipulate scaling (either by jumps or smooth expansion) with an easily-learned sense of what will happen when particular adjustments are made, and without becoming lost in data regions where nothing is displayed. [0026]
  • In an exemplary embodiment, an Optimum Viewing Point is fixed near the center of the screen, at a central depth in the display. When zooming control is active, a visual display of a cross or other icon around the zoom center marks this point. In an exemplary embodiment there is a larger contextual structure around the Optimum Viewing Point, indicating to a user a Magnification Region in which a Model Zoom Point will be selected. User controlled motion of the visible part of the model(s) in the display brings such model(s) into contact with the z-axis or the Magnification Region, and triggers the selection of a Model Zoom Point. If this fixed zoom center is not in line (from the user's viewpoint) with any point of the current crop box, the user is prompted to move the box together with its contents toward the center of the field of view. When a user begins to zoom model(s), the model space moves in the display such that the Model Zoom Point approaches the Optimum Viewing Point. [0027]
  • In another exemplary embodiment the system searches for and selects the model to be scaled and a candidate Model Zoom Point. This requires less effort from the user but correspondingly offers less detailed control. [0028]
  • The following terms of art are used throughout this application, and are defined here to facilitate the readability of the following description: [0029]
  • 3D data display—a system capable of displaying images of 3D objects which present one or more cues as to the depth (distance from the viewer) of points, such as but not restricted to perspective, occlusion of one element by another, parallax, stereopsis, and focus. A preferred exemplary embodiment uses a computer monitor with shutter glasses that enable stereo depth perception, but the invention is equally applicable in the case of other display systems, such as, for example, a monitor without stereo, a head mounted display providing a screen for each eye, or a display screen emitting different views in different directions by means of prisms, aligned filters, holography or otherwise, as may be known in the art. [0030]
  • Center of Scaling—the point in a 3D model around which scaling takes place. Sometimes referred to herein as a “Model Zoom Point.”[0031]
  • Display Space—the 3D space whose origin is at the center of the display screen, used to orient points in the display screen. Points in Display space are denoted by co-ordinates (x, y, z). [0032]
  • Magnification Region—a preferred 3D region displayed to a user by the system once zoom functionality is activated. Used by the system to select a center of scaling point. [0033]
  • Model Space—the 3D space used to describe the model or models displayed by the 3D display system. Points in Model space are denoted by co-ordinates (u, v, w), which are related to display space by a co-ordinate transformation of the form specified in [0034] Equation 1. Model space is fixed relative to the model; display space is fixed relative to the display device.
  • Model Zoom Point—the point on a model that remains fixed in a zoom operation, about which all other points are scaled. [0035]
  • Optimum Viewing Point—the center or near center point of a display screen at the apparent depth of the display screen. For simplicity of discussion we assume that this point is also chosen as the origin (0, 0, 0) of the display coordinates (x, y, z), though this may be changed with trivial modifications to the algebra by one skilled in the art. [0036]
  • Scaling—multiplying the size of an object by a given number. A number greater than one effects a “zoom in” or magnification operation, while a number less than one effects a “zoom out” or reduction operation. [0037]
  • Stereoscopic—relating to a display system used to impart a three-dimensional effect by projecting two versions of a displayed scene or image from slightly different angles. There is a preferred viewing position relative to the display screen from where the stereoscopic effect is most correct, where the eye locations assumed in generating the separate visual signals coincide with actual locations of the user's eyes. (At other positions the stereoscopic effect is equally strong, but the perceived form is distorted relative to the intended form.) [0038]
  • Zoom—see “Scaling.”[0039]
  • The methods of the present invention are implementable in any 3D data display system, such as, e.g., a volume rendering system. As well, they may also be adapted and simplified for display of two-dimensional (2D) data, in ways evident to those skilled in the art. In general, a volume rendering system allows for the visualization of volumetric data. Volumetric data is digitized data obtained from some process or application, such as MR and CT scanners, ultrasound machines, seismic acquisition devices, high energy industrial CT scanners, radar and sonar systems, and other types of data input sources. One of the advantages of volume rendering, as opposed to surface rendering, is that it allows for the visualization of the insides of objects. [0040]
  • One type of such 3D display system is what is referred to herein as a fully functional 3D display environment (such as, e.g., that of the Dextroscope™ system of Volume Interactions Pte Ltd of Singapore, the assignee of the present application). Such systems allow for three-dimensional interactivity with the display. In such systems a user generally holds in one hand, or in each hand, a device whose position is sensed by a computer or other data processing device. As well, the computer monitors the status of at least one control input, such as, e.g., a button, which the user may click, hold down, or release, etc. Such devices may not be directly visible to the user, being hidden by a mirror; rather, in such exemplary systems, the user sees a virtual tool (a computer generated image drawn according to the needs of the application) co-located with the sensed device. In such exemplary systems the locational identity of the user's neuromuscular sense of the position of the held device, with the user's visual sense of the position of the virtual tool is an interactive advantage. [0041]
  • FIG. 1 depicts an exemplary co-ordinate system for 3D displays. With reference thereto there is a [0042] plane 103, which represents the apparent physical display window at a preferred central depth in the display. Display window 103 is generally a computer screen, which may be moved to a different apparent position by means of lenses or mirrors. Alternatively, in head-mounted displays the referent of plane 103 is a pair of screens occupying similar apparent positions relative to the user's left and right eyes, via a lens or lenses that allow comfortable focus of the eyes. In the case of a stereoscopic display, the preferred central depth is at or near the distance at which the user sees the physical surface of the monitor, in some cases via a mirror. Around this distance the two depth cues of stereopsis and eye accommodation are most in agreement, thus leading to greater comfort in viewing.
  • At or near the center of [0043] display window 103 is an origin 102 of an orthogonal co-ordinate system having axes 107, 108 and 109 respectively. As well, a schematic head and eye 115 are shown, representing the point from which the display is viewed by a user. For ease of illustration, the following conventions will be used given the location of such user 115: the x-axis, or horizontal axis, is 107, the y-axis, or vertical axis, is 108, and the z-axis, or depth axis, is 109. Positive directions along these axes are designated as rightward, upward and toward the user, respectively. ‘Greater depth’ thus refers to a greater value of (−z). The origin 102 as so defined has display space around it on all sides, rather than being near a boundary of the display device. Furthermore, for many users the agreement of optical accommodation with depth cues such as stereopsis (and in certain exemplary display systems parallax) causes an object at this depth to be most comfortably examined. Such an origin, therefore, is termed the Optimum Viewing Point.
  • It is understood that real world considerations of ergonomics and perceptual psychology may lead to a variant choice of origin, for a particular application and hardware configuration, that is not precisely centered in the physical display screen or not at the apparent depth of the physical display screen. [0044]
  • A 3D display system generally displays at least two kinds of objects: control interfaces such as buttons, sliders, etc., used to control system behavior, which typically retain their apparent size and position unless separately moved by the user, and the actual 3D models (generated from some external process or application such as, e.g., computerized tomography, magnetic resonance, seismography, or other sensing modalities), geometric structures defined by points, lines and polygons or implicit equations such as f (x, y, z)=0, where f is some suitably defined function. The 3D models are equipped with attributes, such as, for example, colors as well as other necessary data, such as, for example, normal vectors, specularity, and transparency, as may be required or desirable to enable the system to render them visually in ways that have positions and orientations in a shared model space. [0045]
  • Model space is said to have coordinates (u, v, w) whose relation to the display region coordinates (as specified by [0046] axes 107, 108, and 109 in FIG. 1) is given by the matrix relation

    $$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} + \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (1)$$
  • or some equivalent such as, for example, a 4×4 matrix formulation of the same affine (straight-line-preserving) transformation. As can be seen from [0047] Equation 1, the origin of the model space, where (u, v, w)=(0, 0, 0), maps to the display space point (x, y, z)=(X, Y, Z). Changing the position (X, Y, Z) to which (u, v, w)=(0,0,0) is mapped to thus translates the model space and all models in it within the display space.
  • Zoom or scaling functionality operates as follows. Multiplying the matrix [aij] of Equation 1 by a number λ shrinks or magnifies the appearance of the objects and the distances between them by that factor, keeping any visible element with (u, v, w)=(0, 0, 0) at the position (X, Y, Z) while other points move either toward or away from the point (X, Y, Z) as the distances are scaled. Combining a change of (X, Y, Z) with a scale change allows a point other than (u, v, w)=(0, 0, 0) in model space to retain a constant position in display coordinates (x, y, z), in either a single step or a succession of small steps that give a sense of continuous change. [0048]
  • FIGS. 2A and 2B show two [0049] choices 201 of such an unmoving or fixed point denoted by a “+” icon, with the different effects of scaling an object 202 to be three times larger 203 along all three axes (and hence in all directions). In FIG. 2A the Optimum Viewing Point 201 is chosen to remain fixed. Thus, all points on the expanded object 203 remain centered about that point. In FIG. 2B a point somewhat translated from (0, 0, 0) was chosen as the center of scaling. Thus in FIG. 2B the center of expanded object 203 has moved within the display space.
  • If the point (U, V, W) is to appear fixed, we replace the above equation (1) by [0050]

    $$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \lambda \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} + \begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix}, \quad \text{where} \quad \begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = (1 - \lambda) \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} U \\ V \\ W \end{bmatrix} + \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (2)$$
  • The point (U, V, W) is termed the model center of the scaling, and the point to which it corresponds under the transformations to both the left and the right of [0051] Equation 3

    $$\lambda \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} U \\ V \\ W \end{bmatrix} + \begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} U \\ V \\ W \end{bmatrix} + \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (3)$$
  • is known as the display center of the scaling. Note that the two sides of (3) are not identically equal for all (U, V, W): the truth of the equality for a particular (U, V, W), necessarily unique if λ is not equal to one, precisely characterizes that (U, V, W) as the display center of the scaling. In a one-dimensional analogy, λaU+X′=aU+X precisely if U=(X′−X)/(a−λa). A model center of scaling is also known as a “zoom point” or “Model Zoom Point.” The correspondence between display space coordinates and model space coordinates may also be modified by rotation, reflection and other geometric operations by multiplying the matrix [aij] by other appropriate matrices as may be known in the art. As well, the positions of models within the model space (and hence relative to each other) can be modified in an application. Thus, while the present description of the invention addresses primarily the common scaling of objects in a shared model space, the extension to the case of one or more such objects, each in a separate model space that is itself related to the main model space, will be evident to one skilled in the art. [0052]
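  • For illustration only, Equations (1)-(3) may be read as the following C++ sketch (type and function names are hypothetical), which rescales the matrix by λ and recomputes the translation so that a chosen Model Zoom Point keeps a constant display position:

    #include <array>

    using Vec3 = std::array<double, 3>;
    using Mat3 = std::array<std::array<double, 3>, 3>;  // the matrix [aij] of Equation (1)

    // Apply the affine map of Equation (1): display = A * model + T.
    Vec3 apply (const Mat3& A, const Vec3& m, const Vec3& T)
    {
     Vec3 d;
     for (int i = 0; i < 3; ++i)
      d[i] = A[i][0]*m[0] + A[i][1]*m[1] + A[i][2]*m[2] + T[i];
     return d;
    }

    // Scale the model space by lambda about the Model Zoom Point (U, V, W), as in
    // Equation (2): the matrix becomes lambda*A and the new translation is chosen
    // so that (U, V, W) keeps the same display coordinates (Equation (3)).
    void scaleAboutModelPoint (Mat3& A, Vec3& T, const Vec3& modelZoomPoint, double lambda)
    {
     Vec3 fixedDisplay = apply (A, modelZoomPoint, T);           // display position before scaling
     for (auto& row : A)
      for (double& a : row)
       a *= lambda;                                              // A' = lambda * A
     Vec3 moved = apply (A, modelZoomPoint, {0.0, 0.0, 0.0});    // lambda * A * (U, V, W)
     for (int i = 0; i < 3; ++i)
      T[i] = fixedDisplay[i] - moved[i];                         // equals (1 - lambda)*A*(U,V,W) + T
    }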
  • The use of a user controlled crop box is distinct from the inherent clipping that is caused by the finite nature of the display apparatus. As a physical limitation of the display system, no part of a model can be shown outside the region that the system is configured to display to the user. However, this region can be further user-restricted by the use of a clipping box as described above. FIGS. 4 illustrate the effect of a zoom point near the boundary of the available display region (or current clipping box). In FIG. 4A zoom point 401 is chosen. Point 401, being centered with respect to the clipping box boundary 450, causes minimal loss of the enlarged object 405 from view. On the other hand, in FIG. 4B, where zoom point 412, located near the left boundary of the clipping box 450, is chosen, upon magnification of the object 411 to the enlarged object 413, large portions are lost from view. This choice of zoom point makes more of model 411 move out of view and become undisplayable than occurs for the same model 410 using model zoom point 401 which is in a more central location. However, it is often the case that a user desires to zoom an object using a model zoom point that is near a boundary (either crop box or viewing box). The reduction of the resulting inconvenience to the user is a major aim of the present invention. (Note that a similar effect occurs if the model zoom point is near the surface of the current crop box, but since the user frequently manipulates the crop box this is less often a problem). [0053]
  • As noted above, the present invention may be implemented in various display system environments. For illustration purposes its implementation is described herein in two such exemplary environments. An exemplary preferred embodiment of the invention is in a Dextroscope™-like environment, but the invention is understood to be fully capable of implementation using, for example, a standard mouse-equipped computer, or an interface using hardware not otherwise mentioned here such as a joystick or trackball, or using a head-mounted display, or any functional equivalent. Although for illustrative purposes the range of options is described with reference to a Dextroscope™-like environment and a standard mouse, adaptations to other equipment will be clear to one skilled in the art. [0054]
  • Sections 1-4 below describe the successive steps of user interaction with a display according to the present invention. [0055]
  • 1. Activate Zoom Mode [0056]
  • Initially a user signals that the system should enter a state (‘zoom mode’) in which signals are interpreted by the system as directed to the control of magnification rather than other features. This may be done through, for example, a voice interface (speaking a command such as ‘engage zoom mode’ which is recognized by the system), by clicking on a 2D button with the mouse or on a 2D button with a Dextroscope™-type stylus, by touching a button, or in a preferred exemplary embodiment, by merely touching (as opposed to clicking) a zoom slider interface object as described below (in connection with FIG. 8). [0057]
  • 2. Select Model Zoom Point [0058]
  • Next, a user selects a point in the model space, termed Model Zoom Point, around which it is desired to see more detail by scaling around it. Examples of such a Model Zoom Point are the [0059] points 201 shown in FIGS. 2. This selection may be done in a number of ways. For example, in a mouse interface, the user may click on a point of the screen, and the Model Zoom Point selected will be the nearest visible (non-transparent) point on the model that is in line with that point from the user's viewpoint. (In the case of a stereo display, where the user has two viewpoints, the system may select one eye for this calculation.) Alternatively, in a more exact but simultaneously more demanding interface, a moving “currently selected” point can be displayed that the user may select with some input interaction, such as, e.g., an additional click. In a Dextroscope™-like interface, for example, the user may click on a three dimensional point (on or off of the visible surface of the displayed object) which then becomes the Model Zoom Point. Other such selection means may be utilized as may be known in the art.
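  • As a non-limiting illustration of the mouse-based selection just described, the nearest visible point in line with a clicked screen point may, for example, be recovered by reading back the depth buffer and un-projecting, as in the following sketch (legacy OpenGL/GLU calls are assumed to be available; names are illustrative and no particular implementation is implied):

    #include <GL/gl.h>
    #include <GL/glu.h>

    // Returns true and fills modelZoomPoint (in the coordinates of the current
    // modelview matrix) if a visible, non-background point lies under the mouse
    // position; returns false if only empty space was clicked.
    bool pickModelZoomPoint (int mouseX, int mouseY, double modelZoomPoint[3])
    {
     GLint viewport[4];
     GLdouble modelview[16], projection[16];
     glGetIntegerv (GL_VIEWPORT, viewport);
     glGetDoublev (GL_MODELVIEW_MATRIX, modelview);
     glGetDoublev (GL_PROJECTION_MATRIX, projection);

     GLfloat depth = 1.0f;
     GLint winX = mouseX;
     GLint winY = viewport[3] - mouseY - 1;        // convert to OpenGL window coordinates
     glReadPixels (winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
     if (depth >= 1.0f)                            // far-plane value: nothing visible here
      return false;

     return gluUnProject (winX, winY, depth,
                          modelview, projection, viewport,
                          &modelZoomPoint[0], &modelZoomPoint[1], &modelZoomPoint[2]) == GL_TRUE;
    }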
  • In the first method described below for the selection of a Model Zoom Point, selection is integrated with the placement of the model in a convenient place for zooming within the display space. Just as with a 2D image, a zoom point near the boundary of the display region makes points near it disappear over that boundary more quickly than a central point does, as was illustrated in connection with FIGS. 4. Moreover, in a stereoscopic display there is a most comfortable viewing depth, which coincides with the real or apparent distance from the user's eyes of the physical display screen. [0060]
  • 2.1 First Selection Method [0061]
  • A. Crop Box Enabled [0062]
  • Thus, in an exemplary embodiment, the system uses a centering method for assisting a user in selecting the Model Zoom Point. The system examines the z-axis of the display coordinate system (x, y, z) to find the nearest point in display space (0, 0, Z0) at which the current display includes a visible point of some model in model space, in the current position of model space relative to the display coordinates. If such a point exists, by being visible it is necessarily inside the current crop box (if such a crop box is available and enabled, as is typical for, e.g., volume imaging but less so for, e.g., rendering a complex machine design or virtual film set). If no such point can exist for the reason that the z-axis does not pass through the crop box given the current position of the crop box, the user is then prompted to move the crop box (with its contents, so that the change is of the transformation type quantified in Equation (1), as discussed above) until the crop box does meet the z-axis. This may be accomplished, for example, in a Dextroscope™-like display system by the user grasping the box with the workpiece hand, moving the sensor device, the tool (visible or logical) attached to the sensor device, and the box in coordination with the tool, until the box is sufficiently centered in the display for such a passing through of the z-axis to occur. Alternatively, in display environments using a standard 2D mouse interface, for example, the screen position of the box may be dragged across the display in a standard ‘drag and drop’ action, since the z-component of the motion is not impacted in such an operation, and the object may be maintained through this step at constant z. [0063]
  • B. Crop Box But No Point Visible on z-axis [0064]
  • If a crop box is currently enabled, but the z-axis encounters no visible point of a model thereon, in an exemplary embodiment the system may determine Z0 by a default rule involving box geometry as is illustrated in FIG. 5. For example, it may define (0, 0, Z0) as (a) the point nearest the user 501 (in FIG. 5 the user 500 views from the far left of the Figure) at which the z-axis 510 meets the crop box 520; (b) the point farthest from the user 502 at which the z-axis meets the box; (c) the mid-point 503 of the latter two points 501, 502; (d) the point on the z-axis nearest the centroid of the box; (e) the z-value of the (x, y, z) position of the centroid of the box; or (f) such other rules as may be desirable or useful in a given design context. Alternatively, it may determine Z0 by a default rule involving the crop box contents. For example, it may, as above, set Z0 at the z value of the (x, y, z) position of the nearest point to the z-axis at which a visible point on a model exists, or it may define Z0 as the z value of the (x, y, z) position of the centroid of the points in the box that are currently treated as visible rather than transparent. Numerous other alternatives as may be known in the art may be implemented in various alternative exemplary embodiments. [0065]
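  • For illustration only, the geometric defaults (a)-(c) above may be computed by intersecting the display z-axis with the crop box, as in the following sketch (the CropBox type and rule names are hypothetical):

    struct CropBox { double min[3], max[3]; };  // axis-aligned, in display coordinates

    enum ZRule { NEAREST_TO_USER, FARTHEST_FROM_USER, MID_POINT };

    // Intersect the display z-axis (x = 0, y = 0) with the crop box and choose Z0
    // by one of the default rules (a)-(c); returns false if the z-axis misses the box.
    bool defaultZ0 (const CropBox& box, ZRule rule, double& z0)
    {
     if (0.0 < box.min[0] || 0.0 > box.max[0] ||
         0.0 < box.min[1] || 0.0 > box.max[1])
      return false;                               // z-axis does not pass through the box

     double zNear = box.max[2];                   // +z is toward the user (FIG. 1)
     double zFar  = box.min[2];
     if (rule == NEAREST_TO_USER)
      z0 = zNear;
     else if (rule == FARTHEST_FROM_USER)
      z0 = zFar;
     else
      z0 = 0.5 * (zNear + zFar);                  // mid-point of the two intersections
     return true;
    }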
  • C. No Crop Box Enabled [0066]
  • If there is no crop box or equivalent functionality, the system may, in an exemplary embodiment, set the Model Zoom Point to be the center of the currently selected model, the origin of internal model coordinates (distinct from general model space coordinates (u, v, w)) which may or may not coincide with such a center, the center of the bounding box of the current model, the Optimum Viewing Point or the origin (0, 0, 0). (If the Optimum Viewing Point is chosen, however, the description below of the movement of the center of scaling becomes moot.) Alternatively, it may use for Z0 the z value of the (x, y, z) position of the nearest point to the z-axis at which a visible point on a model exists (though such a search could be overly computationally expensive). Numerous other alternative choices may be implemented in other exemplary embodiments as may be desired or appropriate in given design contexts. [0067]
  • 2.2 Second Selection Method [0068]
  • In a preferred exemplary embodiment of the invention, a second centering method for selecting the Model Zoom Point can be utilized, as illustrated in FIG. 6. With reference to FIG. 6, this method utilizes the concept of a [0069] Magnification Region 603. A Magnification Region is a central region (fixed by the system or adjusted by the user, and made visible to the user) of display space, within which a visible point of the model or its crop box may be selected. In an exemplary embodiment, this region is shown by displaying a translucent Context Plane 602. As its name implies, a Context Plane covers much of the screen, leaving a hole 601 (circular in the example depicted in FIG. 6, but other shapes, such as, for example, square, rectangle, ellipse, hexagon, etc. may equally be used) around the center of the screen. More precisely it is preferred that the Context Plane be rendered such that its hole's centroid is the Model Zoom Point of the display space, since ‘centered in the hole’ is more easily apparent to the user than other relationships. However, other such context structures and relations will be apparent to one skilled in the art. Such a Context Plane 602 may preferably be drawn after all other rendering and with the depth buffer of the graphics rendering system turned off, so that the colors of images at all apparent depths are modified by it to thus highlight the hole. Such color modification may, for example, comprise blending all pixels with gray, so that the modified parts are de-emphasized, and the parts within the hole 601 are emphasized.
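  • By way of illustration only, such a Context Plane may, for example, be drawn as a translucent gray annulus after all other rendering, with depth testing disabled, as in the following sketch using legacy OpenGL calls (the outer radius stands in for the portion of the screen to be covered; names and values are illustrative):

    #include <GL/gl.h>
    #include <cmath>

    // Draw a translucent gray Context Plane with a circular hole of radius
    // holeRadius centered at (cx, cy, cz), after all other rendering and with
    // depth testing disabled so that it tints images at every apparent depth.
    void renderContextPlane (double cx, double cy, double cz,
                             double holeRadius, double outerRadius)
    {
     const double pi = 3.14159265358979323846;
     glPushAttrib (GL_ENABLE_BIT | GL_DEPTH_BUFFER_BIT | GL_CURRENT_BIT);
     glDisable (GL_DEPTH_TEST);                    // ignore the depth buffer
     glEnable (GL_BLEND);
     glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
     glColor4f (0.5f, 0.5f, 0.5f, 0.5f);           // blend everything outside the hole with gray

     const int segments = 64;
     glBegin (GL_TRIANGLE_STRIP);                  // annulus between hole edge and outer edge
     for (int i = 0; i <= segments; ++i)
     {
      double a = 2.0 * pi * i / segments;
      glVertex3d (cx + holeRadius  * std::cos (a), cy + holeRadius  * std::sin (a), cz);
      glVertex3d (cx + outerRadius * std::cos (a), cy + outerRadius * std::sin (a), cz);
     }
     glEnd ();
     glPopAttrib ();
    }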
  • In the case of a stereo display the plane is physically rendered for each eye, so that it has an apparent depth. If this depth is set at that at which the user perceives the display screen to be (for example, through mirrors, lenses or other devices, such that it need not be the physical location of the display surface), it is rendered identically to each eye. The apparent depth of the display screen is often most preferred for detailed examination, but other depths may be used as well. Parenthetically, it is noted that a structure rendered translucently without reference to the depth buffer after other stereo elements have been rendered can be of use for any 3D-located feature of a display, not only as an icon marking a Model Zoom Point. Moreover, since its perceived depth is less certain than a structure rendered with the added depth cue of occlusion, in a preferred exemplary embodiment such translucency is utilized only for a context structure, such as the [0070] Context Plane 602, to call attention to an opaquely rendered point marker.
  • However, in a stereo environment with additional depth cues, such as, for example, parallax (i.e., the change of appearance in response to motions of the user's head and eyes, which are tracked by the system), the location of an object rendered in such translucent manner may appear very definite to the user, making this a useful technique for placing one object within another. It permits the user's visual system to construct (perceive) a consistent model of what is being seen. Thus, suppose that from a given viewpoint two opaque objects are rendered, the first of which geometrically occludes the second by having parts which block lines of sight to parts of the second, but the second is rendered after the first, opaquely replacing it in the display. The user is faced with conflicting depth cues. Stereopsis (plus parallax and perspective if available) indicate the second as more distant, while occlusion indicates that it is nearer. However, if the second object is translucently rendered, the visual system can resolve the conflict by perceiving that the second is visible through the first, as though the first were translucent to light from (and only from) the second object. While this is not common with physical, non-virtual objects, the mental transfer of transparency to an object that was in fact opaquely rendered appears to be automatic and comfortable to users. Such technique is thus referred to as apparent transferred translucency. [0071]
  • With reference again to FIG. 6, the [0072] hole 601 in the Context Plane 602 defines the Magnification Region 603 (or, more precisely, the cross-section of the Magnification Region at the z-value of the Context Plane). In a monoscopic display environment this region is the half-cone (or analogous geometric descriptor for a non-circular shape used for the hole 601) consisting of all points lying on straight lines that begin at the viewpoint of a user 610 (element 610 is a stylized circular “face” with an embedded eye, intended to schematically denote a user's viewpoint) and that pass through the hole 601 in the Context Plane 602. In a stereoscopic display the Magnification Region may be defined in a variety of ways, such as, for example, the corresponding cone for the right eye's viewpoint, the cone for the left eye's viewpoint, the set of points that are in the cones for both eyes' viewpoints (i.e., the intersection of the two cones), or as the set of points that are in the cone for either eye's viewpoint (i.e., the union of the two cones), or any contextually reasonable alternative. Commonly, such regions are further truncated by a near and a far plane, such that points respectively nearer to or farther from the user are not included.
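  • For a monoscopic viewpoint, membership of a candidate point in the half-cone just described may be tested as in the following illustrative sketch (the hole is taken as a circle about the Optimum Viewing Point in the plane z = 0, the viewpoint lies on the +z axis, and all names are hypothetical):

    #include <cmath>

    struct Point3 { double x, y, z; };

    // Monoscopic Magnification Region test: the hole is a circle of radius
    // holeRadius in the plane z = 0, centered on the Optimum Viewing Point.
    // A point is inside the region if the line of sight from the eye through
    // the point crosses the plane z = 0 inside the hole, subject to truncation
    // by a near plane (zNear, larger z, closer to the user) and a far plane (zFar).
    bool inMagnificationRegion (const Point3& p, const Point3& eye,
                                double holeRadius, double zNear, double zFar)
    {
     if (p.z > zNear || p.z < zFar)               // truncate by near and far planes
      return false;
     double dz = p.z - eye.z;
     if (std::fabs (dz) < 1e-12)                  // line of sight parallel to the plane
      return false;
     double t = (0.0 - eye.z) / dz;               // parameter where the ray meets z = 0
     double hx = eye.x + t * (p.x - eye.x);
     double hy = eye.y + t * (p.y - eye.y);
     return hx * hx + hy * hy <= holeRadius * holeRadius;
    }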
  • In an exemplary embodiment which uses a Magnification Region to select a Model Zoom Point, after displaying the Magnification Region the system determines whether there is a visible point of the model (or alternatively of the crop box) in the Magnification Region. If there is not, the user is then prompted by the system to move the model. [0073]
  • If there is such a point, the system selects the nearest such point as a Model Zoom Point. There are various possibilities for defining “nearest” in this context. For example, the system may select the nearest visible point of the model (or alternatively, of the crop box) within the Magnification Region which lies at a greater depth than the Context Structure. (In most 3D display environments it is unnecessary to examine all of the points in the model to find this point, inasmuch as a depth buffer generally records the depth of the nearest visible point along each line of sight). Or, for example, the system may select the nearest such point, which lies on the viewer's line of sight through the center of the Context Structure (and this choice may also be subject to the condition that the point be at a greater depth than the Context Structure). Finally, for example, the system may select a point by minimizing the sum of squared depth beyond the Context Structure and squared distance from the viewer's line of sight through the center of the Context Structure, multiplied by some chosen coefficients to emphasize depth proximity or centrality respectively. Many alternative selection rules can be implemented as desired in various embodiments of the invention. [0074]
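  • Purely by way of illustration, the last selection rule mentioned above may be realized as a weighted least-cost search over candidate visible points, as in the following sketch (weights, types and names are hypothetical):

    #include <vector>
    #include <limits>

    struct Candidate { double x, y, z; };  // visible model points inside the Magnification Region

    // Select the candidate minimizing
    //  wDepth * (depth beyond the Context Structure)^2 + wAxis * (distance from the central line of sight)^2,
    // where the Context Structure lies at z = contextZ and the central line of sight is the z-axis.
    // Returns the index of the chosen candidate, or -1 if no candidate qualifies.
    int selectModelZoomPoint (const std::vector<Candidate>& pts,
                              double contextZ, double wDepth, double wAxis)
    {
     int best = -1;
     double bestCost = std::numeric_limits<double>::max ();
     for (int i = 0; i < static_cast<int> (pts.size ()); ++i)
     {
      double depthBeyond = contextZ - pts[i].z;          // positive when deeper than the Context Structure
      if (depthBeyond < 0.0)
       continue;                                         // ignore points in front of the Context Structure
      double offAxisSq = pts[i].x * pts[i].x + pts[i].y * pts[i].y;
      double cost = wDepth * depthBeyond * depthBeyond + wAxis * offAxisSq;
      if (cost < bestCost) { bestCost = cost; best = i; }
     }
     return best;
    }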
  • 2.3 Third Selection Method [0075]
  • It is noted that in the first and second methods above, the choice of which model (supposing there are several in a model space) the Model Zoom Point is to lie on is implicit in the concern with proximity to the z-axis. The user wishing to enlarge one model rather than another may simply arrange that it meets the z-axis or the Magnification Region and is the nearest model to do so. However, in a complex scene such as is common in CAD, the user may not wish to move (for example) the position of a whole engine in order to temporarily examine a gearbox in more detail. The system may, in an exemplary embodiment, thus optionally provide for the user to identify one or more of the displayed models (by, for example, clicking on their images with a 3D stylus or 2D mouse, by calling out their names, or otherwise), and may then optionally select the center of a single or first-chosen or last-chosen model, the centroid of all chosen models, or other point selections as may be known in the art. Alternatively, the system may prompt the user to select a Model Zoom Point by whatever means of 3D point selection is standard in the application, or may offer the opportunity to select such a point in replacement of the automatic selection just described. [0076]
  • In a preferred exemplary embodiment of the invention, these selections of model and Model Zoom Point can be, for example, automated and integrated, as described in the following pseudocode. It is noted that the system begins by testing whether the user has aligned a model with the z-axis or Magnification Region as described above, but if she has not, the preferred exemplary embodiment defaults to an automatic scheme rather than prompting her to do so. In what follows, the convention is used that text following a // is a comment describing the active code, though the ‘code’ itself is simplified for clarity. “Widget” refers to some interactive display object. [0077]
    class ScalingControl {
    // defines the Control Widget for Scaling: typically a slider
    public: // functions callable by other parts of the program, described more fully below
     bool Contain (Point); // test for containing a given point
     void Update ( ); // modify graphics according to the scale value read from the widget
     void Active ( ); // use the fact that the user touches the widget (without depressing
    // a button, to drag and change its scale value) to engage zoom mode
     bool Update_Model_Zoom_Point ( ); // modify Model Zoom Point according to the state
    // of the widget
     void Render_Context_Plane ( ); // add Context Plane, appropriately placed, to display
     void Render_Model_Zoom_Point ( ); // add Zoom Point icon to display
    private:
     Point mModelZoomPoint; // data accessible only to the functions of ScalingControl
    };
    // the following pseudo-definitions describe the functions performed
    // by the functions introduced above.
    bool ScalingControl::Contain (Point p)
    {
     // Return true if p is inside the scaling control widget as positioned in display space.
     // Otherwise, return false.
    }
    void ScalingControl::Update ( )
    // This function is called when the scaling control is being focused
    // but not yet triggered.
    {
     Update_Model_Zoom_Point ( );
     Render_Context_Plane ( );
     Render_Model_Zoom_Point ( );
    }
    void ScalingControl::Active ( )
    // This function is called when the scaling control is triggered
    // and active.
    {
     Move_Object_To_Optimum_Viewing_Point ( );
     model.size = Get_Scale_Factor ( );
     if (model.size > size_threshold)
      Show_Clipping_Box ( );
     else
      Hide_Clipping_Box ( );
     Draw_Model_Zoom_Point ( );
    }
    bool ScalingControl::Update_Model_Zoom_Point ( )
    {
     // Search for a Model Zoom Point using four methods:
     // Method 1 - select user-nearest point visible (if any along) z-axis.
     //  if none visible, try
     // Method 2 - select user-nearest point visible along the line from user's
     //  viewpoint to the Center of All Visible Objects
     //  if none visible, try
     // Method 3 - select user-nearest point visible along the line from user's
     //  viewpoint to the Center of Each Visible Object (sorted according to the
     //  centers' distance to the Optimum Viewing Point)
     //  if none visible, use
     // Method 4 - Use the Center of the Object nearest to the Optimum Viewing Point
    if (Clipping_Box_Is_Enabled ( ))
    {
     if (Model_Ray_Intersection (CENTER_OF_THE_SCREEN, &mModelZoomPoint))
       return true;
      return false;
     }
     else
     {
      if (Model_Ray_Intersection (CENTER_OF_THE_SCREEN, &mModelZoomPoint))
       return true;
      if (Model_Ray_Intersection (CENTER_OF_ALL_OBJECTS, &mModelZoomPoint))
       return true;
      if (Model_Ray_Intersection (CENTER_OF_EACH_OBJECT, &mModelZoomPoint))
       return true;
      if (Model_Ray_Intersection (CENTER_OF_NEAREST_OBJECT, &mModelZoomPoint))
       return true;
      return false;  // if none of the above tests is passed.
     }
    }
    void ScalingControl::Move_Object_To_Optimum_Viewing_Point ( )
    {
     // Move the Object and the Model Zoom Point (mModelZoomPoint) to
     // the Optimum Viewing Point.
    }
    void ScalingControl::Render_Context_Plane ( )
    {
     // Disable Depth Buffer checking.
     // Render a semi-transparent plane with a hole inside. The center
     // of the hole should be mModelZoomPoint.
    }
    void ScalingControl::Render_Model_Zoom_Point ( )
    {
     // Render a 3D crosshair with mModelZoomPoint as the position.
    }
  • The following Program Entry Point pseudocode illustrates the way in which the above exemplary functionality can be, for example, called by a functioning application. [0078]
    void main ( )
    {
     // Set up variables and states, create objects
     Initialization ( );
     Tool tool; // Create one 3D tool
     ScalingControl scalingControl; // Create one control widget
     while (true)
     {
      Render_Model ( );
      if (scalingControl.Contain (tool.GetPosition ( ))) // If 3D tool is inside control widget
      {
       if (tool.IsButtonPressed ( )) // If the 3D tool button is pressed
        scalingControl.Active ( ); // Bring Model Zoom Point to Optimum Viewing Point
       else
        scalingControl.Update ( ); // Update value of Model Zoom Point
      }
      UpdateSystem( ); // Update all display and system variables
     }
    }
  • 2.4 Optional Modification By User [0079]
  • It is noted that in various exemplary embodiments utilizing a Magnification Region the Model Zoom Point is automatically selected, according to defined rules, as described above. The user's control over this process consists of what she places within the Magnification Region, or what part of what model she was viewing prior to activating the zoom function, or what model is nearest the Optimum Viewing Point. Such automatic selection of the Model Zoom Point relieves the user of ‘logistical’ concerns, thus allowing her to focus on the work at hand. However, an exemplary embodiment may allow the user at any point to invoke the procedure described above, at the beginning of the current Section 2, ‘Selection Of Model Zoom Point’, to choose a different point and override the automatic selection. [0080]
  • 3. Display Of Model Zoom Point [0081]
  • With reference to FIG. 7, after selection of a Model Zoom Point, the system displays an icon, such as for example, a cross, at the Model Zoom Point. In the exemplary embodiment depicted in FIG. 7B, an everted cross [0082] 700 of four triangles 710, each pointing inward toward the Model Zoom Point 711 is utilized. Any desirable icon form or design may be substituted, such as, for example, other flat patterns of polygons 715, a flat cross 705, or a three-dimensional surface structure 720 as depicted in FIGS. 7C, 7A and 7D, respectively. In an exemplary embodiment the icon is drawn at the depth of the Model Zoom Point, rather than as simply a marker on the display screen. Thus, in a stereo display it is drawn such that a user whose visual system is capable of stereo fusion will perceive it to be at the intended depth, and that it is drawn as an opaque object capable of hiding objects behind it and of being hidden by objects between it and the user's eye or eyes. These different depth cues strengthen the user's sense of its three-dimensional location. However, since the use of opacity allows all or part of the icon to be obscured by the object displayed, in a preferred exemplary embodiment the displayed Context Plane is moved to lie at the same depth, with its hole centered on the Model Zoom Point. In an alternate exemplary embodiment, the icon can be rendered with apparent transferred translucency, as discussed above. Alternative exemplary implementations of the invention could function without context cues in locating the Model Zoom Point, or could use other cues such as, for example, a system of axes or other lines through it, with or without apparent transferred translucency.
  • 4. Zoom Control [0083]
  • Once a Model Zoom Point has been selected, zooming is enabled. Zooming can be controlled in a variety of ways, such as, for example, (1) voice control (with the system recognizing commands such as (a) “Larger” or “Smaller” and responding with a step change in size, (b) “Reset” and restoring the previous size, (c) “Quit zoom mode”, etc.); or (2) step response to key strokes, mouse or tool clicks on an icon, or clicks on a particular sensor button, etc. For example, while in zoom mode a middle mouse button click might automatically mean “Larger” while a right click is interpreted as “Smaller.” In a preferred exemplary embodiment a slider such as depicted in FIG. 8 is utilized. Such a slider may also be used as a zoom mode trigger, when, for example, a user uses a stylus to touch its [0084] body 801 without clicking. In such an exemplary embodiment, when a user places the point of the 3D stylus in or near the slider bead 802 and holds down the sensor button while moving the sensor, this moves the stylus and drags the slider bead 802 along the slider bar 801. The distance moved is mapped by the system to a magnification factor, by an appropriate algorithm. In an exemplary embodiment the algorithm assigns the minimum allowed value for the zoom factor λ to the left end of the slider 810, the maximum allowed value to the right end of the slider 811, and interpolates linearly between such assignments. Alternative exemplary embodiments include an exponential or logarithmic relation, or a function defined by assigning λ values at certain positions for the slider bead 802 and interpolating in a piecewise linear, polynomial or rational B-spline manner between them, or a variety of other options as may be known in the art. Other exemplary alternatives for the control of λ include the use of a standard mouse-controlled slider, a control wheel dragged around by a Dextroscope™-like system stylus, a physical scrolling wheel such as is known in certain mouse designs, etc. In order to facilitate both magnification as well as reduction operations, the value range of λ may run from some minimum value less than unity (maximum reduction factor) to some maximum value greater than unity (maximum magnification factor). The range need not be symmetric about unity, however, inasmuch as some embodiments utilize magnification to a greater extent than reduction, and vice versa.
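  • For illustration only, the mapping from slider position to the zoom factor λ may, for example, be linear or exponential, as in the following sketch (parameter names are hypothetical):

    #include <cmath>

    // t is the normalized bead position along the slider bar, 0 at the left end
    // (minimum zoom factor) and 1 at the right end (maximum zoom factor).
    double linearZoomFactor (double t, double lambdaMin, double lambdaMax)
    {
     return lambdaMin + t * (lambdaMax - lambdaMin);
    }

    // Exponential (logarithmic) interpolation: equal slider movements multiply the
    // zoom factor by equal amounts, which keeps small reductions and small
    // magnifications equally controllable when lambdaMin < 1 < lambdaMax.
    double exponentialZoomFactor (double t, double lambdaMin, double lambdaMax)
    {
     return lambdaMin * std::pow (lambdaMax / lambdaMin, t);
    }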
  • 5. Automatic Translation Of Model Zoom Point [0085]
  • In coordination with the zooming function, in an exemplary embodiment the model space is moved in a manner intended to add to a user's comfort and convenience in examining a model, while avoiding the perceptive disjunction that would follow from large shifts. If the position in display coordinates (x, y, z) of the Model Zoom Point is (x0, y0, z0) before zooming begins, depicted as the point 901 in FIG. 9, the value of λ is used to calculate a translation toward the Optimum Viewing Point 903 in such a way that the unzoomed starting value λ=1 in effect at the start of zooming connects to a zero translation (leaving the model space unmoved), while a large value connects to a translation by a vector equal or near to (−x0, −y0, −z0), moving the Model Zoom Point 901 via intermediate positions such as 902 to a display point (x′, y′, z′) at or near the Optimum Viewing Point 903. This will typically produce the most comfortable location of the zoomed model for detailed viewing and manipulation, while not losing the original layout of the larger context. In a wholly complementary manner, upon reducing the zoom factor the Model Zoom Point moves back toward its original location 901. [0086]
  • In particular, if the system is adjusted to allow a maximum scale value of λ=λmax, for λ>1 we may in a particular implementation define [0087]

    $$t = \frac{\lambda - 1}{\lambda_{\max} - 1}$$
  • and translate the display of the model space by (−tx0, −ty0, −tz0). Intermediate scaling of the model 912 is thus associated with an intermediate position 902 for the Model Zoom Point, and maximum scaling of the model 913 coincides with translation of the display position of the Model Zoom Point 901 exactly to the Optimum Viewing Point 903. For λ<1 we may use the same formula, resulting in a movement away from the Optimum Viewing Point as the display size diminishes, or alternatively, for a minimum scale value of λmin replace it by [0088]

    $$t = \frac{\lambda - 1}{\lambda_{\min} - 1}$$
  • so that for the extreme case the translation by (−tx0, −ty0, −tz0) again moves the Model Zoom Point exactly to the Optimum Viewing Point. These formulae may be replaced by many others evident to those skilled in the art, such as exponential or polynomial functions, subject only to the condition that on each side (separately considered) of λ=1 the change in t should be monotonic (always increasing, or always decreasing, for an increase in λ), so that the model does not advance and retreat in ways surprising to the user. Two particular functions t(λ) that may be included in this framework are noteworthy. One extreme case sets t(λ)=0 for all values of λ, so that the model space does not move at all apart from the effects of scaling, and the Model Zoom Point remains fixed in display space. The other extreme (used in a preferred exemplary embodiment) sets t(λ)=1 for all values of λ≠1, so that the Model Zoom Point moves immediately to the Optimum Viewing Point, before user-elected zoom values come into play. For reasons of continuity of user perception, in a preferred exemplary embodiment t is allowed to move continuously (that is, through a sequence of changes small enough to give an impression of smooth motion) to the value t=1; such change to occur either (a) immediately upon the selection of a Model Zoom Point, or (b) when the user begins to modify λ by whatever method is selected. [0089]
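  • For illustration only, the coordination of zoom factor and translation described above may be sketched as follows (function and parameter names are hypothetical; the two branches correspond to the formulae given above):

    // Fraction t of the way from the original Model Zoom Point position toward
    // the Optimum Viewing Point, as a monotonic function of the zoom factor.
    double translationFraction (double lambda, double lambdaMin, double lambdaMax)
    {
     if (lambda >= 1.0)
      return (lambda - 1.0) / (lambdaMax - 1.0);
     return (lambda - 1.0) / (lambdaMin - 1.0);   // alternative branch for reduction
    }

    // Display-space offset applied to the whole model space: the Model Zoom Point,
    // originally at (x0, y0, z0), is drawn at (1 - t) times its original position,
    // reaching the Optimum Viewing Point (the display origin) when t = 1.
    void zoomTranslation (double lambda, double lambdaMin, double lambdaMax,
                          const double original[3], double offset[3])
    {
     double t = translationFraction (lambda, lambdaMin, lambdaMax);
     for (int i = 0; i < 3; ++i)
      offset[i] = -t * original[i];
    }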
  • 6. Automatic Activation of Measures to Preserve Performance [0090]
  • Since a large value of λ increases the displayed size of the crop box, and hence the load on the graphics rendering hardware, performance may degrade (sometimes abruptly) in the course of zooming, whether or not coupled to motion of a selected model point such as the translation of the Model Zoom Point described above. Therefore, in a preferred exemplary embodiment a load estimation function is attached to the crop box (such as, for example, a computation of the volume it currently occupies in the display space, which can be multiplied or otherwise modified by one or more factors specifying (a) the density of the rendering used; (b) the spacing of 3D texture slices; (c) the spacing of sample points within a 3D texture slice, where permitted by the hardware; (d) the spacing of the rays used (in embodiments utilizing a ray-casting rendering system); or such other quantities that can modify the quality and speed of rendering in a given exemplary system). When such load estimate reaches a threshold value (generally set by experiment with the particular hardware or derived from analysis of its specifications) at which there is a significant risk of performance degradation, the system automatically activates the clipping box at default or user-specified size and position values, without requiring any affirmative user intervention. Alternatively, factors such as (a) to (d) just described may be automatically modified to reduce the load. Conversely, if the current load is below a threshold (typically set lower than the threshold above), the system, in an exemplary embodiment, may enlarge or remove the clipping box, or so modify the factors such as (a) to (d) as to improve the quality of the rendering while increasing the load within supportable limits. [0091]
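  • By way of illustration only, one possible form of such a load estimate, together with a hysteresis rule so that the clipping box does not toggle repeatedly near the threshold, is sketched below (thresholds and names are hypothetical and would in practice be tuned to the particular hardware):

    struct CropBoxExtent { double width, height, depth; };  // current size in display coordinates

    // Crude load estimate: displayed crop-box volume scaled by a rendering-density factor.
    double estimateLoad (const CropBoxExtent& e, double renderingDensity)
    {
     return e.width * e.height * e.depth * renderingDensity;
    }

    // Activate the clipping box when the load rises above highThreshold and
    // deactivate it only when the load falls below the lower lowThreshold,
    // so that small changes in zoom do not toggle it repeatedly.
    bool updateClippingBox (double load, double lowThreshold, double highThreshold,
                            bool clippingBoxActive)
    {
     if (!clippingBoxActive && load > highThreshold)
      return true;
     if (clippingBoxActive && load < lowThreshold)
      return false;
     return clippingBoxActive;
    }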
  • 7. Exemplary Process Flow [0092]
  • FIG. 10 is a flowchart depicting process flow in an exemplary preferred embodiment of the invention. With reference to FIG. 10, the following events occur. At [0093] 1001, process flow begins when a user indicates to the system that she desires to scale an object in a model. The user may indicate such directive by, for example, moving an input device such that the tip of a displayed virtual tool is inside the bead 802 of a displayed control object, such as, for example, the zoom slider 801 depicted in FIG. 8. As described above, numerous alternative embodiments may have numerous alternative methods of signaling the system that a zoom or scaling function is desired.
  • At [0094] 1002, the system determines whether a visible object is inside the Magnification Region. If there is such a visible object, process flow moves to 1004 and selects the center of magnification/reduction as in the First or Second Method above. If there is no such object, then according to the method chosen from those described above the system shall, in an exemplary preferred embodiment, enter an automatic selection process such as that illustrated by the pseudocode above. Alternatively, as shown at 1003 in the diagram, it prompts the user to move the object until the determination gives a positive result, upon which it can proceed to 1004.
  • As described above, either these or numerous alternative methods support the selection of a Model Zoom Point, depending upon the type of display environment used in a particular embodiment of the invention, as well as whether a crop box and/or Magnification Region is utilized, etc. [0095]
  • Once the Model Zoom Point is selected, process flow moves to [0096] 1005 where the system, given a user input as to λ, as described above, magnifies or reduces the object or objects, optionally changes the level of detail as described above in the Section entitled “Automatic Activation Of Measures to Preserve Performance”, and automatically moves the objects closer to or farther from the center of the viewing area, as described above. Process flow then passes to 1006.
  • At [0097] 1006, if the size of the magnification factor is such that performance degradation may ensue, the system activates the clipping box so as to preserve a high level of display performance. Alternatively, if a high magnification value had been in effect previously, and the value of λ is decreased such that the load estimate dips below the applicable threshold value, and there is thus no longer a need for activation of the clipping box, the system will deactivate the clipping box and allow for full viewing of the model by the user. Alternatively, in other exemplary embodiments, other methods of modifying the load on the system may be applied, as described, for example, in Section 6 above.
  • At [0098] 1007, the system determines whether the user wishes to terminate the zoom operation. If “YES,” process flow moves to 1008, and zoom operation stops. If “NO,” then process flow returns to 1005 and further magnifications and/or reductions, with the appropriate translations of objects, are implemented.
  • FIG. 11 depicts an exemplary modular software program of instructions which may be executed by an appropriate data processor, as is known in the art, to implement a preferred exemplary embodiment of the present invention. The exemplary software program may be stored, for example, on a hard drive, flash memory, memory stick, optical storage medium, or other data storage devices as are known or may be known in the art. When the program is accessed by the CPU of an appropriate data processor and run, it performs, according to a preferred exemplary embodiment of the present invention, a method for controlling the scaling of a 3D computer model in a 3D display system. The exemplary software program has four modules, corresponding to four functionalities associated with a preferred exemplary embodiment of the present invention. [0099]
  • The first module is, for example, an Input [0100] Data Access Module 1101, which can accept user inputs via a user interface as may be known in the art, such as, for example, a zoom function activation signal, a zoom scaling factor, and current crop box and clipping box settings, all as described above. A second module is, for example, a Magnification Region Generation Module 1102, which, once signaled by the Input Data Access Module that a zoom function has been activated, displays a Magnification Region around the Optimum Viewing Point in the display. If no model(s) are visible within the Magnification Region, the module prompts a user to move model(s) into the Magnification Region. A third module, the Model Zoom Point Selection Module 1103, receives inputs from the Input Data Access Module 1101 and the Magnification Region Generation Module regarding what model(s) are currently located in the Magnification Region, and applies the defined rules, as described above, to select a Model Zoom Point to be used as the center of scaling. A fourth module is, for example, a Scaling and Translation Module 1104, which takes data inputs from, for example, the three other modules 1101, 1102, and 1103, and implements the scaling operation and translates the Model Zoom Point towards or away from the Optimum Viewing Point as determined by defined rules and the value of the scaling factor chosen by a user.
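  • A skeletal arrangement of the four modules is sketched below, reusing the Point3D type and zoom_about_point() function assumed in the earlier sketches. The class and method names, and the hypothetical scene object they operate on, are illustrative assumptions rather than the actual program of FIG. 11; only the division of responsibilities follows the description above.

      class InputDataAccessModule:
          """Module 1101: accepts user inputs (zoom activation signal, zoom
          scaling factor, crop box and clipping box settings)."""
          def __init__(self) -> None:
              self.zoom_active = False
              self.scale_factor = 1.0

          def on_user_input(self, zoom_active: bool, scale_factor: float) -> None:
              self.zoom_active = zoom_active
              self.scale_factor = scale_factor


      class MagnificationRegionGenerationModule:
          """Module 1102: once zoom is activated, displays the Magnification
          Region around the Optimum Viewing Point and reports the models that
          lie inside it, prompting the user if there are none."""
          def models_in_region(self, scene) -> list:
              raise NotImplementedError


      class ModelZoomPointSelectionModule:
          """Module 1103: applies the defined rules to the models reported by
          module 1102 to select the Model Zoom Point used as the center of
          scaling."""
          def select(self, models_in_region: list) -> Point3D:
              raise NotImplementedError


      class ScalingAndTranslationModule:
          """Module 1104: implements the scaling and translates the Model Zoom
          Point toward or away from the Optimum Viewing Point, e.g. via
          zoom_about_point() above."""
          def apply(self, scene, zoom_point: Point3D, scale_factor: float) -> None:
              raise NotImplementedError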
  • Exemplary Zoom Operation [0101]
  • To illustrate the functionalities available in exemplary embodiments of the present invention, an exemplary zoom operation to view an aneurysm in a brain will next be described with reference to FIGS. 12-18. The screen shots were acquired using an exemplary implementation of the present invention on a Dextroscope™ 3D data set display system, from Volume Interactions Pte Ltd of Singapore. Exemplary embodiments of the present invention can be implemented on this device. Visible in the figures are a 3D object and a virtual control palette which appears below it. [0102]
  • FIG. 12 depicts an exemplary original object from a CT data set, positioned somewhere in 3D space. In the depicted example, a user intends to zoom into a large aneurysm in the CT data set, in this example a bubble-like object (aneurysm) in the vascular system of a brain pointed to by the arrow. FIG. 13 depicts an activation of zoom mode wherein a user can, for example, move a virtual pen to a virtual “zoom slider” bead. A system can then, for example, automatically select a Zoom Point, here indicated by the four-triangle cross, which here appears buried inside the data. The Zoom Point is selected in this exemplary case at the nearest point in the data that intersects an Optimum Viewing Point. A Contextual Structure (the circular area surrounding the four-triangle cross) is also displayed to focus a user's attention on the Zoom Point. [0103]
  • With reference to FIG. 14, since the system did not automatically find the desired point of interest (i.e., the aneurysm), a user needs to refine the selection of the Zoom Point. Thus, a user can, for example, move the object so that the desired part of the object (in this case the aneurysm) coincides with the Zoom Point (which remains at the Optimum Viewing Point). Throughout this operation, the user holds the pen at the bead of the zoom slider, without pressing the virtual button. [0104]
  • FIG. 15 depicts how, once the Zoom Point coincides with the aneurysm (as here; an exemplary system can adjust the depth of the Zoom Point as a user moves a 3D object towards it), a user can press a button on the zoom slider. The Contextual Structure, being no longer needed, thus disappears. [0105]
  • With reference to FIG. 16, as the user drags the bead of the zoom slider, the magnification of the 3D data set around the Zoom Point begins, and with reference to FIG. 17, when the magnification factor reaches a certain value, a Zoom Box can, for example, be activated, which can crop the 3D data set to obtain an optimal rendering time. In systems with very high rendering speeds this functionality can, for example, be implemented at higher magnifications, or not at all. As can be seen with reference to FIGS. 15-18, a zoom slider can display an amount of magnification, which in these figures is displayed behind the virtual pen but is nonetheless partially visible. [0106]
  • Finally, with reference to FIG. 18, when a desired magnification of the aneurysm is achieved, a user can, for example, stop the movement of the zoom slider and inspect the object. [0107]
  • The present invention has been described in connection with exemplary embodiments and exemplary preferred embodiments and implementations, as examples only. It will be understood by those having ordinary skill in the pertinent art that modifications to any of the embodiments or preferred embodiments may be easily made without materially departing from the scope and spirit of the present invention as defined by the appended claims. [0108]

Claims (34)

What is claimed is:
1. A method for controlling the scaling of a 3D computer model in a 3D display system, comprising:
activating a zoom mode;
selecting a model zoom point; and
setting a zoom scale factor;
wherein the system, in response to the selected model zoom point and the set scale factor, implements the zoom operation and automatically moves the model zoom point from its original position towards an optimum viewing point.
2. The method of claim 1, wherein said 3D display system is stereoscopic.
3. The method of claim 1, wherein said method is implemented by a user via a mouse or other 2D position calculating computer input device.
4. The method of claim 1, wherein said method is implemented by a user via a sensor which can move in three dimensions.
5. The method of claim 1, wherein selection of the model zoom point is effected by signaling the system when a cursor or other indicator appears in front of the desired point on the displayed model.
6. The method of claim 1, wherein selection of the model zoom point is effected by signaling the system when a tool moving in the 3D display has its tip at the desired point relative to the model.
7. The method of claim 1, wherein the model zoom point is selected by the system as the nearest model point visible to the user along the z-axis of the display space, wherein the z-axis is set to run through an optimum viewing point.
8. The method of claim 1, wherein the model zoom point is selected by the system as a point in a crop box on the z-axis of the display space, wherein the z-axis is set so as to run through an optimum viewing point.
9. The method of claim 8, wherein said model zoom point is one of the nearest such point to the user's viewpoint, the farthest such point from the user's viewpoint, and the centroid of a collection of such points that are in the crop box and on the z-axis.
10. The method of claim 1, wherein the model zoom point is selected as a point in a crop box and in a magnification region.
11. The method of claim 10, wherein the model zoom point is also a visible model point which is nearest to either an optimum viewing point or a user's viewpoint.
12. The method of claim 10, wherein the magnification region is made visible to a user as an opening in a contextual structure.
13. The method of claim 12 wherein said contextual structure is a plane with a hole.
14. The method of claim 13 wherein the hole's shape is substantially one of a circle, an oval, an ellipse, a square, a rectangle, a triangle, a trapezoid, or any regular polygon.
15. The method of claim 8, wherein a user causes the motion of the displayed model or models necessary to produce said visible model point that is inside the crop box and on said z-axis.
16. The method of claim 15, wherein the user causes said motion of the displayed model or models by at least one of grasping with a three-dimensional tool and dragging with a mouse.
17. The method of claim 1, wherein the location of said model zoom point is indicated to a user by the display of a small structure centered thereon.
18. The method of claim 17, wherein said small structure is a small cross composed of lines and triangles, including or not including as a visible point the model zoom point.
19. The method of claim 1 wherein the attention of the user is directed to the location of the model zoom point by a larger displayed contextual structure.
20. The method of claim 19, wherein said contextual structure is a plane with a hole surrounding the model zoom point.
21. The method of claim 20, wherein said plane is so rendered in a stereoscopic display as to appear to be translucently visible through other structures imaged in the display, regardless of whether said other structures are otherwise shown as opaque or translucent.
22. The method of claim 1, wherein the zoom operation can be set to be implemented stepwise or smoothly, as controlled by the user.
23. The method of claim 22 wherein each of the setting of the zoom scale factor and said stepwise or smooth implementation of the zoom operation can be controlled by one or more of the user's voice, a mouse, a 3D tool or other device, a slider, a wheel, and increment/decrement buttons.
24. The method of claim 1, wherein the zoom operation and the motion of the model zoom point are implemented substantially simultaneously.
25. The method of claim 22, wherein the correspondence between the degree of zoom and the motion of the model zoom point is linear, adjusted to display the unzoomed size with the model zoom point at its originally selected location and to display the maximum degree of zoom with the model zoom point at the optimum viewing point.
26. The method of claim 1, wherein the system automatically activates a clipping box in the display for values above a defined threshold of a system load estimate.
27. The method of claim 1, wherein said moving of the model zoom point towards an optimum viewing point is immediate to said optimum viewing point.
28. A method of resizing 3D computer generated models in a 3D display system, comprising:
determining a position of a center of scaling point in response to user input;
determining a scaling factor to be applied to one or more 3D models in response to user input; and
simultaneously implementing the zoom operation and automatically moving the position of the center of scaling point from its original position a certain portion of a distance towards or away from an optimum viewing point depending upon said scaling factor.
29. The method of claim 28, wherein simultaneously with implementation of the zoom the model zoom point is immediately moved to an optimum viewing point.
30. A computer program product comprising:
a computer usable medium having computer readable program code means embodied therein for controlling the scaling of a 3D computer model in a 3D display system, the computer readable program code means in said computer program product comprising:
computer readable program code means for causing a computer to activate a zoom mode;
computer readable program code means for causing a computer to select a model zoom point; and
computer readable program code means for causing a computer to set a zoom scale factor; and
computer readable program code means for causing a computer to, in response to the selected model zoom point and the set scale factor, simultaneously move the model zoom point from its original position towards an optimum viewing point.
31. The product of claim 30, further containing computer readable program code means for causing a computer to, simultaneously with implementation of the zoom, immediately move the model zoom point to an optimum viewing point.
32. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to implement a method to control scaling of a 3D computer model in a 3D display system, said method comprising:
activating a zoom mode;
selecting a model zoom point; and
setting a zoom scale factor;
wherein, in response to the selected model zoom point and the set scale factor, the model zoom point is moved from its original position towards an optimum viewing point.
33. The program storage device of claim 30, wherein said method further comprises immediately moving the model zoom point to an optimum viewing point simultaneously with implementation of the zoom.
34. The method of claim 12, wherein the contextual structure is displayed in a stereoscopic display system using apparent transferred translucency.
US10/725,773 2002-11-29 2003-12-01 Method and system for scaling control in 3D displays ("zoom slider") Abandoned US20040233222A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/725,773 US20040233222A1 (en) 2002-11-29 2003-12-01 Method and system for scaling control in 3D displays ("zoom slider")

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US50534502P 2002-11-29 2002-11-29
US50534402P 2002-11-29 2002-11-29
US50534602P 2002-11-29 2002-11-29
US10/725,773 US20040233222A1 (en) 2002-11-29 2003-12-01 Method and system for scaling control in 3D displays ("zoom slider")

Publications (1)

Publication Number Publication Date
US20040233222A1 true US20040233222A1 (en) 2004-11-25

Family

ID=32719228

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/725,772 Expired - Lifetime US7408546B2 (en) 2002-11-29 2003-12-01 System and method for displaying and comparing 3D models (“3D matching”)
US10/725,773 Abandoned US20040233222A1 (en) 2002-11-29 2003-12-01 Method and system for scaling control in 3D displays ("zoom slider")
US10/727,344 Abandoned US20040246269A1 (en) 2002-11-29 2003-12-01 System and method for managing a plurality of locations of interest in 3D data displays ("Zoom Context")

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/725,772 Expired - Lifetime US7408546B2 (en) 2002-11-29 2003-12-01 System and method for displaying and comparing 3D models (“3D matching”)

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/727,344 Abandoned US20040246269A1 (en) 2002-11-29 2003-12-01 System and method for managing a plurality of locations of interest in 3D data displays ("Zoom Context")

Country Status (6)

Country Link
US (3) US7408546B2 (en)
EP (2) EP1565888A2 (en)
JP (3) JP2006508475A (en)
AU (3) AU2003303111A1 (en)
CA (3) CA2507959A1 (en)
WO (3) WO2004061544A2 (en)

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4100195B2 (en) * 2003-02-26 2008-06-11 ソニー株式会社 Three-dimensional object display processing apparatus, display processing method, and computer program
JP4758351B2 (en) * 2003-10-17 2011-08-24 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Manual tool for model-based image segmentation
WO2005043465A2 (en) * 2003-11-03 2005-05-12 Bracco Imaging S.P.A. Stereo display of tube-like structures and improved techniques therefor ('stereo display')
US20050219240A1 (en) * 2004-04-05 2005-10-06 Vesely Michael A Horizontal perspective hands-on simulator
US20050248566A1 (en) * 2004-04-05 2005-11-10 Vesely Michael A Horizontal perspective hands-on simulator
US7714855B2 (en) * 2004-05-17 2010-05-11 Siemens Medical Solutions Usa, Inc. Volume rendering processing distribution in a graphics processing unit
EP1759379A2 (en) 2004-06-01 2007-03-07 Michael A. Vesely Horizontal perspective display
US20050273001A1 (en) * 2004-06-04 2005-12-08 The Mcw Research Foundation MRI display interface for medical diagnostics and planning
US20060158459A1 (en) * 2004-07-20 2006-07-20 Ferguson Stuart H Systems and methods for creating user interfaces
US7190364B2 (en) * 2004-08-09 2007-03-13 Siemens Medical Solution Usa, Inc. System and method for polygon-smoothing in texture-based volume rendering
EP1815437A1 (en) * 2004-11-27 2007-08-08 Bracco Imaging S.P.A. Systems and methods for generating and measuring surface lines on mesh surfaces and volume objects and mesh cutting techniques ("curved measurement")
US7679625B1 (en) * 2005-01-07 2010-03-16 Apple, Inc. Straightening digital images
US7382374B2 (en) * 2005-05-02 2008-06-03 Bitplane Ag Computerized method and computer system for positioning a pointer
US20060250391A1 (en) 2005-05-09 2006-11-09 Vesely Michael A Three dimensional horizontal perspective workstation
US8717423B2 (en) 2005-05-09 2014-05-06 Zspace, Inc. Modifying perspective of stereoscopic images based on changes in user viewpoint
US7852335B2 (en) 2005-05-09 2010-12-14 Siemens Medical Solutions Usa, Inc. Volume rendering processing distribution in a graphics processing unit
JP4732925B2 (en) * 2006-03-09 2011-07-27 イマグノーシス株式会社 Medical image display method and program thereof
US7468731B2 (en) * 2006-04-11 2008-12-23 Invensys Systems, Inc. Automatic resizing of moved attribute elements on a graphical representation of a control object
US20080094398A1 (en) * 2006-09-19 2008-04-24 Bracco Imaging, S.P.A. Methods and systems for interacting with a 3D visualization system using a 2D interface ("DextroLap")
US20080118237A1 (en) * 2006-11-22 2008-05-22 Rainer Wegenkittl Auto-Zoom Mark-Up Display System and Method
US20080122839A1 (en) * 2006-11-28 2008-05-29 Microsoft Corporation Interacting with 2D content on 3D surfaces
DE102006060957B4 (en) * 2006-12-20 2011-09-15 Universitätsklinikum Hamburg-Eppendorf (UKE) Method and apparatus for the compressed storage of interactions on computer graphic volume models
US11275242B1 (en) 2006-12-28 2022-03-15 Tipping Point Medical Images, Llc Method and apparatus for performing stereoscopic rotation of a volume on a head display unit
US11315307B1 (en) 2006-12-28 2022-04-26 Tipping Point Medical Images, Llc Method and apparatus for performing rotating viewpoints using a head display unit
US10795457B2 (en) 2006-12-28 2020-10-06 D3D Technologies, Inc. Interactive 3D cursor
US11228753B1 (en) 2006-12-28 2022-01-18 Robert Edwin Douglas Method and apparatus for performing stereoscopic zooming on a head display unit
US9530142B2 (en) * 2007-02-13 2016-12-27 Claudia Juliana Minsky Method and system for creating a multifunctional collage useable for client/server communication
DE102007010806B4 (en) * 2007-03-02 2010-05-12 Siemens Ag A method of providing advanced capabilities in the use of patient image data and radiographic angiography system unsuitable for use in registration procedures
JP5242672B2 (en) * 2007-04-11 2013-07-24 オレゴン ヘルス アンド サイエンス ユニバーシティ System for non-invasive quantitative detection of fibrosis in the heart
US9058679B2 (en) * 2007-09-26 2015-06-16 Koninklijke Philips N.V. Visualization of anatomical data
DE102008007199A1 (en) * 2008-02-01 2009-08-06 Robert Bosch Gmbh Masking module for a video surveillance system, method for masking selected objects and computer program
US8793619B2 (en) * 2008-03-03 2014-07-29 The United States Of America, As Represented By The Secretary Of The Navy Graphical user control for multidimensional datasets
FR2929417B1 (en) * 2008-03-27 2010-05-21 Univ Paris 13 METHOD FOR DETERMINING A THREE-DIMENSIONAL REPRESENTATION OF AN OBJECT FROM POINTS, COMPUTER PROGRAM AND CORRESPONDING IMAGING SYSTEM
WO2009150566A1 (en) * 2008-06-11 2009-12-17 Koninklijke Philips Electronics N.V. Multiple modality computer aided diagnostic system and method
US20100099974A1 (en) * 2008-10-20 2010-04-22 Siemens Medical Solutions Usa, Inc. System for Generating a Multi-Modality Imaging Examination Report
US8350846B2 (en) * 2009-01-28 2013-01-08 International Business Machines Corporation Updating ray traced acceleration data structures between frames based on changing perspective
US8717360B2 (en) 2010-01-29 2014-05-06 Zspace, Inc. Presenting a view within a three dimensional scene
CN102771113A (en) * 2010-02-24 2012-11-07 汤姆逊许可证公司 Split screen for 3D
JP2012105048A (en) * 2010-11-10 2012-05-31 Fujifilm Corp Stereoscopic image display device, method, and program
JP2012105796A (en) * 2010-11-17 2012-06-07 Fujifilm Corp Radiation image display device and method
US8379955B2 (en) 2010-11-27 2013-02-19 Intrinsic Medical Imaging, LLC Visualizing a 3D volume dataset of an image at any position or orientation from within or outside
US20120169776A1 (en) * 2010-12-29 2012-07-05 Nokia Corporation Method and apparatus for controlling a zoom function
US8507868B2 (en) * 2011-03-04 2013-08-13 Landmark Graphics Corporation Systems and methods for determining fluid mobility in rock samples
US9063643B2 (en) * 2011-03-29 2015-06-23 Boston Scientific Neuromodulation Corporation System and method for leadwire location
US8786529B1 (en) 2011-05-18 2014-07-22 Zspace, Inc. Liquid crystal variable drive voltage
US9323402B1 (en) 2011-05-26 2016-04-26 D.R. Systems, Inc. Image navigation
KR20140063993A (en) 2012-11-19 2014-05-28 삼성메디슨 주식회사 Apparatus and method for generating medical image
JP2014182638A (en) * 2013-03-19 2014-09-29 Canon Inc Display control unit, display control method and computer program
CN104794746B (en) * 2014-01-20 2018-05-18 深圳市医诺智能科技发展有限公司 A kind of three-dimensional planar subdivision method and system
JP2015158771A (en) * 2014-02-24 2015-09-03 大樹 谷口 Cloud type electronic cad system for circuit design and printed board design
US11227427B2 (en) 2014-08-11 2022-01-18 Covidien Lp Treatment procedure planning system and method
US10162908B1 (en) 2014-11-04 2018-12-25 The Boeing Company Systems and methods for extracting bounding planes of solid models
KR101923183B1 (en) * 2016-12-14 2018-11-28 삼성전자주식회사 Method and apparatus for displaying medical images
US11157152B2 (en) * 2018-11-05 2021-10-26 Sap Se Interaction mechanisms for pointer control
DK3666225T3 (en) * 2018-12-11 2022-09-12 Sirona Dental Systems Gmbh METHOD FOR PRODUCING A GRAPHIC REPRESENTATION OF A DENTAL CONDITION
WO2021131941A1 (en) * 2019-12-26 2021-07-01 ソニーグループ株式会社 Information processing device, information processing system, and information processing method
EP4325436A1 (en) * 2022-08-17 2024-02-21 Siemens Healthineers AG A computer-implemented method for rendering medical volume data

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5293313A (en) * 1990-11-21 1994-03-08 Picker International, Inc. Real time physician view box
US5461709A (en) * 1993-02-26 1995-10-24 Intergraph Corporation 3D input system for CAD systems
US5588098A (en) * 1991-11-22 1996-12-24 Apple Computer, Inc. Method and apparatus for direct manipulation of 3-D objects on computer displays
US5616031A (en) * 1991-03-21 1997-04-01 Atari Games Corporation System and method of shadowing an object in motion
US5745126A (en) * 1995-03-31 1998-04-28 The Regents Of The University Of California Machine synthesis of a virtual video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene
US5838906A (en) * 1994-10-17 1998-11-17 The Regents Of The University Of California Distributed hypermedia method for automatically invoking external application providing interaction and display of embedded objects within a hypermedia document
US5850352A (en) * 1995-03-31 1998-12-15 The Regents Of The University Of California Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images
US5861889A (en) * 1996-04-19 1999-01-19 3D-Eye, Inc. Three dimensional computer graphics tool facilitating movement of displayed object
US5966139A (en) * 1995-10-31 1999-10-12 Lucent Technologies Inc. Scalable data segmentation and visualization system
US6028645A (en) * 1996-04-12 2000-02-22 Tektronix, Inc. Digital video effects apparatus and method therefor
US6215494B1 (en) * 1997-12-18 2001-04-10 Mgi Software Corporation Method and system for centering image objects
US6323878B1 (en) * 1999-03-03 2001-11-27 Sony Corporation System and method for providing zooming video capture
US20020135601A1 (en) * 1997-06-02 2002-09-26 Sony Corporation Digital map display zooming method, digital map display zooming device, and storage medium for storing digital map display zooming program
US20020158873A1 (en) * 2001-01-26 2002-10-31 Todd Williamson Real-time virtual viewpoint in simulated reality environment
US6496183B1 (en) * 1998-06-30 2002-12-17 Koninklijke Philips Electronics N.V. Filter for transforming 3D data in a hardware accelerated rendering architecture
US6674461B1 (en) * 1998-07-07 2004-01-06 Matthew H. Klapman Extended view morphing
US6710783B2 (en) * 2000-02-04 2004-03-23 Siemens Aktiengesellschaft Presentation device
US6720987B2 (en) * 1997-04-21 2004-04-13 Sony Corporation Controller for photographing apparatus and photographing system
US6741250B1 (en) * 2001-02-09 2004-05-25 Be Here Corporation Method and system for generation of multiple viewpoints into a scene viewed by motionless cameras and for presentation of a view path
US7042449B2 (en) * 2002-06-28 2006-05-09 Autodesk Canada Co. Push-tumble three dimensional navigation system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2719056B2 (en) * 1991-08-20 1998-02-25 富士通株式会社 3D object drawing device
JP3015262B2 (en) * 1994-09-27 2000-03-06 松下電器産業株式会社 3D shape data processing device
US5940068A (en) * 1995-02-14 1999-08-17 Snk Corporation Display controlling apparatus and control method thereof
WO2004089486A1 (en) * 1996-06-21 2004-10-21 Kyota Tanaka Three-dimensional game device and information recording medium
US6847336B1 (en) * 1996-10-02 2005-01-25 Jerome H. Lemelson Selectively controllable heads-up display system
US6178358B1 (en) * 1998-10-27 2001-01-23 Hunter Engineering Company Three-dimensional virtual view wheel alignment display system
US20020064759A1 (en) * 2000-11-30 2002-05-30 Durbin Duane Milford Method and system for viewing, altering and archiving digital models of dental structures and computer integrated manufacturing of physical models of dental structures
EP1221671A3 (en) * 2001-01-05 2006-03-29 LION Bioscience AG Method for organizing and depicting biological elements
US6826297B2 (en) * 2001-05-18 2004-11-30 Terarecon, Inc. Displaying three-dimensional medical images

Cited By (129)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040051709A1 (en) * 2002-05-31 2004-03-18 Eit Co., Ltd. Apparatus for controlling the shift of virtual space and method and program for controlling same
US7477243B2 (en) * 2002-05-31 2009-01-13 Eit Co., Ltd. Apparatus for controlling the shift of virtual space and method and program for controlling same
USRE44347E1 (en) 2002-08-26 2013-07-09 Jordaan Consulting Ltd, V, LLC Method and device for creating a two-dimensional representation of a three-dimensional structure
US20050041044A1 (en) * 2003-08-22 2005-02-24 Gannon Aaron James System and method for changing the relative size of a displayed image
US7405739B2 (en) * 2003-08-22 2008-07-29 Honeywell International Inc. System and method for changing the relative size of a displayed image
US20070182732A1 (en) * 2004-02-17 2007-08-09 Sven Woop Device for the photorealistic representation of dynamic, complex, three-dimensional scenes by means of ray tracing
US8115763B2 (en) * 2004-02-17 2012-02-14 Jordaan Consulting Ltd. V, Llc Device for the photorealistic representation of dynamic, complex, three-dimensional scenes by means of ray tracing
US20070083826A1 (en) * 2004-05-13 2007-04-12 Pessetto John R Method and system for scaling the area displayed in a viewing window
US8418075B2 (en) * 2004-11-16 2013-04-09 Open Text Inc. Spatially driven content presentation in a cellular environment
US10222943B2 (en) 2004-11-16 2019-03-05 Open Text Sa Ulc Cellular user interface
US8001476B2 (en) 2004-11-16 2011-08-16 Open Text Inc. Cellular user interface
US10055428B2 (en) 2004-11-16 2018-08-21 Open Text Sa Ulc Spatially driven content presentation in a cellular environment
US9304837B2 (en) 2004-11-16 2016-04-05 Open Text S.A. Cellular user interface
US20060174213A1 (en) * 2004-11-22 2006-08-03 Sony Corporation Displaying apparatus, displaying method, displaying program, and recording medium holding displaying program
US20060112350A1 (en) * 2004-11-22 2006-05-25 Sony Corporation Display apparatus, display method, display program, and recording medium with the display program
US7568166B2 (en) 2004-11-22 2009-07-28 Sony Corporation Apparatus for displaying a part of an object
US20100235089A1 (en) * 2004-11-22 2010-09-16 Sony Corporation Display apparatus, display method, display program, and recording medium with the display program for controlling display of at least a portion of a map
US7852357B2 (en) * 2004-11-22 2010-12-14 Sony Corporation Display apparatus, display method, display program, and recording medium with the display program for controlling display of at least a portion of a map
US7764828B2 (en) * 2004-12-08 2010-07-27 Sony Corporation Method, apparatus, and computer program for processing image
US20060188144A1 (en) * 2004-12-08 2006-08-24 Sony Corporation Method, apparatus, and computer program for processing image
US20080161997A1 (en) * 2005-04-14 2008-07-03 Heino Wengelnik Method for Representing Items of Information in a Means of Transportation and Instrument Cluster for a Motor Vehicle
US11091036B2 (en) * 2005-04-14 2021-08-17 Volkswagen Ag Method for representing items of information in a means of transportation and instrument cluster for a motor vehicle
US20060256110A1 (en) * 2005-05-11 2006-11-16 Yasuhiro Okuno Virtual reality presentation apparatus, virtual reality presentation method, program, image processing method, image processing apparatus, information processing method, and information processing apparatus
US7773098B2 (en) * 2005-05-11 2010-08-10 Canon Kabushiki Kaisha Virtual reality presentation apparatus and method
US7880738B2 (en) 2005-07-14 2011-02-01 Molsoft Llc Structured documents and systems, methods and computer programs for creating, producing and displaying three dimensional objects and other related information in those structured documents
WO2008110989A3 (en) * 2007-03-15 2009-05-14 Koninkl Philips Electronics Nv Method and apparatus for editing an image
US20100070857A1 (en) * 2007-03-15 2010-03-18 Koninklijke Philips Electronics N.V. Method and apparatus for editing an image
US20090083628A1 (en) * 2007-09-26 2009-03-26 Autodesk, Inc. navigation system for a 3d virtual scene
US20090100366A1 (en) * 2007-09-26 2009-04-16 Autodesk, Inc. Navigation system for a 3d virtual scene
WO2009042928A1 (en) * 2007-09-26 2009-04-02 Autodesk, Inc. A navigation system for a 3d virtual scene
US20090083674A1 (en) * 2007-09-26 2009-03-26 George Fitzmaurice Navigation system for a 3d virtual scene
US20090083678A1 (en) * 2007-09-26 2009-03-26 Autodesk, Inc. Navigation system for a 3D virtual scene
US20090083666A1 (en) * 2007-09-26 2009-03-26 Autodesk, Inc. Navigation system for a 3d virtual scene
US10564798B2 (en) 2007-09-26 2020-02-18 Autodesk, Inc. Navigation system for a 3D virtual scene
US10504285B2 (en) 2007-09-26 2019-12-10 Autodesk, Inc. Navigation system for a 3D virtual scene
US20090083672A1 (en) * 2007-09-26 2009-03-26 Autodesk, Inc. Navigation system for a 3d virtual scene
US20090083671A1 (en) * 2007-09-26 2009-03-26 Autodesk, Inc. Navigation system for a 3d virtual scene
US10162474B2 (en) 2007-09-26 2018-12-25 Autodesk, Inc. Navigation system for a 3D virtual scene
US20090083645A1 (en) * 2007-09-26 2009-03-26 Autodesk, Inc Navigation system for a 3d virtual scene
US10025454B2 (en) 2007-09-26 2018-07-17 Autodesk, Inc. Navigation system for a 3D virtual scene
US9891783B2 (en) 2007-09-26 2018-02-13 Autodesk, Inc. Navigation system for a 3D virtual scene
US20090083662A1 (en) * 2007-09-26 2009-03-26 Autodesk, Inc. Navigation system for a 3d virtual scene
US20090083626A1 (en) * 2007-09-26 2009-03-26 Autodesk, Inc. Navigation system for a 3d virtual scene
US9280257B2 (en) 2007-09-26 2016-03-08 Autodesk, Inc. Navigation system for a 3D virtual scene
US9122367B2 (en) 2007-09-26 2015-09-01 Autodesk, Inc. Navigation system for a 3D virtual scene
US9052797B2 (en) 2007-09-26 2015-06-09 Autodesk, Inc. Navigation system for a 3D virtual scene
US9021400B2 (en) * 2007-09-26 2015-04-28 Autodesk, Inc Navigation system for a 3D virtual scene
US8803881B2 (en) 2007-09-26 2014-08-12 Autodesk, Inc. Navigation system for a 3D virtual scene
US8749544B2 (en) 2007-09-26 2014-06-10 Autodesk, Inc. Navigation system for a 3D virtual scene
US8314789B2 (en) 2007-09-26 2012-11-20 Autodesk, Inc. Navigation system for a 3D virtual scene
US8686991B2 (en) 2007-09-26 2014-04-01 Autodesk, Inc. Navigation system for a 3D virtual scene
US8665272B2 (en) 2007-09-26 2014-03-04 Autodesk, Inc. Navigation system for a 3D virtual scene
US20090079732A1 (en) * 2007-09-26 2009-03-26 Autodesk, Inc. Navigation system for a 3d virtual scene
US20100332006A1 (en) * 2008-01-31 2010-12-30 Siemens Ag Method and Device for Visualizing an Installation of Automation Systems Together with a Workpiece
US8515718B2 (en) * 2008-01-31 2013-08-20 Siemens Ag Method and device for visualizing an installation of automation systems together with a workpiece
US9411491B2 (en) * 2008-02-05 2016-08-09 Samsung Electronics Co., Ltd. Method for providing graphical user interface (GUI), and multimedia apparatus applying the same
US11334217B2 (en) 2008-02-05 2022-05-17 Samsung Electronics Co., Ltd. Method for providing graphical user interface (GUI), and multimedia apparatus applying the same
US20090199119A1 (en) * 2008-02-05 2009-08-06 Park Chan-Ho Method for providing graphical user interface (gui), and multimedia apparatus applying the same
US11042260B2 (en) 2008-02-05 2021-06-22 Samsung Electronics Co., Ltd. Method for providing graphical user interface (GUI), and multimedia apparatus applying the same
US9927950B2 (en) 2008-02-05 2018-03-27 Samsung Electronics Co., Ltd. Method for providing graphical user interface (GUI), and multimedia apparatus applying the same
RU2538335C2 (en) * 2009-02-17 2015-01-10 Конинклейке Филипс Электроникс Н.В. Combining 3d image data and graphical data
US20120033866A1 (en) * 2009-04-16 2012-02-09 Fujifilm Corporation Diagnosis assisting apparatus, diagnosis assisting method, and recording medium having a diagnosis assisting program stored therein
US9036882B2 (en) * 2009-04-16 2015-05-19 Fujifilm Corporation Diagnosis assisting apparatus, diagnosis assisting method, and recording medium having a diagnosis assisting program stored therein
CN102450022A (en) * 2009-06-23 2012-05-09 Lg电子株式会社 Image-processing method for a display device which outputs three-dimensional content, and display device adopting the method
US20120050502A1 (en) * 2009-06-23 2012-03-01 Sanghoon Chi Image-processing method for a display device which outputs three-dimensional content, and display device adopting the method
WO2010151044A3 (en) * 2009-06-23 2011-04-07 엘지전자 주식회사 Image-processing method for a display device which outputs three-dimensional content, and display device adopting the method
US20150127826A1 (en) * 2009-08-27 2015-05-07 International Business Machines Corporation Providing alternative representations of virtual content in a virtual universe
US8972870B2 (en) * 2009-08-27 2015-03-03 International Business Machines Corporation Providing alternative representations of virtual content in a virtual universe
US9769048B2 (en) * 2009-08-27 2017-09-19 International Business Machines Corporation Providing alternative representations of virtual content in a virtual universe
US20110055726A1 (en) * 2009-08-27 2011-03-03 International Business Machines Corporation Providing alternative representations of virtual content in a virtual universe
US8988507B2 (en) * 2009-11-19 2015-03-24 Sony Corporation User interface for autofocus
US20110115885A1 (en) * 2009-11-19 2011-05-19 Sony Ericsson Mobile Communications Ab User interface for autofocus
US9438759B2 (en) * 2010-01-14 2016-09-06 Humaneyes Technologies Ltd. Method and system for adjusting depth values of objects in a three dimensional (3D) display
US20150154788A1 (en) * 2010-01-14 2015-06-04 Humaneyes Technologies Ltd. Method and system for adjusting depth values of objects in a three dimensional (3d) display
US20110304650A1 (en) * 2010-06-09 2011-12-15 The Boeing Company Gesture-Based Human Machine Interface
US9569010B2 (en) * 2010-06-09 2017-02-14 The Boeing Company Gesture-based human machine interface
KR20110138995A (en) * 2010-06-22 2011-12-28 엘지전자 주식회사 Method for processing image of display system outputting 3 dimensional contents and display system enabling of the method
KR101719980B1 (en) * 2010-06-22 2017-03-27 엘지전자 주식회사 Method for processing image of display system outputting 3 dimensional contents and display system enabling of the method
CN102985944A (en) * 2010-06-30 2013-03-20 皇家飞利浦电子股份有限公司 Zooming a displayed image
US9342862B2 (en) * 2010-06-30 2016-05-17 Koninklijke Philips N.V. Zooming a displayed image
US20130088519A1 (en) * 2010-06-30 2013-04-11 Koninklijke Philips Electronics N.V. Zooming a displayed image
CN102985942A (en) * 2010-06-30 2013-03-20 皇家飞利浦电子股份有限公司 Zooming-in a displayed image
WO2012001625A1 (en) * 2010-06-30 2012-01-05 Koninklijke Philips Electronics N.V. Zooming a displayed image
US20120033052A1 (en) * 2010-08-03 2012-02-09 Sony Corporation Establishing z-axis location of graphics plane in 3d video display
US10194132B2 (en) * 2010-08-03 2019-01-29 Sony Corporation Establishing z-axis location of graphics plane in 3D video display
TWI501646B (en) * 2010-08-03 2015-09-21 Sony Corp Establishing z-axis location of graphics plane in 3d video display
US20120050277A1 (en) * 2010-08-24 2012-03-01 Fujifilm Corporation Stereoscopic image displaying method and device
CN102440788A (en) * 2010-08-24 2012-05-09 富士胶片株式会社 Stereoscopic image displaying method and device
US20120081364A1 (en) * 2010-09-30 2012-04-05 Fujifilm Corporation Three-dimensional image editing device and three-dimensional image editing method
US20120262446A1 (en) * 2011-04-12 2012-10-18 Soungmin Im Electronic device and method for displaying stereoscopic image
US9189825B2 (en) * 2011-04-12 2015-11-17 Lg Electronics Inc. Electronic device and method for displaying stereoscopic image
CN102843564A (en) * 2011-06-22 2012-12-26 株式会社东芝 Image processing system, apparatus, and method
US9196085B2 (en) * 2011-07-07 2015-11-24 Autodesk, Inc. Interactively shaping terrain through composable operations
US9020783B2 (en) 2011-07-07 2015-04-28 Autodesk, Inc. Direct manipulation of composite terrain objects with intuitive user interaction
US20130013264A1 (en) * 2011-07-07 2013-01-10 Autodesk, Inc. Interactively shaping terrain through composable operations
US11620032B2 (en) 2012-04-02 2023-04-04 West Texas Technology Partners, Llc Method and apparatus for ego-centric 3D human computer interface
US11016631B2 (en) 2012-04-02 2021-05-25 Atheer, Inc. Method and apparatus for ego-centric 3D human computer interface
US10241638B2 (en) * 2012-11-02 2019-03-26 Atheer, Inc. Method and apparatus for a three dimensional interface
US10782848B2 (en) 2012-11-02 2020-09-22 Atheer, Inc. Method and apparatus for a three dimensional interface
US11789583B2 (en) 2012-11-02 2023-10-17 West Texas Technology Partners, Llc Method and apparatus for a three dimensional interface
US20140152649A1 (en) * 2012-12-03 2014-06-05 Thomas Moeller Inspector Tool for Viewing 3D Images
US9715008B1 (en) * 2013-03-20 2017-07-25 Bentley Systems, Incorporated Visualization of 3-D GPR data in augmented reality
US9869785B2 (en) 2013-11-12 2018-01-16 Schlumberger Technology Corporation Systems and methods for speed-adjustable model navigation
US20150179147A1 (en) * 2013-12-20 2015-06-25 Qualcomm Incorporated Trimming content for projection onto a target
US9484005B2 (en) * 2013-12-20 2016-11-01 Qualcomm Incorporated Trimming content for projection onto a target
US11570372B2 (en) * 2014-04-09 2023-01-31 Imagination Technologies Limited Virtual camera for 3-d modeling applications
US20190191069A1 (en) * 2014-04-09 2019-06-20 Imagination Technologies Limited Virtual Camera for 3-D Modeling Applications
US10834328B2 (en) * 2014-04-09 2020-11-10 Imagination Technologies Limited Virtual camera for 3-D modeling applications
US20160259524A1 (en) * 2015-03-05 2016-09-08 Chang Yub Han 3d object modeling method and storage medium having computer program stored thereon using the same
US10310721B2 (en) 2015-06-18 2019-06-04 Facebook, Inc. Systems and methods for providing image perspective adjustment and automatic fitting
US10725637B2 (en) 2015-06-18 2020-07-28 Facebook, Inc. Systems and methods for providing image perspective adjustment and automatic fitting
US10462459B2 (en) * 2016-04-14 2019-10-29 Mediatek Inc. Non-local adaptive loop filter
EP3882867A1 (en) * 2016-05-03 2021-09-22 Affera, Inc. Anatomical model displaying
US11728026B2 (en) 2016-05-12 2023-08-15 Affera, Inc. Three-dimensional cardiac representation
US20180005454A1 (en) * 2016-06-29 2018-01-04 Here Global B.V. Method, apparatus and computer program product for adaptive venue zooming in a digital map interface
US10008046B2 (en) * 2016-06-29 2018-06-26 Here Global B.V. Method, apparatus and computer program product for adaptive venue zooming in a digital map interface
US10860748B2 (en) * 2017-03-08 2020-12-08 General Electric Company Systems and method for adjusting properties of objects depicted in computer-aid design applications
US10388077B2 (en) 2017-04-25 2019-08-20 Microsoft Technology Licensing, Llc Three-dimensional environment authoring and generation
US10453273B2 (en) 2017-04-25 2019-10-22 Microsoft Technology Licensing, Llc Method and system for providing an object in virtual or semi-virtual space based on a user characteristic
US11436811B2 (en) 2017-04-25 2022-09-06 Microsoft Technology Licensing, Llc Container-based virtual camera rotation
US11016299B2 (en) * 2017-11-27 2021-05-25 Fujitsu Limited Storage medium, control method, and control device for changing setting values of a wearable apparatus
US20210233330A1 (en) * 2018-07-09 2021-07-29 Ottawa Hospital Research Institute Virtual or Augmented Reality Aided 3D Visualization and Marking System
CN111179410A (en) * 2018-11-13 2020-05-19 韦伯斯特生物官能(以色列)有限公司 Medical user interface
US11589026B2 (en) * 2018-11-21 2023-02-21 Beijing Boe Optoelectronics Technology Co., Ltd. Method for generating and displaying panorama images based on rendering engine and a display apparatus
US20210409669A1 (en) * 2018-11-21 2021-12-30 Boe Technology Group Co., Ltd. A method for generating and displaying panorama images based on rendering engine and a display apparatus
US20220358256A1 (en) * 2020-10-29 2022-11-10 Intrface Solutions Llc Systems and methods for remote manipulation of multi-dimensional models
US11635808B2 (en) 2021-08-12 2023-04-25 International Business Machines Corporation Rendering information in a gaze tracking device on controllable devices in a field of view to remotely control
EP4286991A1 (en) * 2022-06-01 2023-12-06 Koninklijke Philips N.V. Guidance for medical interventions
WO2023232612A1 (en) * 2022-06-01 2023-12-07 Koninklijke Philips N.V. Guidance for medical interventions

Also Published As

Publication number Publication date
WO2004061775A2 (en) 2004-07-22
JP2006512133A (en) 2006-04-13
AU2003303099A1 (en) 2004-08-13
JP2006508475A (en) 2006-03-09
WO2004061775A3 (en) 2004-11-11
US20040246269A1 (en) 2004-12-09
WO2004061544A8 (en) 2005-06-16
WO2004066137A9 (en) 2004-09-23
CA2507959A1 (en) 2004-07-22
WO2004066137A2 (en) 2004-08-05
WO2004061544A2 (en) 2004-07-22
AU2003303099A8 (en) 2004-08-13
WO2004061544A3 (en) 2004-11-04
EP1565808A2 (en) 2005-08-24
US7408546B2 (en) 2008-08-05
US20040249303A1 (en) 2004-12-09
AU2003303111A8 (en) 2004-07-29
JP2006513503A (en) 2006-04-20
WO2004061544A9 (en) 2004-09-23
EP1565888A2 (en) 2005-08-24
AU2003303086A8 (en) 2004-07-29
CA2507930A1 (en) 2004-08-05
CA2523623A1 (en) 2004-07-22
AU2003303111A1 (en) 2004-07-29
WO2004066137A3 (en) 2004-12-16
AU2003303086A1 (en) 2004-07-29

Similar Documents

Publication Publication Date Title
US20040233222A1 (en) Method and system for scaling control in 3D displays ("zoom slider")
US9299186B2 (en) Occlusion reduction and magnification for multidimensional data presentations
US20080094398A1 (en) Methods and systems for interacting with a 3D visualization system using a 2D interface ("DextroLap")
US20160203634A1 (en) Detection of Partially Obscured Objects in Three Dimensional Stereoscopic Scenes
JP6139143B2 (en) Medical image processing apparatus and medical image processing program
US7983473B2 (en) Transparency adjustment of a presentation
US20070279436A1 (en) Method and system for selective visualization and interaction with 3D image data, in a tunnel viewer
EP1737347B1 (en) Multiple volume exploration system and method
US20040240709A1 (en) Method and system for controlling detail-in-context lenses through eye and position tracking
US20070279435A1 (en) Method and system for selective visualization and interaction with 3D image data
WO2002021437A2 (en) 3d occlusion reducing transformation
KR20130043663A (en) 3-d model view manipulation apparatus
JPH04233666A (en) Moving viewpoint for target in three-dimensional working region
JP2008521462A (en) 2D / 3D integrated contour editor
KR20020041290A (en) 3 dimensional slab rendering system and method
WO2004081871A1 (en) Image segmentation in a three-dimensional environment
KR20150078845A (en) User interface system and method for enabling mark-based interraction to images
US7477232B2 (en) Methods and systems for interaction with three-dimensional computer models
Steinicke et al. Interscopic user interface concepts for fish tank virtual reality systems
KR101428577B1 (en) Method of providing a 3d earth globes based on natural user interface using motion-recognition infrared camera
US10979697B2 (en) Post processing and displaying a three-dimensional angiography image data set
EP1749282A2 (en) Method and system for scaling control in 3d displays (zoom slider)
JP5247398B2 (en) Display adjustment device, display adjustment method, and computer program
WO2008093167A2 (en) Methods and systems for interacting with a 3d visualization system using a 2d interface
KR20230128858A (en) 3D image control method and apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: VOLUME INTERACTIONS PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, JEROME CHAN;SERRA, LUIS;KOCKRO, RALF ALFONS;AND OTHERS;REEL/FRAME:019807/0306;SIGNING DATES FROM 20040520 TO 20040531

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION