EP1008956A1 - Automatic image montage system - Google Patents
Automatic image montage system
- Publication number
- EP1008956A1 (Application EP98310047A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- montage
- montage image
- source
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
Definitions
- Microscope optics frequently collect dust and debris which is difficult to remove. These may result in high-contrast detail in the source images, which will not only appear in the final result but may also distort both the similarity calculations and the contrast calculations. A separate procedure may be used to identify the defects, which are stationary in each frame and independent of the sample. Two refinements to the basic technique are then possible:
- The shape of the peak in covariance will be very broad if the image is defocussed, or very narrow if there is a great deal of sharp detail. In such cases it may be necessary to adapt the search technique to suit. In the case of a sharp peak, it may be necessary to consider more possible positions of the source image relative to the montage image, since otherwise the true peak may fall between the steps in position.
- If the expected shape of the 'covariance vs. relative position' function is known (and it can easily be estimated by computing this function using the source image and a displaced version of itself), then from a few points near the peak one can interpolate the exact position of the peak. In the case of a coarse search, this may enable the estimation of the peak location to be achieved more accurately than the basic step size would imply. In the case of a fine search, it can be used to determine the matching position of the source within the montage image to a precision better than one pixel; the source image can be re-sampled to reflect this more accurate position before it is inserted into the montage image.
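One common way to interpolate the peak from a few nearby samples (an assumption here; the patent does not prescribe a particular interpolation) is to fit a parabola through three equally spaced covariance values:

```python
def subpixel_peak(c_left, c_mid, c_right):
    """Offset of the vertex of the parabola through three equally spaced
    covariance samples, in units of the step size, relative to the middle."""
    denom = c_left - 2.0 * c_mid + c_right
    if denom == 0.0:
        return 0.0          # flat samples: no refinement possible
    return 0.5 * (c_left - c_right) / denom
```

The same one-dimensional fit can be applied independently in x and y around the integer peak, both for a coarse search (estimating the peak between steps) and for a fine search (estimating it to sub-pixel precision).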
- Some cameras have a poor optical performance which results in an effective resolution well below the pixel resolution. In such cases, it may be possible to obtain better results by reducing the pixel resolution of the source image before it is placed in the montage image.
- The magnification of the microscope can be altered to compensate, and the invention allows the pixel resolution of the final montage image to be far higher than the original resolution of the source.
- The system as implemented in the example consists of a camera 1 (in the particular example, a JVC KY-F50E camera) mounted on a microscope 2 (in the particular example, a Nikon Labophot microscope) and having a stage 3 which can move the specimen (or object) 9 in x and y (ie the plane of the stage), but which does not rotate.
- The camera output signals are digitised by a framestore board 4 (a Synoptics Prysm framestore) within a personal computer 5 (a Gateway 2000 PC with 233 MHz Pentium II processor and 96 MB RAM running Windows 98) and then the digital image is placed in the computer's memory. Control of the system was achieved using software written in Visual Basic and C++ using Synoptics Image Objects to handle the display and processing functions.
- The computer screen or monitor 6 generally shows the 'montage' image 8, on which are indicated both a new source image 7 direct from the camera 1 and (by the rectangle 7') the area of the montage image it will occupy.
- The images are shown in more detail in Figures 2 to 4.
- The montage image is initially allocated in memory as (say) a 2000 x 2000 array of pixels. Memory for all the pixels is allocated and initialised to zero. After some time the montage image has been built up to what can be seen in Figure 2, so that this represents the 'current' montage image. Note that some portions of it are still black (ie zero), because they have not yet been filled in or set to anything. The valid parts are simply the regions which are not black and which have been filled from captured images.
- Figure 3 represents the new (say 768 x 576 pixel) source image which has been captured; it is now necessary to locate the rectangle of the new source image within the montage image. This is done by finding the position where there is the best match between a (768 x 576 pixel) region of the montage and the new source image: notionally (ie within the computer), the new source image is shifted across the montage image, continually computing the covariance of corresponding pixels and looking for the position of the source which maximises the covariance.
- In this calculation, s(x,y) is the intensity of the source at (x, y), and
- m(x+px, y+py) is the intensity of the corresponding pixel of the montage when the source is shifted by (px, py).
- The pixels are copied into the montage by a routine as follows:
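The listing itself is not reproduced in this extract. A minimal sketch of such a copy routine (an assumption, not the patent's actual code) might be:

```python
import numpy as np

def paste(montage, valid, source, py, px):
    """Copy the source pixels into the montage at offset (py, px) and
    mark those montage pixels as set ('valid')."""
    h, w = source.shape
    montage[py:py + h, px:px + w] = source
    valid[py:py + h, px:px + w] = True
```

The boolean `valid` array records which montage pixels have been set, matching the description above of treating only filled pixels as 'valid'.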
- The contents of the memory are changed to include the source image 7, as illustrated in Figure 4, providing an updated montage image 8'.
Abstract
A system is provided for creating a larger montage image 8 of an object
from a series of smaller source images 7 of different views of an object through,
say, a microscope which forms an imaging means for providing images of an
object. A data storage means such as a framestore board 4 in a computer 5 stores
data representative of the montage image of the object. Data representative of
a first and subsequent source images is included in that representative of the
montage image, in order to increase the data stored representative of the
montage image. In order to do this, a measure of the extent of matching of the
most recent source image provided by the imaging means and individual ones
of a plurality of regions of the montage image which correspond in area is
provided and the data representative of subsequent source images is included
in the data representative of the montage image when the measure of the
extent of matching is at a maximum, whereby the source image is matched to
the montage image at the correct position.
Description
- The present invention relates to the automatic creation of a larger image of an object or scene from a series of smaller source views. Such an image is known as a 'montage' and, particularly when using a microscope and camera combination, enables a higher resolution image to be formed of a much larger region of the object or scene than is possible otherwise.
- When recording a scene by means of a camera, three particular limitations may pose problems: firstly, there may be a limited field of view (ie. in the x and y coordinate directions); secondly, there may be a limited depth of field (in the z coordinate direction, ie. that of the camera axis); and, thirdly, there may be limited camera resolution. In particular, these can be severe in the case of an image being recorded (eg photographed) through a microscope.
- Prior art solutions to one or more of these problems are as follows:
- Accurately reposition the camera or the subject and record multiple images, then 'stitch together' afterwards. An example of a system using this technique is the MIA package from Soft Imaging System GmbH (SiS) which generally requires direct control of a microscope stage in order to position the object under the objective lens in a series of positions such that overlapping images can be 'stitched' together to form a larger image. A disadvantage is that the positions need to be known accurately.
- Our Auto-Montage technique in which multiple digitally recorded images are taken at different focus settings of the camera or z-positions of the subject, and then formed into a single image where each pixel is selected from the source image which shows the most contrast (equivalently, is most in focus).
- High resolution and hence more expensive cameras. For electronic cameras, cost becomes prohibitive beyond about 2k pixels square.
- Although the present invention is aimed primarily at dealing with the first and third of the three problems outlined above, it may also, in suitable embodiments, deal with the second problem.
- According to the present invention there is provided a system, for creating a larger montage image of an object from a series of smaller source images of different views of an object, which includes
- imaging means for providing, continually in use, images of an object;
- data storage means for storing data representative of the montage image of the object;
- means for including the data representative of a first and subsequent source images in that representative of the montage image, in order to increase the data stored representative of the montage image; and
- means for providing a measure of the extent of matching of the most recent source image provided by the imaging means and individual ones of a plurality of regions of the montage image which correspond in area;
- the arrangement being such that data representative of subsequent source images is included in the data representative of the montage image when the measure of the extent of matching is at a maximum, whereby the source image is matched to the montage image at the correct position.
- The present invention also includes a method of creating a larger montage image of an object from a series of smaller source images of different views of an object, which method includes
- generating a series of source images of an object;
- allocating memory within a data storage means for storing data representative of the montage image of the object;
- storing data representative of a first and subsequent source images in the memory, in order to store data representative of the montage image; and
- measuring the extent of matching of a most recent source image and individual ones of a plurality of regions of the montage image which correspond in area;
- data representative of the subsequent source images being included in the data representative of the montage image, when the measure of the extent of matching is at a maximum, so that the source image is matched to the montage image at the correct position.
- Preferably, a threshold value of the measure of the extent of matching is preselected so that source images which will not match with the montage image are discarded.
- The montage image is thus built up gradually from the subsequent source image data being added to the first source image data which represents a first data set for the montage image. The source image data can be provided by a camera or other image capture device moved relative to the object.
- By comparing the most recent source image with regions of the montage image in turn, the source image can be 'located' relative to the montage image and the additional data representative of the source image incorporated into the data set defining the montage image so as to expand the montage image. This process can be repeated until the montage image is complete. The comparison may be carried out in a number of different ways, some of which are detailed later, and may be carried out on a subset of the image data of both the montage image and the source images.
- The size of the completed montage image may be predetermined, but may be allowed to increase by allocating more memory to the montage image.
- Thus, new data representative of a new image is stored at the position at which the new source image most closely matches a region of the montage image. If this position has not changed since the previous measurement, the source is stationary, and an appropriate position at which to insert the source image into the montage image has thus been determined.
- It is preferred to add source image data to the montage image data when the source image data is provided from a stationary image. This provides two benefits:
- Any motion blur or interlace distortion of the source images is minimised, so the image pasted into the montage should be free of these artefacts;
- If the system waits for the movement to stop before doing the more time-consuming elements of the process (fine position determination, 3D Auto-Montage), movement can be tracked rapidly and the time-consuming elements of the processing can be done when the delay is less significant to the operator.
- Advantageously, the system indicates when it has determined the subject to be stationary (and is therefore updating the montage image), and when it is moving. The system may also indicate when it cannot determine the position confidently, either because of poor matching or low overlap. Furthermore, the system may indicate when it requires the user to pause for some reason (such as to save some data).
- The system is particularly suitable for use with a microscope and camera combination for capturing highly magnified images of small objects. Preferably, the system includes a computer and software adapted to carry out the steps outlined above, the montage image and the current source image (or a rectangle representing its location relative to the montage image) being displayed on a monitor to provide feedback to a user operating the system (in the case of a microscope, the operator moving the microscope stage in order to re-position the object relative to the camera).
- Because of the nature of the acquisition process, it is important that this feedback be given simply and directly. Colours, flashing and audio signals may all be employed so that the user has a 'quiet cockpit' when the system is operating normally, and can concentrate on exploring the subject.
- The determination of the appropriate position at which a newly captured image should be incorporated into the montage image may be achieved by a variety of means, including the location and matching of specific or significant features in the image, or by finding the location of the source image at which the difference between the source and the montage images is at a minimum.
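As one illustration of the difference-minimisation alternative mentioned above (a sketch only, not the patent's implementation; the function name and candidate-offset interface are assumptions), the source can be placed where the mean squared difference over its overlap with the valid montage pixels is smallest:

```python
import numpy as np

def best_position_ssd(source, montage, valid, candidates):
    """Return the candidate offset (py, px) minimising the mean squared
    difference between the source and the valid montage pixels it overlaps."""
    h, w = source.shape
    best, best_d = None, np.inf
    for py, px in candidates:
        mask = valid[py:py + h, px:px + w]      # overlap with set pixels only
        if not mask.any():
            continue
        diff = source[mask] - montage[py:py + h, px:px + w][mask]
        d = np.mean(diff ** 2)
        if d < best_d:
            best, best_d = (py, px), d
    return best
```

At the true position the difference over the overlap is zero (up to noise), so the minimum identifies the matching location.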
- Preferably, a similarity measure is used, so that the source is, notionally, shifted across the montage image until a maximum is found in the similarity measure. This may conveniently be achieved by means of a type of cross-correlation statistic called 'covariance'. Cross-correlation and covariance are known to be robust similarity measures in the presence of noise, an important characteristic when dealing with images from a microscope.
- For example therefore, if s is an intensity from the source image and m is the corresponding intensity from the corresponding position of the montage image, then the covariance c can be calculated at the selected relative position (px, py) by the equation:

  c = (1/n) · Σ_{(x,y)∈S∩M} s(x,y)·m(x+px, y+py) − s̄·m̄

where S∩M is the intersection of the valid regions of the (displaced) source and montage images, n is the number of pixels in that intersection, and s̄ and m̄ are the mean intensities of s and m over that intersection. The initial source image is generally considered to be 'valid' throughout, whereas the montage image is only valid where it has already been set. (In the current implementation, the montage image starts out with the value of zero, ie it is notionally 'empty', and only non-zero pixels are considered valid). It makes sense, however, to allocate a 'rectangular' array of memory of some chosen size (say 2000 x 2000 pixels) from the outset. When a source image is 'pasted' or 'written' into the montage image (for which the term 'set' can be used), that part of the montage is then regarded as 'valid' or 'set to something' rather than being 'empty'. The particular representation of this which has been chosen for one example is to fill the montage initially with zero, then to treat non-zero pixels as 'valid' or 'set'.
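A direct sketch of this covariance over the valid intersection (illustrative only; the function name and the boolean validity mask are assumptions, not the patent's code) is:

```python
import numpy as np

def covariance_at(source, montage, valid, px, py):
    """Covariance c over S∩M when the source is displaced by (px, py).

    `valid` marks montage pixels that have already been set; only the
    overlap of the source with valid montage pixels enters the sums.
    Returns (c, n), where n is the number of pixels in the intersection.
    """
    h, w = source.shape
    region = montage[py:py + h, px:px + w]
    mask = valid[py:py + h, px:px + w]          # S∩M
    n = int(mask.sum())
    if n == 0:
        return 0.0, 0
    s = source[mask].astype(float)
    m = region[mask].astype(float)
    c = (s * m).mean() - s.mean() * m.mean()    # (1/n)·Σ s·m − s̄·m̄
    return c, n
```

At the true displacement, s and m coincide over the overlap, so c reaches the variance of the overlapping pixels, its maximum.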
- It is useful to consider the validity of each pixel, because otherwise the process tends to 'lock on' to the edge of the valid montage regions (this edge is effectively the most significant feature in the source data).
- If the covariance similarity score is plotted against the offset vector V, a single peak in this surface should be seen, corresponding to the true offset between the source image and the montage, V1. If, however, non-valid pixels are not discarded from the calculation, a second peak (quite likely even higher) at another offset, V2, may result. Thus, referring to Figure 2, if the source image lines up with some edges of the montage image, then these 'edges' give rise to a very high similarity score despite the fact that the image detail does not match up. In other words, the edges are much more significant 'features' than the details of the image. This can easily result in the algorithm choosing the incorrect (V2) peak in the covariance function, and pasting the source image into the wrong position.
- Even with a source image of standard TV resolution (768x576 in Europe), if one tries to calculate this statistic over the whole of the montage image of (let's say) 2000x2000 pixels, in the order of 10^13 calculations may be required. To make the system practical with current personal computers, it is therefore advisable to 'prune' the search space as far as possible. First, the search space can be reduced by using previous knowledge of the position of the source image within the montage image, and (potentially) its speed of movement across it. If we are able to reduce the uncertainty in position to (say) 64 pixels either way from the previous (or predicted) position, then this reduces the search process to around 10^10 calculations.
- Further, we can adopt a coarse-fine technique. Assuming that the noise level is 'not too bad', we can achieve a coarse estimate of the position both by sub-sampling the image in both directions (only processing every 16th pixel in each direction, say) and by shifting the source image over the montage image in larger steps (say 4 pixels). In this manner, we can obtain a coarse estimate with perhaps around 10^6 calculations, which will take around 5 ms on a modern PC. We can then improve the measurement by reducing the sub-sampling and the size of the shift step, but now we need to search over a much smaller area. Thus the total position measurement time can be kept short relative to the camera acquisition time and the speed of the operator. (It may be noted that as the speed of the algorithm improves, the uncertainty in the position diminishes, and the search space can be reduced further.)
- An important refinement is that if the coarse measurement indicates that the subject is moving, there is no need for the fine measurement. This means that the system can have the highest update rate when it is most necessary; when the subject stops the more lengthy matching procedure is not objectionable.
- The system needs to be 'confident' of the matching location because in general mistakes cannot be undone once data has been incorporated into the montage image data stored. Two confidence limits may therefore be imposed by the present system:
- The covariance measured must lie within some tolerance of the previously accepted covariance measurement;
- The number of valid pixels considered in the calculation of the highest covariance score must exceed a threshold. This is effectively the area of overlap of the source with the existing montage pixels, so it can be expressed simply as (for example) "half of the source image must overlap with the valid regions of the montage image".
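These two confidence limits might be expressed as follows (an illustrative sketch; the tolerance value is a placeholder, and only the half-overlap threshold comes from the text above):

```python
def confident(cov, n_overlap, prev_cov, source_pixels,
              tol=0.5, min_overlap_frac=0.5):
    """Accept a match only if (a) at least half of the source overlaps
    valid montage pixels and (b) the covariance lies within a tolerance
    of the previously accepted covariance (skipped for the first match)."""
    if n_overlap < min_overlap_frac * source_pixels:
        return False                    # too little overlap to trust
    if prev_cov is not None and abs(cov - prev_cov) > tol * prev_cov:
        return False                    # covariance changed too much
    return True
```

Rejecting uncertain matches matters because, as noted above, mistakes generally cannot be undone once data has been written into the montage.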
- The following describes how the system of the invention may be adapted to work in '3D' mode, which adds consideration of the z (camera axis) direction. These aspects extend the effective depth of field of the optical system.
- In this case the motion tracking process works exactly as in the '2D' mode, and the consideration of the third dimension occurs only when the subject is stationary and therefore we have a 'new' source image and a position within the montage image where it belongs. At this point the action is somewhat different: each pixel within the source image is considered, and it is written to the montage image only if the contrast in the source image at that point is greater than the best contrast previously recorded for the corresponding point in the montage image. Here 'contrast' is deliberately vague: what we want to measure is how well 'in focus' that part of the image is. Various measures may be used, including local edge information or the variance of the surrounding pixels; all these may be shown to relate to contrast. Implicit in this description is the fact that a separate 'best contrast' value for each point in the montage image needs to be maintained; this adds considerably to the memory requirements.
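The '3D' update rule can be sketched as follows (a minimal illustration; using squared gradient magnitude as the contrast measure is an assumption, since the patent deliberately leaves the measure open):

```python
import numpy as np

def local_contrast(img):
    """A simple focus measure: local squared gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return gx ** 2 + gy ** 2

def update_3d(montage, best_contrast, source, py, px):
    """Write each source pixel into the montage (in place) only where its
    contrast is at least the best contrast recorded so far at that point.
    Ties overwrite, so the first frame at an empty position (best contrast
    zero) is inserted in toto. Returns the mask of updated pixels."""
    h, w = source.shape
    c = local_contrast(source)
    view_m = montage[py:py + h, px:px + w]
    view_b = best_contrast[py:py + h, px:px + w]
    better = c >= view_b
    view_m[better] = source[better]
    view_b[better] = c[better]
    return better
```

Note that the `best_contrast` array is the separate per-pixel record mentioned above, and it is what adds to the memory requirements.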
- Note that the 'best contrast' is initialised to zero throughout, so that the first source image to arrive at a particular position is always inserted in toto.
- The descriptions above have described the case of a monochrome camera. For colour images, the covariance and contrast can be calculated from a single pre-selected colour plane, a colour plane selected to give the maximum overall contrast, or the sum of the colour planes (which effectively computes the equivalent monochrome image).
- The system as described uses a montage image of fixed size. However, it is important to react appropriately as the position of the source image approaches an edge of the montage image. Several actions are possible:
- The calculations are 'clipped' as appropriate, and areas of the source image that extend beyond the edge of the montage image are ignored. This means that the montage image can be 'filled' right up to its edge.
- If the montage image is only partially used (say, we placed the first source image in the centre and subsequently we have only moved up and right), then the contents of the montage image can be shifted within the existing memory, so that this memory can be used more effectively. To achieve this efficiently, it is probably necessary to calculate and maintain the extremes of the valid montage pixels in x and y.
- A new montage image (and in the '3D' mode, corresponding 'best contrast' values) can be allocated to further extend the field of view. The existing data can be copied into the new. The user may need to be instructed to pause while this happens.
- The montage image might be 'tiled', with only those tiles currently required remaining in working store (RAM) and the unused tiles being saved on disk. In this way, extremely large images might be acquired. This mechanism is more complex, but has the potential to avoid any significant delays for the user since the disk data may be written in small amounts as a background task.
- Manual override of the automatic operation may sometimes be desirable. Three cases have been identified:
- A really good image of an extended field of view has been captured, but the user wishes to add to the extent of the montage in a region remote from the present position. In this case, the user should be able to specify that the system should continue to track the motion of the source image across the montage image, but should not update the montage image. When the region is reached where it is desired to capture more images, the user removes this injunction.
- Incorrect or poor images have been recorded in some region of the montage. In this case the user may wish to 'undo' recent updates to the montage image (although this will require considerable memory), or more likely to simply 'rub out' these regions. Subsequently, the facility described above will allow the position to be re-synchronised with the montage image, before new updates are permitted.
- An incorrect, or poor, image has been recorded in '3D' mode and consequently the 'contrast' information associated with this region of the montage image is also incorrect - which in turn will mean that subsequent updates in this region will not take place correctly. In this case the user may wish to 'refresh' both the montage image for the region and the corresponding 'best contrast' record.
- Whilst it is a significant benefit of the 'basic' invention that it requires no special modifications to the microscope, there are potential benefits from adding control of the 'z' position of the sample, particularly in '3D' mode. These are:
- Knowledge of the z position of the sample means that the (relative) depth of each point can be measured, by detecting the z position that resulted in the maximum contrast. This requires the retention of this information for each point (more RAM), but means that the result of the process is a data set that provides the z position of each point of the subject. This enables 3D measurements of the surface to be made, and visualisations using computer graphic techniques or stereo pairs (Auto-Montage provides such facilities).
- The scan in depth (z) can be made reliably and repeatably, and more quickly than by hand. Potentially, the system could automatically perform a z scan every time it detected that the subject was stationary.
- Motion tracking will be most reliable when the subject is generally in focus. The system could measure the overall contrast for each z step (the total variance of the intensity values is suitable), and automatically return the stage to the z position which gave maximum contrast after each scan.
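- The first and third of these benefits can be sketched together; `depth_and_best_z` and `contrast_fn` are assumed names, `contrast_fn` being whichever per-pixel focus measure is chosen (local variance, say), while total intensity variance serves as the overall contrast for the return-to-focus step, as the text suggests:

```python
import numpy as np

def depth_and_best_z(stack, contrast_fn):
    # stack: z-stack of images, shape (z, h, w).
    # Returns a per-pixel depth index (the z step giving maximum local
    # contrast) and the single z step with greatest overall contrast.
    contrasts = np.stack([contrast_fn(img) for img in stack])  # (z, h, w)
    depth = contrasts.argmax(axis=0)                  # relative depth map
    overall = np.array([img.var() for img in stack])  # one value per z step
    return depth, int(overall.argmax())
```

The `depth` array is the per-point record the text says must be retained (more RAM), and the second return value is the z position to which the stage would be returned after each scan.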
- Again, while it is a significant benefit of the invention that no motorised stage is required, there may be applications where it is appropriate to use one.
- Control of the microscope stage and the image acquisition through a single computer interface may be advantageous.
- The present invention may be used to perform a rapid scan of the object, to determine its extent in x, y (and z) and to build up a preliminary view. In this case, it will be appropriate to place the stage under the control of the system, so that its movements can be related to the motion of the image (in other words, the system as a whole can be calibrated). Using both the extent and calibration information, the system might perform a much more sophisticated image montaging scan (in x, y, and z) which would take much longer but which would be fully automatic.
- Where the source image contains (or is likely to contain) many repeated elements, such as the features on an integrated circuit, the covariance calculation may show many maxima of similar value, and no clear global maximum will be discernible. In these cases, the use of direct stage control will enable a particular local maximum to be identified as corresponding to the 'true' position.
- There may sometimes be a systematic distortion of the brightness and/or colour values in each source image. Possible causes are camera defects (variations in black level or gain across the camera), uneven illumination, or variations in the transparency of the optics. Such defects can generally be modelled, measured, and subsequently corrected. Where such artifacts are significant, they will often produce a noticeable edge in the montage image where (say) the left-hand region of one field of view overlays the right-hand region of another. In this case, it is appropriate to correct each field before inserting it into the montage image. As mentioned above, this may be more useful in an automatic scan, where time is not so critical.
- Microscope optics frequently collect dust and debris which is difficult to remove. These may result in high-contrast detail in the source images, which will not only appear in the final result but may also distort both the similarity calculations and the contrast calculations. A separate procedure may be used to identify the defects, which are stationary in each frame and independent of the sample. Two refinements to the basic technique are then possible:
- The regions of the source image corresponding to the defects can be considered 'invalid' as far as the similarity measurements are concerned. This prevents them from dominating the measurements as they might otherwise do.
- The regions of the source image corresponding to the defects can be masked out when the montage image is updated with the contents of the source image. Since the subject can be moved slightly and the montage updated again, the 'missing' regions can be filled in.
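- Both refinements reduce to ignoring a fixed set of pixels; a minimal sketch follows, in which `masked_update`, `masked_difference` and `defect_mask` (True where the optics carry dust or debris) are assumed names for this illustration:

```python
import numpy as np

def masked_update(montage, source, defect_mask, top, left):
    # Copy only the valid (non-defect) source pixels into the montage;
    # defect pixels are left untouched so that a later, slightly shifted
    # frame can fill in the 'missing' regions.
    h, w = source.shape
    region = montage[top:top + h, left:left + w]
    valid = ~defect_mask
    region[valid] = source[valid]

def masked_difference(a, b, defect_mask):
    # Sum of absolute differences over valid pixels only, normalised by
    # the number of valid pixels so scores remain comparable.
    valid = ~defect_mask
    return np.abs(a[valid].astype(float) - b[valid]).sum() / valid.sum()
```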
- It may be desirable to adapt certain algorithms to the characteristics of the source images. For example, the shape of the peak in covariance will be very broad if the image is defocussed, or very narrow if there is a great deal of sharp detail. In such cases it may be necessary to adapt the search technique to suit. In the case of a sharp peak, it may be necessary to consider more possible positions of the source image relative to the montage image, since otherwise the true peak may fall between the steps in position.
- Similarly, by measuring the statistics of the source images, it is easy to derive the expected value of the covariance function. This might be used to determine the confidence threshold, rather than simply working from the most recent accepted maximum covariance value.
- If the expected shape of the 'covariance vs. relative position' function is known (and it can easily be estimated by computing this function using the source image and a displaced version of itself), then from a few points near the peak one can interpolate the exact position of the peak. In the case of a coarse search, this may enable the estimation of the peak location to be achieved more accurately than the basic step size would imply. In the case of a fine search, it can be used to determine the matching position of the source within the montage image to a precision better than one pixel; the source image can be re-sampled to reflect this more accurate position before it is inserted into the montage image.
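- One common way to interpolate the exact peak position (a sketch, not necessarily the method the patent envisages) is to fit a parabola through the covariance values at offsets -1, 0 and +1 around the discrete maximum:

```python
def subpixel_peak(c_minus, c_zero, c_plus):
    # Parabolic interpolation: given the covariance one step before the
    # discrete maximum, at it, and one step after, return the fractional
    # offset of the true peak (in the range -0.5 .. +0.5 steps).
    denom = c_minus - 2.0 * c_zero + c_plus
    if denom == 0.0:
        return 0.0  # flat neighbourhood: keep the integer position
    return 0.5 * (c_minus - c_plus) / denom
```

Applied independently in x and y, this gives the sub-step (or sub-pixel) position at which the source image would be re-sampled before insertion into the montage image.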
- Some cameras have a poor optical performance which results in an effective resolution well below the pixel resolution. In such cases, it may be possible to obtain better results by reducing the pixel resolution of the source image before it is placed in the montage image. The magnification of the microscope can be altered to compensate, and the invention allows the pixel resolution of the final montage image to be far higher than the original resolution of the source.
- One example of a system constructed in accordance with the present invention will now be described with reference to the accompanying drawings, in which:-
- Figure 1 is a general view of the apparatus;
- Figure 2 is a representation of a montage image on the monitor at a given time during operation of the system;
- Figure 3 is a representation of a new source image; and
- Figure 4 is a representation of the montage image after the source image data has been incorporated into the montage image data.
- The system as implemented in the example consists of a camera 1 (in the particular example, a JVC KY-F50E camera) mounted on a microscope 2 (in the particular example, a Nikon Labophot microscope) having a stage 3 which can move the specimen (or object) 9 in x and y (ie the plane of the stage), but which does not rotate. The camera output signals are digitised by a framestore board 4 (a Synoptics Prysm framestore) within a personal computer 5 (a Gateway 2000 PC with a 233 MHz Pentium II processor and 96 MB RAM running Windows 98) and the digital image is then placed in the computer's memory. Control of the system was achieved using software written in Visual Basic and C++ using Synoptics Image Objects to handle the display and processing functions.
- As indicated in Fig. 2, the computer screen or monitor 6 generally shows the 'montage' image 8, on which are indicated both the detail of a new source image 7 direct from the camera 1 and (by the rectangle 7') the area of the montage image it will occupy. The images are shown in more detail in Figures 2 to 4.
- Overall, the operation is to copy the source image 7 into the montage image 8:
- When the sample is stationary, and
- At a position which lines up with the current contents of the montage image.
- The technique involved is as described above, using the covariance measure of similarity, although other measures of similarity, or measures of difference, may be used instead.
- The montage image is initially allocated in memory as (say) a 2000 x 2000 array of pixels. Memory for all the pixels is allocated and initialised to zero. After some time the montage image has been built up to what can be seen in Figure 2, so that this represents the 'current' montage image. Note that some portions of it are still black (ie zero), because they have not yet been filled in or set to anything. The valid parts are simply the regions which are not black and which have been filled from captured images.
- Now if Figure 3 represents the new (say 768 x 576 pixel) source image which has been captured, it is now necessary to locate the rectangle of the new source image within the montage image. This is done by finding the position of best match between a (768 x 576 pixel) region of the montage and the new source image: the new source image is notionally (ie within the computer) shifted across the montage image, the covariance of corresponding pixels is continually computed, and the position of the source which maximises the covariance is sought.
- It may be easier to consider an alternative method in which a 'difference' measure is considered instead of covariance. In such a case the position (px, py) is searched for which minimises the sum of differences between the source and the montage.
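- The difference-measure search can be sketched as below. This is an illustrative fragment, not the patented implementation: `best_match`, the `valid` mask (marking montage pixels that have already been filled from captured images, ie are not 'black'), and the explicit candidate list are all assumptions of the sketch.

```python
import numpy as np

def best_match(montage, valid, source, candidates):
    # Return the candidate placement (py, px) of the source within the
    # montage that minimises the mean absolute difference, computed only
    # over montage pixels that have already been filled in.
    h, w = source.shape
    best, best_score = None, np.inf
    for py, px in candidates:
        region = montage[py:py + h, px:px + w].astype(float)
        mask = valid[py:py + h, px:px + w]
        n = mask.sum()
        if n == 0:
            continue  # no filled montage pixels here: nothing to compare
        score = np.abs(region[mask] - source[mask]).sum() / n
        if score < best_score:
            best, best_score = (py, px), score
    return best, best_score
```

In practice the candidate set would come from a coarse-then-fine search around the last known position rather than an exhaustive scan; for the covariance form, the loop is the same but the position maximising the covariance is kept.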
- If the position of best match (px, py) is measured, a new image is then captured and the measurement made again, giving (px', py'), and these two are very similar, then it can be concluded that the object or scene and the camera are stationary relative to one another. This is not an essential step, but it provides the two benefits referred to above. The stationary situation is thus detected through successive measurements of the source image position within the montage image, not by comparing successive source images (though this would be possible).
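- A minimal sketch of this stationarity test follows; `is_stationary` and the tolerance `tol` (in pixels) are assumptions of the illustration:

```python
def is_stationary(prev_pos, curr_pos, tol=1):
    # Two successive measurements of the source position within the
    # montage that agree to within tol pixels imply the subject and
    # camera are stationary relative to one another.
    if prev_pos is None:
        return False  # no earlier measurement to compare against
    (py0, px0), (py1, px1) = prev_pos, curr_pos
    return abs(py1 - py0) <= tol and abs(px1 - px0) <= tol
```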
- An advantage over the prior art SiS type of system referred to above is that it does not require the expense of a motorised stage. Additional advantages (which also hold in the case where a motorised stage is available), are that the result can be seen immediately, as the subject is explored, and that the extent and shape of the subject do not need to be known (or guessed at) in advance.
Claims (10)
- A system, for creating a larger montage image (8) of an object from a series of smaller source images of different views of an object, which includes imaging means (1,2,3) for providing, continually in use, images of an object; data storage means (4) for storing data representative of the montage image of the object; means (5) for including the data representative of a first and subsequent source images (7) in that representative of the montage image, in order to increase the data stored representative of the montage image; and means (5) for providing a measure of the extent of matching of the most recent source image provided by the imaging means and individual ones of a plurality of regions of the montage image (8) which correspond in area; the arrangement being such that data representative of subsequent source images (7) is included in the data representative of the montage image (8) when the measure of the extent of matching is at a maximum, whereby the source image is matched to the montage image at the correct position.
- A method of creating a larger montage image (8) of an object from a series of smaller source images (7) of different views of an object, which method includes generating a series of source images (7) of an object; allocating memory within a data storage means (5) for storing data representative of the montage image of the object; storing data representative of a first and subsequent source images in the memory, in order to store data representative of the montage image; and measuring the extent of matching of a most recent source image and individual ones of a plurality of regions of the montage image which correspond in area; data representative of the subsequent source images being included in the data representative of the montage image, when the measure of the extent of matching is at a maximum, so that the source image is matched to the montage image at the correct position.
- A method according to claim 2, wherein a threshold value of the measure of the extent of matching is preselected and source images which will not match with the montage image are discarded if the respective measure is less than the threshold value.
- A method according to claim 2 or claim 3, wherein the initial size of the montage image is predetermined, but is allowed to increase by allocating more memory to the montage image.
- A method according to any of claims 2 to 4, wherein, in order that the new data that is matched is the most representative of the respective image of the object, a determination is first made of whether or not the imaging means is stationary relative to the object.
- A method according to any of claims 2 to 5, wherein the measurement of the degree of matching includes measuring a parameter determinative of the degree of similarity or a parameter determinative of the degree of difference.
- A method according to any of claims 2 to 6, wherein the measurement of the degree of matching is first carried out on a subset of the captured data and secondly, if required, on the full set of data captured.
- A system according to claim 1, wherein the source image data is provided by a camera (1) or other image capture device moved relative to the object (9).
- A system according to claim 1 or claim 8, wherein the data storage means, the means for including data in the montage image data, and the means for providing a measure of the extent of matching comprise a computer programmed accordingly.
- A computer program medium having a computer program for use in a system according to claim 1, the computer program being arranged to provide a measure of the extent of matching of the most recent source image provided by the imaging means and individual ones of a plurality of regions of the montage image which correspond in area; and include the data representative of a subsequent source image in the data representative of the montage image when the measure of the extent of matching is at a maximum.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP98310047A EP1008956A1 (en) | 1998-12-08 | 1998-12-08 | Automatic image montage system |
US09/438,461 US6687419B1 (en) | 1998-12-08 | 1999-11-12 | Automatic image montage system |
JP11345920A JP2000182034A (en) | 1998-12-08 | 1999-12-06 | Automatic image synthesizing system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP98310047A EP1008956A1 (en) | 1998-12-08 | 1998-12-08 | Automatic image montage system |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1008956A1 true EP1008956A1 (en) | 2000-06-14 |
Family
ID=8235196
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP98310047A Withdrawn EP1008956A1 (en) | 1998-12-08 | 1998-12-08 | Automatic image montage system |
Country Status (3)
Country | Link |
---|---|
US (1) | US6687419B1 (en) |
EP (1) | EP1008956A1 (en) |
JP (1) | JP2000182034A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1394739A1 (en) * | 2002-08-22 | 2004-03-03 | Christer Pihl Data Ab | Mosaicing from microscopic images of a specimen |
EP1538562A4 (en) * | 2003-04-17 | 2005-08-10 | Seiko Epson Corp | GENERATING A FIXED IMAGE FROM A PLURALITY OF FRAME IMAGES |
US7409106B2 (en) * | 2003-01-23 | 2008-08-05 | Seiko Epson Corporation | Image generating device, image generating method, and image generating program |
EP1368782A4 (en) * | 2001-01-16 | 2008-08-27 | Applied Precision Llc | Coordinate calibration for scanning systems |
DE102009054704A1 (en) * | 2009-12-15 | 2011-06-16 | Carl Zeiss Imaging Solutions Gmbh | Microscope for recording mosaic image, has evaluation unit adjusted to determine recording time point based on image of partial region of specimen and during relative movement of specimen table and recording unit by specimen images |
US9224063B2 (en) | 2011-08-02 | 2015-12-29 | Viewsiq Inc. | Apparatus and method for digital microscopy imaging |
EP3183612A4 (en) * | 2014-08-18 | 2018-06-27 | ViewsIQ Inc. | System and method for embedded images in large field-of-view microscopic scans |
EP2871512B1 (en) * | 2012-07-04 | 2019-06-05 | Sony Corporation | Information processing device, information processing method, program, and microscope system |
Families Citing this family (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6847729B1 (en) | 1999-04-21 | 2005-01-25 | Fairfield Imaging Limited | Microscopy |
JP2003203229A (en) * | 2002-01-08 | 2003-07-18 | Canon Inc | Image processor, image processing method, storage medium and program |
GB2398196B (en) | 2003-02-05 | 2005-06-01 | Fairfield Imaging Ltd | Microscope system and method |
AU2003900924A0 (en) * | 2003-02-28 | 2003-03-13 | Medsaic Pty Ltd | Imaging device |
US7848595B2 (en) * | 2004-06-28 | 2010-12-07 | Inphase Technologies, Inc. | Processing data pixels in a holographic data storage system |
US8275216B2 (en) * | 2004-06-28 | 2012-09-25 | Inphase Technologies, Inc. | Method and system for equalizing holographic data pages |
US7456377B2 (en) * | 2004-08-31 | 2008-11-25 | Carl Zeiss Microimaging Ais, Inc. | System and method for creating magnified images of a microscope slide |
WO2006028439A1 (en) * | 2004-09-01 | 2006-03-16 | Aperio Technologies, Inc. | System and method for data management in a linear-array-based microscope slide scanner |
DE102004044721B4 (en) * | 2004-09-15 | 2013-11-14 | Qimonda Ag | Self-test for the phase position of the data read clock signal DQS |
JP4653041B2 (en) * | 2005-08-31 | 2011-03-16 | クラリエント・インコーポレーテッド | System and method for synthesizing image blocks and creating a seamless enlarged image of a microscope slide |
US7567346B2 (en) * | 2006-03-01 | 2009-07-28 | General Electric Company | System and method for multimode imaging |
US20080018669A1 (en) * | 2006-07-18 | 2008-01-24 | General Electric Company | method and system for integrated image zoom and montage |
US8144919B2 (en) * | 2006-09-22 | 2012-03-27 | Fuji Xerox Co., Ltd. | Annealing algorithm for non-rectangular shaped stained glass collages |
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
US8160391B1 (en) * | 2008-06-04 | 2012-04-17 | Google Inc. | Panoramic image fill |
FR2934050B1 (en) * | 2008-07-15 | 2016-01-29 | Univ Paris Curie | METHOD AND DEVICE FOR READING EMULSION |
JP5096302B2 (en) * | 2008-12-12 | 2012-12-12 | 株式会社キーエンス | Imaging device |
JP5096303B2 (en) * | 2008-12-12 | 2012-12-12 | 株式会社キーエンス | Imaging device |
JP5154392B2 (en) * | 2008-12-12 | 2013-02-27 | 株式会社キーエンス | Imaging device |
KR101164353B1 (en) * | 2009-10-23 | 2012-07-09 | 삼성전자주식회사 | Method and apparatus for browsing and executing media contents |
US8861890B2 (en) | 2010-11-24 | 2014-10-14 | Douglas Alan Lefler | System and method for assembling and displaying individual images as a continuous image |
US9679404B2 (en) | 2010-12-23 | 2017-06-13 | Microsoft Technology Licensing, Llc | Techniques for dynamic layout of presentation tiles on a grid |
US9436685B2 (en) | 2010-12-23 | 2016-09-06 | Microsoft Technology Licensing, Llc | Techniques for electronic aggregation of information |
US8768102B1 (en) | 2011-02-09 | 2014-07-01 | Lytro, Inc. | Downsampling light field images |
US9715485B2 (en) | 2011-03-28 | 2017-07-25 | Microsoft Technology Licensing, Llc | Techniques for electronic aggregation of information |
US8811769B1 (en) | 2012-02-28 | 2014-08-19 | Lytro, Inc. | Extended depth of field and variable center of perspective in light-field processing |
US9858649B2 (en) | 2015-09-30 | 2018-01-02 | Lytro, Inc. | Depth-based image blurring |
US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc. | Spatial random access enabled video system with a three-dimensional viewing volume |
US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US9979909B2 (en) | 2015-07-24 | 2018-05-22 | Lytro, Inc. | Automatic lens flare detection and correction for light-field images |
US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
WO2020198660A1 (en) | 2019-03-27 | 2020-10-01 | Digimarc Corporation | Artwork generated to convey digital messages, and methods/apparatuses for generating such artwork |
US11037038B2 (en) | 2019-03-27 | 2021-06-15 | Digimarc Corporation | Artwork generated to convey digital messages, and methods/apparatuses for generating such artwork |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0199573A2 (en) * | 1985-04-22 | 1986-10-29 | E.I. Du Pont De Nemours And Company | Electronic mosaic imaging process |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4202037A (en) * | 1977-04-22 | 1980-05-06 | Der Loos Hendrik Van | Computer microscope apparatus and method for superimposing an electronically-produced image from the computer memory upon the image in the microscope's field of view |
US4760385A (en) * | 1985-04-22 | 1988-07-26 | E. I. Du Pont De Nemours And Company | Electronic mosaic imaging process |
JPH0245734A (en) * | 1988-08-05 | 1990-02-15 | Mitsubishi Heavy Ind Ltd | Automatic structure analytic processor |
JPH07115534A (en) * | 1993-10-15 | 1995-05-02 | Minolta Co Ltd | Image reader |
US5465163A (en) * | 1991-03-18 | 1995-11-07 | Canon Kabushiki Kaisha | Image processing method and apparatus for processing oversized original images and for synthesizing multiple images |
US5521984A (en) * | 1993-06-10 | 1996-05-28 | Verification Technologies, Inc. | System for registration, identification and verification of items utilizing unique intrinsic features |
US5566877A (en) * | 1995-05-01 | 1996-10-22 | Motorola Inc. | Method for inspecting a semiconductor device |
JP3227478B2 (en) * | 1995-05-17 | 2001-11-12 | シャープ株式会社 | Still image pickup device |
US6549681B1 (en) * | 1995-09-26 | 2003-04-15 | Canon Kabushiki Kaisha | Image synthesization method |
US6075905A (en) * | 1996-07-17 | 2000-06-13 | Sarnoff Corporation | Method and apparatus for mosaic image construction |
US6137498A (en) * | 1997-01-02 | 2000-10-24 | Runaway Technology, Inc. | Digital composition of a mosaic image |
US5890120A (en) * | 1997-05-20 | 1999-03-30 | At&T Corp | Matching, synchronization, and superposition on orginal speaking subject images of modified signs from sign language database corresponding to recognized speech segments |
-
1998
- 1998-12-08 EP EP98310047A patent/EP1008956A1/en not_active Withdrawn
-
1999
- 1999-11-12 US US09/438,461 patent/US6687419B1/en not_active Expired - Fee Related
- 1999-12-06 JP JP11345920A patent/JP2000182034A/en not_active Withdrawn
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0199573A2 (en) * | 1985-04-22 | 1986-10-29 | E.I. Du Pont De Nemours And Company | Electronic mosaic imaging process |
Non-Patent Citations (2)
Title |
---|
KAPLAN L ET AL: "ACQUISITION AND MOSAICING OF LOW-CONTRAST BACKGROUND RADIOMETRIC IMAGES FOR SCENE SIMULATION", OPTICAL ENGINEERING, vol. 35, no. 9, September 1996 (1996-09-01), pages 2583 - 2591, XP000633951 * |
PANKAJ DANI ET AL: "AUTOMATED ASSEMBLING OF IMAGES: IMAGE MONTAGE PREPARATION", PATTERN RECOGNITION, vol. 28, no. 3, 1 March 1995 (1995-03-01), pages 431 - 445, XP000494937 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1368782A4 (en) * | 2001-01-16 | 2008-08-27 | Applied Precision Llc | Coordinate calibration for scanning systems |
EP1394739A1 (en) * | 2002-08-22 | 2004-03-03 | Christer Pihl Data Ab | Mosaicing from microscopic images of a specimen |
US7409106B2 (en) * | 2003-01-23 | 2008-08-05 | Seiko Epson Corporation | Image generating device, image generating method, and image generating program |
EP1538562A4 (en) * | 2003-04-17 | 2005-08-10 | Seiko Epson Corp | GENERATING A FIXED IMAGE FROM A PLURALITY OF FRAME IMAGES |
US7672538B2 (en) | 2003-04-17 | 2010-03-02 | Seiko Epson Corporation | Generation of still image from a plurality of frame images |
DE102009054704A1 (en) * | 2009-12-15 | 2011-06-16 | Carl Zeiss Imaging Solutions Gmbh | Microscope for recording mosaic image, has evaluation unit adjusted to determine recording time point based on image of partial region of specimen and during relative movement of specimen table and recording unit by specimen images |
US9224063B2 (en) | 2011-08-02 | 2015-12-29 | Viewsiq Inc. | Apparatus and method for digital microscopy imaging |
EP2871512B1 (en) * | 2012-07-04 | 2019-06-05 | Sony Corporation | Information processing device, information processing method, program, and microscope system |
US10955655B2 (en) | 2012-07-04 | 2021-03-23 | Sony Corporation | Stitching images based on presence of foreign matter |
EP3183612A4 (en) * | 2014-08-18 | 2018-06-27 | ViewsIQ Inc. | System and method for embedded images in large field-of-view microscopic scans |
Also Published As
Publication number | Publication date |
---|---|
US6687419B1 (en) | 2004-02-03 |
JP2000182034A (en) | 2000-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6687419B1 (en) | Automatic image montage system | |
US5453784A (en) | Imaging apparatus and method for determining range and determining focus information | |
EP0817495B1 (en) | Image subject extraction apparatus and method | |
US7990462B2 (en) | Simple method for calculating camera defocus from an image scene | |
EP0778543B1 (en) | A gradient based method for providing values for unknown pixels in a digital image | |
KR19990067567A (en) | Vector Correlation System for Automatic Positioning of Patterns in Images | |
US6181345B1 (en) | Method and apparatus for replacing target zones in a video sequence | |
US9781412B2 (en) | Calibration methods for thick lens model | |
KR20000023784A (en) | Method and apparatus for mosaic image construction | |
GB2532541A (en) | Depth map generation | |
JP2013021682A (en) | Apparatus and method for focus based depth reconstruction of dynamic scenes | |
DE112010005189T5 (en) | Depth from defocus calibration | |
CN105791801A (en) | Image Processing Apparatus, Image Pickup Apparatus, Image Processing Method | |
US6278796B1 (en) | Image processing system and method using subsampling with constraints such as time and uncertainty constraints | |
CN111630569B (en) | Binocular matching method, visual imaging device and device with storage function | |
GB2218507A (en) | Digital data processing | |
Cardillo et al. | 3-D position sensing using a passive monocular vision system | |
Subbarao et al. | Integration of defocus and focus analysis with stereo for 3D shape recovery | |
CN118393711A (en) | Real-time bright field focusing method for avoiding influence of impurities on surface of glass slide | |
KR102636767B1 (en) | Method for capturing and processing a digital panoramic image | |
JPH10170817A (en) | Method and device for calculating focal position, and electron microscope using the same | |
JPH09327037A (en) | Contour extracting method and method and device for extracting image | |
Tung et al. | Depth extraction from a single image and its application | |
JPH11248433A (en) | Apparatus and method for recognizing an object | |
GB2406992A (en) | Deconvolution of a digital image using metadata |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): CH DE FR GB IT LI NL SE |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
17P | Request for examination filed |
Effective date: 20001121 |
|
AKX | Designation fees paid |
Free format text: CH DE FR GB IT LI NL SE |
|
17Q | First examination report despatched |
Effective date: 20030617 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20050426 |