US20230099284A1 - System and method for prognosis management based on medical information of patient - Google Patents
- Publication number
- US20230099284A1 (Application No. US 17/501,041)
- Authority
- US
- United States
- Prior art keywords
- time
- image
- prognosis
- information
- patient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G16H30/40 — ICT specially adapted for processing medical images, e.g. editing
- G16H30/20 — ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
- A61B5/021 — Measuring pressure in heart or blood vessels
- A61B5/4842 — Monitoring progression or stage of a disease
- A61B5/4872 — Body fat (under A61B5/4869, determining body composition)
- A61B5/7275 — Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
- G06N20/00 — Machine learning
- G06N3/044 — Recurrent networks, e.g. Hopfield networks
- G06N3/0455 — Auto-encoder networks; encoder-decoder networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/0475 — Generative networks
- G06N3/094 — Adversarial learning
- G06T7/0012 — Biomedical image inspection
- G16H10/60 — ICT for patient-specific data, e.g. electronic patient records
- G16H50/20 — ICT for computer-aided diagnosis, e.g. based on medical expert systems
- G16H50/30 — ICT for calculating health indices; individual health risk assessment
- G16H50/50 — ICT for simulation or modelling of medical disorders
- G16H50/70 — ICT for mining of medical data, e.g. analysing previous cases of other patients
- G06T2207/30004 — Biomedical image processing (indexing scheme)
Definitions
- the present disclosure relates to medical data processing technology, and more particularly, to systems and methods for prognosis management based on medical information of a patient.
- volumetric (3D) imaging, such as volumetric CT
- target objects are usually detected manually by experienced medical personnel (such as radiologists), which makes the process tedious, time-consuming and error-prone.
- ICH intracerebral hemorrhage
- NCCT non-contrast computed tomography
- Intracranial hemorrhage is typically classified into one of five subtypes: intracerebral, subdural, epidural, intraventricular and subarachnoid.
- Hematoma enlargement (HE), namely the spontaneous enlargement of hematoma after onset of ICH, occurs in about one third of ICH patients and is an important risk factor for poor treatment outcomes.
- HE Hematoma enlargement
- Predicting the risk of HE by visual examination of head CT images and patient clinical history information is a challenging task for radiologists.
- Existing clinical practice cannot predict and assess the risk of ICH patients (for example risk of hematoma enlargement) in an accurate and prompt manner. Accordingly, there is also a lack of accurate and efficient risk management approach.
- the present disclosure provides a method and a device for prognosis management based on medical information of a patient, which may realize automatic prediction for progression condition of an object associated with the prognosis outcome using the existing medical information, and may generate prognosis image reflecting prognosis morphology of an object at the second time, so as to aid users (such as doctors and radiologists) in improving assessment accuracy and management efficiency of progression condition of an object, and assist users in making decisions.
- an embodiment according to the present disclosure provides a method for prognosis management based on medical information of a patient.
- the method may include receiving the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time.
- the method may further include predicting, by a processor, a progression condition of the object at a second time based on the medical information of the first time, where the progression condition is indicative of a prognosis risk, and the second time is after the first time.
- the method may also include generating, by the processor, a prognosis image at the second time reflecting the morphology of the object at the second time based on the medical information of the first time.
- the method may additionally include providing the progression condition of the object at the second time and the prognosis image at the second time to an information management system for presentation to a user.
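The four claimed steps — receive, predict, generate, provide — can be sketched as a minimal pipeline. This is an illustrative Python outline only, not part of the disclosure: `predict_progression` and `generate_prognosis_image` are stand-ins for the trained models described later, and every name and value is an assumption.

```python
from dataclasses import dataclass

@dataclass
class MedicalInfo:
    image: list          # medical image reflecting the object's morphology at the first time
    clinical_data: dict  # optional non-image clinical data

def predict_progression(info, interval_hours):
    # Stand-in for the trained prediction model; returns a prognosis risk in [0, 1].
    return 0.95 if info.clinical_data.get("ich_onset") else 0.10

def generate_prognosis_image(info, interval_hours):
    # Stand-in for the generative model that synthesizes the image at the second time.
    return info.image

def prognosis_management(info, interval_hours):
    """Receive -> predict -> generate -> provide, mirroring the claimed steps."""
    risk = predict_progression(info, interval_hours)
    prognosis_image = generate_prognosis_image(info, interval_hours)
    # "Providing to an information management system" is reduced to returning a record.
    return {"risk": risk, "prognosis_image": prognosis_image,
            "interval_hours": interval_hours}

record = prognosis_management(
    MedicalInfo(image=[[0.0]], clinical_data={"ich_onset": True}), 24)
```

In a real system, the returned record would be pushed to the information management system for presentation through the user interface.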
- an embodiment of the present disclosure provides a system for prognosis management based on medical information of a patient.
- the system may comprise an interface configured to receive the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time.
- the system may also comprise a processor configured to predict a progression condition of the object at a second time based on the medical information of the first time, wherein the progression condition is indicative of a prognosis risk, wherein the second time is after the first time.
- the processor may be further configured to generate a prognosis image at a second time reflecting the morphology of the object at the second time based on the medical information of the first time. Besides, the processor may also be configured to provide the progression condition of the object at the second time and the prognosis image at the second time for presentation to a user.
- an embodiment of the present disclosure provides a non-transitory computer-readable medium storing computer instructions thereon.
- the computer instructions when executed by the processor, may implement the method for prognosis management based on medical information of a patient according to any embodiment of the present disclosure.
- the method may include receiving the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time.
- the method may further include predicting, by a processor, a progression condition of the object at a second time based on the medical information of the first time, where the progression condition is indicative of a prognosis risk, and the second time is after the first time.
- the method may also include generating, by the processor, a prognosis image at the second time reflecting the morphology of the object at the second time based on the medical information of the first time.
- the method may additionally include providing the progression condition of the object at the second time and the prognosis image at the second time to an information management system for presentation to a user.
- the progression condition of an object associated with the prognosis outcome at a later time may be predicted automatically by using medical information of the patient at an earlier time, and a prognosis image reflecting the prognosis morphology of the object at the later time may be generated simultaneously.
- the progression condition and the prognosis image may be provided to an information management system and/or intuitively presented to the users (such as doctors and radiologists). Accordingly, assessment accuracy and management efficiency of progression condition of the object may be improved.
- FIG. 1 illustrates an exemplary flowchart of a method for prognosis management, according to an embodiment of the present disclosure.
- FIG. 2 illustrates an exemplary user interface, according to an embodiment of the present disclosure.
- FIG. 3 illustrates an exemplary framework for generating a prognosis image at a future time using a Generative Adversarial Network (GAN), according to an embodiment of the present disclosure.
- GAN Generative Adversarial Network
- FIG. 4 illustrates an exemplary framework for detection and segmentation of HE, according to an embodiment of the present disclosure.
- FIG. 5 illustrates an exemplary framework for training of GAN, according to an embodiment of the present disclosure.
- FIG. 6 illustrates an exemplary framework of a generator of GAN, according to an embodiment of the present disclosure.
- FIG. 7 illustrates an exemplary framework of a discriminator of GAN, according to an embodiment of the present disclosure.
- FIG. 8 illustrates a block diagram of a prognosis management device, according to an embodiment of the present disclosure.
- the embodiments of the present disclosure provide systems and methods for prognosis management based on the medical information of the patient.
- the method of prognosis management of the present disclosure may acquire, by a processor, medical information including at least medical image(s) of the patient at a first time.
- the medical information of a patient at the first time may be input through a user interface, or may be read from a database, for example, acquired from a local distribution center, or loaded based on a directory of a database.
- the source from which the medical information of the patient at the first time is acquired is not specifically limited.
- Various types of medical information of patients may be utilized, which may include e.g., medical (such as chest X-ray, MRI, ultrasound, etc.) images, medical inspection reports, test results, medical advice, etc.
- the types of medical information of patients are not specifically limited herein.
- the medical image may be medical images in DICOM-format, such as CT images, or medical images in other formats, which are not limited specifically.
- the progression condition of the object at the second time associated with progression outcome may be predicted by a processor based on the acquired medical information, where the second time is temporally after the first time.
- the medical information of the patient at the current time is used to predict the progression condition of the object at a certain time in the future, thus facilitating the prognosis management for the patient. More details of the prediction performed in step S103 are described in U.S. application Ser. No. 17/489,682, entitled “System and Method for Prognosis Management Based on Medical Information of Patient,” filed Sep. 29, 2021, the content of which is hereby incorporated by reference in its entirety.
- a prognosis image at a second time which reflects prognosis morphology of the object at the second time, may be generated by the processor based on the acquired medical information and a time interval between the first time and the second time.
- the progression condition at the second time and the prognosis image at the second time may be provided by the processor to an information management system.
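As an intuition for how the time interval between the first and second times conditions the image synthesis, the following toy sketch scales image intensity by a saturating growth factor. It is a numeric placeholder only: the disclosure's actual generator is the GAN of FIGS. 3 and 5-7, and `growth_rate` and the decay constant are invented parameters.

```python
import math

def generate_prognosis_image(image, interval_hours, growth_rate=0.02):
    """Toy stand-in for the conditional generator: the time interval between the
    first and second times conditions the synthesis, here as a simple saturating
    intensity-growth factor. A real implementation is a trained GAN generator."""
    factor = 1.0 + growth_rate * (1.0 - math.exp(-interval_hours / 24.0))
    # Clamp intensities to [0, 1] so the synthesized image stays in range.
    return [[min(1.0, v * factor) for v in row] for row in image]

img = [[0.2, 0.5], [0.5, 0.8]]       # hypothetical normalized image patch
later = generate_prognosis_image(img, 24)
```

With an interval of zero the factor is 1.0 and the image is returned unchanged, matching the expectation that the prognosis image at the first time is the input image itself.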
- the information management system may be a centralized system that stores and manages patient medical information.
- the information management system may store the multi-modality images of a patient, non-image clinical data of the patient, as well as the prognosis prediction results and simulated prognosis images of the patient.
- the information management system may be accessed by a user to monitor the patient's progression condition.
- the information management system may present the prediction results via a user interface.
- the object may be a site or body of a lesion in the medical image(s); for example, the object instance may be a nodule, a tumor, or any other lesion or medical condition that may be captured by a medical image. Accordingly, if a patient has nodules, the predicted progression condition of the object in this embodiment may also be the future progression condition of the patient's nodules. Besides, the object may also be the patient who has the nodules or tumors.
- the medical information of the patient at the current time may be used to perform prediction of the progression condition of the object in the future, and to simulate and generate (synthesize) the prognosis image reflecting the prognosis morphology of the object at the future time.
- the method for prognosis management of the disclosure may improve the diagnosis. Furthermore, by intuitively presenting the progression condition of the object at the second time together with (in combination with) the prognosis image at the second time, sophisticated information may be provided to users for more informative diagnosis decisions.
- the medical information of the patient at the first time includes medical images of the patient at the first time.
- the medical image may be medical images in DICOM-format, such as CT images, or medical images in other modalities, without limitation.
- the medical information may further include non-image clinical data.
- the medical information may also include non-image clinical data. That is, the prediction may be performed based on the combination of medical images and non-image clinical data, to obtain the progression condition of the object at the second time associated with the prognosis outcome.
- the non-image clinical data may be, for example, clinical data, clinical reports, or other data that does not contain medical images.
- the non-image clinical data may be acquired from various types of data sources according to clinical use.
- the non-image clinical data may be acquired from structured clinical data, such as clinical feature items, or narrative clinical reports, or a combination of both.
- when a narrative and unstructured clinical report is provided, it may be converted into structured clinical information items by automated processing methods, such as natural language processing (NLP), according to the required format of the clinical data, to obtain the non-image clinical data.
- NLP natural language processing
- various types of data such as narrative and unstructured clinical reports, etc., may be converted and unified into non-image clinical data which can be processed by a processor, thus reducing the complexity of data processing by the processor.
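The conversion of a narrative report into structured clinical items can be illustrated with a trivial rule-based stand-in. A real system would use NLP as the disclosure suggests; every regex pattern, field name, and the sample report below are assumptions made for illustration only.

```python
import re

def structure_report(report: str) -> dict:
    # Trivial rule-based stand-in for the NLP step: pull a few clinical
    # feature items out of a narrative report into a structured form.
    items = {}
    m = re.search(r"(\d+)[- ]year[- ]old", report)
    if m:
        items["age"] = int(m.group(1))
    if re.search(r"\bfemale\b", report, re.I):
        items["gender"] = "female"
    elif re.search(r"\bmale\b", report, re.I):
        items["gender"] = "male"
    items["smoking_history"] = bool(re.search(r"smok", report, re.I))
    items["diabetes_history"] = bool(re.search(r"diabet", report, re.I))
    return items

data = structure_report("A 36-year-old male smoker with a history of diabetes "
                        "presented 23 hours after onset.")
```

Unifying such heterogeneous inputs into one structured form is what lets a single downstream processor consume both report-derived and form-derived clinical data.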
- the method for prognosis management may provide the progression condition of the object at the second time and the prognosis image at the second time to the information management system, which may be accessible by users.
- the time interval between the first time and the second time may also be presented by the processor along with at least one of the corresponding progression condition of the object at the second time and the corresponding prognosis image at the second time.
- the time interval of 26 hours, the progression condition of the object, and the prognosis image at the second time (for example, after 26 hours) may be presented in an associated manner in the corresponding areas of the user interface.
- the specific second time may be the time that the doctor needs to monitor or observe a certain condition and the time interval can be set accordingly as the difference between the second time and the first time, such as 24 hours, 48 hours or 72 hours, and the like.
- the time interval of 26 hours is illustrated. It is contemplated that other time intervals can be used depending on the observation needs for the prognostic management.
- the user can adjust the time interval, and the processor may adjust the second time accordingly.
- the progression condition of the object and the prognosis image at the adjusted second time may be predicted and provided to the information management system for presentation to the user.
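The adjust-and-recompute behaviour described above can be sketched as a simple callback: when the user changes the interval, the second time shifts with it and both outputs are recomputed. The lambdas below stand in for the trained prediction and generation models; all names and numbers are illustrative, not from the disclosure.

```python
def on_interval_changed(info, new_interval_hours, predict, generate):
    """Recompute the progression condition and prognosis image for the
    adjusted second time and return them for re-presentation to the user."""
    return {
        "interval_hours": new_interval_hours,
        "risk": predict(info, new_interval_hours),
        "prognosis_image": generate(info, new_interval_hours),
    }

# Illustrative stand-ins for the trained models; 24/48/72 h mirror the
# observation intervals mentioned in the text.
results = [on_interval_changed({"id": "John Smith"}, h,
                               predict=lambda _i, h: min(1.0, 0.03 * h),
                               generate=lambda _i, h: f"image@{h}h")
           for h in (24, 48, 72)]
```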
- the second time may be an arbitrary future time.
- the expansion risk of the hematoma at an arbitrary future time may be predicted.
- the enlargement risk of the hematoma in the future is an important reference index for the diagnosis of intracerebral hemorrhage (ICH), which can provide sufficient guidance for the decision of the doctor.
- a prognosis management report may be output (or printed), or the information on prognosis management may be transmitted through a short message or email, etc. to the user.
- the outcome of the prognosis management may also be presented to the user, e.g., by the information management system, through a user interface.
- the medical image of the patient reflecting the morphology of the object at the first time may be presented in one part of a user interface to the user. As shown in FIG. 2, the user interface may include five parts (parts 201-205), each of which may be separated by dividing lines.
- the medical image of the patient reflecting the morphology of the object at the first time may be presented to the user.
- brain images in DICOM-format may include both sectional images and a 3D image of the patient (John Smith) at the same time reflecting the morphology of the object at the first time, where the first time is 23 hours ago as indicated in the fourth part 204.
- the first part 201 may present the details of each hematoma instance.
- volume, subtype and location of each object instance may be presented associated with the medical image of the patient at the first time in the first part of a user interface.
- three numbered hematoma instances, hematoma 1, hematoma 3 and hematoma 4, are included in FIG. 2. Therefore, the hematoma information at the first time may be presented, such as the volume, subtype and location of hematoma 1, hematoma 3 and hematoma 4, respectively.
- by presenting the visual and textual information of each hematoma instance, users may be assisted in intuitively determining the priority of treatment for each hematoma. For example, doctors and hospitals may focus resources on one or more vital hematomas while deferring the treatment of hematomas in non-vital parts, thus improving the efficiency of using medical resources.
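The per-instance details shown in the first part of the UI can be modelled as a small record type. This is a sketch only: the field names, the rendering format, the volume-ordered sort (to hint at treatment priority), and all the sample volumes and locations are assumptions; only the subtype names come from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class HematomaInstance:
    number: int
    volume_ml: float
    subtype: str     # one of the five subtypes listed in the disclosure
    location: str

def first_part_lines(instances):
    # Render per-instance details as in the first part (201) of the UI,
    # largest hematoma first to hint at treatment priority.
    ordered = sorted(instances, key=lambda h: h.volume_ml, reverse=True)
    return [f"Hematoma {h.number}: {h.volume_ml:.1f} mL, {h.subtype}, {h.location}"
            for h in ordered]

lines = first_part_lines([
    HematomaInstance(1, 12.4, "intracerebral", "left basal ganglia"),
    HematomaInstance(3, 3.1, "intraventricular", "right lateral ventricle"),
    HematomaInstance(4, 7.8, "subarachnoid", "right frontal region"),
])
```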
- the non-image clinical data of the patient associated with the progression of the object at the first time may be presented to the user in a second part of the user interface.
- the non-image clinical data of the patient, John Smith, is presented in the second part 202, which may include the data associated with the progression of the object.
- the content presented in the second part 202 may be the data associated with the progression of the nodule, such as age, gender, genetic history, etc.
- the content presented in the second part 202 may be the data associated with the progression of the tumor, such as age, gender, smoking history, etc.
- the non-image clinical data of the patient associated with the progression of the object may include gender, age, time period from onset to first inspection, BMI, diabetes history, smoking history, drinking history, blood pressure and history of cardiovascular disease of the patient.
- the non-image clinical data can be presented in the third part 203: for example, John Smith, male, 36 years old, 23 hours from onset to first inspection, with a history of diabetes, smoking and drinking, normal blood pressure, no hypertension, and hyperlipidemia.
- the drugs the patient is currently taking may be presented, to further assist the doctor in making decisions. Labels or links may also be provided to present more non-image clinical data of the patient in response to the click operation of the user.
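The non-image clinical feature items enumerated above can be grouped into one record. The field names mirror the items listed in the disclosure, but the types, defaults, and the sample values for John Smith are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClinicalData:
    """Non-image clinical feature items associated with object progression."""
    gender: str
    age: int
    hours_onset_to_first_inspection: float
    bmi: Optional[float] = None
    diabetes_history: bool = False
    smoking_history: bool = False
    drinking_history: bool = False
    blood_pressure: Optional[str] = None
    cardiovascular_disease_history: bool = False

# Sample record matching the FIG. 2 example; values are illustrative.
john = ClinicalData(gender="male", age=36, hours_onset_to_first_inspection=23,
                    diabetes_history=True, smoking_history=True,
                    drinking_history=True, blood_pressure="normal")
```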
- the progression condition may include the enlargement risk of each hematoma instance or of the patient, and the first time is after the onset of intracerebral hemorrhage. That is, when the object is a hematoma, the progression condition of the object may include the enlargement risk of a certain hematoma or of the patient.
- HE, namely the spontaneous enlargement of hematoma after onset of ICH, occurs in about one third of ICH patients and is an important risk factor for poor treatment outcomes.
- the primary concern of the doctor is whether the intracerebral hemorrhage occurred, thus the first time may be after onset of intracerebral hemorrhage, when doctors may deem it helpful to observe hematoma enlargement, such that the diagnostic needs of doctors may be better met.
- the enlargement risks of three hematomas including hematoma 2 , hematoma 3 and hematoma 5 after 23 hours are presented in the fifth part 205 .
- a predetermined threshold may be set for the corresponding risk, and when the predicted enlargement risk is larger than the predetermined threshold, the level of the risk may be further presented.
- hematoma 2 and hematoma 3 may be hematomas with high enlargement risks, and thus may be labeled as high risk; hematoma 5 may be a hematoma with low enlargement risk, and accordingly may be labeled as low risk.
- the specific value of predicted enlargement risks may be presented, and at least one preset threshold may be set to sort the enlargement risk. For example, when the predicted enlargement risk value exceeds the preset threshold, it may be considered as high risk, and when it is below the threshold, it may be considered as low risk.
- a medium risk value range can also be set, which is not specifically limited herein.
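The threshold-based sorting of enlargement risks described above can be sketched as follows. The specific threshold values (0.7 and 0.4) are illustrative assumptions for demonstration, not values taken from the disclosure.

```python
# Hedged sketch: sorting a predicted enlargement risk into levels using
# preset thresholds. The threshold values here are assumptions.
def risk_level(predicted_risk, high_threshold=0.7, medium_threshold=0.4):
    """Map a predicted enlargement risk in [0, 1] to a risk level label."""
    if predicted_risk >= high_threshold:
        return "high risk"
    if predicted_risk >= medium_threshold:
        return "medium risk"
    return "low risk"

# For example, a 95% predicted enlargement risk would be labeled high risk.
print(risk_level(0.95))  # high risk
print(risk_level(0.10))  # low risk
```

A medium band is included here only to show how an optional middle range could be layered between the two thresholds.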
- the enlargement risk of the hematoma or the risk level of the patient may be presented; meanwhile, the corresponding risk level may be given.
- the risk of enlargement of hematoma of the patient is 95% (high risk).
- the predicted enlargement risk itself may also be a numerical range, such as 85%-95%, which is not specifically limited herein.
- the method of prognosis management of the present disclosure provides an efficient risk management scheme for this pain point, which can effectively assist doctors in the prognosis management of patients.
- the prognosis image of the patient at the second time may be presented in the fourth part 204 of the user interface.
- FIG. 2 shows that the user expects to predict the progression condition of the object after 23 hours, thus the prognosis image of the object after 23 hours may be presented through the fourth part 204 .
- the image displayed in the fourth part 204 may be of a corresponding type as the image presented in the first part 201 . For example, if a sectional image and a 3D image are simultaneously presented in the first part 201 in FIG. 2 , then the corresponding simulated sectional image and 3D image at the second time may be presented in the fourth part 204 , so that the user can perform a side-by-side comparison.
- the presented prognostic image reflecting the prognostic morphology at the second time may be presented as a two-dimensional sectional image, a 3D image, or a combination of a two-dimensional image section and a 3D image.
- image operations such as scaling, rotation and generation of a local image may be performed according to the operation instructions of the user.
- the presented medical images and prognostic images may include a coronal plane image, a sagittal plane image, an axial plane image and a 3D image.
- the coronal plane image, the sagittal plane image and the axial plane image are representative sections.
- the 3D image may be presented, and the operations such as rescaling, extraction of local sections, etc. may be performed according to instructions of the user, so that the doctor can access sectional images of other regions of interest.
- the progression condition of the object may include one or more of the following: enlargement risk of an object instance or the patient, deterioration risk of an object instance or the patient, expansion risk of an object instance or the patient, metastasis risk of an object instance or the patient, recurrence risk of an object instance or the patient, location of an object instance, volume of an object instance, and subtype of an object instance.
- An object instance may be an occurrence of the target object of the patient, such as a hematoma instance.
- the enlargement risk of each hematoma may be presented individually (e.g., the enlargement risks for hematomas 2 , 3 , and 5 are shown separately) and/or in a collective manner (e.g., a collective hematoma enlargement risk for the patient is also shown) in the fifth part 205 .
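As one possible way to derive a collective patient-level risk from the per-hematoma risks shown in the fifth part 205, the sketch below assumes the instances enlarge independently. The disclosure does not specify the aggregation rule, so this formula is purely illustrative.

```python
# Illustrative aggregation of per-hematoma enlargement risks into one
# patient-level risk, assuming independence between hematoma instances.
# This aggregation rule is an assumption, not taken from the disclosure.
def collective_risk(instance_risks):
    """Probability that at least one hematoma enlarges."""
    prob_none = 1.0
    for r in instance_risks:
        prob_none *= (1.0 - r)
    return 1.0 - prob_none

# e.g., hematomas with 80%, 70% and 10% individual enlargement risks
print(round(collective_risk([0.8, 0.7, 0.1]), 3))  # 0.946
```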
- other patient information such as name, hospital, and information related to the first time can also be presented in the first part 201 .
- the first time point was 23 hours ago.
- several buttons may be provided, which the users may click to perform operations such as selecting other time points, comparing among multiple time points, or selecting other patients.
- the method of prognosis management may predict the progression condition of the object at the second time associated with the prognosis outcome.
- the specific prediction process may be implemented in combination with deep learning network.
- the prognosis image at the second time may be generated based on the acquired medical information and the time interval by performing the following steps: generating the prognosis image at the second time using a Generative Adversarial Network (GAN) based on the acquired medical information and the time interval. That is, in the prediction stage, a GAN generator may be used to generate the prognosis image.
- the simulated head image at the second time may be generated by GAN, to provide the doctors with a more intuitive manner to assess the potential risk in the future for the ICH patient.
- FIG. 3 illustrates an exemplary framework for generating a prognosis image at a future time using GAN, according to the embodiment of the present disclosure.
- the GAN may include a generator module 300 and a discriminator module 500 .
- the prognosis image at the second time may be generated using the GAN based on the acquired medical information and the time interval by performing the following steps: first, acquiring detection and segmentation information of the object corresponding to the medical image at the first time; and then, fusing the medical image at the first time and the corresponding detection and segmentation information of the object, to obtain a first fused information.
- Take hematoma as an example of the object. As shown in FIG. 3 , the fusion may be performed based on the detection and segmentation information of the hematoma instances and the initial head CT image.
- the detection and segmentation of the hematoma may be implemented by a mask RCNN such as a multi-task encoder-decoder network, which may be used to perform voxel-level classification tasks and regression tasks.
- the mask RCNN may include a first encoder 401 and a first decoder 402 , as shown in FIG. 4 .
- the head image data of the hematoma patient may be input into the first encoder 401 of the mask RCNN, and then the output of the first encoder 401 may be used as the input of the first decoder 402 to obtain the detection and segmentation information of each hematoma instance.
- the detection and segmentation information may include the center point, size, subtype, bleed position and volume associated with the hematoma.
- the obtained detection and segmentation information of the hematoma may be fused with the initial head CT image to obtain the initial first fused information.
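A minimal sketch of the fusion step above, assuming channel-wise concatenation of the CT volume with its segmentation mask; the disclosure does not fix the fusion operator, so the operator and the toy volume sizes are assumptions.

```python
# Hedged sketch of fusing the initial head CT image with the detection and
# segmentation information. Channel-wise stacking is assumed here; the
# disclosure does not specify the fusion operator.
import numpy as np

def fuse(ct_volume, segmentation_mask):
    """Stack the image and its segmentation as channels of one tensor."""
    assert ct_volume.shape == segmentation_mask.shape
    return np.stack([ct_volume, segmentation_mask], axis=0)

ct = np.zeros((4, 8, 8))    # toy CT volume (depth, height, width)
mask = np.ones((4, 8, 8))   # toy per-voxel hematoma mask
fused = fuse(ct, mask)
print(fused.shape)  # (2, 4, 8, 8): two channels, image + segmentation
```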
- the prognosis image at the second time may be generated using the trained generator module 300 based on the first fused information and the time interval between the first time and the second time.
- the GAN may be trained based on the training data through the following steps.
- a training set may be constructed for the GAN, and the training set may include a plurality of training data.
- Each training data item may include medical image(s) at a third time and detection and segmentation information of the object at the third time, a sample time interval between the third time and a fourth time after the third time, and medical image(s) at the fourth time and detection and segmentation information of the object at the fourth time.
- the medical image at the third time and detection and segmentation information of the object at the third time may be determined firstly, and the first fused information may be determined based on the medical image at the third time and detection and segmentation information at the third time.
- the mask RCNN may be adopted for detection and segmentation, which is not described in detail herein.
- a synthetic fused information at the fourth time may be determined using the generator module 300 based on the first fused information and the time interval between the third time and the fourth time after the third time. Then, a second fused information may be determined based on the medical image at the fourth time and detection and segmentation information of the object at the fourth time.
- a synthetic information pair may be formed based on the first fused information and the synthetic fused information at the fourth time, and a real information pair may be formed based on the first fused information and the second fused information.
- the synthetic information pair and the real information pair may be discriminated using the discriminator module 500 , and then the model parameters to be trained of the generator module 300 may be adjusted based on the outcome of the discriminator module 500 .
- the generated synthetic information pair and the real information pair may be used as the input of the discriminator module 500 of the GAN.
- the discriminator module 500 is configured to discriminate between the real information pair and the synthetic information pair.
- the discriminator module 500 and the generator module 300 hold an adversarial relationship.
- the generator module 300 may expect to generate images that look real, for outputting as the prognostic image at the second time.
- the discriminator module 500 may be configured to distinguish between real information pairs and synthetic information pairs. Both of the two modules may be trained in an iterative manner. Unlike a non-task-specific GAN, any image generated by the generator module 300 will pass through the discriminator module 500 .
- the trained framework may generate a prognostic image that is more realistic in clinic sense.
- the method of prognosis management of the present disclosure also incorporates the segmentation information, thus ensuring that the GAN may focus on the region of the lesion.
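The adversarial training step described above can be sketched with the generator and discriminator reduced to plain callables. Real modules would be encoder-decoder CNNs updated by backpropagation, so this is schematic only and all names are illustrative.

```python
# Hedged sketch of one adversarial training step: the generator produces a
# synthetic fused information at the later time, and the discriminator
# scores the real and synthetic information pairs.
def train_step(generator, discriminator, first_fused, real_fused_later, interval):
    synthetic_fused = generator(first_fused, interval)  # synthetic fused info at the fourth time
    real_pair = (first_fused, real_fused_later)         # real information pair
    synthetic_pair = (first_fused, synthetic_fused)     # synthetic information pair
    d_real = discriminator(real_pair)                   # ideally close to 1 (real)
    d_synthetic = discriminator(synthetic_pair)         # ideally close to 0 (synthetic)
    return d_real, d_synthetic

# Toy usage with scalar "images": the generator simply advances the value.
gen = lambda x, t: x + t
disc = lambda pair: 1.0 if pair[1] == pair[0] + 1.0 else 0.0
print(train_step(gen, disc, 1.0, 2.0, 1.0))  # (1.0, 1.0): the synthetic pair fooled the discriminator
```

In the real framework, the discriminator outcomes would drive parameter updates for both modules, iterating until the generator's outputs look clinically realistic.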
- the generator module 300 may be implemented by any general-purpose encoder-decoder CNN. As shown in FIG. 6 , the generator module 300 may include a second encoder 601 and a second decoder 602 . The dimension of the input and output features of generator module 300 may be the same as that of the initial head CT image. In the last layer of the second encoder 601 , the encoded features may be flattened into the form of a one-dimensional feature vector, so that the non-image information may be attached to the encoded image features as an additional channel. The specific non-image information may include, for example, clinical information and scanning interval, and the like. Then, the encoded features may be decoded to data in the dimension of the initial image with the second decoder 602 .
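The attachment of non-image information to the flattened encoded features, as described above, can be sketched as follows; the feature-map shape and the clinical values are toy assumptions.

```python
# Hedged sketch: flatten the encoded features into a one-dimensional vector
# and append the non-image information (clinical data and scan interval)
# as additional entries, as described for the generator's encoder.
import numpy as np

def attach_non_image(encoded_features, clinical_vector, scan_interval_hours):
    """Flatten encoded features and append non-image data."""
    flat = encoded_features.reshape(-1)                      # one-dimensional feature vector
    extra = np.append(clinical_vector, scan_interval_hours)  # clinical info + interval
    return np.concatenate([flat, extra])

features = np.zeros((2, 3, 3))    # toy encoded feature map (18 values)
clinical = np.array([36.0, 1.0])  # e.g., age and a gender code (assumed encoding)
combined = attach_non_image(features, clinical, 23.0)
print(combined.shape)  # (21,): 18 flattened features + 3 non-image entries
```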
- the discriminator module 500 may be implemented using a CNN framework with a multi-layer perceptron (MLP) to discriminate whether the input is real/authentic information or synthetic information, and may output a binary result to indicate that.
- the generator-discriminator pair is intended to minimize the joint loss.
- An example of a loss function is provided as following Equation (1):
- x′ and x represent synthetic data and real data respectively.
- D may represent the loss of the discriminator module.
- G may represent the loss of the generator module.
- the specific loss function may take various forms, including but not limited to minimax loss, binary cross entropy loss or any form of distance distribution loss. The above loss function is only an example, and other forms of loss functions may also be used by the training process.
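As one illustrative concrete form the loss functions above may take, a binary cross entropy version can be sketched as follows. This is only one of the permitted choices (minimax, BCE, or distance losses) and is not necessarily the form of Equation (1).

```python
# Hedged sketch of a binary cross entropy GAN loss: real pairs are pushed
# toward a score of 1 and synthetic pairs toward 0, while the generator is
# rewarded when synthetic pairs fool the discriminator.
import math

def bce(prediction, target, eps=1e-7):
    """Binary cross entropy for a single discriminator output."""
    p = min(max(prediction, eps), 1.0 - eps)  # clip to avoid log(0)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def discriminator_loss(d_real, d_synthetic):
    # Real pairs should score 1, synthetic pairs 0.
    return bce(d_real, 1.0) + bce(d_synthetic, 0.0)

def generator_loss(d_synthetic):
    # The generator minimizes its loss by making synthetic pairs score 1.
    return bce(d_synthetic, 1.0)
```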
- FIG. 7 shows an exemplary framework of the discriminator module 500 , which may include a third encoder 701 and a full connection layer 702 .
- the real information pair and the synthetic information pair can be used as the input of the third encoder 701 , and whether the result is either real or synthetic may be discriminated by the full connection layer 702 .
- the prediction may be performed by only applying the generator module 300 , and the discriminator module 500 may be an auxiliary module that provides supervision only in the training stage.
- the possible progression of the hematoma morphology at the second time may be generated by the trained generator module 300 based on the time interval between the initial scan and the subsequent scan, including generating the prognosis image at the second time, and further simulating the prognosis morphology of the object at the second time.
- Users such as radiologists may input the duration between the initial scan and the subsequent scan, the non-image data, etc. through the user interface (UI).
- the method of prognosis management of the present disclosure may perform prediction through the prediction model based on the available medical information of the patient, and may generate a prognosis image at the second time reflecting the prognosis morphology of the object at the second time, thus providing effective assistance to doctors for diagnosis in a very intuitive manner. Furthermore, by using a specially designed GAN, the generated image of prognostic morphology may be more realistic in clinic, thus assisting the doctors to improve their diagnosis.
- the embodiment of the present disclosure also may provide a device for prognosis management based on the medical information of the patient.
- the device may include a processor 801 , a memory 802 and a communication bus.
- the communication bus may be used to realize the connection and communication between the processor 801 and the memory 802 .
- the processor 801 may be a processing device including one or more general-purpose processing devices such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), and the like.
- the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets.
- the processor can also be one or more dedicated processors specialized for specific processing, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a system on a chip (SoC), and the like.
- the prognosis management device 800 may further include an input/output 803 , which is also connected to the communication bus.
- the input/output 803 may be used for the processor 801 to acquire externally input medical information of the patient, and the input/output 803 may also be used to input the medical information of the patient into the memory 802 .
- a display unit 804 may also be connected to the communication bus, and the display unit 804 may be used to display the operating process of the prognosis management device and/or the output of the prediction result.
- the processor 801 may also be used to execute one or more computer programs stored in the memory 802 . For example, a prediction program may be stored in the memory 802 and executed by the processor 801 to perform the steps of the method for prognosis management based on medical information of patients according to various embodiments of the present disclosure.
- the embodiment of the present disclosure also may provide a system for prognosis management based on the medical information of the patient, wherein the system may include an interface, which may be configured to receive the medical information including medical image(s) acquired by medical imaging devices.
- the interface may be a hardware interface or an API interface of software, or the combination of both, which is not specifically limited herein.
- the system for prognosis management may include a processor, which may be configured to execute the method for prognosis management based on medical information of a patient according to any embodiment of the present disclosure.
- Embodiments of the present disclosure also may provide a non-transitory computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method for prognosis management based on medical information of a patient according to any embodiment of the present disclosure.
- a computer-readable medium may be a non-transitory computer-readable medium such as a read only memory (ROM), a random access memory (RAM), a phase change random access memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), an electrically erasable programmable read only memory (EEPROM), other types of random access memory (RAM), a flash disk or other forms of flash memory, a cache, a register, a static memory, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical memory, a cassette tape or other magnetic storage device, or any other possible non-transitory medium used to store information or instructions that can be accessed by computer devices, and the like.
Abstract
The disclosure relates to a method, a system, and a computer-readable medium for prognosis management based on medical information of a patient. The method may include receiving the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time. The method may further include predicting a progression condition of the object at a second time based on the medical information of the first time, where the progression condition is indicative of a prognosis risk, and the second time is after the first time. The method may also include generating a prognosis image at the second time reflecting the morphology of the object at the second time based on the medical information of the first time. The method may additionally include providing the progression condition of the object at the second time and the prognosis image at the second time to an information management system for presentation to a user.
Description
- This application is a continuation-in-part of U.S. application Ser. No. 17/489,682, entitled "System and Method for Prognosis Management Based on Medical Information of Patient," filed Sep. 29, 2021, the content of which is hereby incorporated by reference in its entirety.
- The present disclosure relates to medical data processing technology, and more particularly, to systems and methods for prognosis management based on medical information of patient.
- In the medical field, effective treatments rely on accurate diagnosis, and diagnosis accuracy usually depends on the quality of medical image analysis, especially the detection of target objects (such as organs, tissues, target sites, and the like). Compared with conventional two-dimensional imaging, volumetric (3D) imaging, such as volumetric CT, may capture more valuable medical information, thus contributing to more accurate diagnosis. Conventionally, target objects are usually detected manually by experienced medical personnel (such as radiologists), which makes the process tedious, time-consuming and error-prone.
- One such exemplary medical condition that needs to be accurately detected is intracerebral hemorrhage (ICH). ICH is a critical and life-threatening disease and leads to millions of deaths globally per year. The condition is typically diagnosed using non-contrast computed tomography (NCCT). Intracerebral hemorrhage is typically classified into one of the five subtypes: intracerebral, subdural, epidural, intraventricular and subarachnoid. Hematoma enlargement (HE), namely the spontaneous enlargement of hematoma after onset of ICH, occurs in about one third of ICH patients and is an important risk factor for poor treatment outcomes. Predicting the risk of HE by visual examination of head CT images and patient clinical history information is a challenging task for radiologists. Existing clinical practice cannot predict and assess the risk of ICH patients (for example, the risk of hematoma enlargement) in an accurate and prompt manner. Accordingly, there is also a lack of an accurate and efficient risk management approach.
- The present disclosure provides a method and a device for prognosis management based on medical information of a patient, which may automatically predict the progression condition of an object associated with the prognosis outcome using the existing medical information, and may generate a prognosis image reflecting the prognosis morphology of the object at the second time, so as to aid users (such as doctors and radiologists) in improving the assessment accuracy and management efficiency of the progression condition of the object, and assist users in making decisions.
- In a first aspect, an embodiment according to the present disclosure provides a method for prognosis management based on medical information of a patient. The method may include receiving the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time. The method may further include predicting, by a processor, a progression condition of the object at a second time based on the medical information of the first time, where the progression condition is indicative of a prognosis risk, and the second time is after the first time. The method may also include generating, by the processor, a prognosis image at the second time reflecting the morphology of the object at the second time based on the medical information of the first time. Besides, the method may additionally include providing the progression condition of the object at the second time and the prognosis image at the second time to an information management system for presentation to a user.
- In a second aspect, an embodiment of the present disclosure provides a system for prognosis management based on medical information of a patient. The system may comprise an interface configured to receive the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time. The system may also comprise a processor configured to predict a progression condition of the object at a second time based on the medical information of the first time, wherein the progression condition is indicative of a prognosis risk, wherein the second time is after the first time. The processor may be further configured to generate a prognosis image at the second time reflecting the morphology of the object at the second time based on the medical information of the first time. Besides, the processor may also be configured to provide the progression condition of the object at the second time and the prognosis image at the second time for presentation to a user.
- In a third aspect, an embodiment of the present disclosure provides a non-transitory computer-readable medium storing computer instructions thereon. The computer instructions, when executed by the processor, may implement the method for prognosis management based on medical information of a patient according to any embodiment of the present disclosure. The method may include receiving the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time. The method may further include predicting, by a processor, a progression condition of the object at a second time based on the medical information of the first time, where the progression condition is indicative of a prognosis risk, and the second time is after the first time. The method may also include generating, by the processor, a prognosis image at the second time reflecting the morphology of the object at the second time based on the medical information of the first time. Besides, the method may additionally include providing the progression condition of the object at the second time and the prognosis image at the second time to an information management system for presentation to a user.
- With the systems and methods for prognosis management according to embodiments of the present disclosure, the progression condition of an object associated with the prognosis outcome at a later time may be predicted automatically by using medical information of the patient at an earlier time, and a prognosis image reflecting the prognosis morphology of the object at the later time may be generated simultaneously. The progression condition and the prognosis image may be provided to an information management system and/or intuitively presented to the users (such as doctors and radiologists). Accordingly, the assessment accuracy and management efficiency of the progression condition of the object may be improved.
- In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having letter suffixes or different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments, and together with the description and claims, serve to explain the disclosed embodiments. Such embodiments are demonstrative and not intended to be exhaustive or exclusive embodiments of the present method or device.
FIG. 1 illustrates an exemplary flowchart of a method for prognosis management, according to an embodiment of the present disclosure. -
FIG. 2 illustrates an exemplary user interface, according to an embodiment of the present disclosure. -
FIG. 3 illustrates an exemplary framework for generating a prognosis image at a future time using a Generative Adversarial Network (GAN), according to an embodiment of the present disclosure. -
FIG. 4 illustrates an exemplary framework for detection and segmentation of HE, according to an embodiment of the present disclosure. -
FIG. 5 illustrates an exemplary framework for training of GAN, according to an embodiment of the present disclosure. -
FIG. 6 illustrates an exemplary framework of a generator of GAN, according to an embodiment of the present disclosure. -
FIG. 7 illustrates an exemplary framework of a discriminator of GAN, according to an embodiment of the present disclosure. -
FIG. 8 illustrates a block diagram of a prognosis management device, according to an embodiment of the present disclosure. - The disclosure will be described in detail with reference to the drawings and specific embodiments.
- As used in this disclosure, words like "first", "second" do not indicate any particular order, quantity or importance, but are only used to distinguish one element from another.
- To predict and assess the risk of ICH patients in an accurate and prompt manner in clinical practice, the embodiments of the present disclosure provide systems and methods for prognosis management based on the medical information of the patient. As shown in
FIG. 1 , in step S101, the method of prognosis management of the present disclosure may acquire, by a processor, medical information including at least medical image(s) of the patient at a first time. For example, the medical information of a patient at the first time may be input through a user interface, or may be read from a database, for example, acquired from a local distribution center, or loaded based on a directory of a database. The source from which the medical information of the patient at the first time may be selected is not specifically limited. Various types of medical information of patients may be utilized, which may include, e.g., medical images (such as chest X-ray, MRI, ultrasound, etc.), medical inspection reports, test results, medical advice, etc. The types of medical information of patients are not specifically limited herein. The medical image may be medical images in DICOM format, such as CT images, or medical images in other formats, which are not limited specifically. - Next, in step S102, the progression condition of the object at the second time associated with the prognosis outcome may be predicted by a processor based on the acquired medical information, where the second time is temporally after the first time. Unlike using the medical information at the current time to perform prediction of the object at the current time, the medical information of the patient at the current time is used to predict the progression condition of the object at a certain time in the future, thus facilitating the prognosis management for the patient. More details of the prediction performed by step S102 are described in U.S. application Ser. No. 17/489,682, entitled "System and Method for Prognosis Management Based on Medical Information of Patient," filed Sep. 29, 2021, the content of which is hereby incorporated by reference in its entirety.
- Subsequently, in step S103, a prognosis image at a second time, which reflects prognosis morphology of the object at the second time, may be generated by the processor based on the acquired medical information and a time interval between the first time and the second time. Then, in step S104, the progression condition at the second time and the prognosis image at the second time may be provided by the processor to an information management system. In some embodiments, the information management system may be a centralized system that stores and manages patient medical information. For example, the information management system may store the multi-modality images of a patient, non-image clinical data of the patient, as well as the prognosis prediction results and simulated prognosis images of the patient. The information management system may be accessed by a user to monitor the patient's progression condition. In some embodiments, the information management system may present the prediction results via a user interface.
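The flow of steps S101-S104 can be sketched schematically as follows, with the prediction and generation models left as placeholder callables and the information management system modeled as a simple list; all names are illustrative assumptions.

```python
# Hedged sketch of the S101-S104 flow: acquire medical information, predict
# the progression condition, generate the prognosis image, and provide both
# results to an information management system.
def prognosis_management(medical_info, second_time, predict, generate, management_system):
    first_time = medical_info["time"]                       # S101: acquired medical information
    interval = second_time - first_time                     # time interval to the second time
    condition = predict(medical_info)                       # S102: progression condition
    prognosis_image = generate(medical_info, interval)      # S103: prognosis image
    management_system.append((condition, prognosis_image))  # S104: provide to the system
    return condition, prognosis_image

system = []
info = {"time": 0, "image": "head CT"}
result = prognosis_management(info, 23, lambda m: "high risk",
                              lambda m, dt: f"simulated image after {dt} h", system)
print(result)  # ('high risk', 'simulated image after 23 h')
```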
- In some embodiments, the object may be a site of lesion or a body of lesion in medical image(s); for example, the object instance may be a nodule, a tumor, or any other lesion or medical condition that may be captured by a medical image. Accordingly, if a patient has nodules, the predicted progression condition of the object in this embodiment can also be the progression condition of the nodules of the patient in the future. Besides, the object may also be the patient who has the nodules or tumors. In some embodiments, the medical information of the patient at the current time may be used to perform prediction of the progression condition of the object in the future, and to simulate and generate (synthesize) the prognosis image reflecting the prognosis morphology of the object at the future time. By providing the user with more vivid and intuitive prognosis morphology, the method for prognosis management of the disclosure may improve the diagnosis. Furthermore, by intuitively presenting the progression condition of the object at the second time together with (in combination with) the prognosis image at the second time, sophisticated information may be provided to users for more informative diagnosis decisions.
- Various types of medical information of patients may be utilized. In some embodiments, the medical information of the patient at the first time includes medical images of the patient at the first time. The medical image may be a medical image in DICOM format, such as a CT image, or a medical image in another modality, without limitation. In some embodiments, the medical information may further include non-image clinical data. That is, the prediction may be performed based on the combination of medical images and non-image clinical data, to obtain the progression condition of the object at the second time associated with the prognosis outcome. The non-image clinical data may be, for example, clinical data, clinical reports, or other data that does not contain medical images. With the supplementation of non-image clinical data, the condition of the patient at the first time may be more effectively indicated, and the progression condition may be predicted based on the medical information in a prompt manner. In some embodiments, the non-image clinical data may be acquired from various types of data sources according to clinical use. For example, in some embodiments, the non-image clinical data may be acquired from structured clinical data, such as clinical feature items, or from narrative clinical reports, or a combination of both. Alternatively or additionally, if a narrative and unstructured clinical report is provided, it may be converted into structured clinical information items by automated processing methods, such as natural language processing (NLP), according to the required format of the clinical data, to obtain the non-image clinical data. 
Through this format conversion, various types of data, such as narrative and unstructured clinical reports, etc., may be converted and unified into non-image clinical data which can be processed by a processor, thus reducing the complexity of data processing by the processor.
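As a minimal illustration of the format conversion described above (a sketch only; the disclosure does not prescribe a particular NLP implementation, and the field names and extraction patterns below are assumptions for illustration), a narrative report may be mapped to structured clinical items as follows:

```python
import re

def structure_report(report_text):
    """Convert a narrative clinical report into structured clinical items.

    Illustrative only: uses simple regular expressions in place of a full
    NLP pipeline; the item names are hypothetical.
    """
    items = {}
    age = re.search(r"(\d{1,3})[- ]year[- ]old", report_text)
    if age:
        items["age"] = int(age.group(1))
    gender = re.search(r"\b(male|female)\b", report_text, re.IGNORECASE)
    if gender:
        items["gender"] = gender.group(1).lower()
    onset = re.search(r"(\d+)\s*hours? (?:from|since) onset", report_text)
    if onset:
        items["hours_from_onset"] = int(onset.group(1))
    return items

report = "A 36-year-old male presented 23 hours from onset of headache."
print(structure_report(report))
# → {'age': 36, 'gender': 'male', 'hours_from_onset': 23}
```

The structured items produced this way could then be supplied to the processor together with the medical images, in the unified non-image clinical data format the text describes.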
- The method for prognosis management according to the present disclosure may provide the progression condition of the object at the second time and the prognosis image at the second time to the information management system, which may be accessible by users. In some embodiments, the time interval between the first time and the second time may also be presented by the processor along with at least one of the corresponding progression condition of the object at the second time and the corresponding prognosis image at the second time. Take the hematoma as an example object, as shown in
FIG. 2 , the time interval of 26 hours and the progression conditions of the object and the prognosis image at the second time, for example, after 26 hours, may be presented in an associated manner in the corresponding areas of the user interface. By intuitively displaying the time interval to the user, it may assist a busy doctor in efficiently performing searches and making decisions at the first time in a prompt manner, therefore saving valuable time for doctors and patients, and improving the diagnosis efficiency of doctors. - The specific second time may be the time at which the doctor needs to monitor or observe a certain condition, and the time interval can be set accordingly as the difference between the second time and the first time, such as 24 hours, 48 hours or 72 hours, and the like. For example, in
FIG. 2 , the time interval of 26 hours is illustrated. It is contemplated that other time intervals can be used depending on the observation needs for the prognosis management. In some embodiments, the user can adjust the time interval, and the processor may adjust the second time accordingly. In response to the input of the user, the progression condition of the object and the prognosis image at the adjusted second time may be predicted and provided to the information management system for presentation to the user. For example, if the user expects to observe the possible progression condition of the object 3 hours, 4 hours, 12 hours, or even a week or several months after the first time, then the user can input the time interval, and the processor may respectively determine the second time and then predict the progression condition of the object and simulate the prognosis image at the second time. Accordingly, the user can observe the progression condition of the object at a future time of higher concern, to aid the diagnosis of the doctor more efficiently. In some embodiments, the second time may be an arbitrary future time. For example, when predicting hematoma enlargement, the expansion risk of the hematoma at an arbitrary future time (i.e., without limitation on the particular time) can be predicted. The enlargement risk of the hematoma in the future is an important reference index for the diagnosis of intracerebral hemorrhage (ICH), which can provide sufficient guidance for the decision of the doctor. - Various manners may be adopted to present the progression condition of the object at the second time and the prognosis image at the second time to the user. As an example, a prognosis management report may be output (or printed), or the information on prognosis management may be transmitted through a short message or email, etc. to the user. 
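The user-driven adjustment of the second time described above can be sketched minimally as follows (an assumption about the implementation, not the disclosed one: the second time is simply recomputed as the first time plus the user-entered interval, after which prediction would be re-run for that time):

```python
from datetime import datetime, timedelta

def adjust_second_time(first_time, interval_hours):
    """Return the second time corresponding to a user-selected time interval."""
    return first_time + timedelta(hours=interval_hours)

# hypothetical first time (time of the initial scan)
first_time = datetime(2021, 9, 29, 8, 0)
for hours in (3, 12, 24, 72):
    # each adjusted second time would trigger a new prediction
    print(adjust_second_time(first_time, hours))
```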
Besides, the outcome of the prognosis management may also be presented, e.g., by the information management system, through a user interface to the user. In some embodiments, the medical image of the patient reflecting the morphology of the object at the first time may be presented in one part of a user interface to the user. As shown in
FIG. 2 , the user interface may include five parts (parts 201-205), each of which may be separated by dividing lines. In the first part 201, the medical image of the patient reflecting the morphology of the object at the first time may be presented to the user. Take the hematoma as an example object again: in the first part 201 in FIG. 2 , brain images in DICOM format may include both sectional images and a 3D image of the patient (John Smith), reflecting the morphology of the object at the first time, where the first time is 23 hours ago as indicated in the fourth part 204. When the object includes a plurality of object instances such as hematoma instances, the first part 201 may present the details of each hematoma instance. For example, in some embodiments, the volume, subtype and location of each object instance may be presented in association with the medical image of the patient at the first time in the first part of the user interface. For example, three numbered hematoma instances, hematoma 1, hematoma 3 and hematoma 4, are included in FIG. 2 . Therefore, the hematoma information at the first time may be presented, such as the volume, the subtype and the location of hematoma 1, hematoma 3 and hematoma 4, respectively. Through presentation of the visual and textual information of each hematoma instance, it may assist the users to intuitively determine the priority of treatment for each hematoma. For example, doctors and hospitals may focus resources on one or more vital hematomas, while deferring the treatment time for hematomas in non-vital parts, thus improving the efficiency of using medical resources. - In some embodiments, the non-image clinical data of the patient associated with the progression of the object at the first time may be presented to the user in a second part of the user interface. For example, in
FIG. 2 , the non-image clinical data of patient John Smith is presented in the second part 202, which may include the data associated with the progression of the object. For example, if the object is a nodule, the content presented in the second part 202 may be the data associated with the progression of the nodule, such as age, gender, genetic history, etc. As another example, if the object is a tumor, the content presented in the second part 202 may be the data associated with the progression of the tumor, such as age, gender, smoking history, etc. In case the object is a hematoma, the non-image clinical data of the patient associated with the progression of the object may include gender, age, time period from onset to first inspection, BMI, diabetes history, smoking history, drinking history, blood pressure and history of cardiovascular disease of the patient. In FIG. 2 , the non-image clinical data can be presented in the third part 203, such as: John Smith, male, 36 years old, 23 hours from onset to first inspection, with a history of diabetes, smoking and drinking, normal blood pressure, no hypertension, and hyperlipemia. Alternatively or additionally, the drugs the patient is currently taking may be presented, to further assist the doctor in making decisions. Labels or links may also be provided to present more non-image clinical data of the patient in response to the click operation of the user. - Take the hematoma as an example again: in some embodiments, the progression condition may include the enlargement risk of the hematoma for each hematoma instance or for the patient, and the first time is after the onset of intracerebral hemorrhage. That is, when the object is a hematoma, the progression condition of the object may include the enlargement risk of a certain hematoma or of the patient. 
Hematoma enlargement (HE), namely the spontaneous enlargement of the hematoma after onset of ICH, occurs in about one third of ICH patients and is an important risk factor for poor treatment outcomes. Therefore, for hemorrhage, the primary concern of the doctor is whether the intracerebral hemorrhage has occurred; thus the first time may be after the onset of intracerebral hemorrhage, when doctors may deem it helpful to observe hematoma enlargement, such that the diagnostic needs of doctors may be better met. As shown in
FIG. 2 , the enlargement risks of three hematomas, including hematoma 2, hematoma 3 and hematoma 5, after 23 hours are presented in the fifth part 205. A predetermined threshold may be set for the corresponding risk, and when the predicted enlargement risk is larger than the predetermined threshold, the level of the risk may be further presented. For example, hematoma 2 and hematoma 3 may be hematomas with high enlargement risks, and thus may be labeled as high risk; hematoma 5 may be a hematoma with low enlargement risk, and accordingly may be labeled as low risk. Alternatively or additionally, as shown in FIG. 2 , the specific value of the predicted enlargement risks may be presented, and at least one preset threshold may be set to sort the enlargement risk. For example, when the predicted enlargement risk value exceeds the preset threshold, it may be considered as high risk, and when it is below the threshold, it may be considered as low risk. Similarly, a medium risk value range can also be set, which is not specifically limited herein. Alternatively or additionally, the enlargement risk of each hematoma and the risk level of the patient may be presented at the same time. For example, in FIG. 2 , the risk of enlargement of the hematoma of the patient is 95% (high risk). The predicted enlargement risk itself may also be a numerical range, such as 85%-95%, which is not specifically limited herein. The method of prognosis management of the present disclosure provides an efficient risk management scheme for this clinical pain point, which can effectively assist doctors in the prognosis management of patients. - In some embodiments, as shown in
FIG. 2 , the prognosis image of the patient at the second time may be presented in the fourth part 204 of the user interface. For example, FIG. 2 shows that the user expects to predict the progression condition of the object after 23 hours, thus the prognosis image of the object after 23 hours may be presented through the fourth part 204. In some embodiments, the image displayed in the fourth part 204 may be of a corresponding type as the image presented in the first part 201. For example, if a sectional image and a 3D image are simultaneously presented in the first part 201 in FIG. 2 , then the corresponding simulated sectional image and the 3D image at the second time may be presented in the fourth part 204, so that the user can perform a side-by-side comparison. - The presented prognostic image reflecting the prognostic morphology at the second time may be presented as a two-dimensional sectional image, a 3D image, or a combination of a two-dimensional sectional image and a 3D image. In the case of presenting a 3D image, image operations such as scaling, rotation and generation of a local image may be performed according to the operation instructions of the user. For example, in some embodiments, the presented medical images and prognostic images may include a coronal plane image, a sagittal plane image, an axial plane image and a 3D image. The coronal plane image, the sagittal plane image and the axial plane image are representative sections. Meanwhile, the 3D image may be presented, and operations such as rescaling, extraction of local sections, etc. may be performed according to the instructions of the user, so that the doctor can access sectional images of other regions of interest.
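The threshold-based sorting of enlargement risks described earlier can be sketched as follows (the threshold values of 0.7 and 0.3 are assumptions for illustration; the disclosure leaves the specific thresholds open):

```python
# Hypothetical thresholds for sorting a predicted enlargement risk into levels.
HIGH_THRESHOLD = 0.7
LOW_THRESHOLD = 0.3

def risk_level(enlargement_risk):
    """Map a predicted enlargement risk (0-1) to a presentation label."""
    if enlargement_risk >= HIGH_THRESHOLD:
        return "high risk"
    if enlargement_risk <= LOW_THRESHOLD:
        return "low risk"
    return "medium risk"

print(risk_level(0.95))  # e.g., the 95% patient-level risk shown in FIG. 2 → "high risk"
print(risk_level(0.10))  # → "low risk"
```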
- In some embodiments, the progression condition of the object may include one or more of the following: enlargement risk of an object instance or the patient, deterioration risk of an object instance or the patient, expansion risk of an object instance or the patient, metastasis risk of an object instance or the patient, recurrence risk of an object instance or the patient, location of an object instance, volume of an object instance, and subtype of an object instance. An object instance may be an occurrence of the target object of the patient, such as a hematoma instance. The enlargement risk of each hematoma may be presented individually (e.g., enlargement risks for hematomas 2, 3, and 5 are shown separately) and/or in a collective manner (e.g., a collective hematoma enlargement risk for the patient is also shown) in the fifth part 205. - As another example of the user interface, as shown in
FIG. 2 , other patient information, such as name, hospital, and information related to the first time, can also be presented in the first part 201. For example, in FIG. 2 , the first time point was 23 hours ago. Independently or additionally, several buttons may be provided, which the users may click to perform operations such as selecting other time points, comparing among multiple time points, or selecting other patients. - In some embodiments, the method of prognosis management may predict the progression condition of the object at the second time associated with the prognosis outcome. The specific prediction process may be implemented in combination with a deep learning network. For example, in some embodiments, the prognosis image at the second time may be generated based on the acquired medical information and the time interval by performing the following step: generating the prognosis image at the second time using a Generative Adversarial Network (GAN) based on the acquired medical information and the time interval. That is, in the prediction stage, a GAN generator may be used to generate the prognosis image. Take hematoma as an example again: the simulated head image at the second time may be generated by the GAN, to provide the doctors with a more intuitive manner to assess the potential risk in the future for the ICH patient.
-
FIG. 3 illustrates an exemplary framework for generating a prognosis image at a future time using a GAN, according to an embodiment of the present disclosure. In some embodiments, the GAN may include a generator module 300 and a discriminator module. Specifically, the prognosis image at the second time may be generated using the GAN based on the acquired medical information and the time interval by performing the following steps: first, acquiring detection and segmentation information of the object corresponding to the medical image at the first time; and then, fusing the medical image at the first time and the corresponding detection and segmentation information of the object, to obtain a first fused information. Take hematoma as an example of the object: as shown in FIG. 3 , the fusion may be performed based on the detection and segmentation information of the hematoma instances and the initial head CT image. As the example shown in FIG. 4 , the detection and segmentation of the hematoma may be implemented by a mask RCNN such as a multi-task encoder-decoder network, which may be used to perform voxel-level classification tasks and regression tasks. As an example, the mask RCNN may include a first encoder 401 and a first decoder 402. As in FIG. 4 , the head image data of the hematoma patient may be input into the first encoder 401 of the mask RCNN, and then the output of the first encoder 401 may be used as the input of the first decoder 402 to obtain the detection and segmentation information of each hematoma instance. As an example, the detection and segmentation information may include the center point, size, subtype, bleed position and volume associated with the hematoma. The obtained detection and segmentation information of the hematoma may be fused with the initial head CT image to obtain the initial first fused information. 
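The fusion step above can be sketched as follows (a toy numpy sketch; the exact fusion mechanism — here, stacking the image and its segmentation mask as channels — and the array shapes are assumptions for illustration):

```python
import numpy as np

def fuse(image, seg_mask):
    """Fuse a medical image with its detection/segmentation information.

    Assumed mechanism: the image and the per-voxel segmentation mask are
    stacked as channels to form the "first fused information".
    """
    assert image.shape == seg_mask.shape
    return np.stack([image, seg_mask], axis=0)  # (channels, D, H, W)

image = np.random.rand(16, 64, 64)                          # toy head CT volume
seg_mask = (np.random.rand(16, 64, 64) > 0.9).astype(np.float32)  # toy hematoma mask
fused = fuse(image, seg_mask)
print(fused.shape)  # → (2, 16, 64, 64)
```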
Then, the prognosis image at the second time may be generated using the trained generator module 300 based on the first fused information and the time interval between the first time and the second time. - In some embodiments, the GAN may be trained based on the training data through the following steps. As an example, a training set may be constructed for the GAN, and the training set may include a plurality of training data. Each training data item may include medical image(s) at a third time and detection and segmentation information of the object at the third time, a sample time interval between the third time and a fourth time after the third time, and medical image(s) at the fourth time and detection and segmentation information of the object at the fourth time. As an example, during the training of the GAN, the medical image at the third time and the detection and segmentation information of the object at the third time may be determined first, and the first fused information may be determined based on the medical image at the third time and the detection and segmentation information at the third time. In some embodiments, the mask RCNN may be adopted for detection and segmentation, which is not described in detail herein. As shown in
FIG. 5 , during the training of the GAN, a synthetic fused information at the fourth time may be determined using the generator module 300 based on the first fused information and the time interval between the third time and the fourth time after the third time. Then, a second fused information may be determined based on the medical image at the fourth time and the detection and segmentation information of the object at the fourth time. After that, a synthetic information pair may be formed based on the first fused information and the synthetic fused information at the fourth time, and a real information pair may be formed based on the first fused information and the second fused information. The synthetic information pair and the real information pair may be discriminated using the discriminator module 500, and then the model parameters to be trained of the generator module 300 may be adjusted based on the outcome of the discriminator module 500. The generated synthetic information pair and the real information pair may be used as the input of the discriminator module 500 of the GAN. The discriminator module 500 is configured to discriminate between the real information pair and the synthetic information pair. The discriminator module 500 and the generator module 300 hold opposite training objectives, namely the generator module 300 may expect to generate images that look real, for outputting as the prognostic image at the second time. In contrast, the discriminator module 500 may be configured to distinguish between the real information pair and the synthetic information pair. Both of the two modules may be trained in an iterative manner. Unlike a non-task-specific GAN, any image generated by the generator module 300 will pass through the discriminator module 500. The trained framework may generate a prognostic image that is more realistic in the clinical sense. 
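One training step described above can be sketched structurally as follows, with toy numpy stand-ins for the generator and discriminator (assumptions for illustration only; the disclosure uses encoder-decoder CNNs). Each line mirrors the text: build the first fused information, synthesize the fourth-time fused information from it and the time interval, form the real and synthetic pairs, and score both with the discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(fused_t3, interval):
    # toy generator: perturbs its input as a function of the time interval
    return fused_t3 + 0.01 * interval * rng.standard_normal(fused_t3.shape)

def discriminator(pair):
    # toy discriminator: a score in (0, 1), where 1 means "judged real"
    return 1.0 / (1.0 + np.exp(-pair.mean()))

fused_t3 = rng.standard_normal((2, 8, 8))       # fused image + segmentation at the third time
fused_t4_real = rng.standard_normal((2, 8, 8))  # fused image + segmentation at the fourth time
interval = 24.0                                  # hypothetical hours between third and fourth time

fused_t4_synth = generator(fused_t3, interval)
real_pair = np.stack([fused_t3, fused_t4_real])
synth_pair = np.stack([fused_t3, fused_t4_synth])

# opposite objectives: the discriminator is trained to score the real pair high
# and the synthetic pair low, while the generator is trained to make the
# synthetic pair score high
d_loss = -np.log(discriminator(real_pair)) - np.log(1 - discriminator(synth_pair))
g_loss = -np.log(discriminator(synth_pair))
print(d_loss, g_loss)
```

In an actual training loop, these two losses would drive alternating parameter updates of the discriminator and generator modules.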
Besides, the method of prognosis management of the present disclosure also incorporates the segmentation information, thus ensuring that the GAN may focus on the region of the lesion. - In some embodiments, the
generator module 300 may be implemented by any general-purpose encoder-decoder CNN. As shown in FIG. 6 , the generator module 300 may include a second encoder 601 and a second decoder 602. The dimension of the input and output features of the generator module 300 may be the same as that of the initial head CT image. In the last layer of the second encoder 601, the encoded features may be flattened into the form of a one-dimensional feature vector, so that the non-image information may be attached to the encoded image features as an additional channel. The specific non-image information may include, for example, clinical information and scanning interval, and the like. Then, the encoded features may be decoded to data in the dimension of the initial image with the second decoder 602. - In some embodiments, the discriminator module 500 may be implemented using a CNN framework with a multi-layer perceptron (MLP) to discriminate whether the input is real/authentic information or synthetic information, and may output a binary result to indicate that. In the training stage, the generator-discriminator pair is intended to minimize the joint loss. An example of a loss function is provided as the following Equation (1):
- ℒ(x′, x) = ℒ_G + ℒ_D  (1)
- where x′ and x represent synthetic data and real data respectively, ℒ may represent the total loss of the generator module-discriminator module, ℒ_D may represent the loss of the discriminator module, and ℒ_G may represent the loss of the generator module. The specific loss function may take various forms, including but not limited to minimax loss, binary cross entropy loss or any form of distance distribution loss. The above loss function is only an example, and other forms of loss functions may also be used by the training process.
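As a worked numeric instance of one of the loss forms mentioned above (binary cross entropy; the scores 0.9 and 0.2 are hypothetical discriminator outputs on real data x and synthetic data x′):

```python
import math

def bce_gan_losses(d_real, d_synth):
    """Return (discriminator loss, generator loss) for one pair of D scores.

    d_real: discriminator score on real data x (ideally near 1)
    d_synth: discriminator score on synthetic data x' (generator wants this near 1)
    """
    loss_d = -(math.log(d_real) + math.log(1.0 - d_synth))
    loss_g = -math.log(d_synth)
    return loss_d, loss_g

loss_d, loss_g = bce_gan_losses(d_real=0.9, d_synth=0.2)
print(round(loss_d, 4), round(loss_g, 4))  # → 0.3285 1.6094
```

Here the discriminator loss is small because it scores both inputs correctly, while the generator loss is large, pushing the generator to produce more realistic synthetic fused information.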
-
FIG. 7 shows an exemplary framework of the discriminator module 500, which may include a third encoder 701 and a full connection layer 702. The real information pair and the synthetic information pair can be used as the input of the third encoder 701, and whether the input is real or synthetic may be discriminated by the full connection layer 702. In the inference stage, the prediction may be performed by applying only the generator module 300, and the discriminator module 500 may be an auxiliary module that provides supervision only in the training stage. The possible progression of the hematoma morphology at the second time may be generated by the trained generator module 300 based on the time interval between the initial scan and the subsequent scan, including generating the prognosis image at the second time, and further simulating the prognosis morphology of the object at the second time. Users (such as radiologists) may evaluate the condition of the patient based on these predictions. Alternatively or additionally, the duration between the initial scan and the subsequent scan, the non-image data, etc., may be input by the user through the user interface (UI). - The method of prognosis management of the present disclosure may perform prediction through the prediction model based on the available medical information of the patient, and may generate a prognosis image at the second time reflecting the prognosis morphology of the object at the second time, thus providing effective assistance to doctors for diagnosis in a very intuitive manner. Furthermore, by using a specially designed GAN, the generated image of prognostic morphology may be more realistic in clinic, thus assisting the doctors to improve their diagnosis.
- The embodiment of the present disclosure also may provide a device for prognosis management based on the medical information of the patient. As shown in
FIG. 8 , the device may include a processor 801, a memory 802 and a communication bus. The communication bus may be used to realize the connection and communication between the processor 801 and the memory 802. The processor 801 may be a processing device including one or more general-purpose processing devices such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), and the like. More specifically, the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The processor can also be one or more dedicated processors specialized for specific processing, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a system on a chip (SoC), and the like. In some embodiments, the prognosis management device 800 may further include an input/output 803, which is also connected to the communication bus. The input/output 803 may be used for the processor 801 to acquire externally input medical information of the patient, and the input/output 803 may also be used to input the medical information of the patient into the memory 802. As shown in FIG. 8 , a display unit 804 may also be connected to the communication bus, and the display unit 804 may be used to display the operating process of the prognosis management device and/or the output of the prediction result. The processor 801 may also be used to execute one or more computer programs stored in the memory 802; for example, a prediction program may be stored in the memory, and executed by the processor 801 to perform the steps of the method for prognosis management based on medical information of patients according to various embodiments of the present disclosure. 
- The embodiment of the present disclosure also may provide a system for prognosis management based on the medical information of the patient, wherein the system may include an interface, which may be configured to receive the medical information including medical image(s) acquired by medical imaging devices. Specifically, the interface may be a hardware interface or an API interface of software, or the combination of both, which is not specifically limited herein. The system for prognosis management may include a processor, which may be configured to execute the method for prognosis management based on medical information of a patient according to any embodiment of the present disclosure.
- Embodiments of the present disclosure also may provide a non-transitory computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the method for prognosis management based on medical information of a patient according to any embodiment of the present disclosure. A computer-readable medium may be a non-transitory computer-readable medium such as a read only memory (ROM), a random access memory (RAM), a phase change random access memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), an electrically erasable programmable read only memory (EEPROM), other types of random access memory (RAM), a flash disk or other forms of flash memory, a cache, a register, a static memory, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical memory, a cassette tape or other magnetic storage device, or any other possible non-transitory medium used to store information or instructions that can be accessed by computer devices, and the like.
- In addition, although exemplary embodiments have been described herein, the scope thereof includes any and all embodiments having equivalent elements, modifications, omissions, combinations (for example, schemes in which various embodiments intersect), adaptations or changes based on the present disclosure. The elements in the claims will be broadly interpreted based on the language adopted in the claims, and are not limited to the examples described in this specification or during the implementation of this application, and the examples thereof will be interpreted as non-exclusive. Therefore, the embodiments described in this specification are intended to be regarded as examples only, with the true scope and spirit being indicated by the following claims and the full range of equivalents thereof.
Claims (20)
1. A method for prognosis management based on medical information of a patient, comprising:
receiving the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time;
predicting, by a processor, a progression condition of the object at a second time based on the medical information of the first time, wherein the progression condition is indicative of a prognosis risk, wherein the second time is after the first time;
generating, by the processor, a prognosis image at the second time reflecting the morphology of the object at the second time based on the medical information of the first time; and
providing the progression condition of the object at the second time and the prognosis image at the second time to an information management system for presentation to a user.
2. The method of claim 1 , wherein the medical information further includes non-image clinical data associated with a progression of the object.
3. The method of claim 1 , further comprising:
presenting, by the information management system, a time interval between the first time and the second time in an associated manner with at least one of the progression condition of the object at the second time or the prognosis image at the second time.
4. The method of claim 1 , further comprising:
adjusting the second time based on an input of the user; and
predicting the progression condition of the object at the adjusted second time and generating the prognosis image at the adjusted second time, in response to the input of the user.
5. The method of claim 2 , further comprising:
presenting the medical image of the patient at the first time in a first part of a user interface;
presenting the non-image clinical data of the patient at the first time in a second part of the user interface; and
presenting the prognosis image of the patient at the second time in a third part of the user interface.
6. The method of claim 5 , further comprising:
presenting the volume, subtype, and location of the object associated with the medical image of the patient at the first time in the first part of the user interface.
7. The method of claim 5 , wherein the object includes a hematoma, and the prognosis risk includes an enlargement risk of the hematoma, and the first time is after onset of an intracerebral hemorrhage.
8. The method of claim 7 , wherein the non-image clinical data associated with the progression of the object includes at least one of gender, age, a time period from onset to a first inspection, a BMI, a diabetes history, a smoking history, a drinking history, a blood pressure, or a history of cardiovascular disease of the patient.
9. The method of claim 5 , wherein the medical image of the first time and the prognosis image of the second time are each presented in at least one of a coronal plane view, sagittal plane view, axial plane view, or 3D view.
10. The method of claim 1 , wherein the prognosis risk includes at least one of an enlargement risk of the object, a deterioration risk of the object, an expansion risk of the object, a metastasis risk of the object, a recurrence risk of the object, a location of the object, a volume of the object, or a subtype of the object.
11. The method of claim 1 , wherein generating the prognosis image at the second time based on the medical information of the first time further comprises:
generating the prognosis image at the second time using a Generative Adversarial Network (GAN), based on the medical information of the first time and a time interval between the first time and the second time.
12. The method of claim 11 , wherein the GAN includes a generator and a discriminator, and generating the prognosis image at the second time using the GAN based on the medical information of the first time and the time interval further comprises:
acquiring detection and segmentation information of the object corresponding to the medical image at the first time;
fusing the medical image at the first time and the corresponding detection and segmentation information of the object, to obtain a first fused information; and
generating the prognosis image at the second time using the trained generator, based on the first fused information and the time interval between the first time and the second time.
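The fusion step of claim 12 can be sketched as channel stacking. This is one common representation, not the patent's: the time-1 image and its detection/segmentation mask are stacked, here together with a constant channel broadcasting the time interval as one plausible way to condition the generator:

```python
import numpy as np

# Sketch of the claim-12 fusion step. Channel layout is an assumption;
# the patent does not fix a representation for the fused information.

def fuse(image: np.ndarray, seg_mask: np.ndarray, interval_h: float) -> np.ndarray:
    """Return a (3, H, W) array: [image, segmentation mask, broadcast interval]."""
    assert image.shape == seg_mask.shape
    interval_channel = np.full(image.shape, interval_h, dtype=float)
    return np.stack([image.astype(float), seg_mask.astype(float), interval_channel])

img = np.random.default_rng(0).random((16, 16))     # time-1 medical image
mask = (img > 0.8).astype(float)                    # detection/segmentation result
fused = fuse(img, mask, interval_h=12.0)            # 'first fused information'
```

A generator network would consume `fused` directly; feeding the interval as an image-sized channel (rather than a separate scalar input) is merely one conventional conditioning choice.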
13. The method of claim 12 , wherein the GAN is trained based on training data, each item of which includes a medical image and detection and segmentation information of the object at a third time, a time interval between the third time and a fourth time after the third time, and a medical image and detection and segmentation information of the object at the fourth time, wherein training of the GAN comprises:
determining the first fused information based on the medical image and detection and segmentation information of the object at the third time;
determining a synthetic fused information at the fourth time using the generator, based on the first fused information and the time interval between the third time and the fourth time;
determining a second fused information based on the medical image and detection and segmentation information of the object at the fourth time;
forming a synthetic information pair based on the first fused information and the synthetic fused information at the fourth time;
forming a real information pair based on the first fused information and the second fused information;
discriminating the synthetic information pair and the real information pair using the discriminator; and
adjusting parameters of the generator based on the discriminating outcome of the discriminator.
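The training step of claim 13 can be sketched as forming a synthetic pair and a real pair and scoring both with a discriminator. The linear generator/discriminator and feature sizes below are placeholders, not the patent's networks; the losses are the standard non-saturating GAN losses, shown here as one plausible instantiation:

```python
import numpy as np

# Minimal sketch of the claim-13 step: (first fused, generated) is the
# synthetic pair, (first fused, second fused) is the real pair; the
# discriminator scores both, and its outcome would drive generator updates.

rng = np.random.default_rng(1)
D_FEAT = 32
g_w = rng.normal(scale=0.1, size=(D_FEAT + 1, D_FEAT))  # +1 input for the interval
d_w = rng.normal(scale=0.1, size=2 * D_FEAT)

def generator(fused_t3: np.ndarray, interval_h: float) -> np.ndarray:
    """Predict the time-4 fused representation from the time-3 one."""
    return np.concatenate([fused_t3, [interval_h]]) @ g_w

def discriminator(pair: np.ndarray) -> float:
    """Sigmoid score: estimated probability that the pair is real."""
    return 1.0 / (1.0 + np.exp(-float(pair @ d_w)))

fused_t3 = rng.normal(size=D_FEAT)                  # first fused information
fused_t4 = rng.normal(size=D_FEAT)                  # second fused information
synthetic_t4 = generator(fused_t3, interval_h=24.0)

synthetic_pair = np.concatenate([fused_t3, synthetic_t4])
real_pair = np.concatenate([fused_t3, fused_t4])

# Discriminator loss: score real pairs high, synthetic pairs low.
d_loss = -np.log(discriminator(real_pair)) - np.log(1.0 - discriminator(synthetic_pair))
# Generator loss: the generator's parameters are adjusted to lower this.
g_loss = -np.log(discriminator(synthetic_pair))
```

In practice both losses would be minimized alternately by gradient descent on `g_w` and `d_w`; only the forward pass is shown here.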
14. A system for prognosis management based on medical information of a patient, comprising:
an interface configured to receive the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time; and
a processor configured to:
predict a progression condition of the object at a second time based on the medical information of the first time, wherein the progression condition is indicative of a prognosis risk, wherein the second time is after the first time;
generate a prognosis image at the second time reflecting the morphology of the object at the second time based on the medical information of the first time; and
provide the progression condition of the object at the second time and the prognosis image at the second time for presentation to a user.
15. The system of claim 14 , further comprising an information management system configured to:
present a time interval between the first time and the second time in an associated manner with at least one of the progression condition of the object at the second time or the prognosis image at the second time.
16. The system of claim 15 , wherein the information management system is further configured to:
present the medical image of the patient at the first time in a first part of a user interface;
present non-image clinical data associated with a progression of the object of the patient at the first time in a second part of the user interface; and
present the prognosis image of the patient at the second time in a third part of the user interface.
17. The system of claim 16 , wherein the object includes a hematoma, and the prognosis risk includes an enlargement risk of the hematoma, and the first time is after onset of an intracerebral hemorrhage.
18. The system of claim 14 , wherein to generate the prognosis image at the second time based on the acquired medical information, the processor is further configured to:
generate the prognosis image at the second time using a Generative Adversarial Network (GAN), based on the acquired medical information and a time interval between the first time and the second time.
19. The system of claim 18 , wherein the GAN includes a generator and a discriminator, and to generate the prognosis image at the second time using the GAN based on the acquired medical information and the time interval, the processor is further configured to:
acquire detection and segmentation information of the object corresponding to the medical image at the first time;
fuse the medical image at the first time and the corresponding detection and segmentation information of the object, to obtain a first fused information; and
generate the prognosis image at the second time using the trained generator, based on the first fused information and the time interval between the first time and the second time.
20. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by at least one processor, performs a method for prognosis management based on medical information of a patient, comprising:
receiving the medical information including at least a medical image of the patient reflecting a morphology of an object associated with the patient at a first time;
predicting a progression condition of the object at a second time based on the acquired medical information of the first time, wherein the progression condition is indicative of a prognosis risk, wherein the second time is after the first time;
generating a prognosis image at the second time reflecting the morphology of the object at the second time based on the acquired medical information of the first time; and
providing the progression condition of the object at the second time and the prognosis image at the second time to an information management system for presentation to a user.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/501,041 US20230099284A1 (en) | 2021-09-29 | 2021-10-14 | System and method for prognosis management based on medical information of patient |
CN202210197090.2A CN115985492A (en) | 2021-10-14 | 2022-03-02 | System and method for prognosis management based on medical information of patient |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/489,682 US20230098121A1 (en) | 2021-09-29 | 2021-09-29 | System and method for prognosis management based on medical information of patient |
US17/501,041 US20230099284A1 (en) | 2021-09-29 | 2021-10-14 | System and method for prognosis management based on medical information of patient |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/489,682 Continuation-In-Part US20230098121A1 (en) | 2021-09-29 | 2021-09-29 | System and method for prognosis management based on medical information of patient |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230099284A1 true US20230099284A1 (en) | 2023-03-30 |
Family
ID=85722006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/501,041 Abandoned US20230099284A1 (en) | 2021-09-29 | 2021-10-14 | System and method for prognosis management based on medical information of patient |
Country Status (1)
Country | Link |
---|---|
US (1) | US20230099284A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118822957A (en) * | 2024-06-20 | 2024-10-22 | 中煤(天津)地下工程智能研究院有限公司 | Lesion temporal evolution method, medium and device based on multimodal medical images |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130150726A1 (en) * | 2009-12-15 | 2013-06-13 | The Henry M. Jackson Foundation For The Advancement Of Military Medicine Inc. | Method for detecting hematoma, portable detection and discrimination device and related systems and apparatuses |
US20130325493A1 (en) * | 2012-05-29 | 2013-12-05 | Medical Avatar Llc | System and method for managing past, present, and future states of health using personalized 3-d anatomical models |
US20170323070A1 (en) * | 2016-05-09 | 2017-11-09 | Global Tel*Link Corporation | System and Method for Integration of Telemedicine into Mutlimedia Video Visitation Systems in Correctional Facilities |
US20220058839A1 (en) * | 2018-12-31 | 2022-02-24 | Oregon Health & Science University | Translation of images of stained biological material |
US20220246307A1 (en) * | 2019-08-13 | 2022-08-04 | Sony Group Corporation | Surgery support system, surgery support method, information processing apparatus, and information processing program |
WO2022261442A1 (en) * | 2021-06-11 | 2022-12-15 | Northwestern University | Systems and methods for prediction of hematoma expansion using automated deep learning image analysis |
Non-Patent Citations (1)
Title |
---|
Heming Yao et al., Automated hematoma segmentation and outcome prediction for patients with traumatic brain injury, 107 ARTIFICIAL INTELLIGENCE IN MEDICINE, 101910 (2020) (Year: 2020) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Rubin | Artificial intelligence in imaging: the radiologist’s role | |
JP6585772B2 (en) | Methods and systems for analyzing, prioritizing, visualizing, and reporting medical images | |
Ali et al. | A systematic review of automated melanoma detection in dermatoscopic images and its ground truth data | |
RU2687760C2 (en) | Method and system for computer stratification of patients based on the difficulty of cases of diseases | |
US10504227B1 (en) | Application of deep learning for medical imaging evaluation | |
CN105074708B (en) | The summary view of the background driving of radiology discovery | |
US20190156947A1 (en) | Automated information collection and evaluation of clinical data | |
US11996182B2 (en) | Apparatus and method for medical image reading assistant providing representative image based on medical use artificial neural network | |
RU2686627C1 (en) | Automatic development of a longitudinal indicator-oriented area for viewing patient's parameters | |
KR102289277B1 (en) | Medical image diagnosis assistance apparatus and method generating evaluation score about a plurality of medical image diagnosis algorithm | |
CN112868020A (en) | System and method for improved analysis and generation of medical imaging reports | |
US20200265276A1 (en) | Copd classification with machine-trained abnormality detection | |
KR102360615B1 (en) | Medical image diagnosis assistance apparatus and method using a plurality of medical image diagnosis algorithm for endoscope images | |
JP2018509689A (en) | Context generation of report content for radiation reports | |
EP3939003B1 (en) | Systems and methods for assessing a likelihood of cteph and identifying characteristics indicative thereof | |
Yeasmin et al. | Advances of AI in image-based computer-aided diagnosis: A review | |
EP3362925B1 (en) | Systems and methods for generating correct radiological recommendations | |
CN111226287A (en) | Method for analyzing a medical imaging dataset, system for analyzing a medical imaging dataset, computer program product and computer readable medium | |
US20220285011A1 (en) | Document creation support apparatus, document creation support method, and program | |
US20210313047A1 (en) | Processing medical images | |
US20230099284A1 (en) | System and method for prognosis management based on medical information of patient | |
US20240078089A1 (en) | System and method with medical data computing | |
US20150201887A1 (en) | Predictive intervertebral disc degeneration detection engine | |
US20230098121A1 (en) | System and method for prognosis management based on medical information of patient | |
Lin et al. | Identifying Acute Aortic Syndrome and Thoracic Aortic Aneurysm from Chest Radiography in the Emergency Department Using Convolutional Neural Network Models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SHENZHEN KEYA MEDICAL TECHNOLOGY CORPORATION, CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GAO, FENG;YANG, HAO-YU;PAN, YUE;AND OTHERS;SIGNING DATES FROM 20210927 TO 20210928;REEL/FRAME:057790/0724 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |