Survival Prediction Across Diverse Cancer Types Using Neural Networks (2024)

Xu Yan (Trine University, USA, yancontinue@gmail.com), Weimin Wang (Hong Kong University of Science and Technology, China, wangwaynemin@gmail.com), MingXuan Xiao (SouthWest JiaoTong University, China, 553556963albert@gmail.com), Yufeng Li (University of Southampton, UK, liyufeng0913@gmail.com) and Min Gao (Trine University, USA, mingao4460@gmail.com)


Abstract.

Gastric cancer and colon adenocarcinoma are widespread and challenging malignancies with high mortality rates and complex treatment landscapes. In response to the critical need for accurate prognosis in cancer patients, the medical community has embraced the 5-year survival rate as a vital metric for estimating patient outcomes. This study introduces an approach to enhance survival prediction models for gastric and colon adenocarcinoma patients. Leveraging advanced image analysis techniques, we sliced whole slide images (WSI) of these cancers and extracted comprehensive features to capture nuanced tumor characteristics. We then constructed patient-level graphs encapsulating the spatial relationships within tumor tissues. These graphs served as inputs to a 4-layer graph convolutional neural network (GCN) designed to exploit the inherent connectivity of the data for analysis and prediction. By integrating patients' total survival time and survival status, we computed C-index values of 0.64 for gastric cancer and 0.57 for colon adenocarcinoma. Surpassing previous convolutional neural network models, these results underscore the efficacy of our approach in accurately predicting patient survival outcomes. This research holds implications for both the medical and AI communities, offering insights into cancer biology and progression while advancing personalized treatment strategies. Ultimately, our study represents a significant stride in leveraging AI-driven methodologies to improve cancer prognosis and patient outcomes on a global scale.

Artificial Intelligence, Neural Network, Graph Convolutional Neural Network, Survival Prediction, Deep Learning

Journal year: 2024. Copyright: ACM licensed. Conference: The 7th International Conference on Machine Vision and Applications (ICMVA 2024), March 12–14, 2024, Singapore, Singapore. DOI: 10.1145/3653946.3653966. ISBN: 979-8-4007-1655-3/24/03.

1. Introduction

In the contemporary healthcare landscape, the scourge of cancer continues to loom large, with its pervasive impact felt across the globe. Among the myriad of cancer types, gastric cancer and colon adenocarcinoma stand out due to their widespread prevalence, high mortality rates, and the intricate complexity of their treatment protocols. These malignancies not only pose a significant threat to patient life but also present formidable challenges for medical practitioners striving to combat them effectively. The grim reality is that these cancers are ranked among the leading causes of mortality worldwide, underscoring the urgent need for advanced diagnostic and prognostic tools that can guide clinical decision-making and improve patient outcomes.

The critical importance of accurate cancer prognosis cannot be overstated, as it forms the cornerstone of personalized medicine, enabling clinicians to devise treatment plans that are specifically tailored to the unique characteristics of each patient’s cancer. Prognostic assessments play a vital role in determining the course of treatment, ranging from surgical interventions and chemotherapy to targeted therapies and palliative care(Balkwill etal., 2012)(Saltz etal., 2018). By accurately predicting the likely progression of the disease, physicians can optimize treatment regimens, mitigate potential side effects, and enhance the quality of life for cancer patients. Moreover, an effective prognosis aids in the allocation of medical resources, ensuring that patients who are most likely to benefit from aggressive treatments receive the attention they need, while also identifying those for whom palliative care would be more appropriate.

Despite the advances in medical science and the development of innovative treatment methodologies, the prediction of cancer outcomes remains a daunting challenge. Traditional prognostic models often rely on a limited set of clinical and histopathological parameters, which, while useful, do not fully capture the complex biological and molecular interactions that drive cancer progression. Furthermore, the heterogeneity of tumors, even within the same cancer type, adds another layer of complexity to the prognosis, making it difficult to achieve a high degree of accuracy in predicting patient outcomes.

Recognizing these challenges, the present study seeks to address the limitations of conventional prognostic models by introducing a novel approach that leverages the power of artificial intelligence (AI) and advanced imaging analysis(Li etal., 2024)(Liu etal., 2023a). By extracting detailed features from whole slide images of gastric and colon adenocarcinomas, and constructing patient-level graphs that encapsulate the intricate spatial relationships within tumor tissues, we aim to provide a more nuanced and comprehensive understanding of tumor characteristics. This, in turn, enables the development of a sophisticated graph convolutional neural network model that harnesses the connectivity of the data to offer precise and reliable predictions of patient survival outcomes.

As we delve into this research, it is our hope that the methodologies and findings presented herein will not only contribute to the enhancement of cancer prognosis but also serve as a beacon for future studies seeking to harness AI and machine learning technologies in the fight against cancer. By pushing the boundaries of what is currently possible in cancer prognostics, we endeavor to pave the way for more personalized, effective, and compassionate care for patients facing the daunting challenge of gastric and colon adenocarcinomas.

2. Related Work

The advent of deep learning has sparked significant advancements in cancer prognosis, particularly in leveraging sophisticated computational models to analyze complex biomedical data. Notably, the application of deep learning techniques in survival analysis of Whole Slide Images (WSI) has garnered considerable attention. Existing methodologies encompass diverse approaches, such as incorporating COX proportional hazard functions in neural networks for survival prediction(Che etal., 2023)(Lu etal., 2018), and leveraging clustering techniques like K-Means at the WSI level to inform convolutional neural network predictions (Zhu etal., 2017)(Wu etal., 2023).

However, amidst these advancements, the potential of graph convolutional neural networks (GCNs)(Hu etal., 2022) (Liu etal., 2023b) stands out as a transformative approach to cancer prognosis. This paper underscores the significance of employing GCNs as the primary model for predicting the survival outcomes of gastric cancer and colon adenocarcinoma patients. By abstracting WSI slices into graph structures, GCNs offer a novel paradigm for analyzing cancer pathology data. Unlike traditional deep learning methods, GCNs not only capture the features of individual slices within the WSI but also integrate information from adjacent slices, facilitating enhanced perception of the tumor microenvironment and its surrounding context.

The adoption of GCNs represents a paradigm shift in cancer prognosis, offering unparalleled capabilities to extract intricate spatial dependencies and interactions from WSI data. This approach not only enhances the accuracy of survival prediction models but also sheds light on previously unexplored aspects of cancer biology and progression(Ma etal., 2023)(Guo and Wen, 2021). Moreover, in the United States, where cancer prevalence and mortality rates remain significant, the integration of advanced AI techniques like GCNs holds immense promise for improving patient outcomes, optimizing healthcare resource allocation, and advancing precision oncology initiatives(Ye etal., 2023)(Zhang etal., 2021). Thus, this research direction underscores the critical importance of embracing innovative AI-driven methodologies to address the multifaceted challenges posed by cancer in the modern healthcare landscape.

3. Methodology

3.1. Graph Convolutional Neural Network

Before the introduction of Graph Convolutional Neural Networks (GCN), deep learning on images was dominated by Convolutional Neural Networks (CNN). The core of a CNN lies in convolutional kernels that slide across the input image, performing dot-product-like operations for feature extraction(YANG etal., 2017)(Dai etal., 2023b). In contrast, a GCN propagates features between nodes along the graph's edges, achieving, to a certain extent, feature fusion across neighboring nodes.

The input of the GCN model is graph data(Xu etal., 2013)(Lu etal., 2023). Suppose a graph has N nodes and M edges, and each node carries a D-dimensional feature vector; the N nodes then form an N × D feature matrix X, and the adjacency matrix A together with X constitutes the input of the GCN. The GCN network used in this paper consists of 4 layers, with each layer representing one round of feature propagation and learning.
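As a concrete sketch, one GCN layer propagates node features through the symmetrically normalized adjacency matrix. This is an illustrative NumPy implementation of the standard GCN propagation rule, not the paper's actual code:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: ReLU(D^-1/2 (A + I) D^-1/2 X W).

    A: (N, N) adjacency matrix, X: (N, D) node features, W: (D, D_out) weights.
    Adding the identity matrix (self-loops) lets each node retain its own features
    while mixing in those of its neighbors.
    """
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ X @ W)

# A 4-layer GCN, as used in this paper, is four such propagation
# steps applied in sequence, each with its own weight matrix.
```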

3.2. C-index

The C-index, also known as the Concordance Index (HarrellJr etal., 1996), was proposed by Professor Frank E. Harrell Jr. of Vanderbilt University in 1996 and is commonly used to evaluate the predictions of survival models. It is computed by pairwise comparison among n patients: of the C(n, 2) possible pairs, count those in which the predicted risk ordering is consistent with the actual survival outcomes, and divide by the number of comparable pairs. The resulting proportion is the concordance index, which is essentially the probability that the predicted ordering agrees with the observed outcomes. A value of 0.5 corresponds to random prediction and 1 to perfect concordance, so a useful survival model should score between 0.5 and 1.
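The calculation above can be sketched in a few lines of Python. This is an illustrative implementation; a pair is counted as comparable only when the earlier event time corresponds to an observed death, the standard handling of censored patients:

```python
def concordance_index(times, events, risks):
    """C-index: fraction of comparable patient pairs in which the model
    assigns the higher risk to the patient who dies earlier.

    times:  observed survival times
    events: 1 = death observed, 0 = censored
    risks:  model-predicted risk scores (higher = worse prognosis)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i's death precedes j's observed time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # tied predictions count half
    return concordant / comparable

# perfectly ordered predictions give a C-index of 1.0
print(concordance_index([1, 2, 3], [1, 1, 1], [9.0, 5.0, 1.0]))  # → 1.0
```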

4. Experimentation

4.1. Data source

The dataset used in this experiment consists of Whole Slide Image (WSI) data for gastric cancer (STAD) and colon adenocarcinoma (COAD). All data were obtained from https://portal.gdc.cancer.gov/.

WSI, which stands for Whole Slide Image, refers to images that cover the entire specimen at high resolution, typically comprising an enormous number of pixels. The data used in this experiment are WSI images of gastric cancer and colon adenocarcinoma. Since the research problem addressed in this paper is the 5-year survival rate of these two cancers, both the WSI images and the survival information of the patients are utilized. Each cancer patient may have multiple WSI images, and each patient has their own survival information, including: Overall Survival time (OS.time), the time from diagnosis to the last follow-up, measured in months; and Overall Survival status (OS), which is 1 if the patient has died and 0 if the patient is alive. The final label for each patient is determined from OS.time and OS, as shown in Figure 1.

[Figure 1: determination of patient labels from OS and OS.time]
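One common way to derive such a binary label from OS and OS.time is sketched below. This rule is an assumption for illustration; the paper's exact labeling rule is the one defined in Figure 1:

```python
def five_year_label(os_time_months, os_status):
    """Derive a 5-year survival label from OS.time (months) and OS status.

    Returns 1 if death was observed within 60 months, 0 if the patient is
    known to have survived past 60 months, and None if the patient was
    censored before 60 months (the label cannot be determined).
    """
    if os_status == 1 and os_time_months <= 60:
        return 1
    if os_time_months > 60:
        return 0
    return None
```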

4.2. Data Preprocessing

For the WSI images of gastric cancer and Colon adenocarcinoma, as shown in Figure 2, we conducted the following preprocessing steps:

[Figure 2: preprocessing steps for the WSI images]

WSI (Whole Slide Images), also known as whole-slide digital slides, are typically obtained by scanning pathological slides of tumors. WSI files are extremely large, usually ranging from tens of megabytes to several gigabytes, and a single WSI contains a wealth of pathological information. Figure 3 shows a WSI image of colon adenocarcinoma.

[Figure 3: a WSI image of colon adenocarcinoma]

After segmenting the WSI, we extracted 1024-dimensional features from the sliced WSI images using the ResNet50 network, which was pre-trained on the ImageNet dataset.

We construct the graph of the WSI at the patient level. The slices of a WSI are treated as points on the 2D coordinate plane of the slide, forming the nodes of the WSI graph, and the 1024-dimensional features extracted by the ResNet network serve as the features of each node. Each slice is connected to its 8 surrounding points on the 2D coordinate plane, forming the edges of the WSI graph.

4.3. Construction of Graphs

As shown in Figure 4, taking the red node as an example, it is connected to its 8 surrounding nodes, and these connections form the edges of the WSI graph. A patient may have multiple WSI images; the same construction is applied to each of them, and the resulting graphs are then combined into a single graph representing the patient. Figure 4 illustrates an example of the construction of nodes and edges for WSI images.
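The 8-neighbor construction can be sketched as follows. This is an illustrative implementation in which `coords` stands for the grid positions of the patches kept after slicing (a name assumed here, not from the paper):

```python
def build_wsi_edges(coords):
    """Connect each patch to its 8 neighbors on the 2D grid.

    coords: list of (row, col) grid positions of the WSI patches.
    Returns an undirected edge list over patch indices; patches removed
    during slicing simply have no entry, so edges to them are skipped.
    """
    index = {c: i for i, c in enumerate(coords)}
    edges = set()
    for (r, c), i in index.items():
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                j = index.get((r + dr, c + dc))
                if j is not None:
                    edges.add((min(i, j), max(i, j)))
    return sorted(edges)

# on a full 3x3 grid, the center patch touches all 8 others
grid = [(r, c) for r in range(3) for c in range(3)]
print(len(build_wsi_edges(grid)))  # → 20
```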

[Figure 4: construction of nodes and edges for a WSI image]

4.4. Network Training

We utilized the Patch-GCN (Chen etal., 2021) network for training. Because connecting each node on every WSI image to its 8 surrounding nodes yields a high edge density, increasing the number of layers in the network imposes a heavy burden on training(Mou etal., 2023). Additionally, due to the propagation mechanism of GCN, a deeper network causes each node to aggregate increasingly redundant neighborhood information, wasting training resources(Hu etal., 2023)(Liang etal., 2022). Considering these factors, we set the number of graph convolutional layers to 4. A Cox proportional hazards regression is used in the final layer of the network to perform survival analysis.
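The Cox output layer is trained by minimizing the negative log partial likelihood. The sketch below is an illustrative reference implementation of that loss, not the paper's training code; censored patients contribute only to risk sets, never their own terms:

```python
import math

def cox_partial_likelihood_loss(times, events, risks):
    """Negative log Cox partial likelihood, averaged over observed deaths.

    For each observed death i, its predicted risk is compared against the
    risk set {j : t_j >= t_i} of patients still under observation at t_i.
    """
    loss, n_events = 0.0, 0
    for i, (t_i, e_i) in enumerate(zip(times, events)):
        if e_i != 1:
            continue  # censored patients only appear inside risk sets
        n_events += 1
        log_risk_set = math.log(
            sum(math.exp(r) for r, t in zip(risks, times) if t >= t_i)
        )
        loss -= risks[i] - log_risk_set
    return loss / n_events
```

The loss decreases when the model ranks earlier deaths as higher risk, which is exactly the ordering the C-index later measures.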

4.5. Experimental Results

We constructed graphs and trained graph convolutional neural networks using data from stomach adenocarcinoma (STAD) and colon adenocarcinoma (COAD). The data included the WSI images and 5-year survival data for both patient cohorts. The patient-level data were divided into training and testing sets at a ratio of 4:1 for 5-fold cross-validation, and the C-index values of the five trained models for each cancer were averaged to obtain the final C-index. Additionally, ROC curves for both types of cancer were plotted.

Table 1. C-index values for each cancer type and model.

Cancer type and model    C-index
STAD with GCN            0.64
STAD with CNN            0.62
COAD with GCN            0.57
COAD with CNN            0.53

Furthermore, to further illustrate the performance of the graph convolutional neural network (GCN) used in this paper, we also trained a convolutional baseline on the raw WSI data, using the graph CNN network proposed by Ruoyu Li et al. (Li etal., 2018). As shown in Table 1, the final C-index values obtained with the graph convolutional neural network for gastric cancer and colon adenocarcinoma were 0.64 and 0.57, respectively. The experimental results demonstrate that the GCN used in this paper indeed outperforms the convolutional baseline in predicting the survival probability of gastric cancer and colon adenocarcinoma.

In addition, we plotted the ROC curves for the predictions of the two cancers, as shown in Figure 5 and Figure 6.

[Figure 5 and Figure 6: ROC curves for the two cancer types]

5. Conclusion

In this study, we have demonstrated the effectiveness of our approach in predicting the five-year survival of gastric cancer (STAD) and colon adenocarcinoma (COAD) patients. By performing preprocessing operations, including segmentation and feature extraction, on Whole Slide Images (WSI), we enhanced the quality of the input data for subsequent analysis. Constructing a graph for each WSI allowed us to capture the complex relationships between different regions of the tissue samples, providing a more comprehensive representation of their pathological characteristics.

Our use of graph convolutional neural networks (GCNs) for survival analysis yielded promising results, with C-index values of 0.64 for gastric cancer and 0.57 for colon adenocarcinoma. These values improve on those of traditional convolutional neural networks (CNNs), indicating the advantage of our approach in capturing the intricate spatial dependencies and interactions present in WSI data.

In previous survival analyses of WSI images, most studies used Convolutional Neural Networks (CNN) for training and obtaining results; common methods include Multiple Instance Learning (MIL) and weakly supervised methods(Dai etal., 2023a). Although these methods can solve many classification and regression tasks on WSI(Xu etal., 2023), they do not achieve "global" learning of the WSI; in other words, they do not integrate features across the different patches of a WSI during learning. The Graph Convolutional Neural Network (GCN) improves on this by using a graph structure at the WSI level: each slice is treated as a node in the graph, and feature learning between nodes is carried out through their connections. This achieves global learning, which can, to some extent, also be seen as "multi-scale" learning.

The contributions of our research extend beyond the field of medical oncology to the broader realm of artificial intelligence (AI) and machine learning(Xu etal., 2019)(Tianbo etal., 2023). By leveraging advanced techniques such as GCNs, we have showcased the potential of graph-based models in biomedical image analysis. Our methodology not only enhances the accuracy of survival prediction models but also opens up new avenues for utilizing graph-based architectures in various medical imaging tasks.

Furthermore, our findings hold significant implications for personalized medicine and clinical decision-making. Accurate prediction of cancer survival probabilities enables clinicians to tailor treatment strategies to individual patients, leading to improved patient outcomes and better allocation of healthcare resources. Our research represents a crucial step towards harnessing the power of AI to revolutionize cancer care and underscores the importance of interdisciplinary collaborations between medicine and AI research communities.

References

  • Balkwill et al. (2012) Frances R. Balkwill, Melania Capasso, and Thorsten Hagemann. 2012. The tumor microenvironment at a glance. Journal of Cell Science 125, 23 (12 2012), 5591–5596. https://doi.org/10.1242/jcs.116392
  • Che et al. (2023) Chang Che, Bo Liu, Shulin Li, Jiaxin Huang, and Hao Hu. 2023. Deep Learning for Precise Robot Position Prediction in Logistics. Journal of Theory and Practice of Engineering Science 3, 10 (Oct. 2023), 36–41. https://doi.org/10.53469/jtpes.2023.03(10).05
  • Chen et al. (2021) Richard J. Chen, Ming Y. Lu, Muhammad Shaban, Chengkuan Chen, Tiffany Y. Chen, Drew F.K. Williamson, and Faisal Mahmood. 2021. Whole Slide Images are 2D Point Clouds: Context-Aware Survival Prediction Using Patch-Based Graph Convolutional Networks. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, Marleen de Bruijne, Philippe C. Cattin, Stéphane Cotin, Nicolas Padoy, Stefanie Speidel, Yefeng Zheng, and Caroline Essert (Eds.). Springer International Publishing, Cham, 339–349.
  • Dai et al. (2023a) Weinan Dai, Yifeng Jiang, Chengjie Mou, and Chongyu Zhang. 2023a. An Integrative Paradigm for Enhanced Stroke Prediction: Synergizing XGBoost and XDeepFM Algorithms. In Proceedings of the 2023 6th International Conference on Big Data Technologies (Qingdao, China) (ICBDT ’23). Association for Computing Machinery, New York, NY, USA, 28–32. https://doi.org/10.1145/3627377.3627382
  • Dai et al. (2023b) Weinan Dai, Chengjie Mou, Jun Wu, and Xuesong Ye. 2023b. Diabetic Retinopathy Detection with Enhanced Vision Transformers: The Twins-PCPVT Solution. In 2023 IEEE 3rd International Conference on Electronic Technology, Communication and Information (ICETCI). IEEE, 403–407.
  • Guo and Wen (2021) Qianyu Guo and Jing Wen. 2021. Multi-level Fusion Based Deep Convolutional Network for Image Quality Assessment. In Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10–15, 2021, Proceedings, Part VI. Springer, 670–678.
  • Harrell Jr et al. (1996) Frank E. Harrell Jr, Kerry L. Lee, and Daniel B. Mark. 1996. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Statistics in Medicine 15, 4 (1996), 361–387.
  • Hu et al. (2023) Hao Hu, Shulin Li, Jiaxin Huang, Bo Liu, and Chang Che. 2023. Casting Product Image Data for Quality Inspection with Xception and Data Augmentation. Journal of Theory and Practice of Engineering Science 3, 10 (Oct. 2023), 42–46. https://doi.org/10.53469/jtpes.2023.03(10).06
  • Hu et al. (2022) Zhirui Hu, Jinyang Li, Zhenyu Pan, Shanglin Zhou, Lei Yang, Caiwen Ding, Omer Khan, Tong Geng, and Weiwen Jiang. 2022. On the design of quantum graph convolutional neural network in the NISQ-era and beyond. In 2022 IEEE 40th International Conference on Computer Design (ICCD). IEEE, 290–297.
  • Li et al. (2018) Ruoyu Li, Jiawen Yao, Xinliang Zhu, Yeqing Li, and Junzhou Huang. 2018. Graph CNN for survival analysis on whole slide pathological images. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 174–182.
  • Li et al. (2024) Sai Li, Peng Kou, Miao Ma, Haoyu Yang, Shuo Huang, and Zhengyi Yang. 2024. Application of Semi-Supervised Learning in Image Classification: Research on Fusion of Labeled and Unlabeled Data. IEEE Access 12 (2024), 27331–27343. https://doi.org/10.1109/ACCESS.2024.3367772
  • Liang et al. (2022) Zichen Liang, Hu Cao, Chu Yang, Zikai Zhang, and Guang Chen. 2022. Global-local feature aggregation for event-based object detection on EventKITTI. In 2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). IEEE, 1–7.
  • Liu et al. (2023a) Yongfei Liu, Haoyu Yang, and Chenwei Wu. 2023a. Unveiling patterns: A study on semi-supervised classification of strip surface defects. IEEE Access 11 (2023), 119933–119946.
  • Liu et al. (2023b) Zhuo Liu, Yunan Yang, Zhenyu Pan, Anshujit Sharma, Amit Hasan, Caiwen Ding, Ang Li, Michael Huang, and Tong Geng. 2023b. Ising-CF: A pathbreaking collaborative filtering method through efficient Ising machine learning. In 2023 60th ACM/IEEE Design Automation Conference (DAC). IEEE, 1–6.
  • Lu et al. (2018) Cheng Lu, David Romo-Bucheli, Xiangxue Wang, Andrew Janowczyk, Shridar Ganesan, Hannah Gilmore, David Rimm, and Anant Madabhushi. 2018. Nuclear shape and orientation features from H&E images predict survival in early-stage estrogen receptor-positive breast cancers. Laboratory Investigation 98, 11 (2018), 1438–1448.
  • Lu et al. (2023) Yawen Lu, Qifan Wang, Siqi Ma, Tong Geng, Yingjie Victor Chen, Huaijin Chen, and Dongfang Liu. 2023. TransFlow: Transformer as flow learner. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 18063–18073.
  • Ma et al. (2023) Danqing Ma, Bo Dang, Shaojie Li, Hengyi Zang, and Xinqi Dong. 2023. Implementation of computer vision technology based on artificial intelligence for medical image analysis. International Journal of Computer Science and Information Technology 1, 1 (2023), 69–76.
  • Mou et al. (2023) Chengjie Mou, Weinan Dai, Xuesong Ye, and Jun Wu. 2023. Research On Method Of User Preference Analysis Based on Entity Similarity and Semantic Assessment. In 2023 8th International Conference on Signal and Image Processing (ICSIP). 1029–1033. https://doi.org/10.1109/ICSIP57908.2023.10271084
  • Saltz et al. (2018) Joel Saltz, Rajarsi Gupta, Le Hou, Tahsin Kurc, Pankaj Singh, Vu Nguyen, Dimitris Samaras, Kenneth R. Shroyer, Tianhao Zhao, Rebecca Batiste, et al. 2018. Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell Reports 23, 1 (2018), 181–193.
  • Tianbo et al. (2023) Song Tianbo, Hu Weijun, Cai Jiangfeng, Liu Weijia, Yuan Quan, and He Kun. 2023. Bio-inspired Swarm Intelligence: a Flocking Project With Group Object Recognition. In 2023 3rd International Conference on Consumer Electronics and Computer Engineering (ICCECE). 834–837. https://doi.org/10.1109/ICCECE58074.2023.10135464
  • Wu et al. (2023) Jun Wu, Xuesong Ye, Chengjie Mou, and Weinan Dai. 2023. FineEHR: Refine clinical note representations to improve mortality prediction. In 2023 11th International Symposium on Digital Forensics and Security (ISDFS). IEEE, 1–6.
  • Xu et al. (2023) Hao Xu, Shuang Song, and Ze Mao. 2023. Characterizing the Performance of Emerging Deep Learning, Graph, and High Performance Computing Workloads Under Interference. arXiv:2303.15763 [cs.PF]
  • Xu et al. (2019) Hao Xu, Qingsen Wang, Shuang Song, Lizy Kurian John, and Xu Liu. 2019. Can we trust profiling results? Understanding and fixing the inaccuracy in modern profilers. In Proceedings of the ACM International Conference on Supercomputing. 284–295.
  • Xu et al. (2013) Hao Xu, Min Zhu, and Yanbo Wu. 2013. The union of time reversal and turbo equalization on underwater acoustic communication. In 2013 OCEANS-San Diego. IEEE, 1–5.
  • Yang et al. (2017) Huanhuan Yang, Tianrui Li, and Xindi Chen. 2017. Visualization of time series data based on spiral graph. Journal of Computer Applications 37, 9 (2017), 2443.
  • Ye et al. (2023) Xuesong Ye, Jun Wu, Chengjie Mou, and Weinan Dai. 2023. MedLens: Improve Mortality Prediction Via Medical Signs Selecting and Regression. In 2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI). IEEE, 169–175.
  • Zhang et al. (2021) Zikai Zhang, Bineng Zhong, Shengping Zhang, Zhenjun Tang, Xin Liu, and Zhaoxiang Zhang. 2021. Distractor-aware fast tracking via dynamic convolutions and MOT philosophy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 1024–1033.
  • Zhu et al. (2017) Xinliang Zhu, Jiawen Yao, Feiyun Zhu, and Junzhou Huang. 2017. WSISA: Making Survival Prediction from Whole Slide Histopathological Images. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 6855–6863. https://doi.org/10.1109/CVPR.2017.725