Posts by Collection

portfolio

publications

Channel boosting based detection and segmentation for cancer analysis in histopathological images

Published in 2022 19th International Bhurban Conference on Applied Sciences and Technology (IBCAST), 2022

The human immune system plays a vital role in cancer prevention, with Tumor Infiltrating Lymphocytes (TILs) serving as key indicators of cancer prognosis. Manual counting of TILs under a microscope is labor-intensive, subjective, and time-consuming. To address this, we propose an automated diagnostic system called PVTCB-Lymph-Det. This system incorporates channel boosting with a Pyramid Vision Transformer and CBAM-enhanced ResNet-50 for effective feature extraction. It tackles the challenges posed by lymphocyte morphological variations, clustering, and artifacts. The model achieves an F-score of 88.92% for lymphocyte detection. PVTCB-Lymph-Det shows promise in assisting pathologists with accurate and efficient diagnosis.

Recommended citation: M. L. Ali, Z. Rauf, A. R. Khan and A. Khan, "Channel boosting based detection and segmentation for cancer analysis in histopathological images," 2022 19th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, Pakistan, 2022, pp. 1-6, doi: 10.1109/IBCAST54850.2022.9990330.
Download Paper

CB-HVT Net: A Channel-Boosted Hybrid Vision Transformer Network for Lymphocyte Detection in Histopathological Images

Published in IEEE Access, 2023

This study presents a novel Channel Boosted Hybrid Vision Transformer (CB-HVT) architecture for detecting lymphocytes in histopathological images. By combining the local feature learning capabilities of CNNs with the global contextual awareness of Vision Transformers, the model effectively addresses challenges such as overlapping boundaries, artifacts, and morphological diversity of lymphocytes. The network incorporates multiple specialized modules, including channel generation, exploitation, and merging components, alongside a region-aware attention mechanism and a detection head. A feature fusion block with attention enhances discriminative learning. The CB-HVT was evaluated on two benchmark datasets (LYSTO and NuClick), achieving F-Scores of 0.88 and 0.82, respectively, and demonstrated robust performance on unseen test sets, highlighting its potential for real-time clinical application in pathology.

Recommended citation: M. L. Ali, Z. Rauf, A. Khan, A. Sohail, R. Ullah and J. Gwak, "CB-HVT Net: A Channel-Boosted Hybrid Vision Transformer Network for Lymphocyte Detection in Histopathological Images," in IEEE Access, vol. 11, pp. 115740-115750, 2023, doi: 10.1109/ACCESS.2023.3324383.
Download Paper

Natural Human-Computer Interface Based on Gesture Recognition with YOLO to Enhance Virtual Lab Users’ Immersive Feeling

Published in American Society for Engineering Education (ASEE), 2024

Hand tracking and gesture recognition are rapidly developing fields with many applications in human-computer interfaces (HCI). This technology enables computers to recognize and respond to hand movements and gestures, creating a more natural and intuitive interface. With the increasing popularity of augmented reality and virtual reality devices, the demand for advanced hand tracking and gesture recognition technologies is growing. The purpose of this research is to study the current state of the art in hand tracking and gesture recognition and to develop new and improved techniques for HCI applications using "You Only Look Once" (YOLO) models that strengthen the user's sense of immersion in the virtual world. The research results will be used in a virtual electrical power lab along with the learning management system. To evaluate the implementation, surveys will be administered before and after the classes. The research will advance these technologies by developing improved hand tracking and gesture recognition algorithms and integrating them into HCI applications.

Recommended citation: Ali, M.L. and Zhang, Z., 2024, June. Natural Human-Computer Interface Based on Gesture Recognition with YOLO to Enhance Virtual Lab Users’ Immersive Feeling. In Proceedings of the 2024 ASEE Annual Conference & Exposition, Portland, OR, USA (pp. 23-26).

The YOLO Framework: A Comprehensive Review of Evolution, Applications, and Benchmarks in Object Detection

Published in Computers, MDPI, 2024

This paper presents a comprehensive review of the YOLO (You Only Look Once) object detection framework, covering its evolution from YOLOv1 to YOLOv11. It highlights YOLO’s transformation from a simple real-time detector to a versatile architecture suitable for diverse applications. Key advancements such as anchor boxes, residual connections, and optimized training strategies are discussed across versions. The review also addresses differences between official YOLO versions and community-led releases like YOLOv5 and YOLOv8. Architectural changes are examined in terms of accuracy, efficiency, and scalability. The paper explores domain-specific adaptations using lightweight backbones and Transformers. Overall, it serves as a valuable resource for researchers and practitioners in the field.

Recommended citation: M. L. Ali and Z. Zhang, "The YOLO Framework: A Comprehensive Review of Evolution, Applications, and Benchmarks in Object Detection", Computers 2024, 13, 336. https://doi.org/10.3390/computers13120336
Download Paper
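As background for the detection pipeline shared by the YOLO versions this review covers, every variant ends with intersection-over-union (IoU) based non-maximum suppression to prune duplicate boxes. The following is a minimal, self-contained Python sketch of that standard post-processing step (the box coordinates and threshold are illustrative, not taken from the paper):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy non-maximum suppression: repeatedly keep the highest-scoring
    # box and drop remaining boxes that overlap it above the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

# Two heavily overlapping detections plus one distant one:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # keeps the best of the overlapping pair
```

Production YOLO implementations apply this per class and typically on GPU, but the greedy logic is the same.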


talks

teaching