Investigation of Vascular Biometrics with Machine Learning [B1.1]
Description
Understanding the structure of the blood vessels is important in both medical and security contexts. In medical applications, it can be important to trace back along the vessel path from a haemorrhage and to understand which parts of the vascular network might be involved in an invasive procedure. Blood vessels are also becoming increasingly important in biometric identification scenarios, since their robust structure and potential uniqueness make them a strong candidate as a biometric trait. In this project, we will build a dataset of annotated medical images and investigate what properties of the vasculature might be useful as a biometric. We will develop a method using machine learning and image processing to interrogate these images and annotations, extracting useful features such as segment length and tortuosity, and examining their viability as a biometric trait.
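As a minimal sketch of the kind of feature extraction involved, the snippet below computes two of the properties mentioned above, segment length and tortuosity (arc length divided by chord length), from a hypothetical vessel centreline given as an ordered list of pixel coordinates; the example coordinates are placeholders, not data from the project.

```python
import numpy as np

def segment_length(centreline: np.ndarray) -> float:
    """Arc length of a vessel segment given as an ordered (N, 2) array of (row, col) coordinates."""
    steps = np.diff(centreline, axis=0)              # successive displacements along the centreline
    return float(np.sum(np.linalg.norm(steps, axis=1)))

def tortuosity(centreline: np.ndarray) -> float:
    """Arc-to-chord ratio: 1.0 for a straight segment, larger when the vessel meanders."""
    arc = segment_length(centreline)
    chord = float(np.linalg.norm(centreline[-1] - centreline[0]))
    return arc / chord if chord > 0 else np.inf

# Illustrative centreline that might be traced from an annotated image (coordinates are made up).
example = np.array([[10, 10], [12, 14], [15, 17], [19, 18], [24, 18]], dtype=float)
print(segment_length(example), tortuosity(example))
```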
Reading
- H-unique project: https://www.lancaster.ac.uk/security-lancaster/research/h-unique/
- Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention 2015 Oct 5 (pp. 234-241). Springer, Cham.
- Ilginis T, Clarke J, Patel PJ. Ophthalmic imaging. British Medical Bulletin. 2014 Sep 1;111(1).
Project Type: Research / Experimental
Constraints and Requirements: The student should have some familiarity with Python/MATLAB. Some knowledge of machine learning / convolutional neural networks would be a bonus but is not essential.
Supervisor: Bryan M. Williams
Industry linked: No
Investigation of Novel Biometric Traits with Machine Learning [B1.2]
Description
In many real cases of crime, perpetrator identification is hampered by the lack of availability of identifiable traits such as DNA, fingerprints and face. This leads us to search for novel biometric traits to act as clues or signs of identification. Our multi-million pound H-Unique project is investigating visible traits in the hand to build a multi-modal biometric based on visible hand anatomy, and there are many other potentially unique traits, from our bodies to our behaviour, which are as yet unknown or unproven. In this project, we will identify one such potential trait, e.g. visible vein patterns in the foot, or how a person walks or gestures. We will collate a dataset and develop an approach to operationalize this trait using techniques from machine learning and computer vision. We will investigate measures of uniqueness and methods of comparison to examine its potential as a biometric trait. This work will contribute to ongoing research in the areas of computer vision, segmentation, biometrics, forensic anthropology, security and forensic imaging.
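By way of illustration only, the sketch below shows one common way a candidate trait can be assessed once it has been encoded as a feature vector: compute genuine (same person) and impostor (different people) comparison scores and examine how well a threshold separates them. The feature vectors, scores and threshold here are random placeholders, not data or methods from the H-Unique project.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder feature vectors: two noisy samples per "person" for 50 people.
people = [rng.normal(size=64) for _ in range(50)]
samples = [(p + rng.normal(scale=0.3, size=64), p + rng.normal(scale=0.3, size=64))
           for p in people]

genuine = [cosine_similarity(a, b) for a, b in samples]
impostor = [cosine_similarity(samples[i][0], samples[j][0])
            for i in range(len(samples)) for j in range(i + 1, len(samples))]

# A usable trait should give genuine scores well above impostor scores.
threshold = 0.5
far = np.mean(np.array(impostor) >= threshold)   # false accept rate
frr = np.mean(np.array(genuine) < threshold)     # false reject rate
print(f"FAR={far:.3f}  FRR={frr:.3f}")
```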
Reading
- H-unique project: https://www.lancaster.ac.uk/security-lancaster/research/h-unique/
- Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention 2015 Oct 5 (pp. 234-241). Springer, Cham.
- Liu Z, Zhang Z, Wu Q, Wang Y. Enhancing person re-identification by integrating gait biometric. Neurocomputing. 2015 Nov 30;168:1144-56.
Project Type: Research / Experimental
Constraints and Requirements: The student should have some familiarity with Python/MATLAB. Some knowledge of machine learning / convolutional neural networks would be a bonus but is not essential.
Supervisor: Bryan M. Williams
Industry linked: No
Temporal Scene Comparison and Similarity Detection [B1.3]
Description
Measuring similarity is a very important concept in data analysis and machine learning, as well as in analysing crime scenes or obtained caches of image data. In the complex sphere of image and video analysis, the comparison of the millions (or billions) of pixels involved is enabled through methods of interpretation such as convolution and deep neural networks. In this project, we will investigate techniques for scene comparison, particularly when images are taken from different angles and objects within the scene have been moved. The question of similarity will be investigated; i.e. how much change is permissible for a scene to be considered the same. This can involve either abstract comparison (e.g. comparing abstract feature vectors) or more explicit approaches, such as object detection and classification. This work will contribute to ongoing research in the areas of computer vision, segmentation, abstract feature analysis, security and forensic imaging.
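As a simple, hedged illustration of the abstract-comparison route, the sketch below compares two images via global colour-histogram feature vectors and cosine similarity; in the project these hand-crafted vectors would likely be replaced by embeddings from a convolutional network, and the file paths are placeholders.

```python
import numpy as np
from PIL import Image

def histogram_feature(path: str, bins: int = 16) -> np.ndarray:
    """Concatenated per-channel colour histograms as a crude scene descriptor."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder paths: two photographs of (possibly) the same scene from different angles.
f1 = histogram_feature("scene_view_a.jpg")
f2 = histogram_feature("scene_view_b.jpg")
print("similarity:", cosine_similarity(f1, f2))  # values near 1.0 suggest similar scenes
```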
Reading
- Di Gesu V, Starovoitov V. Distance-based functions for image comparison. Pattern Recognition Letters. 1999 Feb 1;20(2):207-14.
- https://en.wikipedia.org/wiki/EncroChat
- Vyas R, Williams BM, Rahmani H, Boswell-Challand R, Jiang Z, Angelov P, Black S. Ensemble-Based Bounding Box Regression for Enhanced Knuckle Localization. Sensors. 2022 Feb 17;22(4):1569.
Project Type: Research / Experimental
Constraints and Requirements: The student should have some familiarity with Python/MATLAB. Some knowledge of machine learning / convolutional neural networks would be a bonus but is not essential.
Supervisor: Bryan M. Williams
Industry linked: No
Pathology and Feature Extraction in Ophthalmic Images to Characterise the Architecture of the Eye [B2.1]
Description
Medical images, in particular fundus images in ophthalmology, contain a wealth of information that can give vital clues relating to health status, the presence and severity of disease, and prognosis, and can inform treatment strategies. While most developed countries have experts who are capable of interpreting these images, there are many remaining issues that give rise to the need for automation to support clinical work: there are too few medical experts available to carry out such complex and time-consuming analysis for large numbers of patients, which is particularly problematic in the fall-out of the COVID-19 pandemic, and people in remote regions and developing nations do not have ready access to such expertise. The automation of pathology extraction and analysis has been a hot research topic for decades, and many approaches have been proposed. However, they are typically limited to considering individual structures and pathologies, which limits their potential in real-world scenarios.
In this project, we will investigate techniques for structure segmentation and extraction, with the aim of building up a complete characterisation of the architecture of the eye, including pathologies. We will investigate and build techniques from image analysis, machine learning and deep learning to create a holistic approach. This work will contribute to ongoing research in the areas of computer vision, medical imaging, segmentation and ophthalmology.
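One small building block that any segmentation work here will need is an overlap metric for comparing a predicted structure mask against an expert annotation. The sketch below is a minimal NumPy implementation of the Dice coefficient; the two masks are placeholders standing in for, say, a predicted and an annotated optic disc.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks: 1.0 means perfect agreement."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

# Placeholder masks for illustration only.
prediction = np.zeros((64, 64), dtype=bool); prediction[20:40, 20:40] = True
annotation = np.zeros((64, 64), dtype=bool); annotation[22:42, 22:42] = True
print("Dice:", dice_coefficient(prediction, annotation))
```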
Reading
- https://en.wikipedia.org/wiki/Fundus_photography
- Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention 2015 Oct 5 (pp. 234-241). Springer, Cham.
- Al-Bander B, Williams BM, Al-Nuaimy W, Al-Taee MA, Pratt H, Zheng Y. Dense fully convolutional segmentation of the optic disc and cup in colour fundus for glaucoma diagnosis. Symmetry. 2018 Apr;10(4):87.
Project Type: Research / Experimental
Constraints and Requirements: The student should have some familiarity with Python/MATLAB. Some knowledge of machine learning / convolutional neural networks would be a bonus but is not essential.
Supervisor: Bryan M. Williams
Industry linked: No
Disease Diagnosis from Multimodal and Multidimensional Extracted Features [B2.2]
Description
The automated analysis and diagnosis of diseases through medical imagery is an important and popular research topic. As automated diagnostic approaches become technically more feasible for implementation in hospitals, clinics and health centres, there is an increasing emphasis on the ability of such models to explain their reasoning and to present it in an intuitive way to clinicians, technicians and patients. Most approaches to automated classification and disease detection are not capable of this and rely on post-hoc explanation methods, which provide limited information to the user. A key approach, which is gaining traction in the research community, is to build explainability into models and frameworks, for example via the “two-step” approach, whereby pathologies and other relevant features are extracted and then used for a meaningful and interpretable comparison.
In this project, we will investigate and develop techniques for diagnosis and decision-making based on the analysis of multimodal and multidimensional feature data. Using glaucoma as a case study, we will build techniques based on machine learning, signal processing and deep learning to examine the apparent relevance of features such as cup-to-disc profiles and nerve fibre layer defects, and combine them to give a single diagnosis. We will also investigate decision boundary issues and the confidence of the result. This work will contribute to ongoing research in the areas of computer vision, medical imaging, abstract feature analysis, automated diagnosis, and ophthalmology.
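As a hedged sketch of the “two-step” idea, the snippet below first derives one interpretable feature, a vertical cup-to-disc ratio, from placeholder cup and disc segmentation masks, and then fits a simple logistic-regression classifier over a small feature table; the feature names, masks and synthetic data are assumptions for illustration, not the project's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def vertical_cup_to_disc_ratio(cup: np.ndarray, disc: np.ndarray) -> float:
    """Ratio of the vertical extents of the cup and disc masks (boolean 2-D arrays)."""
    cup_rows = np.where(cup.any(axis=1))[0]
    disc_rows = np.where(disc.any(axis=1))[0]
    cup_height = cup_rows.max() - cup_rows.min() + 1 if cup_rows.size else 0
    disc_height = disc_rows.max() - disc_rows.min() + 1 if disc_rows.size else 1
    return cup_height / disc_height

# Placeholder segmentation masks.
cup = np.zeros((100, 100), dtype=bool); cup[40:60, 40:60] = True
disc = np.zeros((100, 100), dtype=bool); disc[30:75, 30:70] = True
print("vertical CDR:", vertical_cup_to_disc_ratio(cup, disc))

# Synthetic feature table: [cup-to-disc ratio, nerve fibre layer defect score].
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.4, 0.1], 0.05, size=(100, 2)),    # healthy-like samples
               rng.normal([0.7, 0.6], 0.05, size=(100, 2))])   # glaucoma-like samples
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)
print("coefficients (feature relevance):", clf.coef_)
print("P(glaucoma) for CDR=0.65, NFL=0.5:", clf.predict_proba([[0.65, 0.5]])[0, 1])
```

The coefficients of such a model remain directly inspectable, which is the point of building explainability in rather than adding it afterwards.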
Reading
- Williams BM, Borroni D, Liu R, Zhao Y, Zhang J, Lim J, Ma B, Romano V, Qi H, Ferdousi M, Petropoulos IN. An artificial intelligence-based deep learning algorithm for the diagnosis of diabetic neuropathy using corneal confocal microscopy: a development and validation study. Diabetologia. 2020 Feb;63(2):419-30.
- MacCormick IJ, Williams BM, Zheng Y, Li K, Al-Bander B, Czanner S, Cheeseman R, Willoughby CE, Brown EN, Spaeth GL, Czanner G. Accurate, fast, data efficient and interpretable glaucoma diagnosis with automated spatial analysis of the whole cup to disc profile. PloS one. 2019 Jan 10;14(1):e0209409.
- Coan L, Williams B, Venkatesh KA, Upadhyaya S, Czanner S, Venkatesh R, Willoughby CE, Kavitha S, Czanner G. Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review. arXiv preprint arXiv:2204.05591. 2022 Apr 12.
- Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision 2017 (pp. 618-626).
Project Type: Research / Experimental
Constraints and Requirements: The student should have some familiarity with Python/MATLAB. Some knowledge of machine learning / convolutional neural networks would be a bonus but is not essential.
Supervisor: Bryan M. Williams
Industry linked: No
Automated Grading from Mammogram Images to Support Breast Cancer Diagnosis [B2.3]
Description
The screening of breast cancer, which is vital for catching cancer early and reducing the likelihood of complications and the need for surgery, is a complex operation requiring the effective analysis of a large number of medical tests. This is a challenging task for the NHS and the large number of radiologists required to undertake the analysis, made worse in the fall-out of the COVID-19 pandemic. Screening typically involves the capture of a breast mammogram (a form of X-ray) which is used to search for signs of cancer. If cancer is suspected, then a biopsy may be taken. This can give a more reliable insight into whether cancer is present and its severity, but it is an invasive technique and can induce anxiety in patients.
Due to the complexities in analysing mammograms, a sizable number of patients who undergo a biopsy turn out not to have cancer. There is significant potential and a strong drive to improve the analysis of mammograms through automated image analysis and processing techniques, enabling faster and more accurate diagnoses from the mammogram images alone and reducing the need for invasive biopsy.
This project will evaluate and build on techniques for the automated analysis of breast mammograms. This will include pathology localization techniques through pre-trained and semi-supervised computer vision approaches; potentially automated segmentation of calcifications; intelligent decision-making techniques and evaluation methodology. This work will contribute to ongoing research in the areas of computer vision, medical imaging, automated decision-making, cancer diagnosis and radiology.
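As one hedged example of a classical pre-processing step that could feed a localisation stage, the sketch below highlights small bright candidate regions (as microcalcifications often appear) using white top-hat filtering from scikit-image; the file path, structuring-element size and threshold are illustrative assumptions, not part of the project specification.

```python
import numpy as np
from skimage import io, morphology

# Placeholder path to a greyscale mammogram image.
image = io.imread("mammogram_example.png", as_gray=True).astype(np.float32)

# White top-hat keeps bright structures smaller than the structuring element,
# suppressing large-scale breast tissue and emphasising small bright candidates.
selem = morphology.disk(5)
tophat = morphology.white_tophat(image, selem)

# Simple threshold to obtain a candidate mask for later classification.
candidates = tophat > (tophat.mean() + 3 * tophat.std())
print("candidate pixels:", int(candidates.sum()))
```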
Reading
- Debelee TG, Schwenker F, Ibenthal A, Yohannes D. Survey of deep learning in breast cancer image analysis. Evolving Systems. 2020 Mar;11(1):143-63.
- Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention 2015 Oct 5 (pp. 234-241). Springer, Cham.
- Vyas R, Williams BM, Rahmani H, Boswell-Challand R, Jiang Z, Angelov P, Black S. Ensemble-Based Bounding Box Regression for Enhanced Knuckle Localization. Sensors. 2022 Feb 17;22(4):1569.
Project Type: Research / Experimental
Constraints and Requirements: The student should have some familiarity with Python/MATLAB. Some knowledge of machine learning / convolutional neural networks would be a bonus but is not essential.
Supervisor: Bryan M. Williams
Industry linked: No
Three-dimensional Medical Image Reconstruction and Quality Refinement [B2.4]
Description
Many of the analysis techniques developed in computer vision, image analysis and medical imaging are based on two-dimensional imaging data, due to the constraints of existing technology. More recently, techniques have been developed and become available for three-dimensional X-ray imaging, including breast mammography. This imaging modality provides important improvements beyond two-dimensional image capture, allowing calcifications and other pathologies to be measured and examined far more accurately than has otherwise been possible. However, such techniques require accurate reconstruction methodology and can often suffer from reduced signal quality and resolution. Image processing techniques are often developed to accommodate lower resolution in order to maintain processing speeds.
This project aims to examine and improve the quality of three-dimensional mammogram imaging technology by (i) studying intelligent image quality analysis techniques, (ii) evaluating and building on techniques in computer vision and machine learning for more accurate three-dimensional reconstruction, and (iii) improving the quality of the reconstructed volumes by utilising the available information more accurately. This work will contribute to ongoing research in the areas of computer vision, medical imaging, cancer diagnosis and radiology.
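A small, hedged sketch of the kind of quality measurement involved: the snippet below downsamples a placeholder reconstructed slice, upsamples it back with cubic interpolation as a naive baseline, and reports PSNR against the original. A learned super-resolution model (see the review in the reading list) would aim to beat this baseline; the random slice and zoom factors are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import zoom

def psnr(reference: np.ndarray, estimate: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in decibels; higher means closer to the reference."""
    mse = np.mean((reference - estimate) ** 2)
    return float(10 * np.log10(data_range ** 2 / mse)) if mse > 0 else np.inf

# Placeholder "slice" standing in for one plane of a reconstructed volume.
rng = np.random.default_rng(2)
original = rng.random((128, 128)).astype(np.float32)

low_res = zoom(original, 0.5, order=3)    # simulate reduced acquisition resolution
baseline = zoom(low_res, 2.0, order=3)    # naive cubic upsampling back to full size

print("baseline PSNR (dB):", psnr(original, baseline[:128, :128]))
```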
Reading
- Sabottke CF, Spieler BM. The effect of image resolution on deep learning in radiography. Radiology: Artificial Intelligence. 2020 Jan 22;2(1):e190015.
- Debelee TG, Schwenker F, Ibenthal A, Yohannes D. Survey of deep learning in breast cancer image analysis. Evolving Systems. 2020 Mar;11(1):143-63.
- Yang W, Zhang X, Tian Y, Wang W, Xue JH, Liao Q. Deep learning for single image super-resolution: A brief review. IEEE Transactions on Multimedia. 2019 May 28;21(12):3106-21.
Project Type: Research / Experimental
Constraints and Requirements: The student should have some familiarity with Python/MATLAB. Some knowledge of machine learning / convolutional neural networks would be a bonus but is not essential.
Supervisor: Bryan M. Williams
Industry linked: No
Analysis and Classification of THz Signals to Detect Coating Thickness [B3.1]
Description
Pharmaceutical film coating processes are used to ensure quality and the effectiveness of functional properties in tablet manufacturing. However, monitoring the effectiveness of this coating process and ensuring that a pharmaceutical tablet is evenly and correctly coated is a challenging task. We have been using a technique called terahertz imaging, which is capable of capturing one-dimensional waveforms from which we can extract boundary information to monitor the coating process. However, these waveforms are noisy and difficult to interpret.
This project aims to evaluate and develop approaches to analysing one-dimensional imaging signals in order to improve monitoring. We will work with many aspects of artificial intelligence, computer vision and signal processing to (i) propose effective signal classification approaches to determine which surface a waveform represents, (ii) develop approaches to correctly interpret and deconstruct these waveforms to obtain information on coating thickness, and (iii) investigate the explainability of these approaches and their potential to extract additional information. This work will contribute to ongoing research in the areas of computer vision, signal processing and analysis, explainable AI and manufacturing.
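A hedged PyTorch sketch of the classification idea in (i): a small LSTM that maps a one-dimensional waveform to a surface-class prediction, in the spirit of the recurrent-network reading below. The layer sizes, number of classes and random input are illustrative assumptions, not the project's actual model.

```python
import torch
import torch.nn as nn

class WaveformClassifier(nn.Module):
    """Toy recurrent classifier for 1-D time-domain signals."""
    def __init__(self, hidden_size: int = 32, num_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1) waveform samples
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden) final hidden state
        return self.head(h_n[-1])        # class logits per waveform

model = WaveformClassifier()
waveforms = torch.randn(8, 256, 1)       # 8 placeholder waveforms, 256 samples each
logits = model(waveforms)
print(logits.shape)                      # torch.Size([8, 3])
```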
Reading
- https://en.wikipedia.org/wiki/Terahertz_tomography
- Li X, Williams B, May RK, Zhong S, Evans MJ, Gladden LF, Zeitler JA, Lin H. Optimising Terahertz Waveform Selection of a Pharmaceutical Film Coating Process Using Recurrent Network. IEEE Transactions on Terahertz Science and Technology. 2022 Apr 1.
- Li X, Bawuah P, Williams BM, Zeitler JA, Lin H. Studying pharmaceutical tablets mixing process inside a perforated pan-coater using in-line terahertz sensing. In 2020 45th International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz) 2020 Nov 8 (pp. 01-02). IEEE.
- Graves A. Long short-term memory. Supervised sequence labelling with recurrent neural networks. 2012:37-45.
- Van Houdt G, Mosquera C, Nápoles G. A review on the long short-term memory model. Artificial Intelligence Review. 2020 Dec;53(8):5929-55.
Project Type: Research / Experimental
Constraints and Requirements: The student should have some familiarity with Python/MATLAB. Some knowledge of machine learning / convolutional neural networks would be a bonus but is not essential.
Supervisor: Bryan M. Williams
Industry linked: No
Weakly-supervised Tablet Tracking and Analysis in Videography [B3.2]
Description
The automated analysis of videography presents many challenges in computer vision research, given the complex relationships in three or more dimensions and the associated computational complexity. Yet, the analysis and interpretation of such data is particularly important for many tasks including research in pharmaceutical film coating processes, which are used to ensure quality and the effectiveness of functional properties in tablet manufacturing.
This project aims to develop techniques for tracking tablets in video as they move through a coating drum at varying speeds, alongside other types of tablet which need to be identified and distinguished. Motion blur will be a common issue to be tackled due to the limited ability of existing camera methodology. This project will involve several aspects of computer vision and artificial intelligence, including (i) blur detection and compensation, (ii) weakly-supervised segmentation, (iii) object detection and tracking, (iv) video analysis, and possibly (v) three-dimensional coordinate estimation from two-dimensional projection, which has a wide variety of potential applications. This work will contribute to ongoing research in the areas of computer vision, motion tracking, blur detection, weakly-supervised machine learning, deconvolution, video processing and potentially photogrammetry.
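For step (i), one common and very simple blur indicator is the variance of the Laplacian of each frame: sharp frames produce high variance, heavily blurred frames produce low variance. The hedged sketch below applies this per frame of a placeholder video file using OpenCV; the file name and threshold are assumptions to be tuned on real footage.

```python
import cv2

BLUR_THRESHOLD = 100.0   # assumed cut-off; tune on real coating-drum footage

cap = cv2.VideoCapture("coating_drum_example.mp4")   # placeholder video path
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # variance of the Laplacian
    if sharpness < BLUR_THRESHOLD:
        print(f"frame {frame_idx}: likely blurred (score {sharpness:.1f})")
    frame_idx += 1
cap.release()
```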
Reading
- Li X, Williams B, May RK, Zhong S, Evans MJ, Gladden LF, Zeitler JA, Lin H. Optimising Terahertz Waveform Selection of a Pharmaceutical Film Coating Process Using Recurrent Network. IEEE Transactions on Terahertz Science and Technology. 2022 Apr 1.
- Williams BM, Ghanbari B, Chen K, Rada L. A Fast Discrete Homotopy Solution Method for Two Problems in Image Deconvolution. In International Online Conference on Intelligent Decision Science 2020 Aug 7 (pp. 583-594). Springer, Cham.
- Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention 2015 Oct 5 (pp. 234-241). Springer, Cham.
Project Type: Research / Experimental
Constraints and Requirements: The student should have some familiarity with Python/MATLAB. Some knowledge of machine learning / convolutional neural networks would be a bonus but is not essential.
Supervisor: Bryan M. Williams
Industry linked: No