2022-2023 IEEE MATLAB Image Processing Projects

Online Classes

For outstation students, we offer online project classes, covering both the technical concepts and the coding, using net-meeting software.

For details, Call: 9886692401/9845166723

DHS Informatics provides academic projects based on IEEE MATLAB Projects | MATLAB Projects on Image Processing, implementing the best and latest IEEE papers. Listed below are the best 2022-2023 IEEE MATLAB Projects | MATLAB Projects on Image Processing for CSE, ECE, EEE and Mechanical engineering students. To download the abstracts of the MATLAB projects, click here.

For further details, call our head office at +91 98866 92401 / 98451 66723; we can send the synopsis and IEEE papers based on each student's interest. For more details, please visit our head office and get registered.

We believe in quality service, with a commitment to our students' full satisfaction.

We have been providing this service for more than 15 years, and all our customers are delighted with it.

Abstract: Accurate segmentation of retinal vessels is a basic step in diabetic retinopathy (DR) detection. Most methods based on deep convolutional neural networks (DCNNs) have small receptive fields, and hence they are unable to capture the global context of larger regions, making it difficult to identify pathological structures; the final segmented retinal vessels contain more noise and suffer low classification accuracy. Therefore, in this paper we propose a DCNN structure named D-Net. In the encoding phase, we reduce the loss of feature information by reducing the downsampling factor, which eases the segmentation of tiny, thin vessels. We use combined dilated convolutions to effectively enlarge the receptive field of the network and alleviate the "gridding problem" that exists in standard dilated convolution. In the proposed multi-scale information fusion module (MSIF), parallel convolution layers with different dilation rates are used, so that the model obtains denser feature information and better captures retinal vessels of different sizes. In the decoding module, skip-layer connections propagate context information to higher-resolution layers, preventing low-level information from having to pass through the entire network structure. Finally, our method was verified on the DRIVE, STARE, and CHASE datasets. The experimental results show that our network structure outperforms several state-of-the-art methods, such as N4-fields, U-Net, and DRIU, in terms of accuracy, sensitivity, specificity, and AUC-ROC. In particular, D-Net outperforms U-Net by 1.04%, 1.23%, and 2.79% on the DRIVE, STARE, and CHASE datasets, respectively.
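
The core idea behind the combined dilated convolutions above can be illustrated with a small, pure-Python sketch (not the authors' code; the kernel size and dilation rates below are illustrative assumptions): stacking stride-1 convolutions with mixed dilation rates grows the theoretical receptive field much faster than standard convolutions.

```python
# Illustrative sketch: theoretical receptive field of stacked, stride-1,
# dilated convolutions. Each layer adds (kernel_size - 1) * dilation.
# Mixing dilation rates (e.g. 1, 2, 5) is one way to avoid the gridding
# artifacts of repeating a single large rate.

def receptive_field(kernel_size, dilation_rates):
    """Return the 1-D theoretical receptive field after all layers."""
    rf = 1
    for d in dilation_rates:
        rf += (kernel_size - 1) * d
    return rf

# Three standard 3-tap convolutions vs. three dilated ones with mixed rates.
print(receptive_field(3, [1, 1, 1]))  # 7
print(receptive_field(3, [1, 2, 5]))  # 17
```

The same pixel budget (three 3-tap layers) more than doubles its context when the dilation rates are mixed, which is the effect the MSIF module exploits at multiple scales in parallel.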

Abstract: Medical images play a very important role in the doctor's diagnosis and in the patient's treatment process. Intelligent algorithms make it possible to quickly distinguish lesions in medical images, and extracting features from the images is especially important. Many studies have applied various algorithms to medical images: for medical image feature extraction, a large amount of data is analyzed to obtain processing results, helping doctors make more accurate diagnoses. In view of this, this paper takes tumor images as the research object and first performs rotation-invariant local binary pattern feature extraction on the tumor image: as the image shifts or rotates, the descriptor remains stationary relative to its local coordinate system. The method accurately describes the shallow texture features of the tumor image, thereby enhancing the robustness of the image-region description. Focusing on image feature extraction based on convolutional neural networks (CNNs), the basic CNN framework is built. In order to break through the limitations of machine vision and human vision, the research is extended to a multi-channel-input CNN for image feature extraction, and two convolution models, Xception and DenseNet, are built to improve the accuracy of the CNN algorithm. The experimental results show that the CNN algorithm achieves high accuracy in tumor image feature extraction. This paper compares the CNN algorithm with several classical local-binary-pattern algorithms; on a larger data basis, the CNN algorithm extracts features from tumor CT images more accurately, demonstrating the advantages of CNN algorithms in this field.
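
The rotation-invariant local binary pattern (LBP) step described above can be sketched in a few lines of pure Python (a minimal illustration, not the paper's implementation; the sampling of exactly 8 circular neighbours is an assumption):

```python
# Illustrative sketch of a rotation-invariant LBP code for one pixel.
# Thresholding the 8 circular neighbours against the center gives an
# 8-bit pattern; taking the minimum over all circular bit rotations
# makes the code the same however the image is rotated.

def lbp_rotation_invariant(center, neighbors):
    """neighbors: 8 grey values sampled circularly around the center."""
    bits = [1 if n >= center else 0 for n in neighbors]
    codes = []
    for r in range(8):
        rotated = bits[r:] + bits[:r]
        codes.append(sum(b << i for i, b in enumerate(rotated)))
    return min(codes)

# A single bright neighbour yields the same code wherever it sits,
# which is exactly the rotation invariance the abstract refers to.
print(lbp_rotation_invariant(5, [9, 1, 1, 1, 1, 1, 1, 1]))  # 1
print(lbp_rotation_invariant(5, [1, 1, 1, 9, 1, 1, 1, 1]))  # 1
```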

Abstract: Diabetic retinopathy (DR) is an important cause of blindness worldwide. However, DR is hard to detect in its early stages, and the diagnostic procedure can be time-consuming even for experienced experts. Therefore, a computer-aided diagnosis method based on deep learning is proposed to automatically diagnose referable diabetic retinopathy by classifying color retinal fundus photographs into two grades. In this paper, a novel convolutional neural network model with a Siamese-like architecture is trained with a transfer-learning technique. Unlike previous works, the proposed model accepts binocular fundus images as inputs and learns their correlation to help make a prediction. With a training set of only 28,104 images and a test set of 7,024 images, the proposed binocular model obtains an area under the receiver operating characteristic curve of 0.951, which is 0.011 higher than that of the existing monocular model. To further verify the effectiveness of the binocular design, a binocular model for five-class DR detection is also trained and evaluated on a 10% validation set. It achieves a kappa score of 0.829, which is higher than that of the existing non-ensemble model.
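
The kappa score reported above measures agreement between predicted and true DR grades beyond chance. A minimal sketch of the plain (unweighted) Cohen's kappa is shown below; note that DR benchmarks often use a quadratic-weighted variant, and the toy labels here are made up for illustration:

```python
# Illustrative sketch: Cohen's kappa = (p_o - p_e) / (1 - p_e), where
# p_o is observed agreement and p_e is the agreement expected by chance
# from the two raters' label frequencies.

def cohens_kappa(y_true, y_pred):
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy example: 3 of 4 predictions agree, but half of that is chance.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # 0.5
```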

Abstract: Computerized healthcare has undergone rapid development thanks to advances in medical imaging and machine learning. In particular, recent progress in deep learning opens a new era for multimedia-based clinical decision support. In this paper, we use deep learning with brain networks and clinically relevant text information to make an early diagnosis of Alzheimer's disease (AD). The clinically relevant text information includes the subject's age, gender, and ApoE gene status. The brain network is constructed by computing the functional connectivity of brain regions from resting-state functional magnetic resonance imaging (R-fMRI) data. A targeted autoencoder network is built to distinguish normal aging from mild cognitive impairment, an early stage of AD. The proposed method reveals discriminative brain-network features effectively and provides a reliable classifier for AD detection. Compared to traditional classifiers based on R-fMRI time-series data, the proposed deep learning method improves prediction accuracy by about 31.21%, and the standard deviation is reduced by 51.23% in the best case, which means our prediction model is more stable and reliable than the traditional methods. Our work demonstrates deep learning's advantages in classifying high-dimensional multimedia data in medical services, and could help predict and prevent AD at an early stage.
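
The functional-connectivity construction above is commonly done by correlating region-wise time courses. A minimal pure-Python sketch (an assumption about the connectivity measure; the paper may use a different estimator, and the three "regions" below are fabricated toy series):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def connectivity_matrix(series):
    """One R-fMRI time course per brain region -> region-by-region matrix."""
    return [[pearson(a, b) for b in series] for a in series]

regions = [
    [1.0, 2.0, 3.0, 4.0],  # toy region A
    [2.1, 4.0, 6.2, 8.1],  # toy region B: roughly 2x region A
    [4.0, 3.0, 2.0, 1.0],  # toy region C: anti-correlated with A
]
m = connectivity_matrix(regions)
print(round(m[0][2], 3))  # -1.0
```

A matrix like `m` (flattened) would then be the input that the targeted autoencoder compresses into discriminative features.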

Abstract: Human activity recognition (HAR) based on sensor networks is an important research direction in pervasive computing and body area networks. Existing research often uses statistical machine learning methods to manually extract and construct features for different motions. However, faced with rapidly growing waveform data that follows no obvious laws, traditional feature engineering is increasingly inadequate. With deep learning, we no longer need to extract features manually, and performance on complex human activity recognition problems can improve. Transferring deep-neural-network experience from image recognition, we propose a deep learning model (InnoHAR) based on the combination of an inception neural network and a recurrent neural network. The model takes the waveform data of multi-channel sensors as end-to-end input. Multi-dimensional features are extracted by inception-like modules using convolution layers with various kernel sizes. Combined with a GRU, modeling of time-series features is realized, making full use of the data characteristics to complete the classification task. Through experimental verification on the three most widely used public HAR datasets, our proposed method shows consistently superior performance and good generalization compared with the state-of-the-art.
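
The GRU mentioned above maintains a hidden state across sensor timesteps via two gates. A scalar, pure-Python sketch of one update step (a textbook GRU, not the InnoHAR code; the zero weights below are a deliberately trivial, untrained assumption):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update for scalar input x and hidden state h.

    p holds hypothetical weights. The update gate z decides how much of
    the candidate state replaces the old state; the reset gate r controls
    how much history feeds the candidate.
    """
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_cand

# With all-zero (untrained) weights: z = 0.5, candidate = 0,
# so the new state is simply half the old state.
params = {k: 0.0 for k in ("wz", "uz", "bz", "wr", "ur", "br", "wh", "uh", "bh")}
print(gru_step(1.0, 1.0, params))  # 0.5
```

In InnoHAR this recurrence runs over the feature sequence produced by the inception-like convolution modules, one step per sensor timestep.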

Abstract: The emotions expressed in the human face have a great influence on decisions and arguments about various subjects. In psychological theory, the emotional states of a person can be classified into six main categories: surprise, fear, disgust, anger, happiness, and sadness. Automatic extraction of these emotions from face images can help in human-computer interaction as well as many other applications. Machine learning algorithms, and especially deep neural networks, can learn complex features and classify the extracted patterns. In this paper, a deep learning based framework is proposed for human emotion recognition. The proposed framework uses Gabor filters for feature extraction and then a convolutional neural network (CNN) for classification. The experimental results show that the proposed methodology improves both the training speed of the CNN and the recognition accuracy.
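
A Gabor filter, as used for feature extraction above, is a Gaussian envelope multiplied by an oriented sinusoid. A minimal pure-Python construction of the real part of such a kernel (the parameter values are illustrative assumptions, not the paper's settings):

```python
import math

def gabor_kernel(size, theta, sigma=2.0, lam=4.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter: Gaussian envelope times a cosine wave
    oriented at angle theta. size should be odd so the kernel has a center."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                         / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel

k = gabor_kernel(7, theta=0.0)
print(k[3][3])  # 1.0 at the center: Gaussian peak, cosine phase 0
```

Convolving a face image with a bank of such kernels at several orientations and scales yields the texture responses that feed the CNN classifier.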

Abstract: The diagnosis of hematoxylin-and-eosin-stained breast cancer histology images is non-trivial, labor-intensive, and often leads to disagreement between pathologists. Computer-assisted diagnosis systems help pathologists improve diagnostic consistency and efficiency. With the recent advances in deep learning, convolutional neural networks (CNNs) have been successfully used for histology image analysis. The classification of breast cancer histology images into normal, benign, and malignant sub-classes is related to the cells' density, variability, and organization, along with the overall tissue structure and morphology. Based on this, we extract both smaller and larger patches from the histology images, capturing cell-level and tissue-level features, respectively. However, some sampled cell-level patches do not contain enough information to match the image tag. Therefore, we propose a patch-screening method based on a clustering algorithm and a CNN to select more discriminative patches. The approach proposed in this paper is applied to the 4-class classification of breast cancer histology images and achieves 95% accuracy on the initial test set and 88.89% accuracy on the overall test set. The results are competitive with those of other state-of-the-art methods.
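
The two-scale patch extraction described above can be sketched with a simple sliding window (an illustration only: the patch sizes, strides, and toy 8x8 "image" are assumptions, and the clustering-based screening step is not shown):

```python
def extract_patches(image, patch, stride):
    """Slide a patch x patch window over a 2-D list with the given stride."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append([row[left:left + patch]
                            for row in image[top:top + patch]])
    return patches

# Toy 8x8 "histology image" with distinct pixel values.
image = [[r * 8 + c for c in range(8)] for r in range(8)]
cell_level = extract_patches(image, patch=2, stride=2)    # small: cell detail
tissue_level = extract_patches(image, patch=4, stride=4)  # large: tissue context
print(len(cell_level), len(tissue_level))  # 16 4
```

The paper's screening step would then cluster the many small cell-level patches and keep only the clusters whose content is discriminative for the image-level label.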

Abstract: A real-world animal biometric system that detects and describes animal life in image and video data is an emerging subject in machine vision. These systems develop computer vision approaches for the classification of animals. A novel method for animal face classification based on score-level fusion of recently popular convolutional neural network (CNN) features and appearance-based descriptor features is presented. This method uses a score-level fusion of two different approaches: one uses a CNN, which can automatically extract features, learn, and classify them; the other uses kernel Fisher analysis (KFA) for its feature extraction phase. The proposed method may also be used in other areas of image classification and object recognition. The experimental results show that automatic feature extraction with the CNN is better than other simple feature extraction techniques (both local- and appearance-based features), and additionally, an appropriate score-level combination of CNN and simple features can achieve even higher accuracy than applying the CNN alone. The authors show that score-level fusion of CNN-extracted features and the appearance-based KFA method has a positive effect on classification accuracy. The proposed method achieves a 95.31% classification rate on animal faces, which is significantly better than the other state-of-the-art methods.
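
Score-level fusion, as used above, combines the two classifiers after scoring rather than at the feature level. A minimal weighted-sum sketch (the min-max normalization, equal weights, and per-class scores below are illustrative assumptions, not the authors' exact scheme):

```python
def minmax_normalize(scores):
    """Rescale scores to [0, 1] so the two branches are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(cnn_scores, kfa_scores, w=0.5):
    """Weighted-sum score-level fusion of two classifiers' per-class scores."""
    a = minmax_normalize(cnn_scores)
    b = minmax_normalize(kfa_scores)
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

cnn = [0.2, 0.9, 0.4]  # hypothetical per-class scores from the CNN branch
kfa = [0.3, 0.6, 0.8]  # hypothetical scores from the KFA branch
fused = fuse_scores(cnn, kfa)
print(fused.index(max(fused)))  # class 1 wins after fusion
```

Because fusion happens on scores rather than raw features, each branch can keep its own feature pipeline; only the weight w needs tuning.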

 IEEE MATLAB Projects

S.NO MATLAB BASED PROJECTS LIST SYNOPSIS
1
PORTABLE CAMERA-BASED ASSISTIVE TEXT AND PRODUCT LABEL READING FROM HAND-HELD OBJECTS FOR BLIND PERSONS
Title
2
FAST AND ADAPTIVE DETECTION OF PULMONARY NODULES IN THORACIC CT IMAGES USING HIERARCHICAL VECTOR QUANTIZATION
Title
3
AUTOMATIC CLASSIFICATION OF INTRACARDIAC MASSES IN ECHOCARDIOGRAPHY BASED ON SPARSE REPRESENTATION
Title
4
CLOTHING COLOR & PATTERN RECOGNITION FOR VISUALLY IMPAIRED PEOPLE
Title
5
TRANSMISSION LINE FAULTS CLASSIFICATION USING WAVELET/FFT TRANSFORM
Title
6
LOSSLESS IMAGE COMPRESSION TECHNIQUE USING COMBINATION OF LZW AND BCH CODES
Title
7
COMPLETE BLOOD CHECK USING IMAGE PROCESSING
Title
8
ROSE HARVESTING SYSTEM USING IMAGE PROCESSING
Title
9
SLEEP DETECTION SYSTEM USING MATLAB IMAGE PROCESSING
Title
10
LIFI COMMUNICATION OF TEXT, AUDIO AND IMAGE
Title
11
AN INTEGRATED VISION-BASED ARCHITECTURE FOR HOME SECURITY SYSTEM
Title
12
AUTOMATIC VEHICLE IDENTIFICATION BY NUMBER PLATE RECOGNITION
Title
13
DIGITAL SWITCHING CONTROLLER FOR DC-DC CONVERTERS
Title
14
A VIRTUAL TOUCH EVENT METHOD USING COLOUR RECOGNITION
Title
15
VISUAL INSPECTION AND CRACK DETECTION OF RAILROAD TRACKS
Title
16
MATLAB AND EMBEDDED BASED SYSTEM FOR FACE RECOGNITION SYSTEM
Title
17
AUDIO/SOUND COMMAND BASED APPLICATION CONTROL (HOME AUTOMATION BASED ON VOICE)
Title
18
AUDIO/SOUND COMMAND BASED APPLICATION CONTROL (HOME AUTOMATION BASED ON VOICE)
Title
19
A DISCRIMINATIVELY TRAINED FULLY CONNECTED CONDITIONAL RANDOM FIELD MODEL FOR BLOOD VESSEL SEGMENTATION IN FUNDUS IMAGES
Title
20
PREDICTING BRADYCARDIA IN PRETERM INFANTS USING POINT PROCESS ANALYSIS OF HEART RATE
Title
21
SMARTPHONE BASED WOUND ASSESSMENT SYSTEM FOR PATIENTS WITH DIABETES
Title
22
AUTOMATED DIAGNOSIS OF GLAUCOMA USING EMPIRICAL WAVELET TRANSFORM AND CORRENTROPY FEATURES EXTRACTED FROM FUNDUS IMAGES
Title
23
HEART RATE VARIABILITY EXTRACTION FROM VIDEO SIGNALS: ICA VS. EVM COMPARISON
Title
24
ICAP: AN INDIVIDUALIZED MODEL COMBINING GAZE PARAMETERS AND IMAGE-BASED FEATURES TO PREDICT RADIOLOGISTS’ DECISIONS WHILE READING MAMMOGRAMS
Title
25
TOWARDS PHOTOPLETHYSMOGRAPHY-BASED ESTIMATION OF INSTANTANEOUS HEART RATE DURING PHYSICAL ACTIVITY
Title
26
A SMART PHONE IMAGE PROCESSING APPLICATION FOR PLANT DISEASE DIAGNOSIS
Title
27
RETINAL DISEASE SCREENING THROUGH LOCAL BINARY PATTERNS
Title
28
AUTOMATED MELANOMA RECOGNITION IN DERMOSCOPY IMAGES VIA VERY DEEP RESIDUAL NETWORKS
Title
29
VIDEO-BASED HEARTBEAT RATE MEASURING METHOD USING BALLISTOCARDIOGRAPHY
Title
30
WALSH–HADAMARD-BASED 3-D STEGANOGRAPHY FOR PROTECTING SENSITIVE INFORMATION IN POINT-OF-CARE
Title

About IEEE MATLAB Projects

IEEE MATLAB Projects | MATLAB Projects on Image Processing

MATLAB combines a desktop environment tuned for iterative analysis and design processes with a programming language that expresses matrix and array mathematics directly.

What can you do with MATLAB?

Using MATLAB, you can:

  • Analyze data
  • Develop algorithms
  • Create models and applications

The language, apps, and built-in math functions enable you to quickly explore multiple approaches to arrive at a solution. MATLAB lets you take your ideas from research to production by deploying to enterprise applications and embedded devices, as well as integrating with Simulink® and Model-Based Design.

Who uses MATLAB?

Millions of engineers and scientists in industry and academia use MATLAB. You can use MATLAB for a range of applications, including deep learning and machine learning, signal processing and communications, image and video processing, control systems, test and measurement, computational finance, and computational biology.
