IEEE Deep Learning Projects | Deep Learning Projects | Bangalore | IEEE Machine Learning Projects | IEEE Python Projects
DHS Informatics provides academic projects based on IEEE deep learning papers, with implementation of the best and latest IEEE papers. Listed below are the 2020 – 2021 best IEEE Python machine learning projects for CSE, ECE, EEE and Mechanical engineering students. To download the abstracts of the IEEE deep learning domain projects, click here.
For further details, call our head office at +91 98866 92401 / 98451 66723; we can send the synopsis and IEEE deep learning project papers based on the student's interest. For more details, please visit our head office and get registered.
We believe in quality service and are committed to our students' full satisfaction.
We have been in this service for more than 15 years, and all our customers are delighted with our service.
Abstract:
Plant diseases affect the growth of their species, so early detection is extremely important. With the arrival of deep learning within machine learning, this research area has shown strong potential for improved precision, and several kinds of deep learning architectures have been used for the identification and classification of plant diseases. Many advanced or modified deep learning architectures are used together with various visualization techniques to detect and differentiate the symptoms of plant diseases. In addition, several performance metrics are used to evaluate these architectures and methods. This review provides a complete summary of the deep learning models used to visualize plant diseases. It also identifies some research gaps that are specific to diagnosing diseases in plants even before their symptoms become obvious.
Abstract:
Parkinson's disease (PD) is one of the most prevalent degenerative diseases and is caused by the loss of neurons that produce dopamine. Magnetic Resonance Imaging (MRI) is capable of capturing changes in the structure of the brain caused by dopamine deficiency in subjects with Parkinson's disease. Early diagnosis of such diseases using computer-aided systems is an area of eminent importance and extensive research. Deep learning models can effectively assist clinicians in PD diagnosis and provide objective patient-group classification in the coming years. In this paper, PD detection is performed using a deep learning algorithm to discriminate between PD and control subjects, a task that is difficult and time-consuming if done manually. According to research, the chance of a cure increases significantly if appropriate steps are taken early, and precious time can be saved if the detection process is carried out by a computer. Using a Convolutional Neural Network (CNN) with the LeNet-5 architecture, MRI data of PD subjects was successfully classified against normal controls.
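Below is a minimal, hedged sketch of a LeNet-5-style binary classifier in Keras of the kind the abstract describes. It assumes MRI slices have already been preprocessed into 32x32 single-channel arrays; the data arrays and training settings are placeholders, not the paper's actual configuration.

```python
# Sketch: LeNet-5-style CNN for binary PD vs. control classification.
# Assumes MRI slices preprocessed into 32x32 grayscale arrays.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lenet5(input_shape=(32, 32, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(6, kernel_size=5, activation='tanh', padding='same'),
        layers.AveragePooling2D(pool_size=2),
        layers.Conv2D(16, kernel_size=5, activation='tanh'),
        layers.AveragePooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dense(120, activation='tanh'),
        layers.Dense(84, activation='tanh'),
        layers.Dense(1, activation='sigmoid'),  # PD vs. normal control
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_lenet5()
model.summary()
# model.fit(x_train, y_train, validation_split=0.2, epochs=20)  # hypothetical arrays
```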
Abstract:
Nowadays, credit cards are becoming more and more widely used for both online and offline transactions, but along with this trend comes more credit card fraud. According to the Nilson Report, the global loss to credit card fraud is expected to reach $35 billion this year, so there is a desperate need for accurate and efficient fraud detection systems. In this paper, we propose a deep-learning-based method to tackle this problem. We employ multiple techniques, including feature engineering, memory compression, mixed precision, and an ensemble loss, to boost the performance of our model. The model is trained and evaluated on the IEEE-CIS fraud dataset provided by Vesta Corporation, consisting of over 1 million records. Experiments show that our model outperforms traditional machine-learning-based methods like Bayes and SVM.
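The sketch below shows one way the mixed-precision aspect could look in Keras: a plain tabular fraud classifier trained under the mixed_float16 policy. The feature count, class weights and data arrays are placeholders; the paper's feature engineering, memory compression and ensemble loss are not reproduced here.

```python
# Sketch: tabular fraud classifier with mixed-precision training enabled.
import tensorflow as tf
from tensorflow.keras import layers, models, mixed_precision

mixed_precision.set_global_policy('mixed_float16')  # memory/speed optimisation

def build_fraud_model(num_features=200):  # placeholder feature count
    model = models.Sequential([
        layers.Input(shape=(num_features,)),
        layers.Dense(512, activation='relu'),
        layers.Dropout(0.3),
        layers.Dense(256, activation='relu'),
        layers.Dropout(0.3),
        # keep the output layer in float32 for numerical stability
        layers.Dense(1, activation='sigmoid', dtype='float32'),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss='binary_crossentropy',
                  metrics=[tf.keras.metrics.AUC(name='auc')])
    return model

model = build_fraud_model()
# model.fit(X_train, y_train, class_weight={0: 1.0, 1: 10.0})  # hypothetical data
```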
Abstract:
Speaker recognition has been one of the most interesting yet challenging problems in the field of machine learning and artificial intelligence. It is used in areas such as human voice authentication for security and identifying a person from a group of speakers. It has been an arduous task to teach a machine the differences between human voices, especially when speakers differ in gender, language, accent, and so on. This paper uses a deep learning approach to build and train two models, an Artificial Neural Network (ANN) and a Convolutional Neural Network (CNN), and compares their results. In the former, the network is fed diverse features extracted from the audio collection, whereas the latter is trained on spectrograms. Finally, a transfer learning approach is deployed on both to obtain a viable output using limited data.
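As an illustration of the ANN branch, the sketch below extracts MFCC features with librosa and feeds them to a small dense network. The file paths, label handling and number of speakers are assumptions; the CNN/spectrogram branch and the transfer learning stage are omitted.

```python
# Sketch: MFCC feature extraction feeding a small dense (ANN) speaker classifier.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

def extract_mfcc(path, n_mfcc=40):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # average over time -> fixed-length vector

def build_ann(n_features=40, n_speakers=10):  # n_speakers is a placeholder
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(256, activation='relu'),
        layers.Dropout(0.3),
        layers.Dense(128, activation='relu'),
        layers.Dense(n_speakers, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# features = np.stack([extract_mfcc(p) for p in wav_paths])  # hypothetical list
# model = build_ann(n_speakers=len(set(labels)))
# model.fit(features, np.array(labels), epochs=30, validation_split=0.2)
```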
Abstract:
India has in recent years witnessed significant tragedies related to crowds. Statistics indicate that over 70 per cent of Indian crowd-related accidents have happened during religious festivities. A devastating humanitarian disaster may occur if crowd safety measures are not enforced, and massive crowds need to be given special attention. Manual crowd control requires extensive human intervention, is more vulnerable to human error, and is time-consuming. In this paper we describe the L&T Smart World AI-based crowd management system implemented during the world's largest gathering, Kumbh Mela 2019 in Prayagraj, using artificial intelligence to handle circumstances that go beyond human capability. The data gathered provides the core of a framework for effective crowd management and evacuation strategies to minimize the risk of overwhelmed and dangerous conditions. Deep learning provides the solution to dense crowd counting and management problems. The crowd analytics system of L&T Smart World succeeded in maintaining the safety of the 23 crore pilgrims who visited during the 50 days of the holy Kumbh Mela in Prayagraj, India, demonstrating the efficacy of the implemented solution.
Abstract:
Artificial intelligence has found its use in various fields during the course of its development, especially in recent years with the enormous increase in available data. Its main task is to assist in making better, faster and more reliable decisions. Artificial intelligence and machine learning are increasingly finding their application in medicine. This is especially true for medical fields that utilize various types of biomedical images and where diagnostic procedures rely on collecting and processing a large number of digital images. The application of machine learning in processing medical images helps with consistency and boosts accuracy in reporting. This paper describes the use of machine learning algorithms to process chest X-ray images in order to support the decision-making process in determining the correct diagnosis. Specifically, the research is focused on using a deep learning algorithm based on a convolutional neural network to build a processing model. This model helps with a classification problem: detecting whether a chest X-ray shows changes consistent with pneumonia, and classifying the X-ray images into two groups depending on the detection results.
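A minimal sketch of such a binary pneumonia/normal chest X-ray classifier is shown below. The directory layout, image size and architecture are illustrative assumptions rather than the exact model used in the paper.

```python
# Sketch: small CNN for binary pneumonia vs. normal chest X-ray classification.
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    'chest_xray/train',            # hypothetical path with NORMAL/ and PNEUMONIA/ folders
    image_size=(150, 150),
    batch_size=32,
    label_mode='binary')

model = models.Sequential([
    layers.Input(shape=(150, 150, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation='relu'), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation='relu'), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(train_ds, epochs=10)
```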
Abstract:
The brain is the controlling center of our body. With the passage of time, newer brain diseases are being discovered. Because of this variability of brain diseases, existing diagnosis and detection systems face challenges and remain an open research problem. Detecting brain diseases at an early stage can make a huge difference in attempting to cure them. In recent years, the use of artificial intelligence (AI) is surging through all spheres of science and, no doubt, is revolutionizing the field of neurology. The application of AI in medical science has made brain disease prediction and detection more accurate and precise. In this study, we present a review of recent machine learning and deep learning approaches for detecting four brain diseases: Alzheimer's disease (AD), brain tumor, epilepsy, and Parkinson's disease. 147 recent articles on these four brain diseases are reviewed, considering diverse machine learning and deep learning approaches, modalities, datasets, and so on. Twenty-two datasets that are used most frequently in the reviewed articles as primary sources of brain disease data are discussed. Moreover, a brief overview of the different feature extraction techniques used in diagnosing brain diseases is provided. Finally, key findings from the reviewed articles are summarized, and a number of major issues related to machine learning/deep learning-based brain disease diagnostic approaches are discussed. Through this study, we aim to find the most accurate techniques for detecting different brain diseases, which can be employed for future improvement.
Abstract:
The novel coronavirus (COVID-19) outbreak has raised a calamitous situation all over the world and has become one of the most acute and severe ailments of the past hundred years. The prevalence of COVID-19 is rapidly rising every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques have proved to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to overview the recently developed systems based on deep learning techniques using different medical imaging modalities such as Computed Tomography (CT) and X-ray. This review specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights into well-known datasets used to train these networks. It also highlights the data partitioning techniques and various performance measures developed by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. The aim of this paper is to facilitate experts (medical or otherwise) and technicians in understanding how deep learning techniques are used in this regard and how they can be further utilized to combat the COVID-19 outbreak.
Abstract:
Collaborative filtering (CF) is one of the most practical approaches in recommendation systems, predicting users' preferences for items based on user-item interaction information. Besides the connections between users and items, social networks among users can provide auxiliary information to improve the performance of recommender systems. Here, we propose an end-to-end deep learning framework that learns latent social features to embed in a CF approach. First, representation learning is employed on the rating matrix to extract the latent social features. Then, a novel deep learning approach based on a cascade tree forest is used in the recommendation process. Experiments on real-world datasets from different domains demonstrate that the proposed Collaborative Deep Forest Learning (CDFL) outperforms state-of-the-art CF recommendation methods.
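The sketch below illustrates the two-stage idea on synthetic data: latent features are learned from the rating matrix and then fed to a tree ensemble that predicts ratings. A single random forest stands in for the paper's cascade deep forest, so treat it as a rough analogue rather than CDFL itself.

```python
# Sketch: latent features from the rating matrix + tree ensemble on (user, item) pairs.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
ratings = rng.integers(0, 6, size=(200, 100)).astype(float)  # users x items, 0 = unrated

# Stage 1: representation learning on the rating matrix.
svd = TruncatedSVD(n_components=16, random_state=0)
user_factors = svd.fit_transform(ratings)          # (200, 16)
item_factors = svd.components_.T                   # (100, 16)

# Stage 2: supervised learning on observed (user, item, rating) triples.
users, items = np.nonzero(ratings)
X = np.hstack([user_factors[users], item_factors[items]])
y = ratings[users, items]

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y)
print('Predicted rating for user 0, item 5:',
      forest.predict(np.hstack([user_factors[0], item_factors[5]]).reshape(1, -1))[0])
```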
Abstract:
Traditional sensor-based fire detection systems cannot raise an alert until the heat actually reaches the sensors. Therefore, there is a clear need for a fast, robust and reliable system that can detect fire at an early stage. We propose a method that detects fire by analyzing videos acquired by surveillance cameras. Recent developments in deep learning have proved highly effective in the field of computer vision, and transfer learning (a deep learning methodology) has emerged as extremely helpful for applications with a scarcity of training data. Using transfer learning, we leveraged deep learning models already trained on the ImageNet dataset. To solve the fire detection problem, we fine-tuned these models using our own curated dataset, which consists of videos downloaded from the internet and our own recorded videos. Specifically, we chose pre-trained InceptionV3 and MobileNetV2 models for transfer learning and compared them. We also show a comparison between transfer learning and full model training on the same dataset. We found that transfer-learned models perform far better than fully trained models when trained on a limited dataset. Our models also outperformed state-of-the-art hand-crafted feature-based fire detection systems.
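A hedged sketch of such a transfer-learning setup with a frozen MobileNetV2 backbone is given below; the image size, dataset path and training schedule are placeholders, and the InceptionV3 variant and fine-tuning stage are omitted.

```python
# Sketch: transfer learning with a pre-trained MobileNetV2 backbone for
# fire / no-fire frame classification.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False  # freeze ImageNet features; fine-tune later if needed

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1] inputs
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation='sigmoid'),     # fire vs. no fire
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='binary_crossentropy', metrics=['accuracy'])

# train_ds = tf.keras.utils.image_dataset_from_directory(
#     'fire_frames/train', image_size=(224, 224), label_mode='binary')  # hypothetical
# model.fit(train_ds, epochs=5)
```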
Abstract:
In imbalanced network traffic, malicious cyber-attacks can often hide in large amounts of normal data. They exhibit a high degree of stealth and obfuscation in cyberspace, making it difficult for a Network Intrusion Detection System (NIDS) to ensure the accuracy and timeliness of detection. This paper studies machine learning and deep learning for intrusion detection in imbalanced network traffic and proposes a novel Difficult Set Sampling Technique (DSSTE) algorithm to tackle the class imbalance problem. First, the Edited Nearest Neighbor (ENN) algorithm is used to divide the imbalanced training set into a difficult set and an easy set. Next, the KMeans algorithm compresses the majority samples in the difficult set to reduce their number. The continuous attributes of the minority samples in the difficult set are then zoomed in and out to synthesize new samples and increase the minority count. Finally, the easy set, the compressed majority of the difficult set, and the minority of the difficult set together with its augmented samples are combined into a new training set. The algorithm reduces the imbalance of the original training set and provides targeted data augmentation for the minority class that needs to be learned. This enables the classifier to learn the class differences better during training and improves classification performance. To verify the proposed method, we conduct experiments on the classic intrusion dataset NSL-KDD and the newer, more comprehensive intrusion dataset CSE-CIC-IDS2018. We use classical classification models: random forest (RF), Support Vector Machine (SVM), XGBoost, Long Short-Term Memory (LSTM), AlexNet, and Mini-VGGNet. Compared with 24 other methods, the experimental results demonstrate that our proposed DSSTE algorithm outperforms them.
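The following is a simplified sketch of the DSSTE idea on synthetic data: a nearest-neighbour consistency check stands in for ENN to split the training set into easy and difficult samples, KMeans compresses the difficult majority, and the difficult minority is augmented by scaling ("zooming") its attributes. The cluster count and zoom factors are illustrative, not the paper's settings.

```python
# Simplified DSSTE-style resampling sketch on synthetic imbalanced data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# 1) Split into easy / difficult with a k-NN consistency check (ENN-style idea).
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
difficult = knn.predict(X) != y                 # disagreeing samples sit near the boundary
X_easy, y_easy = X[~difficult], y[~difficult]
X_diff, y_diff = X[difficult], y[difficult]

maj = X_diff[y_diff == 0]                       # difficult majority
mino = X_diff[y_diff == 1]                      # difficult minority

# 2) Compress the difficult majority with KMeans centroids (if enough samples exist).
if len(maj) > 4:
    kmeans = KMeans(n_clusters=len(maj) // 4, n_init=10, random_state=0).fit(maj)
    maj = kmeans.cluster_centers_

# 3) "Zoom" the difficult minority in and out to synthesize new samples.
synthetic = np.vstack([mino * z for z in (0.9, 1.0, 1.1)])

# 4) Assemble the new, less imbalanced training set.
X_new = np.vstack([X_easy, maj, synthetic])
y_new = np.concatenate([y_easy, np.zeros(len(maj)), np.ones(len(synthetic))])
print(X_new.shape, np.bincount(y_new.astype(int)))
```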
Abstract:
Deep learning techniques have found applications in various domains and are widely used in medical treatment and diagnostics. To diagnose diseases such as pneumonia, chest X-ray images are often examined, and the efficiency of diagnosis can be significantly improved with computer-aided diagnostic systems. Deep learning algorithms are used in this paper for the classification of chest X-ray images to diagnose pneumonia. Deep convolutional generative adversarial networks were trained to generate synthetic images that oversample the dataset so the model performs better. Transfer learning was then applied with convolutional neural networks, utilising VGG16 as the base model for image classification. The model achieved 94.5% accuracy on the validation set. In comparison with naïve models, the accuracy of the proposed model was found to be significantly higher.
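To illustrate the augmentation stage, the sketch below builds a DCGAN-style generator that could synthesize extra chest X-ray images. The latent size, output resolution and layer counts are assumptions, and the discriminator, adversarial training loop and the VGG16 classifier are omitted for brevity.

```python
# Sketch: DCGAN-style generator for synthesizing grayscale X-ray-sized images.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_generator(latent_dim=100):
    return models.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(16 * 16 * 128),
        layers.Reshape((16, 16, 128)),
        layers.Conv2DTranspose(128, 4, strides=2, padding='same'),  # 32x32
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(64, 4, strides=2, padding='same'),   # 64x64
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(1, 4, strides=2, padding='same',
                               activation='tanh'),                  # 128x128 grayscale
    ])

generator = build_generator()
noise = tf.random.normal([4, 100])
fake_images = generator(noise)        # shape (4, 128, 128, 1)
print(fake_images.shape)
```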
Abstract:
Coronavirus (COVID-19) is spreading fast, infecting people through contact in various forms, including droplets from sneezing and coughing. Therefore, detecting infected subjects in an early, quick and cheap manner is urgent. Currently available tests are scarce and limited to people in danger of serious illness. The application of deep learning to chest X-ray images for COVID-19 detection is an attractive approach. However, this technology usually relies on the availability of large labelled datasets, a requirement that is hard to meet in the context of a virus outbreak. To overcome this challenge, a semi-supervised deep learning model using both labelled and unlabelled data is proposed. We develop and test a semi-supervised deep learning framework based on the MixMatch architecture to classify chest X-rays into COVID-19, pneumonia and healthy cases. The presented approach was calibrated using two publicly available datasets. The results show an accuracy increase of around 15% under a low labelled/unlabelled data ratio. This indicates that our semi-supervised framework can help improve performance towards COVID-19 detection when the amount of high-quality labelled data is scarce. We also introduce a semi-supervised deep learning boost coefficient, which is meant to ease the scalability of our approach and performance comparison.
Abstract:
Anomaly detection on attributed networks attracts considerable research interest due to the wide use of attributed networks in modeling a range of complex systems. Recently, deep learning-based anomaly detection methods have shown promising results over shallow approaches, especially on networks with high-dimensional attributes and complex structures. However, existing approaches, which employ a graph autoencoder as their backbone, do not fully exploit the rich information of the network, resulting in suboptimal performance. Furthermore, these methods do not directly target anomaly detection in their learning objective and fail to scale to large networks due to the full-graph training mechanism. To overcome these limitations, in this article we present a novel Contrastive self-supervised Learning framework for Anomaly detection on attributed networks (CoLA for short). Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair, which can capture the relationship between each node and its neighboring substructure in an unsupervised way. Meanwhile, a well-designed graph neural network (GNN)-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure and to measure the agreement of each instance pair with its output scores. The multi-round predicted scores from the contrastive learning model are further used to evaluate the abnormality of each node with statistical estimation. In this way, the learning model is trained with an anomaly detection-aware target. Furthermore, since the input of the GNN module is batches of instance pairs instead of the full network, our framework can adapt to large networks flexibly. Experimental results show that our proposed framework outperforms the state-of-the-art baseline methods on all seven benchmark datasets.
Abstract:
In this paper we detail the construction of a video processing system dedicated to identifying and understanding people's facial expressions. Our approach involves detecting facial landmarks and analysing their positions to identify emotions. The paper describes a system based on three convolutional neural networks and how to combine them to give more accurate results in the field of facial expression recognition. We adapted networks that were initially constructed to work on colour or grayscale images to work with black-and-white images containing facial landmarks. The training, validation and query datasets were also adapted and preprocessed from established computer vision datasets, with the addition of several images acquired by ourselves. We present and comment on our experimental results, pointing out advantages and disadvantages.
Abstract:
Human Activity Recognition (HAR) is a field that infers human activities from raw time-series signals acquired through the embedded sensors of smartphones and wearable devices. It has gained much attention in various smart home environments, especially for continuously monitoring human behaviour in ambient assisted living to provide elderly care and rehabilitation. Such systems follow several operational modules: data acquisition, pre-processing to eliminate noise and distortion, feature extraction, feature selection, and classification. Recently, various state-of-the-art techniques have proposed feature extraction and selection methods classified using traditional machine learning classifiers. However, most of these techniques use rudimentary feature extraction processes that are incapable of recognizing complex activities. With the emergence and advancement of high computational resources, deep learning techniques are widely used in various HAR systems to retrieve features and perform classification efficiently. Thus, this review paper provides a concise overview of the deep learning techniques used in smartphone- and wearable-sensor-based recognition systems. The surveyed techniques are categorized into conventional and hybrid deep learning models and described with their uniqueness, merits, and limitations. The paper also discusses various benchmark datasets used in existing techniques. Finally, the paper lists certain challenges and issues that require future research and improvement.
Abstract:
Distracted drivers contribute to a significant proportion of road accidents all over the world. Activities such as texting on cellphones, eating, and reaching for something in the back of the vehicle lead to drivers not paying attention to the road and may result in traffic accidents. This paper proposes the use of residual neural networks (ResNets) with spatio-temporal three-dimensional (3D) kernels to perform distracted-driver behaviour recognition. Recently, convolutional neural networks (CNNs) with 3D kernels have become an effective tool for action recognition: the 3D kernels extract spatio-temporal features from videos to perform tasks such as human activity recognition. ResNets are a variant of CNNs that utilise skip connections to enable the training of very deep networks. The large number of parameters in 3D ResNets raises the possibility of overfitting, so using a large video dataset is essential to avoid it. This paper examines how different datasets and network depths influence the performance of 3D ResNets. The results are overwhelmingly positive: the findings show a significant positive correlation between a model's accuracy and the network depth. Furthermore, the quality of the dataset greatly determines the model's ability to generalise effectively.
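The sketch below only illustrates the input format such spatio-temporal models expect: 16-frame RGB clips classified by a shallow Conv3D stack. The paper's deep 3D ResNets with skip connections are not reproduced, and the class count is a placeholder.

```python
# Sketch: small spatio-temporal 3D CNN for clip-level driver-behaviour classification.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # placeholder number of driver behaviours

model = models.Sequential([
    layers.Input(shape=(16, 112, 112, 3)),      # 16-frame RGB clips
    layers.Conv3D(32, kernel_size=3, padding='same', activation='relu'),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),    # pool spatially first
    layers.Conv3D(64, kernel_size=3, padding='same', activation='relu'),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(128, kernel_size=3, padding='same', activation='relu'),
    layers.GlobalAveragePooling3D(),
    layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
# model.fit(clip_dataset, epochs=20)  # clips shaped (batch, 16, 112, 112, 3)
```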
Abstract:
Nowadays, large amounts of sensitive data traverse different devices and communication channels, and with continuous innovation in computer network technology, security problems are appearing ever more frequently and have become difficult to ignore, because intruders discover new attacks every day. It is therefore necessary to protect against these attacks, and Intrusion Detection Systems (IDSs) play an important role in network security; hence the need to develop advanced IDSs. Traditional intrusion detection technologies suffer from a high false-positive rate, low accuracy, and many other problems, and detecting new attacks while analysing large amounts of data is one of the main challenges for an IDS. To improve security, people now adopt advanced technologies for protection. Deep learning is a rising area that deals with large-scale data in a promising way, and various complex applications have been accomplished with it. Deep learning is a broad field of machine learning based on a hierarchical concept that teaches a computer to do what humans do naturally: learn by observation. Such models are composed of multiple layers of neurons, called hidden layers, which help produce the output. Deep learning can handle broad-scale data with efficient training time and has shown good accuracy in this field, although there are some limitations to using it in an IDS. The proposed work discusses how deep learning is used in intrusion detection systems, which deep learning algorithms are applied to enhance performance, and their features, along with different approaches, limitations, and future directions.
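As a concrete illustration, the sketch below builds a deep classifier with several hidden layers for tabular intrusion features such as those in NSL-KDD. The feature count, class count and training call are assumptions rather than the configuration of any specific system surveyed.

```python
# Sketch: multi-hidden-layer classifier for network intrusion detection on
# tabular flow features (e.g. the 41 NSL-KDD features).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ids_model(num_features=41, num_classes=5):  # placeholders
    model = models.Sequential([
        layers.Input(shape=(num_features,)),
        layers.Dense(128, activation='relu'),   # hidden layer 1
        layers.Dropout(0.3),
        layers.Dense(64, activation='relu'),    # hidden layer 2
        layers.Dense(32, activation='relu'),    # hidden layer 3
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_ids_model()
# X_train: standardised numeric features; y_train: attack-category labels
# model.fit(X_train, y_train, epochs=30, batch_size=256, validation_split=0.1)
```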
Abstract:
Accurate breast cancer detection using automated algorithms remains an open problem in the literature. Although a plethora of work has tried to address this issue, an exact solution is yet to be found. The problem is further exacerbated by the fact that most of the existing datasets are imbalanced, i.e., the number of instances of a particular class far exceeds that of the others. In this paper, we propose a framework based on the notion of transfer learning to address this issue, focusing our efforts on histopathological and imbalanced image classification. We use the popular VGG-19 as the base model and complement it with several state-of-the-art techniques to improve the overall performance of the system. With the ImageNet dataset taken as the source domain, we apply the learned knowledge in the target domain consisting of histopathological images. With experiments performed on a large-scale dataset consisting of 277,524 images, we show that the framework proposed in this paper gives superior performance compared with that available in the existing literature. Through numerical simulations conducted on a supercomputer, we also present guidelines for work in transfer learning and imbalanced image classification.
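A hedged sketch of VGG-19 transfer learning with class weighting to counter the imbalance is shown below; the patch size, head layers and weight ratio are illustrative, not the paper's exact configuration.

```python
# Sketch: VGG-19 backbone (ImageNet source domain) with class-weighted training
# for imbalanced histopathology patches.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG19(include_top=False, weights='imagenet',
                                   input_shape=(96, 96, 3))
base.trainable = False   # reuse source-domain features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),   # benign vs. malignant patch
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.AUC(name='auc')])

# Apply tf.keras.applications.vgg19.preprocess_input in the data pipeline.
# Class weights penalise mistakes on the rarer positive class more heavily.
class_weight = {0: 1.0, 1: 2.5}   # hypothetical ratio; derive from label counts
# model.fit(train_ds, validation_data=val_ds, epochs=10, class_weight=class_weight)
```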
Abstract:
Machine-learning-assisted diagnosis of acromegaly from facial photographs has been proved feasible in recent years. According to our previous research, facial and limb changes exist in patients with acromegaly at an early stage. We aimed to facilitate early self-screening for acromegaly from hand photographs by using a deep learning approach. In this study, a dataset containing hand photographs of 635 acromegaly patients and 192 normal people was used to train a Deep Convolutional Neural Network (DCNN). We augmented these raw images. The prediction is performed in an end-to-end paradigm without manual pre-processing, from the input photograph to the final prediction. The trained models were evaluated on a separate dataset to validate their effectiveness. Different kinds of advanced DCNN architectures were explored in this novel task, and they showed significant performance compared with the results from human doctors specialized in pituitary adenoma. We further used heat maps to provide visual explanations of how the DCNN diagnosed acromegaly. The final result of our experiment showed a sensitivity of 0.983, a specificity of 0.920, a PPV of 0.966, an NPV of 0.958 and an F1-score of 0.974. In our method, the sensitivity was higher than the doctors' predictions, which indicates that our method could effectively help people detect acromegaly by themselves. Furthermore, our algorithm paid more attention to the fingers and joints on which human doctors focus. This is the first study to investigate whether it is possible to detect acromegaly from hand photographs by machine learning and to compare the result with human doctors specialized in pituitary adenoma. The study provides an easy-to-use tool for early self-screening of acromegaly for people without medical knowledge, so that acromegaly patients can get more timely treatment.
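Grad-CAM is one common way to produce the kind of heat-map explanation described above; the sketch below computes such a map for a stand-in ImageNet backbone. The network, layer name and random input image are placeholders for the paper's trained acromegaly model and the actual hand photographs.

```python
# Sketch: Grad-CAM-style heat map for a CNN prediction.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights='imagenet')  # stand-in network
last_conv = 'Conv_1'  # final conv layer of this particular backbone

def grad_cam(img_batch):
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(last_conv).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch)
        class_index = int(tf.argmax(preds[0]))       # explain the top prediction
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)            # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # global-average-pool gradients
    cam = tf.einsum('bijc,bc->bij', conv_out, weights)  # weighted sum of maps
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalised heat map

img = np.random.rand(1, 224, 224, 3).astype('float32') * 255  # placeholder image
heatmap = grad_cam(tf.keras.applications.mobilenet_v2.preprocess_input(img))
print(heatmap.shape)   # (1, 7, 7): upsample and overlay on the hand photograph
```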
Abstract:
Edge nodes are crucial for defending against multitudes of cyber attacks on Internet-of-Things endpoints and are set to become part of a multi-billion-dollar industry. The resource constraints in this novel network infrastructure tier constrain the deployment of existing Network Intrusion Detection Systems with Deep Learning Models (DLMs). We address this issue by developing a novel light, fast and accurate 'Edge-Detect' model, which detects Distributed Denial of Service attacks on edge nodes using DLM techniques. Our model can work within resource restrictions, i.e. low power, memory and processing capabilities, to produce accurate results at a meaningful pace. It is built by creating layers of Long Short-Term Memory or Gated Recurrent Unit based cells, which are known for their excellent representation of sequential data. We designed a practical data science pipeline with recurrent neural networks to learn from network packet behaviour and identify whether it is normal or attack-oriented. The model is evaluated by deployment on an actual edge node, represented by a Raspberry Pi, using a current cybersecurity dataset (UNSW2015). Our results demonstrate that, in comparison with conventional DLM techniques, our model maintains a high testing accuracy of ~99% even with lower resource utilization in terms of CPU and memory. In addition, it is nearly 3 times smaller in size than the state-of-the-art model and yet requires a much lower testing time.
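A compact LSTM classifier in the spirit of Edge-Detect is sketched below: fixed-length windows of packet/flow features are classified as normal or attack. The window length, feature count and layer sizes are assumptions chosen to stay small for edge hardware.

```python
# Sketch: small stacked-LSTM classifier over windows of per-record network features.
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES = 25, 8   # hypothetical window of 25 records, 8 features each

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    layers.LSTM(32, return_sequences=True),   # small layers suit edge hardware
    layers.LSTM(16),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid'),    # normal vs. DDoS
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
# model.fit(window_features, window_labels, epochs=15, batch_size=128)
```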
Abstract:
COVID-19 is an infectious disease that has invaded the world since 2019, starting from China. It is caused by a new member of the coronavirus family that attacks the respiratory system, leading to fever, cough, general tiredness and, in the worst cases, death. The disease is commonly detected by RT-PCR tests, which are time-consuming and relatively expensive. The need for a faster, cheaper, and more precise diagnostic tool raised the demand for technology and artificial intelligence for this purpose. This study aims to build a robust deep learning algorithm using convolutional neural networks (CNNs) that is capable of classifying chest X-ray images into COVID-19, viral pneumonia, and normal cases. X-ray is a safe and inexpensive imaging system that is available at all hospitals and healthcare centers. A novel CNN model built from scratch (COV-X) is introduced and has shown 94% accuracy in the classification task. VGG16, VGG19, RESNET50, XCEPTION, and MOBILENET are pre-trained models that have shown accuracies of 95%, 95%, 33%, 65%, and 77%, respectively. The findings prove that deep learning is an effective technique for early detection of COVID-19; it provides automatic detection with high reliability to help healthcare professionals and prevent the pandemic from spreading further.
Abstract:
The COVID-19 pandemic is devastatingly affecting the health and well-being of the worldwide population. A basic step in the battle against it resides in effective screening of infected patients, with one of the key screening approaches being radiological imaging based on chest radiography. Faced with this challenge, various artificial intelligence (AI) frameworks, mostly based on deep learning, have been proposed, and the results have been getting better and very promising as the precision of positive-case recognition is constantly refined. In the light of previous work on automated X-ray image screening, we train several deep convolutional networks for the classification of chest pathologies into normal, pneumonia, and COVID-19. We use three open-source datasets and one private dataset for the validation of our findings. Unfortunately, data scarcity remains a big challenge hindering COVID-19 automatic recognition research; in our case, we used a total of 518 COVID-19-positive X-ray images. We evaluate different deep neural architectures for COVID-19 recognition.
How to handle viva voce part-1
Tell me about your project viva voce part-2