2019-2020 IEEE Machine Learning Projects / Artificial Intelligence Projects

IEEE MACHINE LEARNING PROJECTS / ARTIFICIAL INTELLIGENCE PROJECTS IN BANGALORE

For outstation students, we conduct online project classes covering both the technical concepts and the coding, using net-meeting software.

For details, Call: 9886692401/9845166723

DHS Informatics provides the latest 2019-2020 IEEE Machine Learning / Artificial Intelligence projects for final year engineering students. DHS Informatics trains all students in Machine Learning / Artificial Intelligence project techniques so that they develop their project with a clear idea of what they need to submit in college to secure good marks. DHS Informatics also offers placement training in Bangalore through its OJT (On Job Training) program; job seekers as well as final year college students can join this placement training program and pursue job opportunities in their dream IT companies. We have been providing IEEE Machine Learning / Artificial Intelligence projects for B.E / B.TECH, M.TECH, MCA, BCA and DIPLOMA students for more than two decades.

Abstract:

In recent years, people have been paying more and more attention to air quality because it directly affects their health and daily life. Effective air quality prediction has therefore become a hot research issue. The task, however, faces many challenges, such as the instability of data sources and the variation of pollutant concentration over time. To address these problems, we propose an improved air quality prediction method based on the LightGBM model to predict the PM2.5 concentration at the 35 air quality monitoring stations in Beijing over the next 24 hours. We handle the high-dimensional, large-scale data by employing the LightGBM model and, innovatively, take the forecast data as one of the data sources for predicting air quality. By exploring the forecast-data features, we improve the prediction accuracy while making full use of the available spatial data. Given the scarcity of data, we employ a sliding-window mechanism to mine high-dimensional temporal features, increasing the number of training samples to the millions. We compare the predicted data with the actual data collected at the 35 air quality monitoring stations in Beijing. The experimental results show that the proposed method is superior to other schemes and demonstrate the advantage of integrating the forecast data and building up the high-dimensional statistical analysis.
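As a rough illustration of the sliding-window idea, the Python sketch below builds lagged PM2.5 features (plus an optional forecast column) and fits a LightGBM regressor; the column names, window length, and hyper-parameters are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch: sliding-window features + LightGBM for next-hour PM2.5.
# Column names, window length, and hyper-parameters are illustrative assumptions.
import numpy as np
import pandas as pd
import lightgbm as lgb

def make_window_features(df, target="pm25", window=24):
    """Turn an hourly series into (lagged-feature, target) training rows."""
    feats = {}
    for lag in range(1, window + 1):
        feats[f"{target}_lag{lag}"] = df[target].shift(lag)
    # Example of a forecast-style exogenous feature, if available.
    if "forecast_pm25" in df.columns:
        feats["forecast_pm25"] = df["forecast_pm25"]
    X = pd.DataFrame(feats)
    y = df[target]
    mask = X.notna().all(axis=1)
    return X[mask], y[mask]

# Toy hourly series standing in for one station's PM2.5 record.
df = pd.DataFrame({"pm25": 80 + np.cumsum(np.random.randn(2000))})
X, y = make_window_features(df)

model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05, num_leaves=63)
model.fit(X.iloc[:-200], y.iloc[:-200])
pred = model.predict(X.iloc[-200:])
print("MAE:", np.mean(np.abs(pred - y.iloc[-200:].values)))
```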

Abstract:

The health care field has a vast amount of data, and certain techniques are used to process it; data mining is one of the techniques most often used. Heart disease is the leading cause of death worldwide. This system predicts the likelihood of heart disease and reports the chance of its occurrence as a percentage. The datasets used are organized by medical parameters, and the system evaluates those parameters using data mining classification techniques. The datasets are processed in Python using two machine learning algorithms, the Decision Tree algorithm and the Naive Bayes algorithm, and the system reports which of the two achieves the better accuracy for heart disease prediction.
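The comparison described above can be reproduced in outline with scikit-learn; the sketch below assumes a heart.csv file of medical parameters with a binary "target" column, which is an illustrative assumption rather than the project's exact dataset.

```python
# Minimal sketch comparing Decision Tree and Naive Bayes on a heart-disease
# style dataset; the CSV name and column layout are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

df = pd.read_csv("heart.csv")              # medical parameters + 'target' label
X, y = df.drop(columns=["target"]), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, clf in [("Decision Tree", DecisionTreeClassifier(max_depth=5)),
                  ("Naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: {acc:.1%} accuracy on held-out patients")
```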

Abstract:

Collaborative filtering (CF) algorithms have been widely used to build recommender systems because of their distinguishing capability of sharing collective wisdom and experience. However, they may easily fall into the trap of the Matthew effect, which tends to recommend popular items, so that less popular items become increasingly less popular. Under this circumstance, most of the items in the recommendation list are already familiar to users, and performance therefore seriously degrades when it comes to finding cold items, i.e., new items and niche items. To address this issue, this paper first conducts a user survey on online shopping habits in China, based on which a novel recommendation algorithm termed innovator-based CF is proposed that can recommend cold items to users by introducing the concept of innovators. Specifically, innovators are a special subset of users who can discover cold items without the help of the recommender system. Cold items can therefore be captured in the recommendation list via innovators, achieving a balance between serendipity and accuracy. To confirm the effectiveness of our algorithm, extensive experiments are conducted on the dataset provided by Alibaba Group in the Ali Mobile Recommendation Algorithm Competition, which was collected from a real e-commerce environment and covers massive user behavior log data.
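One plausible way to operationalize "innovators" (users who repeatedly interact with items before those items become popular) is sketched below in Python/pandas; the thresholds, column names, and the behaviour-log file are illustrative assumptions, not the algorithm from the paper.

```python
# Rough sketch: mark users who interact with items while those items are still
# unpopular, then surface the cold items those innovators touched.
import pandas as pd

log = pd.read_csv("user_behavior_log.csv")   # columns: user_id, item_id, ts
log = log.sort_values("ts")

# Rank of each interaction within its item's history (0 = first ever).
log["order"] = log.groupby("item_id").cumcount()

# Innovators: users whose interactions are frequently among an item's first K.
K, MIN_EARLY = 5, 3
early = log[log["order"] < K]
early_counts = early["user_id"].value_counts()
innovators = set(early_counts[early_counts >= MIN_EARLY].index)

# Cold items: few interactions overall, but touched by at least one innovator.
counts = log["item_id"].value_counts()
cold = set(counts[counts < 20].index)
cold_via_innovators = set(log[log["user_id"].isin(innovators)]["item_id"]) & cold
print(f"{len(cold_via_innovators)} cold items reachable through innovators")
```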

Abstract:

In neighborhood-based collaborative filtering, the accuracy of the similarity calculation directly determines the quality of the recommendation algorithm. The traditional similarity measure only considers the influence of items co-rated by users and ignores the attribute characteristics of the items users have rated. Low-precision similarity metrics reduce the performance of recommender systems when the dataset is extremely sparse. To solve these problems, this paper proposes a similarity measure model that considers users' preferences for item attributes. The model fully considers the user's preferences for item attributes, the co-rated items, and the number of co-rated items. It establishes more connections between users and items, so as to mine user interests effectively and fit real applications better. The experimental results show that the proposed model is superior to other comparison methods in accuracy and diversity, and effectively improves the performance of the recommendation algorithm.
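The sketch below shows one way such a measure could blend agreement on co-rated items with agreement on item-attribute preferences; the weighting scheme and shrinkage term are assumptions for illustration, not the paper's exact formula.

```python
# Sketch of a similarity that blends co-rated agreement with agreement on
# item-attribute preferences; weighting and shrinkage are assumptions.
import numpy as np

def attribute_profile(ratings, item_attrs):
    """Rating-weighted average of the attribute vectors of rated items."""
    mask = ratings > 0
    if not mask.any():
        return np.zeros(item_attrs.shape[1])
    return (ratings[mask] @ item_attrs[mask]) / ratings[mask].sum()

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b) / denom

def similarity(r_u, r_v, item_attrs, alpha=0.5):
    co = (r_u > 0) & (r_v > 0)                       # co-rated items
    rating_sim = cosine(r_u[co], r_v[co]) if co.any() else 0.0
    attr_sim = cosine(attribute_profile(r_u, item_attrs),
                      attribute_profile(r_v, item_attrs))
    shrink = co.sum() / (co.sum() + 5)               # trust more co-ratings
    return alpha * shrink * rating_sim + (1 - alpha) * attr_sim

# Toy data: 6 items described by 3 binary attributes (e.g., genres).
attrs = np.array([[1,0,0],[1,0,0],[0,1,0],[0,1,1],[0,0,1],[1,1,0]], dtype=float)
u = np.array([5, 4, 0, 0, 0, 3], dtype=float)
v = np.array([4, 0, 0, 0, 0, 5], dtype=float)
print("sim(u, v) =", round(similarity(u, v, attrs), 3))
```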

Abstract:

We develop a novel framework, named l-injection, to address the sparsity problem of recommender systems. By carefully injecting low values into a selected set of unrated user-item pairs in a user-item matrix, we demonstrate that the top-N recommendation accuracies of various collaborative filtering (CF) techniques can be significantly and consistently improved. We first adopt the notion of pre-use preferences of users toward the vast number of unrated items. Using this notion, we identify uninteresting items that have not been rated yet but are likely to receive low ratings from users, and selectively impute them as low values. As our approach is method-agnostic, it can be easily applied to a variety of CF algorithms. Through comprehensive experiments with three real-life datasets (Movielens, Ciao, and Watcha), we demonstrate that our solution consistently and universally enhances the accuracy of existing CF algorithms (e.g., item-based CF, SVD-based CF, and SVD++) by 2.5 to 5 times on average. Furthermore, our solution improves the running time of those CF methods by 1.2 to 2.3 times when its setting produces the best accuracy.
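A minimal sketch of the imputation step is given below, using item popularity as a stand-in for the paper's pre-use preference model (an assumption made purely for illustration); any CF method can then be trained on the densified matrix.

```python
# Sketch of the l-injection idea: guess which unrated items a user is unlikely
# to even consider and impute a low value there before running any CF method.
import numpy as np

def inject_low_values(R, low_value=1.0, quantile=0.3):
    """R: user-item rating matrix with 0 = unrated. Returns a densified copy."""
    R_out = R.astype(float).copy()
    item_popularity = (R > 0).sum(axis=0)            # crude pre-use proxy
    for u in range(R.shape[0]):
        unrated = np.where(R[u] == 0)[0]
        if unrated.size == 0:
            continue
        # Items in the least-popular quantile are treated as "uninteresting".
        cutoff = np.quantile(item_popularity[unrated], quantile)
        uninteresting = unrated[item_popularity[unrated] <= cutoff]
        R_out[u, uninteresting] = low_value          # impute a low rating
    return R_out

R = np.array([[5, 0, 0, 4, 0],
              [0, 3, 0, 5, 0],
              [4, 0, 2, 0, 0]])
print(inject_low_values(R))
# Item-based CF, SVD, SVD++ etc. can then be trained on the imputed matrix.
```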

Abstract:

Personalized recommendation is crucial to help users find pertinent information. It often relies on a large collection of user data, in particular users' online activity (e.g., tagging/rating/checking-in) on social media, to mine user preferences. However, releasing such user activity data makes users vulnerable to inference attacks, as private data (e.g., gender) can often be inferred from the activity data. In this paper, we propose PrivRank, a customizable and continuous privacy-preserving social media data publishing framework that protects users against inference attacks while enabling personalized ranking-based recommendation. Its key idea is to continuously obfuscate user activity data such that the privacy leakage of user-specified private data is minimized under a given data distortion budget, which bounds the ranking loss incurred by the obfuscation process in order to preserve the utility of the data for recommendation. An empirical evaluation on both synthetic and real-world datasets shows that our framework can efficiently provide effective and continuous protection of user-specified private data, while still preserving the utility of the obfuscated data for personalized ranking-based recommendation. Compared to state-of-the-art approaches, PrivRank achieves both better privacy protection and higher utility in all the ranking-based recommendation use cases we tested.
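The privacy/utility trade-off that PrivRank formalizes can be illustrated with a toy experiment: perturb a budgeted fraction of each user's activity vector and check how well a simple classifier can still infer a private attribute. The sketch below is only that toy illustration on synthetic data, not the PrivRank obfuscation algorithm.

```python
# Toy illustration (not PrivRank itself): flip a budgeted fraction of each
# user's activity bits and measure how well gender can still be inferred.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users, n_items = 1000, 50
gender = rng.integers(0, 2, n_users)
# Synthetic activity: the first 25 items are weakly correlated with gender.
signal = 0.2 + 0.2 * gender[:, None] * (np.arange(n_items) < 25)
activity = (rng.random((n_users, n_items)) < signal).astype(float)

def obfuscate(X, budget=0.2):
    """Flip a 'budget' fraction of entries per user (the distortion budget)."""
    flips = rng.random(X.shape) < budget
    return np.abs(X - flips.astype(float))

for budget in (0.0, 0.1, 0.3):
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          obfuscate(activity, budget), gender, cv=5).mean()
    print(f"budget={budget:.1f}  gender-inference accuracy={acc:.2f}")
```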

Abstract:

Stream clustering methods have been repeatedly used for spam filtering in order to categorize input messages/tweets into spam and non-spam clusters. These methods assume each cluster contains a number of neighboring small (micro) clusters, where each micro cluster has a symmetric distribution. Nonetheless, this assumption is not necessarily correct, and big micro clusters may have asymmetric distributions. To enhance the assignment accuracy of former methods in their online phase, we suggest replacing the Euclidean distance with a set of classifiers in order to assign incoming samples to the most relevant micro cluster with an arbitrary distribution. Here, a set of incremental Naïve Bayes (INB) classifiers is trained for micro clusters whose population exceeds a threshold. These INBs can capture both the mean and the boundary of micro clusters, whereas the Euclidean distance only considers the mean of a cluster and is inaccurate for asymmetric big micro clusters. In this paper, DenStream is extended with the proposed framework, called here INB-DenStream. To show the effectiveness of INB-DenStream, state-of-the-art methods such as DenStream, StreamKM++, and CluStream were applied to the Twitter datasets and their performance was measured in terms of purity, general precision, general recall, F1 measure, parameter sensitivity, and computational complexity. The results show the superiority of our method over the rivals on almost all the datasets.
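The assignment idea can be sketched with scikit-learn's incremental GaussianNB (partial_fit): once micro-clusters have accumulated enough members, the classifier decides membership instead of the raw Euclidean distance. The batch size and threshold below are assumptions, and DenStream's own maintenance of micro-clusters is not shown.

```python
# Sketch: incremental Naive Bayes assignment for streaming samples, falling
# back to Euclidean distance until every micro-cluster has been modelled.
import numpy as np
from sklearn.naive_bayes import GaussianNB

class INBAssigner:
    """Assign streaming samples to micro-clusters with an incremental NB."""
    def __init__(self, n_clusters, batch=50):
        self.batch = batch
        self.buffers = [[] for _ in range(n_clusters)]
        self.trained = np.zeros(n_clusters, dtype=bool)
        self.nb = GaussianNB()
        self.classes = np.arange(n_clusters)

    def update(self, x, cluster_id):
        """Buffer the sample; refresh the INB once the buffer is big enough."""
        buf = self.buffers[cluster_id]
        buf.append(x)
        if len(buf) >= self.batch:
            X = np.vstack(buf)
            self.nb.partial_fit(X, np.full(len(buf), cluster_id),
                                classes=self.classes)
            self.trained[cluster_id] = True
            buf.clear()

    def assign(self, x, centers):
        if self.trained.all():          # every micro-cluster has an INB model
            return int(self.nb.predict(x.reshape(1, -1))[0])
        # Otherwise fall back to the plain Euclidean-distance rule.
        return int(np.argmin(np.linalg.norm(centers - x, axis=1)))
```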

Abstract:

Collaborative filtering plays an important role in promoting the service recommendation ecosystem, and matrix decomposition has proven to be one of the most effective recommendation techniques. However, the traditional collaborative filtering algorithm has great shortcomings in recommending cold-start items; in particular, newly emerging items are largely ignored. This not only has a very bad impact on the uptake of those items, but also greatly reduces the diversity of the recommendation system. The rise of mobile devices has brought a large number of mobile applications, and these emerging applications need to be promoted in order to maintain the robustness of the application ecosystem. To solve this problem, we propose a method that combines the attribute information of items with the historical rating matrix to predict the potential preferences of users; it incorporates the attribute and time information into a matrix decomposition model. Testing our method on the Movielens dataset and a crawled JD dataset shows that, compared with the baseline method, the proposed method achieves a significant improvement in recommendation accuracy. This method is therefore an effective way to solve the cold-start problem for new items.
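A minimal sketch of the underlying idea is given below: an item's latent vector is the sum of a free embedding and the embeddings of its attributes, so a brand-new item still gets a usable vector from its attributes alone. The learning rate, regularization, and update rule are illustrative assumptions, and the time information is omitted.

```python
# Sketch of matrix factorization with attribute-augmented item vectors.
import numpy as np

def train_mf_with_attrs(R, item_attrs, k=16, lr=0.01, reg=0.05, epochs=30):
    """R: (users x items) with 0 = missing; item_attrs: (items x n_attrs) 0/1."""
    rng = np.random.default_rng(0)
    n_u, n_i = R.shape
    n_a = item_attrs.shape[1]
    P = rng.normal(0, 0.1, (n_u, k))         # user factors
    Q = rng.normal(0, 0.1, (n_i, k))         # free item factors
    A = rng.normal(0, 0.1, (n_a, k))         # attribute factors
    users, items = np.nonzero(R)
    for _ in range(epochs):
        for u, i in zip(users, items):
            q = Q[i] + item_attrs[i] @ A      # attribute-augmented item vector
            err = R[u, i] - P[u] @ q
            P[u] += lr * (err * q - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
            A += lr * (err * np.outer(item_attrs[i], P[u]) - reg * A)
    return P, Q, A

def predict_cold(P, A, attrs_new, u):
    """Score a never-rated item for user u purely from its attributes."""
    return P[u] @ (attrs_new @ A)

R = np.array([[5, 3, 0], [4, 0, 0], [0, 2, 5]], dtype=float)
attrs = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
P, Q, A = train_mf_with_attrs(R, attrs)
print("cold-item score:", predict_cold(P, A, np.array([0.0, 1.0]), u=0))
```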

Abstract:

Due to the significant air pollution problem, monitoring and prediction of air quality have become increasingly necessary. To provide real-time fine-grained air quality monitoring and prediction in urban areas, we have established our own Internet-of-Things-based sensing system at Peking University. Because of the energy constraints of the sensors, it is preferred that the sensors wake up alternately in an asynchronous pattern, which leads to a sparse sensing dataset. In this paper, we propose a novel approach to predict real-time fine-grained air quality based on asynchronous sensing. The sparse dataset and the spatial-temporal-meteorological relations are modeled in a correlation graph, on which the prediction procedures are carefully designed. The advantage of the proposed solution over existing ones is evaluated on the dataset collected by our air quality monitoring system.
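The flavour of correlation-based prediction over asynchronous readings can be sketched as a distance- and time-weighted interpolation over neighbouring stations; the decay constants and data layout below are assumptions, and the meteorological edges of the paper's graph are omitted.

```python
# Sketch: estimate a sleeping station's reading from spatially close stations
# and its own recent history, weighted by distance and time gap.
import numpy as np

def estimate(target_idx, t, readings, coords, tau_space=2.0, tau_time=3.0):
    """readings: dict {(station, hour): pm25}; coords: (n_stations, 2) array."""
    num, den = 0.0, 0.0
    for (s, h), value in readings.items():
        if s == target_idx and h == t:
            continue                                   # that's the unknown
        d_space = np.linalg.norm(coords[s] - coords[target_idx])
        d_time = abs(h - t)
        w = np.exp(-d_space / tau_space) * np.exp(-d_time / tau_time)
        num += w * value
        den += w
    return num / den if den > 0 else np.nan

coords = np.array([[0, 0], [1, 0], [0, 2], [5, 5]], dtype=float)
readings = {(1, 10): 80.0, (2, 10): 95.0, (0, 8): 70.0, (3, 10): 160.0}
print("station 0 @ hour 10 ~", round(estimate(0, 10, readings, coords), 1))
```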

Abstract:

Spam has become the platform of choice used by cyber-criminals to spread malicious payloads such as viruses and trojans. In this paper, we consider the problem of early detection of spam campaigns. Collaborative spam detection techniques can deal with large-scale email data contributed by multiple sources; however, they have the well-known problem of requiring disclosure of email content. Distance-preserving hashes are one of the common solutions used for preserving the privacy of email content while enabling message classification for spam detection. However, distance-preserving hashes are not scalable, which makes large-scale collaborative solutions difficult to implement. As a solution, we propose Spamdoop, a Big Data privacy-preserving collaborative spam detection platform built on top of a standard MapReduce facility. Spamdoop uses a highly parallel encoding technique that enables the detection of spam campaigns in competitive time. We evaluate our system's performance using a huge synthetic spam base and show that our technique performs favorably against the creation and delivery overhead of current spam generation tools.
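A toy MapReduce-style sketch of campaign grouping is shown below: a mapper emits an order-insensitive digest per message and a reducer counts how many messages share each digest. The shingle-based digest is an assumption for illustration and is not Spamdoop's actual encoding.

```python
# Toy MapReduce-style campaign grouping via per-message digests.
from collections import defaultdict
import hashlib
import re

def encode(body, shingle=4):
    """Digest built from the message's sorted word shingles (near-dup tolerant)."""
    words = re.findall(r"[a-z0-9]+", body.lower())
    shingles = sorted({" ".join(words[i:i + shingle])
                       for i in range(max(1, len(words) - shingle + 1))})
    return hashlib.md5("|".join(shingles[:20]).encode()).hexdigest()

def map_phase(messages):
    for msg_id, body in messages:
        yield encode(body), msg_id                 # (key, value) pairs

def reduce_phase(pairs):
    groups = defaultdict(list)
    for digest, msg_id in pairs:                   # shuffle + reduce
        groups[digest].append(msg_id)
    return {d: ids for d, ids in groups.items() if len(ids) > 1}  # campaigns

msgs = [(1, "Cheap pills buy now limited offer"),
        (2, "Cheap pills buy now limited offer !!!"),
        (3, "Meeting moved to 3pm tomorrow")]
print(reduce_phase(map_phase(msgs)))
```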

Abstract:

Serendipitous drug usage refers to the unexpected relief of comorbid diseases or symptoms when taking medication for a different known indication. Historically, serendipity has contributed significantly to identifying many new drug indications. If patient-reported serendipitous drug usage in social media could be computationally identified, it could help generate and validate drug repositioning hypotheses. We investigated deep neural network models for mining serendipitous drug usage from social media. We used the word2vec algorithm to construct word-embedding features from drug reviews posted in a WebMD patient forum. We adapted and redesigned the convolutional neural network, long short-term memory network, and convolutional long short-term memory network by adding contextual information extracted from drug-review posts, information-filtering tools, medical ontology, and medical knowledge. We trained, tuned, and evaluated our models with a gold-standard dataset of 15714 sentences (447 [2.8%] describing serendipitous drug usage). Additionally, we compared our deep neural networks to support vector machine, random forest, and AdaBoost.M1 algorithms. Context information helped to reduce the false-positive rate of deep neural network models. If we used an extremely imbalanced dataset with limited instances of serendipitous drug usage, deep neural network models did not outperform other machine-learning models with n-gram and context features. However, deep neural network models could more effectively use word embedding in feature construction, an advantage that makes them worthy of further investigation. Finally, we implemented natural-language processing and machine-learning methods in a web-based application to help scientists and software developers mine social media for serendipitous drug usage.
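The classical baselines mentioned above (n-gram plus context features with SVM, random forest, and AdaBoost.M1) can be sketched with scikit-learn as below; the file name, context column, and labels are assumptions, and the deep models themselves are not shown.

```python
# Sketch of the classical baselines: word n-gram features plus an extra
# context feature, classified with SVM, random forest and AdaBoost.
import pandas as pd
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("drug_review_sentences.csv")   # sentence, mentions_symptom, label
tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
X_text = tfidf.fit_transform(df["sentence"])
X_ctx = csr_matrix(df[["mentions_symptom"]].values.astype(float))
X = hstack([X_text, X_ctx]).tocsr()
y = df["label"].values                          # 1 = serendipitous usage

for name, clf in [("SVM", LinearSVC()),
                  ("Random Forest", RandomForestClassifier(n_estimators=200)),
                  ("AdaBoost.M1", AdaBoostClassifier(n_estimators=200))]:
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: F1 = {f1:.3f}")
```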

Abstract:

The intelligence of Smart Cities (SC) is represented by their ability to collect, manage, integrate, analyze, and mine multi-source data for valuable insights. In order to harness multi-source data for informed place design, this paper presents the "Public Sentiments and Activities in Places" multi-source data analysis flow (PSAP) in an Informed Design Platform (IDP). In terms of key contributions, PSAP implements 1) an Interconnected Data Model (IDM) to manage multi-source data independently and integrally, 2) an efficient and effective data mining mechanism based on multi-dimension and multi-measure queries (MMQs), and 3) concurrent data processing cascades with a Sentiments in Places Analysis Mechanism (SPAM) and an Activities in Places Analysis Mechanism (APAM), to fuse social network data with other data on public sentiment and activity comprehensively. As shown by a holistic evaluation, both SPAM and APAM outperform the compared methods. Specifically, SPAM improves its classification accuracy gradually and significantly from 72.37 to about 85 percent within nine crowd-calibration cycles, and APAM with an ensemble classifier achieves the highest precision of 92.13 percent, which is approximately 13 percent higher than the second best method. Finally, by applying MMQs on the "Sentiment&Activity Linked Data", various place design insights for our testbed are mined to improve its livability.
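The ensemble-classification step of APAM can be illustrated with a simple soft-voting ensemble over text features; the activity labels, toy posts, and choice of base learners below are assumptions, not the paper's exact configuration.

```python
# Sketch: several base classifiers vote on the activity category of a post.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

posts = ["great coffee and cakes here", "morning run around the park",
         "late night live jazz show", "quick lunch before the meeting"]
labels = ["dining", "exercise", "entertainment", "dining"]

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("nb", MultinomialNB()),
                    ("rf", RandomForestClassifier(n_estimators=100))],
        voting="soft"))                      # average predicted probabilities
ensemble.fit(posts, labels)
print(ensemble.predict(["grabbing dinner with friends"]))
```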

Abstract:

Clustering techniques have been widely adopted in many real-world data analysis applications, such as customer behavior analysis, targeted marketing, and digital forensics. With the explosion of data in today's big data era, a major trend for handling clustering over large-scale datasets is to outsource it to public cloud platforms, because cloud computing offers not only reliable services with performance guarantees, but also savings on in-house IT infrastructure. However, as the datasets used for clustering may contain sensitive information, e.g., patient health information, commercial data, and behavioral data, directly outsourcing them to public cloud servers inevitably raises privacy concerns. In this paper, we propose a practical privacy-preserving K-means clustering scheme that can be efficiently outsourced to cloud servers. Our scheme allows cloud servers to perform clustering directly over encrypted datasets, while achieving comparable computational complexity and accuracy to clustering over unencrypted data. We also investigate the secure integration of MapReduce into our scheme, which makes it extremely suitable for a cloud computing environment. Thorough security analysis and numerical analysis demonstrate the performance of our scheme in terms of security and efficiency. Experimental evaluation over a dataset of 5 million objects further validates the practical performance of our scheme.
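How K-means decomposes into MapReduce rounds is sketched below in plaintext: the map step assigns each point to its nearest centroid and the reduce step averages each group. The paper's encrypted-domain computation is not reproduced; this is only the structural skeleton.

```python
# Plaintext sketch of K-means as MapReduce rounds (no encryption shown).
import numpy as np

def map_phase(points, centroids):
    for p in points:
        cid = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
        yield cid, p                                   # (cluster, point)

def reduce_phase(pairs, k, dim):
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for cid, p in pairs:                               # shuffle + reduce
        sums[cid] += p
        counts[cid] += 1
    counts[counts == 0] = 1                            # guard against empty clusters
    return sums / counts[:, None]

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
centroids = points[rng.choice(len(points), 2, replace=False)]
for _ in range(10):                                    # fixed number of rounds
    centroids = reduce_phase(map_phase(points, centroids), 2, 2)
print(np.round(centroids, 2))
```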

IEEE MACHINE LEARNING PROJECTS / ARTIFICIAL INTELLIGENCE PROJECTS IN BANGALORE

Project Code and Titles (base paper, synopsis, and download links available for each project):

 1. IEEE 2018: Application of machine learning in recommendation systems
 2. IEEE 2018: Breast Cancer Diagnosis Using Adaptive Voting Ensemble Machine Learning Algorithm
 3. IEEE 2018: Classifying Depressed Users With Multiple Instance Learning from Social Network Data
 4. IEEE 2018: Research on Personalized Referral Service and Big Data Mining for E-commerce with Machine Learning
 5. IEEE 2018: Prediction of Bitcoin Prices with Machine Learning Methods using Time Series Data
 6. IEEE 2018: Supervised Machine Learning Algorithms for Credit Card Fraudulent Transaction Detection: A Comparative Study
 7. IEEE 2018: Machine Learning Approach for Brain Tumor Detection
 8. IEEE 2018: Leveraging Deep Preference Learning for Indexing and Retrieval of Biomedical Images
 9. IEEE 2018: Animal classification using facial images with score-level fusion
10. IEEE 2018: Credit card fraud detection using Machine Learning Techniques
11. IEEE 2018: Phishing Web Sites Features Classification Based on Extreme Learning Machine
12. IEEE 2018: Predictive Analysis of Sports Data using Google Prediction API
13. IEEE 2017: Point-of-interest Recommendation for Location Promotion in Location-based Social Networks
14. IEEE 2017: NetSpam: A Network-based Spam Detection Framework for Reviews in Online Social Media
15. IEEE 2017: SocialQ&A: An Online Social Network Based Question and Answer System
16. IEEE 2017: Modeling Urban Behavior by Mining Geotagged Social Data
17. IEEE 2016: SPORE: A Sequential Personalized Spatial Item Recommender System
18. IEEE 2016: Truth Discovery in Crowdsourced Detection of Spatial Events
19. IEEE 2016: Sentiment Analysis of Top Colleges in India Using Twitter Data

DHS Informatics believes in students' satisfaction: we first brief the students about the technologies and the types of IEEE Machine Learning / Artificial Intelligence projects and other domain projects. After a complete concept explanation of the IEEE Machine Learning / Artificial Intelligence projects, students are allowed to review more than one IEEE Machine Learning project for its functionality details. Students can even pick one project topic from the IEEE Machine Learning / Artificial Intelligence projects and another two from other domains like Machine Learning / AI, Data Science, image processing, information forensics, big data, blockchain, etc. DHS Informatics is a pioneer institute in Bangalore / Bengaluru, and we support project work for other institutes all over India. We are the leading final year project center in Bangalore / Bengaluru, with offices in five main locations: Jayanagar, Yelahanka, Vijayanagar, RT Nagar & Indiranagar.

We allow ECE, CSE and ISE final year students to use the lab and assist them in their project development work; we even encourage students to bring their own ideas and develop them into final year projects for their college submission.

MACHINE LEARNING /AI

Machine learning is an application of artificial intelligence that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

IEEE Machine Learning Projects | Artificial Intelligence Projects Bangalore | Artificial Intelligence