Scholars Journal of Engineering and Technology | Volume-13 | Issue-07
Integrating Artificial Intelligence and Machine Learning Techniques in Cloud Computing for Scalable Data Management
Raheela Firdaus, Ayesha Komal, Muhammad Irshad Javed, Sumayya Bibi, Haider Raza Khan, Qazi Syed Muhammad Ali, Mohammed Alaa H. Altemimi, Kinza Urooj, Umm e Habiba
Published: July 4, 2025
Pages: 436-453
Abstract
The rapid spread of cloud computing has exposed severe shortcomings in traditional data management systems, especially their inability to automatically handle the velocity, variety, and volume of contemporary datasets. Despite the elasticity of cloud platforms, static configurations and manual management leave them with inefficient resource utilization, unpredictable latency, and limited scaling under dynamic workloads. Although artificial intelligence (AI) and machine learning (ML) hold transformative potential for intelligent automation, research to date has concentrated mainly on individual use cases rather than end-to-end integration with cloud infrastructures. Our work closes that gap by designing and experimentally validating the first AI-based framework to directly embed ML models in cloud infrastructures in support of self-optimizing data management. We systematically tested 15 ML algorithms (including neural networks, gradient boosting, and support vector machines) across three cloud platforms (GCP, AWS, and Azure) under varying workloads to determine which algorithms perform best under different loads. Key performance indicators, including latency, throughput, CPU/memory usage, and scalability, were compared using multivariate analysis of variance (MANOVA), with the variables visualized via principal component analysis (PCA). Our findings indicate that Google Cloud Platform (GCP) achieved the best latency score (226.45 ms, p < 0.01), whereas Microsoft Azure obtained the highest scalability rating (4.31/5). Neural networks boosted throughput to a large degree (195.67 MBps, Cohen's d > 1.5), and gradient boosting models optimized scalability (d = 0.79–0.9). Notable correlations included memory usage as a strong predictor of latency (r = 0.87, p < 0.01) and a positive effect of throughput on scalability (r = 0.29, p < 0.05). These results offer strong empirical support to the fact that the appli
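The memory–latency correlation reported in the abstract (r = 0.87) can be illustrated with a minimal sketch. The data below are synthetic and purely illustrative, not the study's measurements; only NumPy is assumed. The sketch generates hypothetical per-run benchmark records where latency is partly driven by memory pressure, then computes the Pearson correlation coefficient the way such a relationship would be quantified.

```python
import numpy as np

# Synthetic benchmark records (illustrative only): per-run memory usage (%)
# and request latency (ms). A fixed seed keeps the sketch reproducible.
rng = np.random.default_rng(42)
memory = rng.uniform(40, 90, size=50)                      # memory usage, %
latency = 150 + 1.8 * memory + rng.normal(0, 10, size=50)  # latency rises with memory

# Pearson correlation between memory usage and latency.
r = np.corrcoef(memory, latency)[0, 1]
print(f"Pearson r = {r:.2f}")
```

Because the synthetic latency is constructed with a strong linear dependence on memory usage plus modest noise, the computed r lands close to 1, mirroring the strong positive association the paper reports.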