Hadeel Alobaidy, Tasnime Tantawy and Shahad Talal, Department of Computer Science, University of Bahrain, Zallaq, Bahrain
This research aims to develop an optimal metro railway network connecting the most popular tourist destinations in the Kingdom of Bahrain using Dijkstra’s algorithm, complemented by a recommendation system based on modified collaborative filtering and the haversine formula. The proposed system applies software engineering principles, adopting the agile methodology as its software process model to enhance adaptability and flexibility. The application embeds the railway route generated by Dijkstra’s algorithm to refine the recommendations produced by the collaborative filtering algorithm, resulting in an accurate system with significant potential for travellers and business owners.
Bahrain Metro, Software Engineering, Agile Process Model, Recommendation System, Human-Computer Interaction, User Experience.
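The routing core described in this abstract can be sketched as follows. This is a minimal illustration only: the station names and coordinates below are hypothetical placeholders, not the paper's actual network data, and it assumes an undirected graph weighted by haversine distance.

```python
import heapq
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points (Earth radius ~6371 km).
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def dijkstra(graph, source):
    # graph: {node: [(neighbour, weight), ...]}; returns shortest distances.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical station coordinates (lat, lon) -- illustrative only.
stations = {
    "Manama": (26.2285, 50.5860),
    "Zallaq": (26.0465, 50.4887),
    "Muharraq": (26.2572, 50.6119),
}
edges = [("Manama", "Zallaq"), ("Manama", "Muharraq")]
graph = {s: [] for s in stations}
for a, b in edges:
    w = haversine_km(*stations[a], *stations[b])
    graph[a].append((b, w))
    graph[b].append((a, w))

print(dijkstra(graph, "Zallaq"))
```

The same haversine weights could also rank nearby attractions for the recommender, which is presumably how the distance computation and the collaborative filter interact.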
Efe Batur Giritli1, Yekta Said Can1, Alper Sen and Fatih Alagöz2, 1R&D Center, Tatilsepeti, Istanbul, Turkey, 2Computer Engineering Dept., Bogazici University, Istanbul, Turkey
In the rapidly evolving global tourism industry, efficient and reliable online accommodation booking systems are paramount. This study examines the crucial role of model-based testing in enhancing the efficiency and reliability of such systems. Using GraphWalker, a behaviour-driven test automation tool, the research employs advanced modelling techniques to meticulously capture the system’s nuances. In emphasizing the significance of model-based testing, this research not only contributes to improved online booking experiences but also underscores its broader relevance in the domain, serving as a valuable resource for industry practitioners and researchers alike.
Model-based Testing, GraphWalker, Selenium WebDriver, Model-based Booking.
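The essence of the approach is to model the booking flow as a directed graph of states and edges and let a walker generate test paths until a coverage goal is met. The sketch below is a stand-in for that idea in plain Python, not GraphWalker's actual API; the booking-flow states and edge names are hypothetical.

```python
import random

# States of a hypothetical booking flow; edges are (action, next_state).
# In a real harness each traversed edge would trigger a Selenium action.
MODEL = {
    "Home":      [("search_hotels", "Results")],
    "Results":   [("open_hotel", "HotelPage"), ("refine_search", "Results")],
    "HotelPage": [("book_room", "Checkout"), ("back", "Results")],
    "Checkout":  [("confirm", "Home")],
}

def walk_until_edge_coverage(model, start, rng=random, max_steps=1000):
    """Random walk over the model until every edge has been traversed,
    mimicking an edge-coverage stop condition."""
    all_edges = {e for outs in model.values() for e, _ in outs}
    visited, path, state = set(), [], start
    for _ in range(max_steps):
        if visited == all_edges:
            break
        edge, state = rng.choice(model[state])
        visited.add(edge)
        path.append(edge)
    return path, visited == all_edges

path, covered = walk_until_edge_coverage(MODEL, "Home", random.Random(0))
print(covered, len(path))
```

GraphWalker itself supplies richer generators (random, weighted, shortest-path) and stop conditions; the point here is only the model-as-graph abstraction.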
Larissa Luize de Faria Cardoso, Electrical Engineering Department Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil
Sustainable investments, guided by ESG (Environmental, Social, and Governance) criteria, have become a central focus for investors worldwide. The integration of ESG criteria into investment decisions has been shown to lead to better financial performance and lower long-term risk for companies. This study aims to develop and apply a Genetic Algorithm (GA) to optimize investment portfolios that balance financial return, ESG criteria, and risk. The proposed methodology creates a robust and adaptable model suitable for real-world sustainable investment scenarios. By using data from companies such as Apple, Microsoft, and Tesla, this study demonstrates the effectiveness of GAs in achieving an optimal portfolio allocation. The results highlight the potential of GAs to consider multiple objectives simultaneously and provide a balanced solution that meets financial and sustainability goals.
Sustainable Investments, ESG, Genetic Algorithms, Portfolio Optimization, Financial Performance.
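A GA for this kind of multi-objective allocation can be sketched as below. All numbers (returns, ESG scores, risks) and the fitness weighting are made-up illustrations, not the paper's data for the companies studied, and the operators (averaging crossover, Gaussian mutation) are one common choice among many.

```python
import random

# Toy asset universe with made-up annualised return, ESG score, and risk.
ASSETS     = ["AssetA", "AssetB", "AssetC"]
EXP_RETURN = [0.12, 0.10, 0.20]
ESG_SCORE  = [0.80, 0.90, 0.40]   # normalised to [0, 1]
RISK       = [0.15, 0.10, 0.35]   # volatility proxy

def normalise(w):
    s = sum(w)
    return [x / s for x in w]

def fitness(w, lam_esg=0.05, lam_risk=0.5):
    # Scalarised objective: reward return and ESG, penalise risk.
    ret  = sum(wi * r for wi, r in zip(w, EXP_RETURN))
    esg  = sum(wi * e for wi, e in zip(w, ESG_SCORE))
    risk = sum(wi * v for wi, v in zip(w, RISK))
    return ret + lam_esg * esg - lam_risk * risk

def evolve(pop_size=40, generations=100, rng=random):
    pop = [normalise([rng.random() for _ in ASSETS]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
            i = rng.randrange(len(child))
            child[i] = max(1e-6, child[i] + rng.gauss(0, 0.1))   # mutation
            children.append(normalise(child))     # keep weights on the simplex
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(rng=random.Random(42))
print([round(w, 3) for w in best], round(fitness(best), 4))
```

Renormalising after every operator keeps each chromosome a valid long-only portfolio (non-negative weights summing to 1), which is the usual way GAs handle this constraint.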
Moti Schneider1 and Arthur Yosef2, 1Tel Aviv-Yaffo Academic College, Israel, 2Netanya Academic College, Israel
This study presents a method to assign relative weights when constructing Fuzzy Cognitive Maps (FCMs). We introduce a method of computing the relative weights of directed edges based on the actual past behaviour (historical data) of the relevant concepts, and discuss the role of experts in the process of constructing FCMs. The method presented here is intuitive and does not require any restrictive assumptions. The weights are estimated during the design stage of the FCM, before the recursive simulations are performed.
FCM, relative importance (weight), Fuzzy Logic, Soft Computing, Neural Networks.
Geetesh More, Suprio Ray and Kenneth B. Kent
Existing systems used in big data processing are becoming less energy-efficient and fail to scale in terms of power consumption and area. In big data application scenarios, the movement of large volumes of data influences performance, power efficiency, and reliability, the three fundamental attributes of a computing system. With data volumes and data sources continuing to increase, there has been a push to rethink how businesses approach data handling and storage, with one of the main objectives being to maximize performance and speed without increasing complexity and overall costs. Large-scale data centers require highly efficient server and storage systems. Existing CPU-based approaches have several limitations: traditional CPU technology limits performance, as frequency scaling is no longer a viable path to performance improvement. This has shifted interest toward multicore processing, which in turn faces limitations such as I/O and memory bandwidth. FPGAs have several advantages over CPUs and GPUs and can be used to accelerate the performance of large-scale data systems. In this paper, we survey various near-memory database acceleration techniques on FPGAs, in which compute-intensive and data-centric operations are moved closer to where the data resides.
Field Programmable Gate Array (FPGA), Big Data, In-memory Computing, Near-memory Computing, Graphics Processing Unit (GPU), SQL.
Ervin Vladić, Benjamin Mehanović, Mirza Novalić, Dino Kečo and Dželila Mehanović, Department of Information Technology, International Burch University, Sarajevo, Bosnia and Herzegovina
Over the last 10 years, social media platforms have grown into powerful machines for opinion-sharing and conversation-starting. At the same time, developments in machine learning (ML) and artificial intelligence (AI) have produced new approaches for analyzing the huge amounts of data generated by users. To classify the sentiments expressed in tweets, this research investigates sentiment analysis, a subfield of natural language processing (NLP), and machine learning. This paper provides an extensive analysis of sentiment classification on social media data by integrating approaches including data loading and overview, handling class imbalance, text preprocessing and tokenization, sentiment analysis and visualization, and model evaluation. The increasing amount of user-generated content on social media has led businesses, researchers, and individuals to seek insights into consumer feedback, public opinion, and market trends, driving the rise in popularity of sentiment analysis. Linear SVC and Logistic Regression were determined to be the most successful machine learning models for sentiment analysis: Logistic Regression achieves 83% training accuracy and 78% testing accuracy, while Linear SVC obtains 90% training accuracy and 77% testing accuracy.
Sentiment, Classification, Social media, Natural Language Processing, Models.
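The pipeline stages named in the abstract (tokenization, feature extraction, classifier training, evaluation) can be shown end to end in a tiny from-scratch sketch. The six training tweets are invented, and real experiments would use a library such as scikit-learn and far more data; this only illustrates the shape of the pipeline.

```python
import math, re
from collections import Counter

def tokenize(text):
    # Lowercase word tokenization (the "text preprocessing" stage).
    return re.findall(r"[a-z']+", text.lower())

# Invented labelled examples: 1 = positive, 0 = negative.
train = [
    ("I love this phone, great battery", 1),
    ("what a wonderful day", 1),
    ("absolutely fantastic service", 1),
    ("terrible experience, very disappointed", 0),
    ("I hate waiting in line", 0),
    ("awful, would not recommend", 0),
]
vocab = sorted({w for text, _ in train for w in tokenize(text)})

def features(text):
    # Bag-of-words count vector over the training vocabulary.
    counts = Counter(tokenize(text))
    return [counts.get(w, 0) for w in vocab]

def train_logreg(data, lr=0.5, epochs=200):
    # Logistic regression fitted by stochastic gradient descent on log loss.
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, y in data:
            x = features(text)
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            g = 1 / (1 + math.exp(-z)) - y    # d(loss)/dz
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, text):
    z = b + sum(wi * xi for wi, xi in zip(w, features(text)))
    return 1 if z > 0 else 0

w, b = train_logreg(train)
print(predict(w, b, "love this wonderful service"))   # 1 (positive)
print(predict(w, b, "terrible awful experience"))     # 0 (negative)
```

Linear SVC differs from this mainly in its loss function (hinge rather than log loss); both produce a linear decision boundary over the same bag-of-words features.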
Mahmud Adeleye1 and Krishna Chaitanya Rao Kathala2, 1Oxford Brookes University, Oxford, OX3 0BP, United Kingdom, 2University of Massachusetts Amherst, 01002, United States of America
This paper reviews the literature on context window size extension techniques in Large Language Models (LLMs), examining their potential, limitations, and impact on performance. We survey existing research on models ranging from 8,000 to 2 million token context windows, synthesizing findings on their performance across various benchmarks and tasks. The synthesis reveals a complex relationship between context size and model performance, highlighting challenges such as the "lost in the middle" phenomenon. We discuss emerging techniques addressing these issues, including Position Interpolation and Parallel Context Windows. Our review of performance analyses on benchmarks like MMLU and HumanEval provides insights into LLMs' task-specific capabilities. We conclude by outlining future research directions, emphasizing the need for more efficient processing of long contexts and standardized evaluation methods. This survey offers a foundation for understanding the current landscape of context windows in LLMs and potential avenues for future advancements in this rapidly evolving field.
Large Language Models (LLMs), Context Window Size Extension, Long-context Processing, Performance Evaluation.
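The core idea behind Position Interpolation is simple enough to show numerically: instead of extrapolating rotary (RoPE) position angles past the pretrained context length, positions are linearly rescaled back into the trained range. The context lengths and head dimension below are illustrative assumptions, not any specific model's configuration.

```python
# Position Interpolation in miniature: rescale positions before computing
# the rotary embedding angle. Sizes here are hypothetical examples.
TRAINED_LEN = 2048      # context length seen during pretraining
EXTENDED_LEN = 8192     # target context length after extension

def rope_angle(position, dim_pair, head_dim=64, base=10000.0):
    # Rotation angle for one (even, odd) dimension pair at this position.
    return position / base ** (2 * dim_pair / head_dim)

def interpolated_angle(position, dim_pair, head_dim=64):
    # Linear rescaling: a position in [0, EXTENDED_LEN) is mapped into
    # [0, TRAINED_LEN), so angles never exceed the range seen in training.
    scale = TRAINED_LEN / EXTENDED_LEN   # 0.25 with the sizes above
    return rope_angle(position * scale, dim_pair, head_dim)

# Position 8000 is outside the trained range, but after interpolation its
# effective position (2000.0) falls back inside it.
print(rope_angle(8000, 0), interpolated_angle(8000, 0))
```

The trade-off, as the extension literature discusses, is that rescaling compresses positional resolution, which is why interpolated models are typically fine-tuned briefly at the extended length.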
Reihaneh Maarefdoust, Xin Zhang, Behrooz Mansour, Yuqi Song, Department of Computer Science, University of Southern Maine, Portland, USA
The discovery of new materials has been a protracted and labor-intensive endeavor, relying on iterative trial-and-error methodologies. Recently, materials informatics has been transforming this process by employing advanced data science and computational tools to expedite the discovery of novel materials, such as the generative design of material formulas, and to predict material properties. However, predicting crystal three-dimensional structures remains a challenging task rooted in both the fundamental nature of materials and the limitations of current computational methods. Inspired by the power and success of artificial intelligence (AI) models, especially deep learning techniques and natural language processing (NLP) algorithms, we consider capturing complex atom descriptions and relationships as text information and explore whether we can use the ability of language models to predict atomic coordinates. In this work, we explore multiple text generation models and employ the Longformer-Encoder-Decoder (LED) model to construct preliminary crystal structures based on detailed atom descriptions. Subsequently, these structures are further refined by a random forest regressor, which generates the final crystal configurations. Our experiments show that this method excels in capturing the intricate atom relationships and effectively translating these associations into the specified crystal formats. We also focus on optimizing data representation for both atom descriptions and crystal structures and use clear metrics to evaluate accuracy and stability. Our results indicate that this method has promising potential and could improve the prediction of material crystal structures. Our source code can be accessed freely at https://anonymous.4open.science/r/Crystal-Structure-Prediction-7E1D.
Material informatics, Crystal Structure Prediction, Text Generation, Longformer-Encoder-Decoder.
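The key data-representation step, serialising a crystal structure as text a seq2seq model can emit and parsing the generated text back into coordinates, can be sketched as below. The format shown is a hypothetical illustration, not the exact representation used in the paper.

```python
import re

# Hypothetical text format: "Symbol x y z ; Symbol x y z ; ..." with
# fractional coordinates, as a stand-in for the paper's representation.
def structure_to_text(atoms):
    # atoms: list of (symbol, (x, y, z)) with fractional coordinates.
    return " ; ".join(f"{s} {x:.4f} {y:.4f} {z:.4f}" for s, (x, y, z) in atoms)

def text_to_structure(text):
    # Parse model-generated text back into (symbol, coordinates) pairs,
    # skipping any chunk that does not match the expected pattern.
    atoms = []
    for chunk in text.split(";"):
        m = re.match(r"\s*([A-Z][a-z]?)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)", chunk)
        if m:
            s, x, y, z = m.groups()
            atoms.append((s, (float(x), float(y), float(z))))
    return atoms

nacl = [("Na", (0.0, 0.0, 0.0)), ("Cl", (0.5, 0.5, 0.5))]
text = structure_to_text(nacl)
print(text)
print(text_to_structure(text))
```

In the pipeline the abstract describes, coordinates parsed this way from LED output would then be passed to the random forest regressor for refinement into the final configuration.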