Lubabalo Mbangata
Biography
Mr. Lubabalo Mbangata is a Lecturer in the Department of Information Systems at the Durban University of Technology, South Africa, and a Ph.D. candidate in Information Systems and Technology at the University of KwaZulu-Natal. With extensive academic and industry experience, he has expertise in project management, business analysis, curriculum development, and digital learning systems. His research focuses on database normalisation, e-learning performance, machine learning, and cloud computing adoption in SMEs, with several publications in Springer and ACM proceedings. He has also contributed to studies on image classification, autoencoder models, and technology adoption frameworks. In addition to teaching and research, Mr. Mbangata has served as a project coordinator and IT business analyst, bringing strong interdisciplinary experience that bridges academia, technology, and applied research.
Research Interest
Database normalisation and optimisation; machine learning and deep learning models (e.g., autoencoders, image classification); e-learning systems performance and digital learning technologies.
Abstract
Developing Ethnoscience-Based Concepts for Teaching and Learning Database Normalisation
There are two viewpoints on database management: that of relational databases and that of non-relational databases. According to Kamara et al. (2020), a relational database is a "collection of tables where each table is a two-dimensional display using rows equivalent to an object and columns equivalent to characteristics", while Aydin et al. (2016) define a non-relational database as a collection of database systems that emphasise schema-free representations and ad hoc organisation of data. Database normalisation, in turn, is the process of arranging data in a recognised and structured way by decomposing relations based on identified dependencies. Ineffective normalisation can lead to inefficient and inaccurate database systems. Data normalisation is therefore a critical aspect of database design, aimed at reducing redundancy and improving data integrity. However, the literature suggests that teaching and learning database normalisation is problematic in general. "Many database instructors at different institutions worldwide have two major concerns: what to teach and how to teach database courses effectively. Thus, finding an appropriate pedagogy to teach this subject is a challenge for today's information systems (IS) educators" (AlDmour, 2010). Amin et al. (2019) argue that designing and constructing a sound relational database management system is very challenging; however, a well-developed database is satisfying and rewarding, and worth the effort. Similarly, Taipalus (2020) observes that, despite the established position, longevity, and prevalence of database and query language concepts in modules related to information systems (IS), information technology (IT), and computer science (CS), it is surprising that research on these issues remains limited compared to subjects such as programming language concepts.
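The redundancy and integrity problems that normalisation targets can be made concrete with a minimal sketch. The Python example below is purely illustrative; the module codes, lecturer name, and office values are invented for demonstration and are not drawn from the study.

```python
# Hypothetical unnormalised records: the lecturer's office is repeated
# on every module row, so one office move requires updating many rows.
modules = [
    {"module": "IS101", "lecturer": "Dlamini", "office": "B12"},
    {"module": "IS202", "lecturer": "Dlamini", "office": "B12"},
]

# Updating only one row leaves the data inconsistent (an update anomaly).
modules[0]["office"] = "C40"
inconsistent = len({(r["lecturer"], r["office"]) for r in modules}) > 1

# A normalised design stores each lecturer's office exactly once,
# so a single update keeps every module row consistent.
lecturers = {"Dlamini": "C40"}
module_table = [
    {"module": "IS101", "lecturer": "Dlamini"},
    {"module": "IS202", "lecturer": "Dlamini"},
]
```

Looking up the office through the `lecturers` table rather than storing it per module is exactly the kind of redundancy reduction and integrity improvement that normalisation formalises.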
In this study, we explore the concept of data normalisation through the lenses of ethnomathematics and ethnobiology, following data from the unnormalised form through the first, second, and third normal forms (1NF, 2NF, and 3NF). For ethnomathematics, we use the set of whole numbers from 1 to 50, excluding 0, to illustrate the removal of repeating groups (1NF), partial dependencies (2NF), and transitive dependencies (3NF), and discuss the principles behind each normal form. For ethnobiology, the study began with unnormalised data in which animal categories were recorded in their raw, unstructured form; the animals were selected in no particular order, provided only that they are living animals that can be categorised. Progressing to the first normal form (1NF), the data was organised into a tabular structure with unique rows and atomic values. In the second normal form (2NF), redundancy was reduced by ensuring that all non-primary attributes depend on the entire primary key. Finally, in the third normal form (3NF), transitive dependencies were eliminated, creating a fully normalised, efficient data model. The findings highlight how ethnomathematical and ethnobiological data naturally follow hierarchical and relational patterns, making them an effective analogy for understanding database normalisation. These two approaches not only enhance the understanding of database concepts but also underscore the value of mathematical and indigenous knowledge in illustrating complex technical processes.
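The progression from unnormalised data to 3NF can be sketched in code. The Python example below is an illustrative sketch only: the animal names, classes, and trait strings are invented stand-ins for the study's ethnobiological dataset, and the attribute names are hypothetical.

```python
# Unnormalised form: a repeating group of animals packed into one field.
unnormalised = [
    {"record_id": 1, "animals": "Lion, Leopard",
     "class": "Mammal", "class_traits": "fur, live birth"},
    {"record_id": 2, "animals": "Eagle",
     "class": "Bird", "class_traits": "feathers, lays eggs"},
]

# 1NF: remove repeating groups so every row holds one atomic value.
first_nf = []
for row in unnormalised:
    for animal in (a.strip() for a in row["animals"].split(",")):
        first_nf.append({"animal": animal,
                         "class": row["class"],
                         "class_traits": row["class_traits"]})

# 2NF: every non-key attribute must depend on the whole key. With the
# single-attribute key "animal" nothing is split in this toy case; with a
# composite key (e.g. animal + region), partially dependent attributes
# would be moved to their own table.

# 3NF: eliminate the transitive dependency animal -> class -> class_traits
# by keeping class traits in a separate table keyed on class.
animals_table = [{"animal": r["animal"], "class": r["class"]}
                 for r in first_nf]
classes_table = {r["class"]: r["class_traits"] for r in first_nf}
```

Joining `animals_table` back to `classes_table` on `class` reproduces the 1NF view without storing each class's traits redundantly on every animal row, which mirrors the study's progression to a fully normalised model.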