Centre for Computational Intelligence (CCI)


Led by Dr Goh Ching Pang    

What is Computational Intelligence?

Computational Intelligence is a sub-domain of Artificial Intelligence that gives a computer or machine the ability to learn and perform intellectual tasks in a manner similar to a human being.

What do we do?

We explore the design of algorithms and techniques that approximate the human way of reasoning, i.e. that utilise inexact and incomplete knowledge and perform control actions adaptively. Image and video processing, data mining, natural language processing, artificial intelligence, computer vision, robotics and human-computer interaction share similar goals with Computational Intelligence. These fields aim to perform tasks the way humans do, and are most often applied to reasoning and decision-making processes.

What do we aim for?

To research and integrate nature-inspired intelligence methodologies into computer systems to address complex real-world problems.

Artificial Intelligence (AI) Group

Led by Dr Lim Khai Yin     

AI is a general term that implies the use of computers to model and/or replicate intelligent behaviour. AI research focuses on the development and analysis of algorithms that learn and/or perform intelligent behaviour with minimal human intervention. These techniques have been, and continue to be, applied to a broad range of solutions in robotics, e-commerce, medical diagnosis, gaming, mathematics, etc. Specifically, research is being conducted in estimation theory, mobility mechanisms, active computer vision and so on.

Computer Vision (CV) Group

Led by AP Ts Dr Tew Yiqi 

CV processing works on the computer-based interpretation of 2D and 3D image data sets from conventional and non-conventional image sources. It spans medical image analysis, visualisation, object recognition, gesture analysis, facial expression analysis, tracking, and scene understanding and modelling, among other areas.


Robotics Group

Led by Mr Wong Hon Yoon

The robotics group conducts research on all aspects of robotic manipulation and control, as well as on the development of Socially Assistive Robots. Our primary focus is to engineer robots that can operate and interact with humans in unstructured environments. We also focus on developing human motion models. The current project works on capturing human motion, improving the system, and robot programming in the Python language.

Human Computer Interaction (HCI) Group

Led by Dr Aw Kien Sin     

The HCI group aims to bridge the gap between the human and digital worlds. We explore ways to reduce this gap by integrating emerging technologies (e.g., the new affordances of the Internet of Things) into our interaction designs and solutions. Besides, human factors such as human behaviour, cognitive psychology, culture and environment are taken into consideration during the design process, as the human user is at the centre of this field of study.

The Computational Intelligence Research Lab is working towards setting up facilities and taking opportunities to support innovative development through undergraduate final year projects and projects in postgraduate programmes offered by TAR UMT.
  • 2022: Enveloped Digital Document Recognition with Optical Character Recognition (OCR) Technology by Dr Tew Yiqi (obtained RM 100,000 from a TAR UMT internal grant)
  • 2020: Installation of high-end computer terminals and infrastructure for Artificial Intelligence (AI) and Big Data Analytics (BDA) related research and development, talent development, and public training and workshops, by Dr Lim Yee Mei (obtained RM 399,000 under RMK11)
  • 2020: Interactive Dashboard with Visual Sensing and Zero-shot Learning Capabilities by Dr Chaw Jun Kit and Dr Tew Yiqi (obtained RM 58,200 from a TAR UMT internal grant)
  • 2020: Smart Autonomous Drone System for Building Inspections by Ts Ong Jia Hui (obtained RM 4,000 from Hilti (Malaysia) Sdn Bhd)
  • 2019: Obtained a competition award from Hilti (Malaysia) Sdn Bhd for the project titled Building Condition Evaluation through Crack Detection, by Dr Chaw Jun Kit and a final year project student (obtained an RM 4,000 industrial grant)
  • 2019: Collaborated with Asia Roofing Industries Sdn Bhd on the project titled Intelligent Materials and Manufacturing Planning System, by Dr Chaw Jun Kit and a postgraduate student (obtained an RM 92,000 industrial grant)
  • 2018: Collaborated with Hotayi Electronic (M) Sdn Bhd on Image Processing Automation (Optical Character Recognition) and Parts Supply Chain Automation (Purchase Order Matching System), by Ts Dr Tew Yiqi, Ms Choon Kwai Mui, and a team of Final Year Project students (obtained an RM 100,000 industrial grant, in collaboration with another 4 FOCS and FOET projects)
  • 2018: Collaborated with Kian Joo Can Factory Berhad on the project titled Intelligent Materials and Manufacturing Planning System, by Dr Lim Yee Mei and a postgraduate student (obtained an RM 92,000 industrial grant)
  • 2018: Obtained a Fundamental Research Grant Scheme (FRGS) grant in the area of HEVC Multiview Video Streaming Mechanism for Data Distribution Service Framework, by Ts Dr Tew Yiqi and a postgraduate student (obtained RM 48,000 in government funding)
  • 2017: Co-researched with a FOET lecturer in the area of Micro-Expression Recognition via the Fundamental Research Grant Scheme (FRGS), by Dr Lim Chern Hong
  • 2017: Participated in PECIPTA 2017 to showcase the Smart Classroom Management and Car Plate Recognition Systems, winning bronze medals, by Dr Lim Chern Hong
Brain Tumor Classification

A brain tumor is a group of abnormal cells in the brain, which can be either cancerous (malignant) or non-cancerous (benign). Gliomas are the most common brain tumors and can be categorized into low-grade glioma (LGG) and high-grade glioma (HGG). Identifying the location and area of a tumor is a tedious job for radiologists, let alone classifying the tumor as LGG or HGG. Hence, a fully automatic approach is proposed to segment brain tumors in Magnetic Resonance Imaging (MRI) images and classify them as LGG or HGG. A 3D Convolutional Neural Network (CNN) is employed to segment the brain tumor into edema, necrotic, and enhancing tumor regions. The segmented tumor is then used to generate radiomics features, and a Support Vector Machine (SVM) is used to classify LGG and HGG tumors. This work is tested on BraTS 2017, and the experiments show promising results: a 0.85 Dice score for segmentation and a 0.83 F1-score for LGG/HGG classification. View Project
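The second stage of the pipeline, classifying LGG versus HGG from radiomics features with an SVM, can be sketched as below. The feature values are synthetic placeholders; in the actual work they would be extracted from the segmented tumor regions with a radiomics toolkit.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic radiomics features (shape, intensity, texture descriptors).
# Hypothetical label encoding: 0 = LGG, 1 = HGG.
X_lgg = rng.normal(loc=0.0, scale=1.0, size=(50, 8))
X_hgg = rng.normal(loc=2.0, scale=1.0, size=(50, 8))
X = np.vstack([X_lgg, X_hgg])
y = np.array([0] * 50 + [1] * 50)

# Standardise features before an RBF-kernel SVM, a common default choice.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

print(clf.predict(X[:1]))  # predicted grade for one case
```

In practice the features would come from the 3D CNN's segmentation masks, and the SVM's hyperparameters would be tuned by cross-validation.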

Crack Detection System

Reinforced concrete and cement-based materials are used to construct buildings and civil structures. These structures are exposed to environmental conditions and loading factors that can cause cracks on their surfaces. To prevent further damage, effective treatment first requires understanding the root cause of the cracking; a repair strategy is then implemented. The traditional approach of manual crack detection requires considerable effort and time. In this project, an automated vision-based crack detection system using a deep learning framework is developed to assist repair work in terms of resource planning. The encoder-decoder architecture in the deep learning framework produces convolutional features that improve image segmentation performance, so cracks can be differentiated from the background more effectively. View Project
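A framework-free sketch of the encoder-decoder shape flow used in segmentation networks: the encoder downsamples the image to a compact feature map, and the decoder upsamples it back to a per-pixel mask. Real systems learn the convolutional filters; the fixed average pooling, nearest-neighbour upsampling, image and threshold below are illustrative stand-ins only.

```python
import numpy as np

def encode(img, factor=2):
    """Average-pool the image by `factor` (a stand-in for strided convolution)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(feat, factor=2):
    """Nearest-neighbour upsampling (a stand-in for transposed convolution)."""
    return feat.repeat(factor, axis=0).repeat(factor, axis=1)

# Toy 8x8 "surface image" with a bright vertical crack in column 3.
img = np.zeros((8, 8))
img[:, 3] = 1.0

feat = encode(img)           # 4x4 bottleneck feature map
mask = decode(feat) > 0.25   # per-pixel crack mask at the input resolution

print(mask.shape)  # (8, 8)
```

The point of the sketch is the round trip: input resolution in, compressed bottleneck in the middle, a mask at input resolution out, which is exactly the property that lets cracks be separated from the background pixel by pixel.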

Intelligent Scene Detection

The team has generalised most of its object detection work, including the exploration of various machine learning algorithms, into deployable prototype solutions for different case scenarios and camera view scenes, each serving a specific business objective. For instance: Go to Projects' Page

Intelligent Classroom System

Artificial intelligence processing in video technology is growing rapidly, with applications such as face identification and behaviour analysis. This project utilises face identification to process multiple camera views, update students’ attendance in a cloud database, and implement an Anomaly Activity Detection Module. It constantly tracks the students’ whereabouts and publishes updates of the students’ status to the same cloud database. In this project, ten test faces were detected, identified and recorded in the database based on the trained dataset, with high accuracy and within an acceptable time frame, across multiple camera views under certain conditions.
View Project 1 View Project 2

Deviant Scene Detection

Most organizations in Malaysia (e.g., educational institutions, manufacturing factories) require supervisors or managers to monitor participants (e.g., workers) during operating hours to ensure that they comply with rules and regulations. This project is expected to reduce the human effort spent on monitoring by replacing it with an intelligent camera that can detect abnormal activity. The approach is to utilise camera surveillance in the operating area, adopting technologies such as artificial intelligence (AI), machine learning (ML), image and video processing or computer vision (CV), and the Internet of Things (IoT) to learn and detect human activities and behaviours. View Project

Face Mask and Social Distancing Detection

Face masks have recently become a symbol of the global battle against the spread of COVID-19. The project is expected to develop a higher-accuracy face recognition model that recognises human faces with and without masks. Secondly, a social distancing detector monitors whether a safe distance is maintained among people in public. Thirdly, face mask detection identifies high-risk situations where masks are not worn properly. These techniques can be used in various settings, such as classrooms, stores, and factories, to help reduce the spread of illness. View Project

Virtual Classroom Monitoring

The advancement of computer technology allows students to interact with educators using Artificial Intelligence (AI) technology through smart classrooms (E-Learning Classrooms). Smart classrooms are believed to change existing dull teaching methods and enhance the students’ learning experience. Therefore, the proposed system offers real-time user performance monitoring features, such as face and hand gesture detection and recognition, to monitor student activities and recognise student behaviour in the smart classroom. View Project

Zero-Shot Learning on Human Action and Gesture Detection

The zero-shot learning module in an object detection process observes an input (e.g., a video scene) on which it has not been trained and predicts its category. The team has applied this approach in a gesture-based human-computer interaction (HCI) prototype solution. HCI studies continually emphasise the user experience, especially in real-world deployments. Every individual acts differently, and uncontrollable environmental variables may affect how well a performed gesture is detected and reacted to. Although many solutions and datasets have been proposed, not all of them fit our needs perfectly. Hence, to build a more tailor-made gesture detector for our own use, an existing zero-shot learning model is tested on the gesture dataset introduced in this work and fine-tuned to our needs. View Project
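The core zero-shot idea can be illustrated in a few lines: match an input embedding against semantic embeddings of class names, so unseen gesture classes can be predicted without any training examples for them. All vectors and class names below are synthetic placeholders; a real system would use a learned visual encoder and text or attribute embeddings.

```python
import numpy as np

# Hypothetical semantic embeddings for gesture classes (one row per class).
class_names = ["wave", "point", "thumbs_up"]
class_emb = np.array([
    [1.0, 0.0, 0.2],
    [0.0, 1.0, 0.1],
    [0.2, 0.1, 1.0],
])

def predict_zero_shot(x):
    """Return the class whose embedding has the highest cosine similarity."""
    sims = class_emb @ x / (np.linalg.norm(class_emb, axis=1) * np.linalg.norm(x))
    return class_names[int(np.argmax(sims))]

# An input embedding from a gesture never seen during training.
query = np.array([0.9, 0.1, 0.3])
print(predict_zero_shot(query))  # → "wave"
```

Because classification reduces to nearest-neighbour search in the shared embedding space, adding a new gesture class only requires adding its semantic embedding, not retraining the detector.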

Augmented Reality Exploration

The team has put additional effort into bringing computer vision effects into the era of virtual reality + physical world = Augmented Reality (AR). As one of the key pillars of Industry 4.0, several AR applications have been explored, developed and deployed to realise different creative ideations and solutions. For instance: Go to Projects' Page

AR Furniture with 3D model captions

AR is one of the latest technologies that integrates computer graphics with the user’s environment in real time. The proposed project is an AR mobile application that offers a preview of furniture in one’s real environment, allowing consumers to visualise on a smartphone how a particular piece of furniture will look in the real world before they make a payment. To construct the furniture as a virtual object (a 3D model file), photogrammetry is applied, as it can perform 3D reconstruction from a batch of images captured by the smartphone. When the reconstruction is complete, the 3D model file is converted into a manifest file and uploaded to the cloud. In the system, whenever a user selects a model, the target model is downloaded and markerless tracking is activated to render the model in the user’s environment without a marker. View Project

AR Shoes with 3D model reconstruction

AR technology brings digital information and virtual objects into the physical space. With AR, the digital world comes to life inside the view captured by a tablet or phone camera. The objective of this project is to enable users to construct the 3D shoe models they want to display in the AR application without redesigning them. Besides that, the project lets users try on the 3D shoe models virtually using AR technology, making trying on shoes more convenient: users can simply use their mobile phones to try on shoes virtually, anywhere and anytime. View Project

Visual Search Tool for E-commerce

This project aims to develop a mobile app for e-commerce with a better customer experience. Instance-based image retrieval is integrated into the app to help customers search for products similar to ones they have photographed. This is especially useful for elderly people who may not be tech-savvy: it allows them to snap an image to find desired products instead of typing a product’s name. Image search is an option in the standard search box, allowing the user to snap a photo or select one from the camera roll. The app then sifts through the items available in the product catalogue to find similar products. View Project
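At its core, instance-based image retrieval embeds each catalogue image as a feature vector and ranks the catalogue by similarity to the query's vector, as sketched below. The catalogue items and vectors are synthetic placeholders; a real app would obtain the embeddings from a CNN image encoder.

```python
import numpy as np

# Hypothetical catalogue: item name -> feature vector from an image encoder.
catalogue = {
    "red_mug": np.array([0.9, 0.1, 0.0]),
    "blue_mug": np.array([0.7, 0.3, 0.1]),
    "desk_lamp": np.array([0.0, 0.2, 0.9]),
}

def top_k(query_vec, k=2):
    """Return the k catalogue items most similar to the query (cosine)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(catalogue, key=lambda name: cos(query_vec, catalogue[name]),
                    reverse=True)
    return ranked[:k]

# Embedding of a snapped photo that resembles the mugs.
print(top_k(np.array([0.8, 0.2, 0.0])))  # → ['red_mug', 'blue_mug']
```

For a catalogue of realistic size, the linear scan would be replaced by an approximate nearest-neighbour index, but the ranking principle is the same.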

Care-U Application

Nowadays, students are exposed to both external and internal stress, and more often than not they deny its existence and adopt destructive ways of dealing with it. Counselling services in universities are generally not the first option, nor the preference, of students, as they now seek validation through social media instead. In this work, a mobile application comprising a chatbot with sentiment analysis, blog and music sharing, and an interactive game was developed. This application serves as a platform for students to share their thoughts freely and feel comfortable being themselves. View Project

Cost Optimisation with Industrial Machine Performance

This project studies the production of a single type of product on multiple parallel machines by a manufacturer. The manufacturer has a pool of machines available with a combined production capacity greater than required. We assume that machine productivity may vary within a given range, which gives the manufacturer the opportunity to adjust total production capacity to meet demand and to exploit the different cost structures of the available machines. The manufacturer must therefore decide which machines to use to meet a given requirement and how to operate them. This project presents a deterministic mathematical model to support this production and distribution planning scenario. View Project
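A toy version of the machine-selection decision described above can be sketched as follows: meet a demand by loading machines in order of unit cost, respecting each machine's productivity range. The machines, rates, costs and demand are invented for illustration, and this greedy heuristic merely hints at the decision; the actual project uses a deterministic mathematical optimisation model.

```python
machines = [
    # (name, min_rate, max_rate, cost_per_unit) -- hypothetical data
    ("M1", 10, 100, 1.0),
    ("M2", 10, 80, 1.5),
    ("M3", 10, 60, 2.5),
]

def plan(demand):
    """Greedy plan: cheapest machines first, each within its rate range."""
    allocation = {}
    remaining = demand
    for name, lo, hi, cost in sorted(machines, key=lambda m: m[3]):
        if remaining <= 0:
            break
        load = min(hi, remaining)
        if load < lo:          # a machine cannot run below its minimum rate
            continue
        allocation[name] = load
        remaining -= load
    return allocation, remaining

alloc, shortfall = plan(150)
print(alloc, shortfall)  # M1 and M2 cover the demand; M3 stays idle
```

Note how the differing cost structures drive the choice: the expensive machine M3 is only switched on when the cheaper capacity is exhausted, which is the trade-off the full model optimises exactly.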

Image Processing Automation and PO Matching System

Nowadays, companies frequently process invoices, order forms, and other paperwork. They are not satisfied with common OCR technology that only converts images into text; they want text recognition that extracts the required information from documents for validation purposes. The designed system provides auto-validation for documents such as Purchase Orders (PO), invoices, and Delivery Orders (DO). It aims to produce a customised OCR application for companies that can process different types of documents, retrieve the required information, and perform auto-validation, thereby reducing human involvement. A template-creation feature was also implemented to handle new kinds of documents in the future. With continued improvement, the project can reach a satisfactory accuracy level and be ready for real-life implementation.
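The auto-validation step can be sketched as a comparison between the fields OCR extracts from an invoice and the stored Purchase Order record. The field names, record structure and values below are hypothetical placeholders, not the system's actual schema.

```python
# Hypothetical PO database: PO number -> expected supplier and amount.
purchase_orders = {
    "PO-1001": {"supplier": "Acme Sdn Bhd", "amount": 2500.00},
}

def validate(ocr_fields):
    """Return a list of mismatches between OCR output and the PO record."""
    po = purchase_orders.get(ocr_fields.get("po_number"))
    if po is None:
        return ["unknown PO number"]
    issues = []
    for key in ("supplier", "amount"):
        if ocr_fields.get(key) != po[key]:
            issues.append(f"{key} mismatch")
    return issues

# OCR output from a scanned invoice (synthetic example).
fields = {"po_number": "PO-1001", "supplier": "Acme Sdn Bhd", "amount": 2500.00}
print(validate(fields))  # → [] (no mismatches: the invoice passes validation)
```

An empty issue list means the document can be approved automatically; any mismatch is escalated to a human, which is where the reduction in manual effort comes from.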

High Efficiency Video Coding in Industry 4.0

The demand for multi-view (MV) video has increased rapidly, and much research has been devoted to improving the techniques that meet it. The High Efficiency Video Coding (HEVC) compression standard is implemented in this work. HEVC is designed to reduce bitrate while maintaining the same quality as the previous standard, Advanced Video Coding (H.264), and it provides better compression for higher-resolution video such as Ultra High Definition (UHD). This work presents a preliminary study of MV with depth using HEVC compression over a real-time streaming protocol. The proposed work may help the industry enhance the viewing experience through multiple-camera capture and resolve the data traffic issue of transmitting UHD video. An MV & 3D-HEVC codec is also proposed to encode and decode video streams in near real time, prototyped with three different camera views with depth on a high-performance CPU. Validation measures such as BD-rate, encoding time, QP and PSNR will be used for comparison against simulcast real-time multi-view architectures across different HEVC extensions. View Project

Environmental Sustainability Publications

  1. TAN CHI WEE. A Prototype of Traffic Light Colour Detection Using Convolutional Neural Network (CNN) Algorithm. SDG11
  2. TAN CHI WEE. Flower Recognition Model based on Deep Neural Network with VGG19. SDG15
  3. TAN CHI WEE. Deep Learning & Hybrid Model - The Future of Medical Image Watermarking. SDG11
  4. TEW YIQI. An Evaluation of Virtual Classroom Performance with Artificial Intelligence Components. SDG11

Contact Us


TELEPHONE: 603-41450123 Ext no. 3233
MOBILE PHONE: 6011-10758554
FAX: 603-41423166 / 018 925 1001

8.30am - 5.30pm (Monday - Friday)

Map to TAR UMT