Metadata is detailed information that describes the origin and properties of data, and it is widely used in information technology.
Methodology
The methodology can be divided into 3 parts:
Part 1: Data Preparation
1.1 Prepare a total of 25 samples and rename every image to sample_image_0X, where X is the image number.
1.2 Place every sample 30 centimeters away from the camera.
Part 2: Program Development
2.1 Use easyOCR to read the sample images.
2.2 Truncate the resulting string to 200 characters.
2.3 Compare the result against the metadata to determine which metadata entry is closest to the sample, as sketched below.
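A minimal Python sketch of how steps 2.1-2.3 could look; the file name, the metadata list, and the use of difflib's SequenceMatcher as the similarity measure are illustrative assumptions, not the project's actual code:

import easyocr
from difflib import SequenceMatcher

reader = easyocr.Reader(['en'])          # initialize the OCR model once

def best_metadata_match(image_path, metadata_names):
    # 2.1 Read all text fragments found in the sample image.
    fragments = reader.readtext(image_path, detail=0)
    # 2.2 Join the fragments and keep at most 200 characters.
    text = ' '.join(fragments)[:200]
    # 2.3 Return the metadata entry most similar to the OCR output.
    return max(metadata_names,
               key=lambda name: SequenceMatcher(None, text, name.lower()).ratio())

# Hypothetical usage with made-up metadata entries:
print(best_metadata_match('sample_image_01.jpg', ['paracetamol', 'ibuprofen']))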
Part 3: Application Display
3.1 Build a graphical user interface (GUI) for Python with Tkinter.
3.1.1 Create a window for importing images from the computer into the program, with a background, logo, instructions, buttons, and text boxes for displaying output.
3.1.2 Connect the application page to the program.
3.1.3 Show the results (a minimal window sketch follows).
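A bare-bones sketch of the Tkinter window described in step 3.1; the widget texts and the placeholder import handler are assumptions, not the project's actual interface:

import tkinter as tk
from tkinter import filedialog

def import_image():
    # 3.1.1 Let the user pick an image file from the computer.
    path = filedialog.askopenfilename(filetypes=[('Images', '*.jpg *.png')])
    if path:
        # 3.1.2-3.1.3 Here the OCR pipeline (not shown) would run,
        # and its result would be written into the text box.
        result_box.insert(tk.END, f'Selected: {path}\n')

root = tk.Tk()
root.title('Drug Label Reader')
tk.Label(root, text='Import a sample image to read its label').pack()
tk.Button(root, text='Import image', command=import_image).pack()
result_box = tk.Text(root, height=8, width=50)
result_box.pack()
root.mainloop()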
Results
Tables 1-5: The names read by EasyOCR and the corresponding metadata for Samples 1-25, five samples per table.
Conclusion
Metadata contributes to higher accuracy than the original results. When the samples were read with TesseractOCR and the error was calculated using CER and WER, only about 5 samples were read correctly. Reading with EasyOCR gave better results: the number of drug names read correctly increased to 20, because even partially read text could still be matched against the metadata. The slower processing can be an acceptable trade-off, since a correct reading identifies the drug without any errors.
Acknowledgement
This project was supported by Science Classroom in University Affiliated School (SCiUS) under PSU
Wittayanusorn School. The funding of SCiUS is provided by the Ministry of Higher Education, Science,
Research, and Innovation, which is highly appreciated.
Reference
1. Tesseract OCR. https://github.com/tesseract-ocr/tesseract [accessed 13 August 2021].
2. Sachin D. N. Document Image Analysis Using Imagemagick and Tesseract-ocr. https://www.researchgate.net/profile/Sachin-D-N/publication/331040290_Document_Image_Analysis_Using_Imagemagick_and_Tesseract-ocr/links/5f1316a792851c1eff1bb5d4/Document-Image-Analysis-Using-Imagemagick-and-Tesseract-ocr.pdf [accessed 13 August 2021].
3. "EasyOCR." GitHub, 31 May 2022. https://github.com/JaidedAI/EasyOCR [accessed 13 January 2021].
4. Overview of Text Similarity Metrics. https://towardsdatascience.com/overview-of-text-similarity-metrics-3397c4601f50 [accessed 13 January 2021].
5. Text Similarity Measures. https://machinelearninggeek.com/text-similarity-measures/ [accessed 13 January 2021].
6. Getting Started with EasyOCR for Optical Character Recognition. https://pyimagesearch.com/2020/09/14/getting-started-with-easyocr-for-optical-character-recognition/ [accessed 13 January 2021].
Title: The Development of an Application for Cassava Disease Detection (OT1_18_01)
Field: Computer and Technology
Author: Miss Kornrawee Kochtat
Miss Pemika Khakhai
Miss Woraruethai Hutawattana
School: Surawiwat School, Suranaree University of Technology
Advisor: Assoc. Prof. Dr. Thara Angskun, School of Information Technology, Institute of Social Technology, Suranaree University of Technology
Abstract
Cassava is one of the important industrial crops of Thailand. However, in each cassava cultivation there can be disease outbreaks in the plants, which result in damage to the product. In addition, some cassava diseases look similar, which may lead farmers to use inappropriate treatments. This paper presents an application for detecting cassava lesions to help farmers, using machine learning, a branch of artificial intelligence that focuses on using data and algorithms to learn on its own, to help identify cassava diseases. The results showed that the application was able to detect cassava lesions. The highest accuracy was 0.92, and the recall of diseased cassava was 0.94.
Keywords: cassava disease, machine learning, image classification
Introduction
Cassava is one of the important industrial crops of Thailand. Cassava has many uses, such as serving as a raw material in various industries, including the food and energy industries. However, there are many obstacles in the process of cassava production, and one of them occurs while the cassava is growing: disease outbreaks are possible, which damage the product and cause farmers to lose income. In addition, certain diseases have similar characteristics and cause confusion, which can lead to the wrong treatment and care and results in further damage to the plant.
Nowadays, artificial intelligence technology plays an increasingly important role in human daily life by helping with or reducing the burden of human work. One field of artificial intelligence that is constantly being studied and developed is image classification, which helps classify and categorize, with greater accuracy, images that are difficult for humans to differentiate.
Given the problems mentioned above, this article presents an application for detecting cassava lesions to help farmers, using artificial intelligence technology to classify cassava diseases and image augmentation techniques to enlarge the dataset. In addition, the application can provide information and treatment methods for cassava diseases and reduce the damage or loss in the plantation.
Methodology
The method and experimental details are divided into 2 parts.
Part 1: Creating a model to detect cassava disease
1.1 Prepare the images of cassava leaves for training. There are 1,089 images from the iCassava 2019 competition on kaggle.com: 316 images of healthy cassava and 773 of Cassava Green Mite (CGM). An example is shown in Figure 1.
Figure 1: Examples of two types of Cassava: (A) Healthy and (B) Cassava Green Mite.
Figure 2: Image augmentation: (A) Original, (B) GaussianBlur, (C) AverageBlur, (D) MotionBlur, (E) Rotate, (F) Fliplr, (G) Shear, and (H) KeepSizeByResize.
1.2 Divide the cassava photo set into three parts, a training set, a validation set, and a test set, in a 7 : 2 : 1 ratio.
1.3 Use GaussianBlur, AverageBlur, MotionBlur, Rotate, Fliplr, Shear, and KeepSizeByResize for image augmentation. An example is shown in Figure 2.
1.4 Create a model for image classification by transfer learning from MobileNetV2, a mobile-friendly model.
1.5 Train the model by feeding it the training and validation sets, and use the test set to evaluate it. The model was trained with the Adam optimizer at a learning rate of 0.00001 for 100, 150, and 200 epochs to compare accuracy and recall (steps 1.3-1.5 are sketched in code after this list).
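A condensed Python sketch of steps 1.3-1.5 under stated assumptions: the augmenter parameters, input size, and classifier head are illustrative, while the optimizer and learning rate follow the text. The augmenter names match the imgaug library, which provides GaussianBlur, Fliplr, and the rest.

import imgaug.augmenters as iaa
import tensorflow as tf

# 1.3 A random pipeline built from the augmenters named in the text
# (parameter ranges are assumptions).
augment = iaa.SomeOf((1, 3), [
    iaa.GaussianBlur(sigma=(0.0, 2.0)),
    iaa.AverageBlur(k=(2, 5)),
    iaa.MotionBlur(k=7),
    iaa.Rotate((-25, 25)),
    iaa.Fliplr(0.5),
    iaa.ShearX((-15, 15)),
])

# 1.4 Transfer learning: freeze MobileNetV2 and add a small classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # healthy vs. CGM
])

# 1.5 Compile with the Adam optimizer at the learning rate stated in the text.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, validation_data=val_data, epochs=200)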
Part 2: Developing a mobile application
2.1 Design the user interface of the application using Figma.
2.2 Use Android Studio to create the application in the Java language.
2.3 Download the trained model from Google Colab as a tflite file and import the model into Android Studio to develop an application that can classify cassava diseases.
Results and Discussion
The classification results for the different epoch counts are shown in the form of confusion matrices. The values in the confusion matrix are used to calculate the accuracy and recall of the model, which indicate its quality; a worked example follows. As shown in Table 1, when we trained the model for more epochs, the accuracy and the recall of Cassava Green Mite increased. Cassava Green Mite's recall was higher than healthy cassava's recall, indicating that the model identifies Cassava Green Mite better than the healthy class.
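For concreteness, this is how accuracy and recall fall out of a 2x2 confusion matrix; the counts below are invented for illustration and are not the project's results:

import numpy as np

cm = np.array([[94, 6],    # row 0: true CGM     -> predicted [CGM, healthy]
               [11, 89]])  # row 1: true healthy -> predicted [CGM, healthy]

accuracy = np.trace(cm) / cm.sum()           # correct predictions / all
recall_cgm = cm[0, 0] / cm[0].sum()          # true CGM found / all actual CGM
recall_healthy = cm[1, 1] / cm[1].sum()
print(accuracy, recall_cgm, recall_healthy)  # -> 0.915 0.94 0.89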
Table 1: Accuracy and recall of the model at different epochs

epochs   accuracy   recall (healthy)   recall (CGM)
100      0.8909     0.88               0.90
150      0.9182     0.91               0.92
200      0.9182     0.88               0.94
Figure 3: Confusion matrix of the results from the model trained for 200 epochs; 0 means Cassava Green Mite and 1 means healthy.
Figure 4: The relationship between accuracy and epoch.
Figure 5: The relationship between loss and epoch.
Conclusion
The purpose of this article is to develop a mobile application for classifying Cassava Green Mite and healthy cassava. We created the image classification model by transfer learning from the MobileNetV2 model and used image augmentation to expand the dataset, then selected the best model for the application.
Figures 4 and 5 show that the cassava screening model does not overfit, indicating that the model is effective and can be put into practical use.
In future work, we will focus on building a multi-class cassava disease classification model by choosing cassava diseases commonly found in Thailand. Furthermore, we will develop the mobile application for deployment in real-life scenarios.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS) under
Suranaree University of Technology and Surawiwat School, Suranaree University of Technology. The funding
of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation. This extended
abstract is not for citation.
References
1. Abayomi-Alli OO., Damaševičius R, Misra S, Maskeliūnas R. Cassava disease recognition from low-quality
images using enhanced data augmentation model and Deep Learning [journal article on the Internet]. 2021
[cited 2021 Oct 29]; 38(7): [about 21 pages]. Available from: https://doi.org/10.1111/exsy.12746
2. Aduwo JR, Mwebaze E, Quinn JA. Automated Vision-Based Diagnosis of Cassava Mosaic Disease.
Industrial Conference on Data Mining; 2010.
3. Mwebaze E, Gebru T, Frome A, Nsumba S, Tusubira J. iCassava 2019 Fine-Grained Visual Categorization
Challenge. 2019.
4. Ramcharan A, McCloskey P, Baranowski K, Mbilinyi N, Mrisho L, Ndalahwa M, et al. A Mobile-Based
Deep Learning Model for Cassava Disease Diagnosis. Frontiers in Plant Science [Internet]. 2019;10.
Available from: https://doi.org/10.3389/fpls.2019.00272
5. Sambasivam G, Opiyo GD. A predictive machine learning application in agriculture: Cassava disease
detection and classification with imbalanced dataset using convolutional neural networks.
Egyptian Informatics Journal [Internet]. 2021;22(1):27–34. Available from:
https://doi.org/10.1016/j.eij.2020.02.007
Title: AI Chatbot for Emotional and Mental Health Support for Students (OT1_15_05)
Field: Technology and Computer
Author: Miss Chonnipa Choonoi
Miss Sirinaphat Suttirak
School: PSU.Wittayanusorn School, Prince of Songkla University, Hatyai Campus
Advisor: Mrs. Wareerat Pumitummarat (PSU.Wittayanusorn School)
Asst. Prof. Dr. Kitsiri Chochiang (Prince of Songkla University, Hatyai Campus)
Abstract
The "SafeZone" chatbot is an artificial intelligence chatbot for mental health care, giving advice on
educational problems for students at the secondary level on the application line, and aims to develop this
artificial intelligence chatbot for use within schools. To provide mental health care and emotional support for
students under the circumstances of the COVID-19 outbreak. And to collect mental health data of students in
schools for prompt assistance and care.
Functions of the "SafeZone" chatbot include counseling on educational problems, covering difficulty understanding lessons, problems with teachers, problems with assignments and homework, and problems with test scores, GPA, exams, and college entrance examinations. There is also a play mode to help users relax and relieve study stress through functions for encouragement, song recommendations, movie or series recommendations, and a caption of the day.
Programs used for development include Dialogflow, Line Official Account, Canva (for graphic design in the chatbot), Integromat (for managing the webhook between Dialogflow and Google Sheets), and Google Data Studio (for displaying the SafeZone dashboard, which is overseen by the guidance teacher).
This project received development funding from the 24th National Software Contest (NSC), organized by the National Electronics and Computer Technology Center (NECTEC), National Science and Technology Development Agency.
Keywords: Chatbot, Artificial Intelligence, Natural Language Processing, Mental Health care
Introduction
Learning has had to adapt to an online format in the current situation caused by the COVID-19 outbreak. Social media posts on Twitter have discussed the impact of online learning, in which students become restless, grow more tired, and tend to be more stressed (messages from Thai-language Twitter hashtags about refusing to study and about online learning). Beyond social media messages, there have also been research studies on stress and depression in high school students. It was found that the COVID-19 outbreak was causing increased stress and depression among students in higher grades and that their risk of facing mental health problems also increased.1
One such cause is the inaccessibility of mental health services in Thailand, which leads people seeking mental health services to expect that they will be labeled as mad and raises concerns about privacy. Chatbots, on the other hand, are an excellent option for solving these problems, and they have been developed and have received a lot of attention. An article titled "The Big Promise AI Holds for Mental Health" by Yelena Lavrentyeva2 said that using an AI bot has made people more open about their problems, because they believed the bots were not biased and the advice was offered promptly.
This project for developing a mental health chatbot for students has the purpose of developing a smart chatbot for use in schools, for mental health care and emotional support for students during online learning under the circumstances of the COVID-19 outbreak.
Methodology
Phase 1: Requirement Gathering
Gather requirements from the sample group, which is PSU.Wittayanusorn's students.
Phase 2: Developing the Chatbot
Programs: Dialogflow, Integromat, Google Data Studio, Line Official Account, Canva
Phase 3: Satisfaction Survey
Result
The chatbot's overall working process is divided into 2 main parts as follows:
1. The user interface (UI) part begins when the user starts using the chatbot via the LINE application. The user's messages become input for the NLP process, which analyzes the text to understand the main issues before the dialogue management process replies with content matching what the user wants to communicate.
2. The back-end part is another important part, with the purpose of collecting the user's usage information and the students' mental health information so that the school can provide proper mental health care for the students. Information gathered via the chatbot is illustrated in a real-time dashboard through Google Data Studio under the supervision of the guidance teachers; a webhook sketch follows.
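A minimal sketch of the back-end idea: a webhook that receives the intent Dialogflow matched and would log it for the dashboard. The project used Integromat and Google Sheets for this step, so the Flask stand-in below is only illustrative:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    # Dialogflow ES posts a JSON body whose queryResult carries the match.
    req = request.get_json()
    intent = req['queryResult']['intent']['displayName']
    # Here the matched intent (e.g. a stress-related topic) would be
    # appended to a sheet that feeds the Google Data Studio dashboard.
    return jsonify({'fulfillmentText': f'Received intent: {intent}'})

if __name__ == '__main__':
    app.run(port=8080)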
Conclusion
According to the requirements gathered and the problems high school students face daily on an online platform, it was found that more than 50% of students have stress at a medium to the highest level due to studying problems. The researchers then developed the "SafeZone" chatbot to provide mental health care, especially for teenagers and students, using up-to-date language to make users feel comfortable, as if talking with a friend and, moreover, a consultant who can give advice about their studies.
After the chatbot was used, overall satisfaction with it was at the satisfied level. Satisfaction with individual aspects such as the chatbot's design, general details, and functionality was at the highest level, while the benefits-and-uses aspect and the chatbot's operation were rated at a high level.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS).
The funding of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation.
This extended abstract is not for citation.
References
1. Gunn Pungpapong, Rasmon Kalayasiri. Depression and anxiety plus levels of stress among secondary
school students during the Covid-19 lockdown. JHSMR 2022;40:157-171.
2. Lavrentyeva Y. The big promise AI holds for mental health. Becoming Human [Internet]. 2021 [cited 2021 August 24]. Available from: https://itrexgroup.com/blog/ai-mental-health-examples-trends/#header
Title: Application that analyzes people at risk of contracting COVID-19 (OT1_15_11)
Field: Technology and Computer
Author: 1. Mr. Panapon Thienmontree
2. Mr. Napat Umpa
School: PSU.Wittayanusorn School, Prince of Songkhla University
Advisor: Asst. Prof. Janya Sainui
Mrs. Wareerat Pumitummarat
Abstract
This project concerns an application that helps analyze risk in the COVID-19 situation. The objective is to develop sensors that measure values such as body temperature, blood oxygen, heart rate, and lung volume, from which a risk score is calculated using the principle of early warning scores relevant to COVID-19 patients. The application responsible for displaying the measured and processed values is a progressive web app built with React, with data passing through Firestore as the connection point between the application and the device. The system aids early risk diagnosis to reduce the burden on healthcare professionals in both patient data collection and diagnosis.
The device that uses the sensors to measure values is developed with Arduino, and the application is developed with Visual Studio; Firebase connects the two parts, with Firestore used to store and forward data for display in the application, which shows results to doctors, nurses, patients, or ordinary people who want to check their health. Our application system provides both access to the data collected from the Arduino sensors and the processing of risk values, displayed clearly, and the data can be used further.
Keywords: COVID, Firestore, Arduino, sensor, React
Introduction
Currently, a problem with a huge impact on the world and on many countries is the COVID-19 epidemic: an infectious disease with a severe and rapidly expanding number of cases on almost every continent, causing damage to society. Further problems come from disease prevention and control measures, which lead to many consequences, from checking the temperature of travelers at airports and measures such as wearing masks and washing hands to suspending travel to cities. Some hospitals lack enough capable staff to help patients during the epidemic, since treatment, examination, and diagnosing the severity of the disease require a large number of people.
While technology is currently very advanced in communication, entertainment, and medicine, its distribution in hospitals is clearly not uniform: for example, body-temperature measurement may be available while technology for measuring oxygen and blood pressure is lacking.
From the problems mentioned above, our project foresaw the shortage of hospital staff for diagnosing disease severity. We therefore introduce technology that measures body temperature, oxygen, lung pressure, and heart rate, used together with technology that sends the data to a database for further processing and helps diagnose the severity of the disease through an application easily accessible via mobile phones, to reduce the diagnostic burden on hospital staff so they can attend to other tasks that need human care.
Methodology
Overview of working and sending data to Firebase
The process starts by taking the values from all the sensors: blood oxygen, lung volume, heart rate, and body temperature are sent to the processing section, where the processor calculates risk values based on the early warning scores derived from the research literature.
The risk value is then passed to the NodeMCU ESP32, which forwards it to Firestore via Wi-Fi; the application then reads the value from Firestore and displays it, allowing patients and doctors to check the information.
Figure 1: Sending data from Arduino to Firebase.
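As an illustration of the early-warning-score idea, the Python function below assigns points to each vital sign; the bands are simplified assumptions in the spirit of NEWS-style scales, not the thresholds from the cited research:

def risk_score(temp_c, spo2_percent, heart_rate):
    # Assumed, simplified scoring bands for demonstration only.
    score = 0
    if temp_c >= 39.1 or temp_c <= 35.0:
        score += 3
    elif temp_c >= 38.1:
        score += 1
    if spo2_percent <= 91:
        score += 3
    elif spo2_percent <= 95:
        score += 1
    if heart_rate >= 131 or heart_rate <= 40:
        score += 3
    elif heart_rate >= 111:
        score += 2
    return score  # a higher total indicates higher risk

print(risk_score(38.5, 93, 115))  # -> 4

A score computed this way on the device side is what would be forwarded to Firestore for display in the web app.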
Web app structures
1. Front-end (User Interface)
We use ReactJS, a JavaScript framework for building progressive web apps, together with Material UI, a component library providing elements such as buttons, sidebars, progress bars, log in/sign up forms, and themes. We designed the app to be lightweight and minimal so that only necessary information is displayed to users.
Figure 2: The entire web app system.
2. Back-end
This is the crucial part of the system, because we need to display health information, which is private, and store data coming directly from the Arduino. We use Firebase to deploy to a server and to act as the database for storing individual information, as the most secure option we can offer users.
Results
Figure 3: Unregistered homepage.
Figure 4: Login/signup popup.
Figure 5: Result message box.
Figure 6: Registered homepage.
Conclusion
We tested the sensor measurement system: body temperature, blood oxygen, heart rate, and lung volume are all measured properly with good accuracy. By referencing early warning scores, we can compute a risk value and send the data to Firestore so that it can be displayed on the web app in real time and securely, allowing doctors and medical personnel to make use of patients' body information more easily.
Acknowledgment
This project was supported by Science Classroom in University Affiliated School (SCiUS) under
PSU Wittayanusorn School. The funding of SCiUS is provided by the Ministry of Higher Education, Science,
Research, and Innovation, which is highly appreciated.
References
1. Martín-Rodríguez F., et al. Early Warning Scores in Patients with Suspected COVID-19 Infection in Emergency Departments. 2021. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8001393/pdf/jpm-11-00170.pdf [accessed 24 June 2021].
2. Google Developers. Formal reference documentation for Firebase SDKs. 2020. Available from: https://firebase.google.com/docs [accessed 24 June 2021].
3. Abenoja A. React Bootstrap: The most popular front-end framework, rebuilt for React. 2019. Available from: https://react-bootstrap.github.io/getting-started/introduction [accessed 24 June 2021].
Oral presentation
Technology and Computer Group 1
Sunday August 28, 2022
1. OT1_10_01: Thai color rice classification using deep learning
   Authors: Mr. Paniti Parattajariya, Mr. Ditdanai Mekwattanakarn, Mr. Anucha Pheombunkasemsuk
   School: Kasetsart University Laboratory School Kamphaeng Saen Campus Educational Research and Development Center
2. OT1_08_01: Smart Wheelchair with ROS system by CiRA CORE for Disabled
   Authors: Mr. Sakchat Hensook, Mr. Phudit Thadthiam, Mr. Kanatip Chaomaungpuk
   School: Lukhamhanwarinchamrab School
3. OT1_15_12: Physiotherapy application for the elderly
   Authors: Miss Jiratthiporn Sathitsitthiporn, Miss Thunsiri Sriphong
   School: PSU.Wittayanusorn School
4. OT1_09_06: Application development for knowledge sharing that can satisfy the user's demand
   Authors: Miss Nantatchaporn Pimapansri, Mr. Sithikorn Vichakij, Mr. Siwakorn Kaewsa-Ad
   School: Engineering Science Classrooms (Darunsikkhalai School)
5. OT1_16_01: Developing Application for Predicting COVID-19 Infection from Chest X-Ray Images Using Deep Learning Techniques
   Authors: Mr. Phoenix Palaray, Mr. Awera Ruampornpanu
   School: Demonstration School Prince of Songkla University, Pattani Campus
6. OT1_18_03: Testing the efficiency of CiRA CORE in separating mango varieties using photographs
   Author: Mr. Thiranan Incham
   School: Surawiwat School, Suranaree University of Technology
7. OT1_15_03: Mobile Application for Estimating the Age of Bloodstains
   Authors: Miss Salil Saengamporn, Miss Nutnicha Jirarattanasopa
   School: PSU.Wittayanusorn School
8. OT1_17_01: Web application for English pronunciation training by using automatic speech recognition technology
   Authors: Mr. Poomtai Suwannao, Mr. Kongpob Boonma
   School: PSU Wittayanusorn Surat Thani School
Title: Thai color rice classification using deep learning (OT1_10_01)
Field: Technology and Computer
Author: Mr. Paniti Parattajariya
Mr. Ditdanai Mekwattanakarn
Mr. Anucha Pheombunkasemsuk
School: Kasetsart University Laboratory School Kamphaeng Saen Campus Educational Research and Development Center
Adviser: Ms. Busara Pattanasiri (Ph.D.)
Abstract
At present, consumers are becoming more alert and paying more attention to their health, which has made colored rice, rice that is unmilled or only lightly milled, increasingly popular, because the rice seed coat is rich in nutrients that are beneficial to the human body, such as antioxidants, vitamin E, minerals, and fiber. Although the varieties of colored rice have similar colors and characteristics, they differ in toughness and softness. In this work, we applied deep learning to classify 8 varieties of colored rice, namely Kong Sri Nil, Riceberry, Niaw Dum, Kong Hom Mali Daeng, Kong Daeng Manpu, Tubtim Chumpae, Luem Pua, and Sangyod. We photographed the rice grains with a flatbed scanner, processed the images with a Python program, and classified the processed images with a convolutional neural network consisting of convolutional, max-pooling, flatten, and dense layers. The model reaches an overall accuracy of 87% and classifies Kong Sri Nil, Riceberry, Niaw Dum, Kong Hom Mali Daeng, Kong Daeng Manpu, Tubtim Chumpae, Luem Pua, and Sangyod with accuracies of 92.5%, 77.5%, 85.37%, 87.8%, 92.5%, 54.16%, 77.5%, and 92.5%, respectively.
Keywords: rice; deep learning; machine learning
Introduction
Rice (Oryza sativa L.) is an important cereal crop and the staple food of most of the population. Although white rice accounts for most consumption, there are special rice varieties that contain pigments in the husk or bran, such as black rice, purple rice, red rice, and brown rice. The pigment contains a mixture of anthocyanins, which give the grain colors from shades of red through purple to black. Recently, the consumption of colored rice has increased due to its benefits to human health: owing to phenolic compounds such as phenolic acids and anthocyanins, numerous studies show antioxidant, anti-cancer, anti-inflammatory, anti-allergic, anti-mutation, and hypoglycemic activity of colored rice. In addition, colored rice is a good source of minerals, fiber, vitamins, and other phytochemicals with beneficial effects on health. Each type of colored rice is chemically
different, with different amylose content, resulting in different taste, texture, toughness, and softness when cooked. Therefore, our group came up with the idea of determining the type of rice by classifying the color of the grains with a program that can distinguish rice colors from photographs of the grains. The program analyzes the type of rice through color classification, using the color properties of the rice grain images. In this research, we used the convolutional neural network method to develop the analysis program. A convolutional neural network extracts information from images and learns the characteristics of those images, which allows the program to analyze rice grain images by learning the color characteristics of the images we supply and thus to determine the rice type by color classification.
Methodology
Eight rice grain samples were used: Kong Sri Nil, Riceberry, Niaw Dum, Kong Hom Mali Daeng, Kong Daeng Manpu, Tubtim Chumpae, Luem Pua, and Sangyod. The grains were imaged with an Epson Perfection V800 Photo flatbed scanner; for image processing, a pierced acrylic sheet was used to keep the rice grains from sticking together, as shown in Figure 1.
Figure 1: An example of rice grains image from a flatbed scanner.
Each rice grain sample's source, color, and total number of grains are shown in Table 1.
The rice grain images from the scanner are passed through an image processing pipeline in the Python program: the edges of the grains are found with the Canny edge detection method, a frame is created around each grain with the contour method, and each grain is separated into its own image. The separated images are then screened to keep only complete grains, discarding grains whose standardized area score (z-score) is greater than 3 or less than -3; a code sketch of this pipeline follows.
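A Python sketch of the grain-segmentation pipeline just described (Canny edges, contours, z-score screening); the Canny thresholds and the file name are assumptions:

import cv2
import numpy as np

image = cv2.imread('scan.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)                     # edge map of the grains
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

areas = np.array([cv2.contourArea(c) for c in contours])
z = (areas - areas.mean()) / areas.std()              # standardized grain area
# Keep only complete grains: discard area outliers beyond |z| = 3.
grains = [c for c, zi in zip(contours, z) if abs(zi) <= 3]
print(f'{len(grains)} complete grains out of {len(contours)} detected')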
Table 1: Source, color, and total number of rice grains for each sample

Variety                Source          Color    Number of rice grains
Kong Si Nil            Nakhon Pathom   purple   10,350
Riceberry              Phayao          purple   10,350
Niaw Dum               Samut Prakan    purple   10,350
Luem Pua               Pathum Thani    purple   7,830
Kong Daeng Manpu       Pathum Thani    red      10,350
Tubtim Chumpae         Pathum Thani    red      10,350
Kong Hom Mali Daeng    Bangkok         red      10,350
Sangyod                Pathum Thani    red      10,350
To build the convolutional neural network, the Keras library with TensorFlow version 2.0 is used to find the most efficient model structure. The created model has a total of 10 layers in a fixed order, drawn from 4 layer types: (1) convolutional layer, (2) max-pooling layer, (3) flatten layer, and (4) dense layer. The final dense layer uses a softmax function to output the probability of each grain classification; a sketch of such a stack follows.
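One possible 10-layer Keras stack built from the four layer types named above; the filter counts, kernel sizes, and input size are assumptions rather than the published configuration:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # Alternating convolutional and max-pooling layers (8 layers)
    layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),                        # layer 9
    layers.Dense(8, activation='softmax'),   # layer 10: one output per variety
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()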
Result
The separated rice grain images of each variety were put through the analysis and classification process using the convolutional neural network method. All rice grain samples were divided into two data sets: a data set used for model training and a data set for testing the results, as shown in Table 2.
Table 2: Number of rice seeds for model training, number for model prediction, and accuracy for each rice type

Rice type              Seeds for training   Seeds for prediction   Accuracy (%)
Kong Si Nil            3200                 37                     92.5
Riceberry              3200                 31                     77.5
Niaw Dum               3200                 35                     85.37
Luem Pua               3200                 31                     77.5
Kong Daeng Manpu       3200                 37                     92.5
Tubtim Chumpae         3200                 13                     54.16
Kong Hom Mali Daeng    3200                 36                     87.80
Sangyod                3200                 37                     92.5
Over 300 loops of model development, all 8 types of rice were classified, namely Kong Sri Nil, Riceberry, Niaw Dum, Kong Hom Mali Daeng, Kong Daeng Manpu, Tubtim Chumpae, Luem Pua, and Sangyod, with the highest accuracy of 87%, as shown in Figure 2.
Figure 2: Classification accuracy of the colored-rice model.
Figure 3: Confusion matrix of actual value and predicted value for each type of rice.
Figure 3 shows the confusion matrix, which illustrates the model's ability to predict rice varieties: each row corresponds to the variety of the tested rice, and each entry is the count of predictions for that variety.
Conclusion
From the development of a model for separating 8 rice varieties (Kong Sri Nil, Riceberry, Niaw Dum, Kong Hom Mali Daeng, Kong Daeng Manpu, Tubtim Chumpae, Luem Pua, and Sangyod), it was found that the model had an accuracy of 87% in sorting all 8 rice cultivars.
Acknowledgment
This project was supported by Science Classroom in University Affiliated School (SCiUS). The funding
of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation. This extended abstract
is not for citation.
Reference
Verma, B. (2010). Image processing techniques for grading & classification of rice. 2010 International Conference
on Computer and Communication Technology (ICCCT).
Nagoda, N., & Ranathunga, L. (2018). Rice Sample Segmentation and Classification Using Image Processing and
Support Vector Machine. 2018 IEEE 13th International Conference on Industrial and Information Systems (ICIIS).
Koklu, M., Cinar, I., & Taspinar, Y. S. (2021). Classification of rice varieties with deep learning methods.
Computers and Electronics in Agriculture, 187, 106285.
Traore, Boukaye Boubacar; Kamsu-Foguem, Bernard; Tangara, Fana (2018). Deep convolution neural network
for image recognition.
Title: Smart wheelchair with ROS system by CiRA CORE for Disabled (OT1_08_01)
Field: Technology and Computer
Author: Mr. Sakchat Hensook
Mr. Phudit Thadthiam
Mr. Kanatip Chaomuangpak
School: Lukhamhanwarinchamrab School, Ubon Ratchathani University
Advisor: Dr. Amorn Thedsakulwong, Mr. Satapisat Kraisee (Ubon Ratchathani University / Lukhamhanwarinchamrab School)
Abstract
A wheelchair is a device that helps people with disabilities, patients, and the elderly with mobility problems. The number of elderly people with mobility problems reaches 1,032,455, and this is a major issue that affects their daily lives. Physically, the elderly are weaker than adults; therefore, the organizers were interested in developing a smart wheelchair that assists the elderly by employing a robotic system to move the wheelchair and prevent accidents during use. The aim of this project is to develop a smart wheelchair with a ROS system using CiRA CORE. The smart wheelchair was built in 2 parts: (1) modification of the wheelchair and (2) a control system with ROS using the CiRA CORE program to detect obstructions. In the first part, we built the wheelchair by designing a 3D model for the connection between the motors and the wheels, attached two 24 V 350 W motors to the wheelchair, and finally installed the Arduino UNO circuit and tested the motor control system; the wheelchair can run at a maximum speed of 8 kilometers per hour. In the second part, CiRA CORE helps the wheelchair avoid obstacles: it moves normally under joystick control, but when an obstacle is encountered, the CiRA CORE system receives the image from the camera and responds by automatically turning the chair to dodge it.
Keywords: ROS, CiRA CORE, Arduino UNO, and L298N
Introduction
A wheelchair is a device that helps people with disabilities or patients with mobility problems. The number of elderly people with mobility problems reaches 1,032,455, so this is a big problem, yet many people ignore it: for example, a person with a leg disability may need a wheelchair for the rest of their life, while a patient with a leg fracture may need one for a short time. The organizers were interested in developing this project to help the elderly, to provide convenience and reduce restrictions on daily activities. Choosing the right wheelchair that meets the user's needs not only makes it easier to get around; it also promotes quality of life by opening opportunities for education, work, and social life. The major problem is mobility: wheelchairs are necessary for those with impairments and for patients who are unable to walk normally, and doctors recommend them to assist with movement.
Physical condition deteriorates over time, making it difficult for the elderly to move, so a wheelchair is often the best choice. However, some older adults lack the stamina to propel a wheelchair on their own, and even an electric wheelchair can be inconvenient. A robotic system that helps with movement and prevents accidents during use can act as an assistant, and such a system must be very stable, because the elderly are physically weaker than adults: their bodies fall ill and get injured easily, and their capacity for self-help is limited. It is therefore very important to pay attention to wheelchairs for the elderly. The organizing committee is interested in solving the problem of inconvenient operation and wants to add functions that help the wheelchair move in different areas.
Method and Experimental Details
a. The following is a list of the equipment and tools used in the operation:
1) CiRA CORE program
2) Wheelchair structure
3) Webcam camera
4) Two 24 V 350 W motors
5) Arduino Uno board
6) Joystick
7) L298N
8) Two 11.1 V 2200 mAh batteries
b. The procedure for putting together an intelligent wheelchair consists of 4 steps:
(1) Take the wheelchair apart to figure out how to balance the two motors and where to mount them on each side, so that the chair is not slanted or skewed to one side.
(2) Once the mounting positions are determined, attach each motor to the wheelchair frame with nuts and screws, and carry out the same procedure on the opposite side.
(3) Connect the motors to the wheelchair wheels with the support of a 3D-printed model.
(4) Test the wheelchair frame after the motors and wheels have been linked.
Results, Discussion, and Conclusion
Table 1: Test results of the wheelchair with joystick control

This system controls the direction of the wheelchair. The controller receives motor control commands from the wheelchair user via the joystick. The controls work as follows: to move the wheelchair, push the joystick in the desired direction; to stop, pull the joystick back to the center and both motors stop.
Table 2: An experiment to make CiRA CORE recognize left, right, and middle positions.
OT1_08_01/2
67
12th SCiUS Forum
When the object is at positions x1-x2, the detected location is set to the left of the wheelchair, with an x-coordinate reading of less than 245. When the object is at position x3, the location is set to the middle, with readings between 245 and 388. When the object is at positions x4-x5, the x-coordinate reading is greater than 388 and the location is set to the right; a code sketch of this logic follows.
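The position logic of Table 2 expressed as a small Python function; the pixel thresholds 245 and 388 come from the text, while the steering labels mirror the behavior described in the next paragraph:

def steering_decision(x_coordinate):
    # Thresholds taken from the text; responses follow the reported behavior.
    if x_coordinate < 245:
        return 'object on the left -> turn left'
    if x_coordinate <= 388:
        return 'object in the middle -> stop'
    return 'object on the right -> turn right'

print(steering_decision(300))  # -> object in the middle -> stop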
Table 3: CiRA CORE program testing of the obstacle detection system for intelligent wheelchair control

Note: The actual scene and the image in the camera are mirrored left to right.
The results of testing the CiRA CORE obstacle detection program for intelligent wheelchair control showed that when the wheelchair detects an object on the left, it turns to the left; when a recognized object is found on the right, it turns to the right; and when an object is found in the middle, the wheelchair stops.
Conclusion
1) We created a wheelchair that can be controlled with a joystick. The system steers the wheelchair to different destinations through the two motors operating in relation to each other under the management of the CiRA CORE system; the controller receives motor control commands from the wheelchair user via the joystick.
2) CiRA CORE is the starting point for the system's design and development. In this prototype form, it receives photographs from the camera, which may be a built-in webcam or an installed external webcam. The CiRA CORE system processes each image received from the camera by isolating the person or obstacle from the background.
Fig. 1. CiRA CORE operation. Fig. 2. Arduino circuit board. Fig. 3. The wheelchair with motors.
Acknowledgements
This project was supported by the Science Classrooms in University-Affiliated School Project (SCiUS) and the Faculty of Science, Ubon Ratchathani University (UBU). We would like to thank our supervisors Dr. Amorn Thedsakhulwong and Mr. Satapisat Kraisee, and our collaborators at the Department of Physics, for helping design and adapt the wheelchair to completion.
References
Deepak K, Reetu M, Sharma SR. Design and Construction of a Smart Wheelchair [Online]. 2015 [cited 2021 September]. Available from: https://www.researchgate.net/publication/305699756_A_Study_on_Smart_Wheelchair_Systems
M. A. K. Al Shabibi and S. M. Kesavan, "IoT Based Smart Wheelchair for Disabled People," 2021 International Conference on System, Computation, Automation and Networking (ICSCAN), 2021, pp. 1-6, doi: 10.1109/ICSCAN53069.2021.9526427.
Hartman A, Gillberg R, Lin CT, Nandikolla VK. Design and development of an autonomous robotic wheelchair for medical mobility. 2018 International Symposium on Medical Robotics (ISMR), IEEE (2018), pp. 1-6.
Kanade P, Prasad JP, Kanade S. IoT based Smart Healthcare Wheelchair for Independent Elderly [Online]. 2021 [cited 2021 September]. Available from: https://ejece.org/index.php/ejece/article/view/355
Title : Physiotherapy Application For the Elderly OT1_15_12
Field : Computer and Technology
Author : Miss Jiratthiporn Sathitsitthiporn
Miss Thunsiri Sriphong
School : PSU.Wittayanusorn School, Prince of Songkla University
Advisor : Asst. Prof. Nithi Thanon (Prince of Songkla University)
Mr. Weerawat Wongmek (PSU.Wittayanusorn School)
Abstract
According to the World Health Organization (WHO), the number of elderly in Thailand is on par with many developed countries, with the third-largest growth rate in Asia after South Korea and Japan, and Thailand has been approaching an aging society since 2005. According to the Department of Mental Health, Ministry of Public Health (2020), the elderly are people whose bodies have deteriorated.
Therefore, the organizers had the idea of making a physical therapy application for the elderly, to make it convenient for them to perform physical therapy. The application contains seven physiotherapy poses referenced from medical websites. The elderly can read each pose's description and view illustrations. Once an exercise begins, the webcam window shows lines connecting points on the body and the angles at the points that represent the joints in the movement. When the specified number of repetitions is completed, the webcam window closes automatically and a message window states that the physical therapy has been completed, so users can proceed to the next posture until all postures are done. Moreover, the application notifies relatives by sending a message.
The experimental results show that the physical therapy application can accurately detect body movements during physical therapy sessions, with slight errors due to incorrect standing, which may prevent the application from determining the right or left side.
Keywords : application, webcam, physiotherapy
Introduction
According to the World Health Organization (WHO), the number of people aged 60 years and over is estimated to increase by at least 3% per year. By 2030, the aging population is estimated at around 1.4 billion, and it will increase to 2 billion by 2050.
Thailand is now an aging society, as can be seen from news, reports, and other media showing that the number of people over 60 years of age is increasing rapidly. Currently, the elderly make up 20% of the population, about 14 million people. Old age is when the body wears out and deteriorates, which makes it more difficult to move or perform daily activities. Physiotherapy for the elderly is therefore indispensable, because it focuses on developing and rehabilitating the body of the elderly through specific techniques. Today, physical therapy mostly requires the elderly to receive treatment in a hospital or general medical facility, but for some this is inconvenient, so they cannot receive physical therapy continually, which makes the therapy less effective.
Therefore, from the above problem, the organizers had the idea of making a physical therapy application for the elderly, so that the elderly can do physical therapy in the application, at home, by following the descriptions and illustrations of the physical therapy postures. After completing a movement, a pop-up window indicates that it is done, and the next posture can begin. Furthermore, the application shows the results and progression after exercising, then sends a message to a relative's LINE via LINE Notify.
Methodology
The experiments were divided into 2 parts as follows:
Part 1: Design the application system
1. Application pages
1.1 Login page
1.2 Homepage
1.3 Exercise section page
Part 2: Coding the physiotherapy postures with MediaPipe pose estimation1 in Sublime Text (a sketch follows this list)
2.1 Determine three points of body joints from the joint-determination picture to define and calculate the angle.
2.2 Specify the point at which the angle will be displayed on the webcam feed.
2.3 Create text boxes to show the number of repetitions and the mobility status.
2.4 Create conditions for the postures and for counting the movements in each exercise.
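A Python sketch of step 2.1 using MediaPipe pose estimation, in the spirit of the cited tutorial; the choice of the left shoulder-elbow-wrist joints and the input frame are assumptions:

import numpy as np
import cv2
import mediapipe as mp

def joint_angle(a, b, c):
    # Angle at point b formed by segments b->a and b->c, in degrees.
    a, b, c = np.array(a), np.array(b), np.array(c)
    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - \
              np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = abs(np.degrees(radians))
    return 360 - angle if angle > 180 else angle

pose = mp.solutions.pose.Pose()
frame = cv2.imread('pose_frame.jpg')                   # or a live webcam frame
result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
if result.pose_landmarks:
    lm = result.pose_landmarks.landmark
    # Landmarks 11, 13, 15 are the left shoulder, elbow, and wrist.
    shoulder, elbow, wrist = [(lm[i].x, lm[i].y) for i in (11, 13, 15)]
    print(f'Left elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg')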
Results
1. The webcam window is displayed. The angle is shown at the middle joint point, together with a message box indicating the number of repetitions and the movement status.
1.1 Physiotherapy images2,3,4
Conclusion
From the results of the trial of the physical therapy application for the elderly, it was found that the application can detect body movements in all physical therapy postures, but there were small detection mistakes: standing incorrectly or wearing clothing that hides certain parts of the body makes the application unable to detect movement and unable to distinguish left from right.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS). The
funding of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation. This
extended abstract is not for citation.
References
1. Nicholas Renotte. MediaPipePoseEstimation [Internet]. 2021 [cited 2021 Aug 13]. Available from:
https://github.com/nicknochnack/MediaPipePoseEstimation.
2. Anusorn Jareonwijit. 10 postures for physical therapy for the elderly. Health at Home Team [Internet].
2016 [cited 2021 Aug 13]. Available from: https://medium.com/@hah.team/10-f83cfefab54f
3. Rehabilitation Medicine Center for the Elderly. Physiotherapy for deterioration in the elderly. 2020 [cited
2021 Aug 13]. Available from: https://www.cherseryhome.com/content/5999
4. Dangkong Saengarun. Physical therapy for knee osteoarthritis patient. 2017 [cited 2021 Aug 13].
Available from: https://he02.tci-thaijo.org/index.php/simedbull/article/view/97678.
Title: Application development for knowledge sharing that can satisfy the user's demand (OT1_09_06)
Field: Technology and Computer
Author: Ms. Nantatchaporn Pimapansri
Mr. Sithikorn Vichakij
Mr. Siwakorn Kaewsa-ad
School: Darunsikkhalai School, King Mongkut's University of Technology Thonburi
Advisor: Dr. Suriya Natsupakpong (King Mongkut's University of Technology Thonburi)
Dr. Arnon Thongsaw (Darunsikkhalai School)
Mr. Tas Yusoontorn (Darunsikkhalai School)
Abstract :
Under the circumstances of the coronavirus pandemic, education has been disrupted and forced to change from onsite learning to online learning. This rapid change affects both teachers and learners in many aspects, for instance, unsupportive learning environments, course comprehension, and health problems. We, as project organizers, are interested in solving these problems using knowledge sharing (KS). Based on many research papers, KS is a process that supports learning performance and enhances working potential in the presence of people, knowledge, and applicable learning spaces. To build such a space, we designed an application for knowledge sharing via summary notes that can satisfy users' demands, using Flutter and the Dart language. We also surveyed the interested group on the topic of educational problems. The application's development is based on the project objective and the information obtained from the survey. As a result, we launched the application prototype to the interested group and collected satisfaction data to evaluate the application. The overall result is positive, with some commentary for future development.
Keywords : Application development / Education / Knowledge sharing / Summary notes
Introduction
During the coronavirus pandemic, the educational system has been disrupted: schools were shut down and online learning systems were brought into use. This new learning system caused difficulty in adaptation for both educators and learners, who struggled with technology use, communication problems, environmental distraction, and mental and physical health problems.
Regarding a survey on online learning in the United States and the Dominican Republic, learners were not satisfied with this new learning system due to the difficulty of focusing on study. This situation affected learners' understanding of the lessons and led to informal discussion among learners, creating a knowledge-sharing environment. Knowledge sharing (KS) has been found by many researchers to have positive impacts on individual creativity, learning performance, and work efficiency. Further research shows that note-taking is one of the factors that supports knowledge sharing and improves recall of lecture materials.
Many online learning platforms were in use during 2015-2019, and applications are a platform whose popularity increases every year. However, we explored the App Store and Play Store and found no application that builds a knowledge-sharing environment around note-taking. Thus, we decided to address online learning problems by developing an application whose initial design draws on the Pinterest application combined with the note-sharing idea. We conducted a survey about the objective of the application and collected opinions from the interested group. Every opinion was considered and combined into a final idea: an application for knowledge sharing by note-taking that provides a learning space accessible from anywhere. The interested group's opinions will be collected again after the launch of the prototype application.
Methodology
The methods were divided into 5 parts as follows:
Part 1: Literature review
We studied the topics of online learning problems, m-learning, summary notes, and knowledge sharing. All the information we obtained from this part was used as the background knowledge for the questionnaires in the next part.
Part 2: Survey on online learning problems using questionnaires
We provided a questionnaire with topics related to online learning problems in different aspects. The topics
included learning environment, lesson understanding, mental and physical health, time management,
availability of learning tools and educators performance. This questionnaire was answered by the interested
group.
Part 3: Application design
3.1 Conceptual application and structure design
We used the information obtained from the questionnaire to analyze the demands of the interested group, or users, and created the application's conceptual features, which consisted of file sharing, saved favorites, tags to find notes, and a discussion space.
3.2 Survey on the application structures
We conducted a survey to check users' satisfaction with the application's conceptual structure and collected data and opinions from the users to improve the application's structural design.
Part 4: Application development
We developed the application by coding in Visual Studio Code with Flutter and the Dart language. The application design was based on the improved design from the previous step. After the application went through testing and debugging, the initial application was ready to launch to the actual users.
Part 5: Application evaluation
After completing the application development, we conducted a user satisfaction survey on the application's design and the effectiveness of its features. The application was evaluated on a scale of 1-5, and users' opinions were also collected for future improvement.
Results
The final design of the application consists of 6 pages: Register, Login, Home page, Search page, Favorites page, and Profile page. The UI/UX design of the application was displayed on a mobile phone simulator by running the Dart source code written in the Visual Studio Code program. Although some of the design was modified, the conceptual functions were retained. An example of the source code and the resulting display can be seen in Figure 1.

Figure 1: Source code and display of the Home page

In the application evaluation part, the results of the survey mentioned in Part 5 of the methodology show that the majority of the interested group is satisfied with the application's design and agrees that the features in the application are as effective as the concept and objective of this project intended.

Figure 2: Satisfaction score on the application's functions
Conclusion
From the satisfaction survey, the results show that most users have positive opinions and a high level of satisfaction with the application. Based on the scores for both satisfaction and suitability, they agreed that the UI/UX design is compatible with the concept and objective of the application.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS) under the
King Mongkut's University of Technology Thonburi and Darunsikkhalai school Engineering Science
Classroom. The funding of SCiUS is provided by the Ministry of Higher Education, Science, Research, and
Innovation, which is highly appreciated. This extended abstract is not for citation.
References
DeZure, D., Kaplan, M., & Deerman, M. A., 2001, “Research on student notetaking: Implications for faculty
and graduate student instructors”, Center for Research on Learning and Teaching [Electronic], No.
16, pp. 1-8, Available: CRLT Occasional Papers [2021, June 28].
Doreaki-enG, 2020, During COVID-19, quarantining and going online at the same time: how to adapt to working, teaching, meeting, and studying online? [Online], Available: https://www2.rsu.ac.th/sarnrangsit-online-detail/covid19onlineactivity [2021, July 28].
Desiree Peralta, 2020, Why Online Classes don’t Work and How to Improve them [Online], Available:
https://medium.com/illumination/why-online-classes-dont-work-and-how-to-improve-them [2021,
July 28].
Titima Thumbumrung, 2019, Co-creating and sharing knowledge for a learning society (Knowledge Sharing) [Online], Available: http://www.thailibrary.in.th/2019/02/08/ks-kt-ke/ [2021, July 7].
Title: Developing Application for Predicting COVID-19 Infection from Chest X-Ray Images Using Deep Learning Techniques (OT1_16_01)
Field: Technology and Computer
Author: Mr. Phoenix Palaray
Mr. Awera Ruampornpanu
School: Demonstration School Prince of Songkla University, Prince of Songkla University, Pattani Campus
Advisor: Asst. Prof. Dr. Salang Musikasuwan
Asst. Prof. Dr. Rattikan Saelim
Department of Mathematics and Computer Science, Faculty of Science and Technology, Prince of Songkla University, Pattani Campus
Abstract
COVID-19 (Coronavirus Disease Starting in 2019) is currently causing an epidemic catastrophe over
the world. Every country affected by the pandemic has seen deaths and economic downturns. Since 2020,
Thailand has been experiencing an outbreak. Some COVID-19 patients have ground-glass opacity in their
lungs, according to literatures, which can be seen on chest X-rays by a radiologist. From a report, there are
now 179 community hospitals in Thailand without radiologists or radiological technologists, indicating a
radiology shortage. The objective of this project was to create a predictive model utilizing the convolutional
neural network (CNN) approach of deep learning. The developed models will be used to investigate the
possibility of detecting people who have early signs of COVID-19 infection. AlexNet, Xception, VGGNet16,
VGGNet19, MobileNetV2, and NasNetMobile were chosen as the six CNN architectures. The study discovered
that MobileNetV2 was the best-performing algorithm with Accuracy, Precision, Recall, Specificity, and F1-
Score values of 0.98, 0.97, 0.99, 0.97, and 0.98, respectively. Finally, the MobileNetV2 model was selected to
design and implement a mobile application for predicting and evaluating COVID-19 infection from chest X-
ray images.
Keywords: COVID-19, Chest X-rays, Convolutional Neural Network, Predictive model, Mobile application
Introduction
The present COVID-19 epidemic has triggered a worldwide crisis, with many deaths resulting from the outbreak. According to research, when the virus enters the body it binds to angiotensin-converting enzyme 2 (ACE2), allowing it to penetrate cells and release cytokines, resulting in an inflammatory response. Fluid enters the interstitial space, producing the ground-glass opacities that can be seen on an X-ray of the lungs, and can lead to pneumonia, the leading cause of death. According to Singweratham's [1] research, more than 179 hospitals in Thailand lack radiologists. In this study, deep neural networks called convolutional neural network (CNN) architectures will be used to
create the predictive model for predicting the initial infection results. The final model will then be implemented
into a mobile application.
Methodology
In this study, chest X-ray data from the research of Chowdhury et al. (2020) and Rahman et al. (2021) were applied to develop a predictive model for the initial infection result of COVID-19. The processes of this research are described in Figure 1.
Figure 1: Research framework
First, the labeled chest X-ray images consisted of 3,616 and 10,192 images of COVID-19 patients and normal people, respectively. The quality of those X-ray images was optimized by converting them into 4-dimensional arrays in RGB color format, to meet the input requirements of each type of model, using the NumPy and OpenCV-Python libraries.
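A minimal sketch of this preprocessing step is shown below; the 224x224 target size, the [0, 1] scaling, and the file name are illustrative assumptions rather than values taken from the paper.

```python
import cv2
import numpy as np

def preprocess_xray(path, size=(224, 224)):
    image = cv2.imread(path)                        # BGR image from disk
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # models expect RGB
    image = cv2.resize(image, size)
    image = image.astype(np.float32) / 255.0        # scale pixels to [0, 1]
    return np.expand_dims(image, axis=0)            # add batch axis -> 4-D array

batch = preprocess_xray("covid_xray_001.png")       # hypothetical file name
print(batch.shape)                                  # (1, 224, 224, 3)
```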
Next, 1,500 images were randomly selected from each class of chest X-ray images. To develop the predictive models, the data from the last step were randomly divided into two groups, 80% for the training set and 20% for the testing set. The training dataset was used to develop predictive models using six different types of CNN, namely AlexNet, Xception, VGGNet16, VGGNet19, MobileNetV2 and NASNetMobile.
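The split described above might look like the following sketch, assuming the images have already been loaded into arrays; the random seed and the use of scikit-learn's train_test_split are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the 1,500 preprocessed images per class
X = np.zeros((3000, 224, 224, 3), dtype=np.float32)
y = np.array([0] * 1500 + [1] * 1500)   # 0 = normal, 1 = COVID-19

# Stratified 80/20 split; the random seed is assumed for reproducibility
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
print(len(X_train), len(X_test))         # 2400 600
```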
The obtained models were assessed for their predictive performance by calculating the error value with the binary cross-entropy method. A confusion matrix was used to analyze the Accuracy, Sensitivity, Specificity, F1 score, and Receiver Operating Characteristic (ROC) curve. Finally, the Grad-CAM algorithm was applied to visualize the lung abnormalities discovered through the model analysis.
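The reported metrics can all be derived from the confusion matrix counts. A minimal sketch, using placeholder labels rather than the project's actual predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy labels standing in for the real test-set predictions
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)      # also called sensitivity
specificity = tn / (tn + fp)
f1          = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, specificity, f1)
```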
The best model obtained from the previous step was fine-tuned by varying the number of epochs to gain the highest accuracy. Lastly, the best model was deployed in a mobile application for screening COVID-19 from chest X-ray images. The Flutter platform and Heroku cloud computing were utilized to construct the mobile application.
Results, Discussion and Conclusion
At first, the training and testing sets were split in a way that left a large difference between the numbers of normal (10,192) and COVID-19 infected (3,616) images, known as an imbalanced dataset. The imbalanced dataset was divided into training and testing datasets with a ratio of 75 to 25. The training dataset contained 2,712 and 7,644 X-ray images of COVID-19 infected and normal people, respectively; the testing dataset contained 904 and 2,548 X-ray images, respectively. After each model was developed using the training dataset, its performance was measured, with the results shown in Table 1.
Architecture    Accuracy  Precision  Recall  Specificity  F1-Score  Time per Epoch
AlexNet         0.54      0.42       0.27    0.74         0.33      175
Xception        0.62      0.27       0.27    0.74         0.27      3,418
VGGNet16        0.52      0.46       0.26    0.74         0.34      264
VGGNet19        0.63      0.28       0.27    0.75         0.28      182
MobileNetV2     0.61      0.29       0.27    0.74         0.28      173
NASNetMobile    0.58      0.33       0.26    0.74         0.29      763
Table 1: Results of the models trained on the imbalanced data
Table 1 shows the benchmark values collected on the imbalanced data, indicating that VGGNet19 reached the maximum accuracy of 0.63, which is not yet acceptable. As a result, the data division was adjusted to a balanced dataset, i.e., 1,500 chest X-ray images each of COVID-19 patients and normal people. At this stage, the data were split into training and testing datasets with a ratio of 80 to 20, respectively. The models were then further tuned by adding four layers, one AveragePooling2D layer, two Dense layers, and one Dropout layer, and by using OpenCV-Python. Table 2 shows the benchmark results for each model on the testing dataset.
Architecture    Accuracy  Precision  Recall  Specificity  F1-Score  Time per Epoch
AlexNet         0.80      0.91       0.75    0.89         0.83      29
Xception        0.82      0.94       0.76    0.92         0.84      30
VGGNet16        0.85      0.85       0.94    0.93         0.86      27
VGGNet19        0.81      0.90       0.76    0.88         0.82      30
MobileNetV2     0.98      0.97       0.99    0.97         0.98      25
NASNetMobile    0.79      0.95       0.72    0.93         0.82      34
Table 2: Results of the models trained on the balanced data
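The four added layers described above might be stacked on the MobileNetV2 base as in the following Keras sketch; the pooling size, Dense unit count, dropout rate, and the extra Flatten step are assumptions, since the paper does not give them.

```python
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import MobileNetV2

base = MobileNetV2(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
x = layers.AveragePooling2D(pool_size=(4, 4))(base.output)  # added layer 1
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu")(x)                  # added layer 2
x = layers.Dropout(0.5)(x)                                  # added layer 3
out = layers.Dense(1, activation="sigmoid")(x)              # added layer 4
model = Model(inputs=base.input, outputs=out)

# Binary cross-entropy matches the error measure named in the methodology
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```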
Table 2 shows that all models trained on the balanced data achieved higher Accuracy, Precision, Recall, Specificity, and F1-Score than the models trained on the imbalanced data. MobileNetV2 showed the best predictive performance. The confusion matrix, Receiver Operating Characteristic (ROC) curve, and Grad-CAM output of MobileNetV2 are illustrated in Figure 2 (a)-(c), respectively.
Figure 2: (a) Confusion matrix, (b) ROC curve, and (c) Grad-CAM visualization of MobileNetV2
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS). The
funding of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation, which is
highly appreciated. This extended abstract is not for citation.
References
1. Singweratham N, Decha N, Waichompu N, Somnak S, Tamepattanapongsa A, Thongrod S, et al.
Shortage and Demand of Radiologic Technologists for Health Care Settings under the Jurisdiction of the
Office of the Permanent Secretary, Ministry of Public Health. Songkhla: Southern College of Nursing and
Public Health Network 2021 Jan 1; 8(1):115-126.
2. Suwatanapongched T, Nitiwarangkul C, Sukkasem W, Phongkitkarun S. Rama Co-RADS: categorical
assessment scheme of chest radiographic findings for diagnosing pneumonia in patients with confirmed
COVID-19. Ramathibodi Medical Journal. 2021 Jun 25;44(2):50-62.
3. Koysupsin S. Characteristics of lung radiographs of COVID-19 patients in Patong Hospital. Journal of
Health Science-Journal of Public Health. 2021 Jun 30;30(Supplement 1):S25-32.
4. Jain R, Gupta M, Taneja S, Hemanth DJ. Deep learning based detection and analysis of COVID-19 on
chest X-ray images. Applied Intelligence. 2021 Mar;51(3):1690-700.
5. Rabbah J, Ridouani M, Hassouni L. A new classification model based on StackNet and deep learning for fast detection of COVID-19 through X-ray images. In: 2020 Fourth International Conference on Intelligent Computing in Data Sciences (ICDS); 2020 Oct 21. p. 1-8. IEEE.
6. Siddiqui MA, Ali MA, Deriche M. On the early detection of COVID-19 using advanced machine learning techniques: a review. In: 2021 18th International Multi-Conference on Systems, Signals & Devices (SSD); 2021 Mar 22. p. 1-7. IEEE.
Title : Testing the efficiency of CiRA CORE in separating mango varieties using photographs OT1_18_03
Field : Technology and Computer
Author : Mr. Thitanan Incham, Mrs. Krittakarn Yuangnoon, Mrs. Pakkaporn Atvichai
School : Surawiwat School, Suranaree University of Technology
Advisor : Asst. Dr. Rattanaphorn Hanta, Suranaree University of Technology
Abstract :
The aim of this study was to test the efficacy of CiRA CORE in separating mango varieties using photographs. As perennials are so common in many places and so diverse in morphology, it is difficult to identify species visually, and classifying them in detail can be time consuming and prone to errors, especially when the classifier lacks skill and experience. This study therefore aims to develop an application that identifies tree species from photographs of leaves, since leaf photographs are an easy alternative for initial inspection. While testing the trained AI, the team encountered an issue where the AI detected something other than the intended subject in the photos used for performance testing, which reduced its accuracy. As a result, more time was needed to train the AI, and the variables that could confuse it had to be reduced to obtain more reliable results.
Keywords : mango, varieties, CiRA CORE
Introduction
Nowadays, technology plays an ever greater role in our society. We have innovated and developed technology to better suit our lifestyles, especially for looking up information by processing photos, for example with CamFind, Veracity and Google Lens. However, these applications are still limited by the lack of information needed by users. Therefore, we ran a performance test on CiRA CORE with a view to improving it and developing it into an application.
We used leaves of trees common in our daily life, which are hard to identify with the human eye. Differentiating them in detail can take a long time, and a leaf can even be mistaken for another type with a similar structure. Therefore, we developed a program using CiRA CORE's algorithm to classify those pictures, with the aim of applying it in an application.
Methodology
1. Dataset collection
We began by collecting 500 photos of Samruedu and Baotai mango leaves for AI training and 50 photos for testing. We then selected the images that could be used.
(Sample images: Samruedu mango leaves and Baotai mango leaves)
2. AI training
The selected images were used to train the AI for prototype image recognition using the DeepTrain module.
3. Efficiency testing
Afterward, we imported the prepared test images into the program and allowed DeepClassify to detect and determine which mango variety each image resembles. The output block then shows which variety is most similar, together with the confidence percentage.
4. Repeating experiments
Owing to an error in processing, the program detected the upper-left area of the images rather than the leaf. This affected the photographic identification, resulting in low accuracy. The experiments were therefore repeated to achieve greater accuracy.
Results
Figure 1: Efficiency testing results, showing the numbers of correct results, incorrect results, and other errors (scale 0-100)
Figure 2: An example of a detection error, where the program detected the paper in the background instead of the mango leaf
According to Figures 1 and 2, the program detected the paper in the background instead of the mango leaf. This caused many processing errors, so the accuracy of the resulting photographic identification was low.
Conclusion
After testing the program 100 times, it was found that the program still made leaf-detection errors caused by the same background problem, leading to many processing errors. Our team will therefore further develop the program to achieve greater accuracy.
Acknowledgements
This project was supported by Science Classrooms in University Affiliated Schools (SCiUS). The funding of SCiUS is provided by the Ministry of Higher Education, Science, Research and Innovation. This extended abstract is not for citation. With the help of our consultants Asst. Dr. Rattanaphorn Hanta, Dr. Monrawat Rauytanapanit, Mr. Sutichote Tawweekasikarm, and Mr. Anupon Suwannatrai, we were able to complete this project with ease. We are truly grateful for the assistance given by our advisors.
References
[1] กลอยใจ คงเจี้ยง, กิรนันท์ เหมาะประมาณ, กรกช นาคคนอง, อัจจิมา จิรกวิน, สุคนธ์ วงศ์ชนะ. (2019). Survey, collection, and selection of Bao mango lines in the lower southern region of Thailand. Trang: Trang Agricultural Research and Development Center. (in Thai)
[2] Saritnum, O.; Phakham, W. Morphological characterization of mango leaf in 20 cultivars. Journal of Agricultural Production 2019, 3, 2651-2475.
[3] Chousangsuntorn, C.; Tongloy, T.; Chuwongin, S.; Boonsang, S. A Deep Learning System for Recognizing and Recovering Contaminated Slider Serial Numbers in Hard Disk Manufacturing Processes. Sensors 2021, 21(18), 6261.
Title : Mobile Application for Estimating the Age of Bloodstains OT1_15_03
Field : Technology
Author : Ms. Salil Saengamporn, Ms. Nutnicha Jirarattanasopa
School : PSU Wittayanusorn School, Prince of Songkhla University
Advisor : Asst. Prof. Dr. Pattara Aiyarak, Mr. Weerawat Wongmek, Pol.Lt.Col. Jarunya Samsuwan
Abstract
Bloodstains are easy to find as evidence at a crime scene. They are caused by drops of blood landing on a surface and drying completely. Over time, the color of a bloodstain changes: from approximately 1 hour to 48 hours after the blood has dropped, the red value (from RGB) of the bloodstain varies directly with the age of the bloodstain, so it can be used to predict the bloodstain's age.
The researchers had the idea of creating a mobile application for estimating the age of bloodstains that can detect the bloodstained area in images from the crime scene, predict the age of the bloodstains to establish the time of the incident, and help confirm witness testimony, using the red color of bloodstains at different periods of time reported in previous research. The application detects the bloodstained area by drawing a red frame around it and shows the age of the bloodstain using linear regression, which can predict the incident time as stated in the objective with an accuracy of 79.166%.
Keywords : bloodstains, blood detection, blood age, blood age mobile application
Introduction
Whenever a crime happens, the most important and most noticeable evidence is often the bloodstains, which can lead us to the time of the incident through blood analysis. Since this method takes a lot of time in the laboratory, we looked for a faster way to analyze the results.
One study, Estimation of the Age of Bloodstain on Cloth by RGB Color Value Analysis from Silpakorn University, reports a direct variation between the red color value of bloodstains and the time since the blood dropped; this relation can be plotted as a graph to find a formula that determines the blood dropping time from the red color value of the bloodstain.
From this knowledge, the researchers wanted to create a mobile application that can predict the time of the event within a short period, from 0 to 48 hours, using Python programming together with MIT App Inventor.
Methodology
The experiments were divided into 2 parts as follows,
Part 1: Data Preparation
1.1 Paper research
We went through the blood age estimation project from Silpakorn University and analyzed the graph between the red value (y) and blood age (x). That project used comparison tables of the red color value of the bloodstain during 0-3 hr and during 4-48 hr. (Since the ratios in the two periods are different, the researchers divided the data into two tables.)
1.2 We used Google Colaboratory to calculate two linear regressions, one from each of the tables.
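A minimal sketch of fitting and inverting these two regressions with NumPy follows; the data points are placeholders, not the values from the cited study.

```python
import numpy as np

# (blood age in hours, mean red value) pairs for each period -- hypothetical
early = np.array([[0, 220], [1, 205], [2, 196], [3, 190]], dtype=float)
late = np.array([[4, 188], [12, 170], [24, 150], [48, 120]], dtype=float)

# Fit red value as a linear function of age for each period
slope_e, intercept_e = np.polyfit(early[:, 0], early[:, 1], 1)
slope_l, intercept_l = np.polyfit(late[:, 0], late[:, 1], 1)

def estimate_age(red_value):
    """Invert the fitted lines to estimate blood age from a red value."""
    age = (red_value - intercept_e) / slope_e
    if age > 3:  # outside the 0-3 hr range, use the 4-48 hr fit instead
        age = (red_value - intercept_l) / slope_l
    return age

print(round(estimate_age(180.0), 1), "hours")
```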
Part 2: Program Development
2.1 Red color value to age
We created code to detect the bloodstain and contour the area using cv2.contourArea; the program creates a red frame mask around the bloodstain. The two previous equations were then used to find the blood age.
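A minimal sketch of this detection step, assuming an HSV threshold for red (the threshold values, file names, and minimum contour area are illustrative):

```python
import cv2
import numpy as np

image = cv2.imread("bloodstain.jpg")                 # hypothetical input image
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([0, 120, 40]), np.array([10, 255, 200]))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:                     # ignore small specks
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)  # red frame
        mean_red = cv2.mean(image, mask=mask)[2]     # average red channel (BGR index 2)
        # mean_red would then be passed to the fitted equations above

cv2.imwrite("bloodstain_detected.jpg", image)
```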
2.2 Mobile application
The web application was developed with the Flask framework. For the camera, the researchers used Socket.IO to access the client's camera, coding in Sublime Text. ngrok was used to expose the local server, so that users can reach our localhost from their own devices. The link to this website was then placed in MIT App Inventor to design the UI and create the mobile application.
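A minimal sketch of the Flask and Socket.IO setup described above, using the flask_socketio package; the event name and payload are hypothetical stand-ins.

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("camera_frame")
def handle_frame(jpeg_bytes):
    # The frame would be decoded and passed to the bloodstain detector here;
    # this sketch returns a fixed placeholder age instead.
    emit("blood_age", {"hours": 12.5})

if __name__ == "__main__":
    socketio.run(app)  # exposed to phones via an ngrok tunnel in the project
```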
Results
Results from step 2.1 are displayed in a pop-up window of the red-color-value-to-age program. The red areas were contoured with red frames, and the average red value within each area was calculated. The program displayed blood age estimates with an accuracy of 79.166%.
Figures 1 and 2: Comparison of the program's estimation with the data set
Turning to step 2.2, the Flask web app, exposed to the internet through ngrok, is able to open clients' cameras using Socket.IO. The website can also detect the red areas and display the blood age as required.
MIT App Inventor was used to develop the user interface. The end result is an application window that can be accessed from both Android and iOS platforms.
Figures 3 and 4: Starter web app interface accessed via ngrok
Figures 5 and 6: The application's window on Android
Figures 7 and 8: The application's window on iOS
Conclusion
The blood age detection mobile application is able to estimate the incident time by detecting the RGB color value (red component) and using the linear equations derived from the data set of the earlier study, Estimation of the Age of Bloodstain on Cloth by RGB Color Value Analysis. Overall, the application is able to predict the age with an accuracy of 79.166%.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS)
under Prince of Songkhla University and PSU Wittayanusorn School. The funding of SCiUS is provided by
Ministry of Higher Education, Science, Research and Innovation, which is highly appreciated. This extended
abstract is not for citation.
References
1. Pattarathip Laohabut, Sirirat Chusakulkriang and Supachai Supalaknaee. Estimation of the Age of
Bloodstain on Cloth by the Method of Image Analysis. Forensic thesis of Silpakorn University; 2016.
2. Attila Szabo. Kinetics of hemoglobin and transition state theory. Proc. Natl. Acad. Sci. USA; 1978.
3. Nattawadee & Nattapong. Watermelon sweetness measurement. Science and technology journal Naresuan
University; 2020.
4. Tatsanee Krongthong. Python for learning. IPST journal; 2016.
5. Sutama Chokpampoon, Teanjai Noypa and Kanokthip Kotsumrhan. Estimation of the substance Gac lycopene
by RGB color image analysis combined with neural network method. Science and technology journal; 2018.
6. Suda Tienmontri. Completed Python guidebook for learning. Nonthaburi. IDC; 2020.
Title : Web application for English pronunciation training by using automatic speech recognition technology OT1_17_01
Field : Technology and Computer
Author : Mr. Kongpop Boonma, Mr. Poomtai Suwannao
School : PSU.Wittayanusorn Surat Thani School, Prince of Songkla University, Surat Thani Campus
Advisor : Assoc. Prof. Dr. Jirapond Muangprathub (Prince of Songkla University, Surat Thani Campus), Dr. Tiraporn Jaroensak (Prince of Songkla University, Surat Thani Campus), Miss Jutharat Thong-iad (PSU.Wittayanusorn Surat Thani School)
Abstract
Automatic speech recognition (ASR) is favorably chosen as a learning technology used for English
pronunciation practice. This project aims to build an algorithm with ASR for a web application to practice
English pronunciation, as self-learning material. To create the web application, Flask is used for creating the
architecture and features of the web application. The deep learning algorithm automatically maps audio and
text files, and then transfers into the speech recognition models, as the application programming interface used
in the web application. The ASR used in the web application consists of two datasets of audio files for English
pronunciation practice. One dataset is English Native Accent ASR, which is the audio library with the standard
American English native accent or IPA pronunciation. The other is Thai-English Accent ASR, which is the
audio library produced by Thai people with different knowledge and experiences in English speaking
communication. The practice of English pronunciation particularly focuses on eleven problematic consonant
sounds of Thai EFL students, according to the previous studies of English pronunciation in Thai contexts.
These eleven consonant sounds are divided into five lessons: 1) /ð/-/θ/-/tθ/, 2) /ʒ/-/ʃ/, 3) /dʒ/-/tʃ/, 4) /z/-/s/ and
5) /b/-/p/. If the users receive a score higher than 90%, between 61-90%, between 31-60%, or lower than 30%, they will be classified as EFL learners with an American English accent, a native-like accent, an intelligible Thai-English accent, or a localized Thai-English accent, respectively.
Keywords : Automatic speech recognition, Learning application, English pronunciation, Connectionist
temporal classification, Deep learning
Introduction
Today, English is taught as a foreign language in Thailand, but Thai EFL students still have a very low level of English proficiency (EF English Proficiency Index, 2021). This project aims to design an application for Thai EFL students to practice their English pronunciation by building an algorithm with Automatic Speech Recognition (ASR) for a web application, as a self-learning material. Automatic Speech Recognition (ASR) is defined as "a cutting-edge technology that allows a computer or even a hand-held PDA to identify words that are read aloud or spoken into any sound-recording device. The ultimate purpose of ASR technology is to allow 100% accuracy with all words that are intelligibly spoken by any person regardless of vocabulary size, background noise, or speaker variables." (CSLU, 2002). This project focuses on writing an algorithm for web-based learning as an online learning platform to improve English pronunciation, particularly for Thai EFL students.
Methodology
The methodology of this study was divided into four parts, namely data collection, system design,
system development and web application.
1.) Data collection
We studied the sounds that are problematic in English pronunciation for Thai people and arrived at 11 sounds, namely /ð/, /θ/, /tθ/, /ʒ/, /ʃ/, /dʒ/, /tʃ/, /z/, /s/, /b/ and /p/. English vocabulary containing these sounds was recorded by high-school students, undergraduates, and teaching professionals in English and non-English subjects. The recordings form a Thai-English pronunciation database used to evaluate the user's English pronunciation competence, alongside American English pronunciation.
In this project, an end-to-end system is an algorithm that maps a sequence of input acoustic features directly into a sequence of graphemes or words. We trained the end-to-end algorithm to optimize criteria related to the final evaluation metric we are interested in, the word error rate. End-to-end algorithms are favorably used in place of traditional ASR systems, which involve acoustic, pronunciation and language model components trained separately, and which require expert knowledge to curate a pronunciation lexicon and define phoneme sets for the particular language. Figure 1 shows the structure of the end-to-end algorithm. It can be seen that end-to-end speech recognition greatly simplifies traditional speech recognition: there is no need to label information manually, because the system can automatically learn language and pronunciation information in the neural network.
Figure 1: End-to-end ASR pipeline
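A minimal sketch of a CTC training step in PyTorch follows; the tiny stand-in model, feature dimension, and vocabulary size are illustrative assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

vocab_size = 29            # e.g., 26 letters + space + apostrophe + blank
model = nn.Sequential(     # stand-in for a real acoustic model
    nn.Linear(80, 128), nn.ReLU(), nn.Linear(128, vocab_size))
ctc_loss = nn.CTCLoss(blank=0)

features = torch.randn(200, 4, 80)           # (time, batch, mel features)
log_probs = model(features).log_softmax(2)   # (time, batch, vocab)
targets = torch.randint(1, vocab_size, (4, 30), dtype=torch.long)
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)

# CTC aligns the continuous input frames with the discrete target symbols
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()                              # gradients for the optimizer step
print(loss.item())
```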
2.) System design
Users and admin staff log in to the system. First, users, i.e., English pronunciation learners, are required to take a pre-test, in which ASR measures the learner's English pronunciation. The application then shows the user's pre-test results and recommends lessons for further practice, and the user's score data are stored in the score database. When the user finishes the recommended lessons, they take a post-test so that their pre-test and post-test scores can be compared to evaluate their English pronunciation performance. The learning path is shown as a use case diagram (Figure 2).
Figure 2: Use case diagram
Using the ASR starts with asking the user to speak words. The system then records the sound and sends the audio data to the server for processing with the ASR model, where the model predicts the accuracy by giving a probability value. Finally, the resulting values are processed into scores and inserted into the MySQL database, as shown in the flowchart in Figure 4. The web application uses three database tables: one holds the users' profiles, and the other two hold the vocabulary and the users' scores, separated by phoneme, which are used for analyzing users' speaking-skill scores, as shown in Figure 3.
Figure 3: Database design
Figure 4: Flowchart of ASR
3.) System development
To implement the system, the web application is developed using Flask as a server-side web framework, and MySQL is used as the database. PyTorch is used as the Python library for machine learning development. To prepare the data for training, each audio file's name and its verbal transcript are required. Once the model has been trained, we use it to score the accuracy of the sound pronounced by the user and take that value as the scoring criterion.
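The scoring call described above might be exposed as a small Flask endpoint, as sketched below; the route, field names, and the predict_word_probability stub are hypothetical stand-ins for the trained model.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_word_probability(audio_file, word):
    # Stub standing in for the trained PyTorch ASR model; it would decode
    # the audio and return the probability that `word` was pronounced.
    return 0.87

@app.route("/score", methods=["POST"])
def score():
    audio = request.files["audio"]        # recorded pronunciation clip
    target_word = request.form["word"]    # word the user was asked to say
    probability = predict_word_probability(audio, target_word)
    return jsonify({"word": target_word, "score": round(probability * 100, 1)})

if __name__ == "__main__":
    app.run()
```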
4.) Web application
Users initially take the pre-test. In the English pronunciation lessons, ASR is applied as a measurement of pronunciation accuracy. The assessment criterion is the recorded pronunciations of Thai people with different educational backgrounds and experiences in using English, after the recordings were imported and used to train the model.
Figure 5: System overview
In the score analysis section, the system displays the score results and also shows the lessons in which the user should practice more, according to the pre-test scores. This web application for English pronunciation practice provides five lessons based on five groups of problematic sounds: 1) /ð/ - /θ/ - /tθ/, 2) /ʒ/ - /ʃ/, 3) /dʒ/ - /tʃ/, 4) /z/ - /s/ and 5) /b/ - /p/. The user practices English pronunciation at the word level only. After finishing the recommended lessons, the user takes the post-test, and the system then shows the post-test score.
Results
In our Thai-English pronunciation corpus (Figure 6), 940 audio files covering 59 vocabulary items were collected from 16 Thai people with different backgrounds in English. However, this corpus is considered low-resource compared with general ASR corpora, for which at least 200 hours of audio are usually required. From trials of the web application with different ASR algorithms, we found that the end-to-end algorithm is the most appropriate technique for our project, because it directly maps a sequence of input acoustic features into a sequence of graphemes or words. This algorithm also uses Connectionist Temporal Classification (CTC) to align the input and output sequences when the input is continuous and the output is discrete, which suits a low-resource dataset. The trained ASR model analyzes the user's pronunciation according to the accuracy of the phonemes the user has pronounced, in order to give the pre-test score and recommend lessons for the user (Figure 7).
Figure 6: Corpora used in ASR training
Figure 7: Pre-test results and lesson recommendations based on the ASR analysis
Conclusion
This paper proposes a way to improve English pronunciation for Thais, based on common Thai mistakes in English pronunciation. In particular, the method applies multiple end-to-end ASR models as a measure of fluency in English pronunciation, which requires a sufficiently large speech corpus. Because theoretical speech assessment results are lacking, the ASR model is not yet fully dependable, but its measurements trend toward acceptable. A tool that accelerates speech corpus collection and assessment for the specific pronunciation mistakes of Thai speakers therefore plays a significant role in this work.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS). The
funding of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation. This
extended abstract is not for citation.
References
Wang S, Li G. Overview of end-to-end speech recognition. In: Journal of Physics: Conference Series. 2019 Apr 1 (Vol. 1187, No. 5, p. 052068). IOP Publishing.
Nguyen M. Building an End-to-End Speech Recognition Model in PyTorch [Internet]. AssemblyAI Blog. 2020 Dec 1 [cited 2022 May].
Isarankura S. Using the audio-articulation method to improve EFL learners' pronunciation of the English /v/ sound. Thammasat Review. 2015;18(2):116-37.
List of Science Projects, 12th SCiUS Forum
Oral presentation
Technology and Computer Group 2
Saturday, August 27, 2022

1. OT2_15_10: Digital Cart. Authors: Miss Natnicha Junden, Miss Punyisa Maliwan. School: PSU.Wittayanusorn School.
2. OT2_11_01: Diagnosis of leukemia type myeloma using a machine learning model. Authors: Miss Supitcha Taweetun, Miss Nasira Channaronk, Mr. Pongporn Indraphandhu. School: Suankularbwittayalai Rangsit School.
3. OT2_06_01: Development of real-time air quality monitoring by a low-cost PM sensor. Authors: Mr. Pongwarat Aekrathok, Mr. Tangman Sattayapanudech, Mr. Thanakrit Laksanalekha. School: Rajsima Witthayalai School.
4. OT2_14_01: A Smart Cap: obstacle detection for the visually impaired using ultrasonic sensors. Authors: Mr. Phongsit Thongthiang, Miss Chananchida Poltam. School: Paphayomphittayakom School.
5. OT2_15_01: Salem the Third: rogue-lite RPG game development using the Godot engine. Authors: Mr. Veerapaj Rajsakij, Mr. Puwis Napibul. School: PSU.Wittayanusorn School.
6. OT2_18_02: Developing a robotic arm for chemical container handling in the laboratory using artificial intelligence to recognize objects. Authors: Mr. Suphavich Kittichaisarot, Mr. Piyamin Sripho, Mr. Apirak Santiweerawong. School: Surawiwat School, Suranaree University of Technology.
7. OT2_09_04: The study and comparison of controllers for self-driving cars. Authors: Mr. Sorravit Leeprasertsuk, Mr. Pachara Wongthanakarn, Mr. Phidipok Makcharoenchai. School: Engineering Science Classrooms (Darunsikkhalai School).
8. OT2_05_01: Artificial intelligence classification of diabetic retinopathy and macular degeneration. Author: Miss Wadsana Muenthaisong. School: Demonstration School of Khon Kaen University.
9. OT2_15_02: Office Syndrome monitoring & detection system. Authors: Miss Pinjutha Srisawat, Miss Enika Thaikul. School: PSU.Wittayanusorn School.
10. OT2_03_03: Automatic swab machine for testing COVID-19. Authors: Mr. Kittisak Rotong, Mr. Phunuwat Boonkeard. School: Naresuan University Secondary Demonstration School.
11. OT2_19_01: Thermal insulation board from plastic waste. Authors: Miss Pimmada Useng, Miss Nurfatihah Arwae. School: Islamic Science Demonstration School.
12. OT2_09_05: Development of electrical energy measurement based on NodeMCU ESP8266. Authors: Miss Suchaya Panturasri, Miss Varinthorn Chatburapachai, Mr. Vessapoo Sawangwong. School: Engineering Science Classrooms (Darunsikkhalai School).
13. OT2_03_01: Development of a PV monitoring system by using IoT. Authors: Miss Kamolchat Jadyangtone, Miss Nutchaya Kaewdaeng. School: Naresuan University Secondary Demonstration School.
14. OT2_14_02: The development of a rice disease classifier from rice leaf images. Authors: Miss Chawisa Nuilek, Miss Phitchayapha Rungrueang, Mr. Kasidate Jaroensiripun. School: Paphayomphittayakom School.
15. OT2_17_02: Weeding detection and control system with LoRaWAN technology. Author: Mr. Weerachai Sornprom. School: PSU Wittayanusorn Surat Thani School.