12th SCiUS Forum
Title : Digital Cart OT2_15_10
Field : Computer and Technology
Author : Miss Punyisa Maliwan
         Miss Natnicha Junden
School : PSU.Wittayanusorn School, Prince of Songkla University
Advisor : Ms. Chouvanee Srivisarn (Prince of Songkla University)
          Mr. Weerawat Wongmek (PSU.Wittayanusorn School)
Abstract :
Nowadays, people live increasingly fast-paced everyday lives, and even going out shopping can sometimes be a problem. During purchasing, consumers cannot know the total value of the products they have added to their carts; they know only the price of each individual product. The total of the chosen products may therefore turn out to be greater than the amount they have prepared, which can interfere with the payment process and waste time.
From the problems mentioned above, the organizers created a project that calculates the product prices in a shopping cart (Digital Cart), making shopping more convenient. As the first step, the organizers collected product prices and barcodes to create the database. When the user starts a transaction, the system retrieves the data saved in the model and shows it to the user on the display screen. In the last step, the authors created a 3D box to house the model and tested the model as thoroughly as possible.
The experimental results show that the digital cart can calculate product prices accurately and efficiently by scanning barcodes, although occasional errors occur in the barcode scanner module.
Keywords : Barcode, Arduino IDE, Barcode scanner
Introduction
Nowadays, technology is used to invent or design various products that help consumers with problems they encounter in their lives. The main problem is often time, which is very important and is an issue we face daily. For example, when we have limited or insufficient time to shop, it can be difficult to know how much we have spent so far, because we know only the price of each product. When we do not know the total price, it may turn out to be more than the amount we have prepared, and not having enough money can delay payment and sometimes annoy and disturb other customers.
Therefore, we have invented a shopping cart that calculates product prices to solve these problems for users. The project organizers designed a price-calculating cart that displays real-time product prices to make shopping easier. It relies on the UART principle: when the barcode of a product is shown to the camera, the system will clearly display the total price of all products to
help reduce the problem of not knowing the price of the items in the cart and may help reduce the problem of
insufficient funds to shop.
Methodology
The experiments were divided into 2 parts as follows:
Part 1: Database preparation
1.1 Collect the barcodes of the desired products; currently, a total of 10 barcodes have been collected.
1.2 Set the price of each item.
1.3 Record the barcode numbers and prices in the code, which serves as the database for comparison.
Figure 1 Example of price code
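The barcode-to-price database described in steps 1.1-1.3 can be sketched as a simple lookup table. This is a Python illustration of the logic only, not the project's actual Arduino code, and the barcode values and prices below are hypothetical placeholders.

```python
# Hypothetical barcode -> price (THB) table, standing in for the
# hard-coded database described in step 1.3.
PRICE_DB = {
    "8850999320007": 15.0,   # placeholder product 1
    "8851959132012": 20.0,   # placeholder product 2
    "8850329112233": 42.5,   # placeholder product 3
}

def lookup_price(barcode: str) -> float:
    """Return the stored price for a scanned barcode, or raise if unknown."""
    if barcode not in PRICE_DB:
        raise KeyError(f"barcode {barcode} not in database")
    return PRICE_DB[barcode]

print(lookup_price("8850999320007"))  # -> 15.0
```

In the real device the same comparison happens in the Arduino sketch, where each scanned barcode string is matched against the recorded entries.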
Part 2: System development
2.1 Write code in the Arduino IDE for the home-screen display, specifying the display screen that will be used when the program starts.
2.2 Write code in the Arduino IDE for the on-screen display, specifying the number of digits and the total price to be shown on screen, to make it clearer and easier to understand.
Figure 2,3 Code for finding the number of digits of the sum to display through the LCD screen and Code for displaying results through
the LCD screen
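The digit-counting step in Figures 2-3 (used to position the sum on the fixed-width LCD) can be sketched as follows; this is a Python illustration of the logic, not the project's actual Arduino code.

```python
def count_digits(total: int) -> int:
    """Count the decimal digits of a non-negative total, as used to
    decide how many character cells the sum occupies on the LCD."""
    if total == 0:
        return 1
    digits = 0
    while total > 0:
        digits += 1
        total //= 10
    return digits

# e.g. a running total of 1250 needs 4 character cells on the display
print(count_digits(1250))  # -> 4
```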
2.3 Design and build the enclosure for the model. Using a design from the Tinkercad program, the box was drawn 16.5 cm long, 8.5 cm wide, and 5 cm high, and printed with a 3D printer (Flashforge).
2.4 Create a restart button for use when the user wants to start a new transaction.
Figure 4,5 Equipment for inserting a model (3D box).
Results
The functionality of the shopping cart was divided into 2 parts as follows:
Part 1: User section (Customer)
This part starts when the user uses the shopping cart to calculate product prices. The user brings the desired product to be scanned by the barcode scanner module; the system looks up the preset price and shows it on the display screen. When the user wants to start a new transaction, they can press the restart button to reset the system.
1.1 Screen during transaction
Figure 6 Screen during transaction
Step 1: Scan the first product using the Barcode scanner module. Then the screen will display the
barcode and the price of the 1st product.
Step 2: To add more products, scan them as in Step 1. The screen will then display the barcode of the 2nd product and the sum of the 1st and 2nd product prices.
Step 3: For further products, repeat Step 1; the system will display the total price on the display screen along with the barcode of the last product scanned.
Step 4: To make a new transaction, press the restart button.
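The four steps above amount to a running-total loop with a reset. A minimal Python sketch of that logic (the barcode strings and prices here are hypothetical, and the real device implements this in the Arduino sketch):

```python
PRICE_DB = {"111": 10.0, "222": 25.5}  # hypothetical barcode -> price table

class Cart:
    """Accumulates scanned prices; reset() mimics the restart button."""
    def __init__(self):
        self.total = 0.0
        self.last_barcode = None

    def scan(self, barcode: str) -> float:
        self.total += PRICE_DB[barcode]
        self.last_barcode = barcode
        return self.total

    def reset(self) -> None:
        self.total = 0.0
        self.last_barcode = None

cart = Cart()
cart.scan("111")         # Step 1: first product, total 10.0
print(cart.scan("222"))  # Step 2: total is now 35.5
cart.reset()             # Step 4: start a new transaction
print(cart.total)        # -> 0.0
```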
1.2 System architecture diagram
Figure 5: Overview of the model
Part 2: System maintainer section (back end)
This part starts when new product price information is added or existing information needs to be edited, by recording the barcode and price in the code (which serves as the database).
Conclusion
The digital cart created by the organizers can read barcodes and accurately calculate the prices of the products in a transaction, using the barcode-scanning principle of the barcode scanner module. The model's performance was evaluated over three transaction trials, and it performed well with good efficiency and few errors of its own. Most of the errors encountered were caused by the barcode scanner module.
Acknowledgments
This project was supported by Science Classroom in University Affiliated School (SCiUS). The
funding of SCiUS is provided by the Ministry of Higher Education, Science, Research, and Innovation. This
extended abstract is not for citation.
Title : Diagnosis of leukemia type Myeloma using a Machine Learning Model OT2_11_01
Field : Technology and Computer
Author : Ms. Nasira Channaronk
         Mr. Pongporn Indraphandhu
         Ms. Supitcha Taweetun
School : Suankularbwittayalai Rangsit School and Thammasat University
Advisor : Phuphiphat Jaikaew, M.Sc., Faculty of Science and Technology, Thammasat University
Abstract
Multiple myeloma (MM) is a cancer of the blood and bones. Worldwide, MM occurs in about four people per 100,000 per year, and MM patients are found at a rate of 0.5-1% of the country's population. Because MM resembles other types of blood cancers and bone diseases, diagnosis is difficult, and most MM diagnoses are made at an advanced stage. Survival is therefore difficult for patients, and there is currently no confirmed cure for MM. This research aims to create a model for staging the disease so that patients can receive treatment quickly. The algorithm was tested on data collected from 995 patients in the database of the MMRF-CoMMpass project; the prepared data contain two types: clinical data and SNP data. We built four models: Decision Tree, K-Nearest Neighbors, Support Vector Machine, and Naive Bayes, and measured three aspects of performance for each: accuracy, AUC value, and ROC curve. The results showed that the best model was the Decision Tree built in the RapidMiner application for Binary 1, with three crucial features: albumin, creatinine, and an SNP altered on the IGHV2-70 gene of chromosome 14 at position 106770621. Interestingly, if the serum albumin level increases, the chance of survival decreases, while an elevated creatinine level can indicate cancer, and the SNP altered on the IGHV2-70 gene mentioned above is related to immunoglobulin and can separate stage I from the other stages effectively. Across all experiments, this Binary 1 model had an AUC as high as 0.814, whereas Binary 2 reached only 0.73 because the data were underfitting, and Binary 3 reached 0.923, an inflated AUC caused by overfitting. The researchers will develop this work to be more efficient so that it can be used in conjunction with a doctor's diagnosis to increase patients' chances of survival.
Keywords : Multiple Myeloma, Machine Learning, Classification, Prognosis, Biomarker
Introduction
Previously reported statistics of Multiple Myeloma (MM) in Thailand show an average of 300-400 new cases per year, with patients found at a rate of 0.5%-1% of the entire population and an average age at diagnosis of 59 years. The global incidence of MM is approximately 4 cases per 100,000 inhabitants per year, comprising 1% of all cancers and nearly 15% of all hematologic malignancies. The disease is twice as common in blacks as in whites, slightly more common in men than in women, and more common in older adults aged 60-70 years. This cancer is similar to other blood cancers and bone diseases, which makes it difficult to diagnose. In addition, MM is often asymptomatic in its early stages, making it difficult to
detect. The research team aims to develop machine learning that detects the stage of Multiple Myeloma patients, as a helpful guideline for further diagnosis and treatment.
Methodology
1. Data collection
Collect clinical data and SNPs from 995 patients and arrange them in a table so that the data are organized and easy to use.
2. Data preparation
Transform the raw data into an appropriate form for importing into the database or for further analysis.
2.1. Data cleaning - exclude irrelevant data: replace missing values with "-" and keep features whose missing values do not exceed 20% of the total data. Records in which the stage of multiple myeloma is not identified are assigned to the test set.
2.2. Data transformation - convert data to suit usage
2.3. Data Integration - combines data from multiple sources into a comparable dataset.
2.4. Data selection - feature selection and feature engineering. Clinical features were selected using Relief separately for each binary task: Binary 1 gains albumin and creatinine; Binary 2 gets race, albumin, and age; Binary 3 gets age, gender, and dexamethasone. Among the SNPs, a position was chosen if it appeared more than 2 times and its Relief value was within 3%; Binary 1 has 19 positions, Binary 2 has 17 positions, and Binary 3 has 2 positions.
3. Build model
3.1. Splitting up the data: divide the data into 3 parts (training set, test set, and validation set) using 10-fold cross-validation, which splits the data into 10 randomly assigned parts so that the information is distributed evenly; in each round, 9 parts are used to train the model and one part to test it, repeating until every part has been used for testing.
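The 10-fold cross-validation described in 3.1 can be sketched with scikit-learn (one of the two programs used in this work). The synthetic data below merely stands in for the clinical/SNP features; it is not the project's actual dataset.

```python
# Sketch of 10-fold cross-validation with a Decision Tree;
# make_classification generates placeholder data, NOT the patient data.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # 10 parts
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y,
                         cv=cv, scoring="roc_auc")  # train on 9, test on 1
print(len(scores), scores.mean())  # 10 per-fold AUCs and their mean
```

Each of the 10 folds serves once as the held-out part, matching the "repeat until the end" description above.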
3.2. Modeling: due to the large database size, Supervised learning was chosen. The selected
algorithms were K-Nearest Neighbors, Decision tree, Support Vector Machine, and Naive Bayes.
3.3. Train model: use the training set to train the algorithms with the standard parameters of each. Next, the most essential variables of each algorithm were analyzed, and finally the type of error made by each model was investigated.
4. Evaluate model
Measure and compare the performance of each model using values such as accuracy, ROC curve, AUC, and p-value. Decide which values have the most significant impact when measuring model performance, and also consider whether the model works well with all data groups.
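The AUC and ROC evaluation used above can be sketched as follows; the labels and model scores below are hypothetical examples, not results from this study.

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical true classes (1 = stage I) and model scores for 8 patients
y_true  = [1, 1, 0, 0, 1, 0, 1, 0]
y_score = [0.9, 0.8, 0.3, 0.4, 0.7, 0.2, 0.6, 0.75]

auc = roc_auc_score(y_true, y_score)          # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points of the ROC curve
print(round(auc, 3))  # -> 0.875
```

An AUC of 1.0 means the model ranks every stage-I patient above every non-stage-I patient; 0.5 is no better than chance, which is why the paper compares models by AUC.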
Results
Binary 1 performance table
Based on the comparative table of all Binary 1 model trials from both programs, the highest AUC was 0.814 ± 0.050, achieved by the Decision Tree model in RapidMiner using three data features: albumin, creatinine, and Chr14:106770621:T>C (IGHV2-70 gene).
Binary 2 performance table
In the comparison table of all Binary 2 models from both programs, the highest AUC was 0.73, achieved by the Scikit-learn Support Vector Machine model using 21 features: race, albumin, age, and SNPs such as Chr14:g.105863862:T>G (IGHJ5 gene) and Chr13:g.105490240:A>G (DAOA-AS1 gene).
Binary 3 performance table
Based on the comparison table of all Binary 3 model trials from both programs, the highest AUC was 0.923 ± 0.036, achieved by the Decision Tree model in RapidMiner using three features: age, gender, and dexamethasone.
Discussion
From the figure, the best model of all experiments was the Decision Tree model in Binary 1, using 3 features: albumin, creatinine, and the SNP altered on the IGHV2-70 gene of chromosome 14 at position 106770621.
This model has the highest AUC value and the largest area under the curve. In the first step, albumin must be greater than 37.395 g/L; in the second step, the tree separates on creatinine less than or equal to 100.388 µmol/L; in the third step, it separates on creatinine less than or equal to 71.302 µmol/L. Finally, it separates on the SNP altered on the IGHV2-70 gene of chromosome 14 at position 106770621, a transition from a T base to a C base, from which stage 1 can be concluded.
For those who want to develop this research further, we recommend improving the performance of Binary 2, because our separation of stage 2 from the other stages was not as good as it should be. In addition, if doctors want to use this research for diagnosis, it should be used together with other diagnostic methods.
Conclusion
The best model of all trials was the Binary 1 Decision Tree model in RapidMiner, which distinguished stage 1 of MM best among all trials, with an AUC as high as 0.814. Binary 2 had an AUC of only 0.73 because the data were underfitting, and Binary 3 had an inflated AUC of up to 0.923 due to overfitting. In application, doctors can use the model in conjunction with stage 1 diagnosis for better outcomes and to increase patients' chances of survival.
Acknowledgments
This project was supported by Science Classroom in University Affiliated School (SCiUS) under
Suankularbwittayalai Rangsit School and Thammasat University. The funding of SCiUS is provided by the
Ministry of Higher Education, Science, Research, and Innovation. This extended abstract is not for citation.
Title: Development of real-time air quality monitoring by low-cost PM sensor OT2_06_01
Field: Technology and Computer
Author: Tangman Sattayapanudech, Pongwarat Aekrathok and Thanakrit Laksanalekha
School: Ratchasima Witthayalai School, SCiUS-Suranaree University of Technology
Advisor: Assist. Prof. Dr. Kiattisak Batsungneon and Mr. Supachai Kaewpoung, Institute of Public Health, Suranaree University of Technology
Abstract:
At present, airborne particulate matter, especially fine particulate matter (PM2.5), has created serious problems in our daily lives, particularly health issues. Since commercial PM2.5 monitoring stations are costly, they have been installed only in particular areas and cannot be accessed publicly, so people do not receive a danger alert when the PM concentration exceeds the air quality standard. To help solve this problem, this project aims to develop an air quality monitoring system using a low-cost PM sensor capable of measuring PM concentration and displaying the results in real time on a web page and the Line application. The developed air-quality monitor consists of a PM sensor, a computer interface that links the measured data to a cloud server, and operating software that presents the real-time values online. The unit was then tested by measuring the particulates emitted from three different sources: cigarette smoke and the exhaust from gasoline and diesel vehicle combustion engines. The background was measured before the PM value in each emission environment. The results from 3 minutes of emission showed that the PM2.5 concentration from cigarette smoking was the highest (511.5 µg/m³), followed by the diesel-engine exhaust (96.7 µg/m³) and the gasoline-engine exhaust (66.5 µg/m³). Many more of these air quality monitors can be installed in hazardous areas, helping people avoid areas where exposure to high PM concentrations may cause health problems.
Keywords: Modbus RTU, Computer interface, Air quality monitoring, PM Sensor, Internet of Things
Introduction
Many health organizations categorize particulate matter (PM) by size because particles of different sizes have different health effects. For instance, PM10 particles (particles less than 10 microns in size) can irritate your nose and eyes. PM2.5 particles are 2.5 microns or smaller and are considered especially dangerous to human health because they bypass many of the body's defenses: nose hair, mucus, and other defenses catch larger particles before they travel deeper into our bodies, but PM2.5 particles can get into the lungs, reach the alveoli, and eventually enter the bloodstream. PM exposure can lead to many human health effects such as upper respiratory irritation, heart disease, and lung cancer. Therefore, there are
commercial air-quality monitoring stations installed in many areas to report the level of PM concentration and announce it publicly from time to time. However, these stations are expensive and thus cannot cover all hazardous areas where the PM concentration exceeds the air quality standard, so people do not receive a danger alert telling them to avoid those areas. To help solve this problem, this project aims to develop an air quality monitoring system using a low-cost PM sensor that measures PM concentration and displays the results in real time on a cloud server via a web application. This allows everyone to check the current PM concentration and receive danger alerts on a mobile phone at any convenient time. The experimental part of this project started by assembling a particle meter from the ET-Modbus RTU PM2.5 Box V2 particle sensor, which uses the Modbus RTU protocol, and a Raspberry Pi 4 board as the operating system; the system can measure both PM2.5 and PM10 fine dust particles with this sensor. The data from the Raspberry Pi 4 were then linked to a cloud server using Node-RED, stored in Firebase, and shown in real time over the Internet on mobile devices. Finally, the unit was tested by measuring the particulates emitted from three different sources: cigarette smoke and the exhaust from gasoline and diesel vehicle combustion engines, with the background measured before the PM2.5 value in each emission environment.
Methods and Experimental Details
The research was conducted in several stages, as follows:
A. Components Preparation
The real-time air monitoring system developed in this work consists of the following: a particle sensor (model ET-Modbus RTU PM2.5 Box V2); a Raspberry Pi 4 board used as the operating system, i.e., a small computer; Node-RED software, used to transmit data from the sensor to the cloud server; Sublime Text 3, used for writing the website (HTML, CSS, JavaScript, etc.); a Firebase real-time database account (sign-up/login), used to send the measurement results to the cloud server; and a data display via the web application.
B. Component fabrication
The components of the real-time air quality monitoring system developed in this work were fabricated and assembled as shown in Figure 1 and Figure 2.
Figure 1: System's components
Figure 2: Processing of Modbus RTU
C. Testing System and Sensor
Figure 3: Testing system
All the testing equipment and the sensor were arranged as shown in Figure 3. Our dust meter was connected to the Raspberry Pi, and the PM2.5 concentration was measured every 1 second. The data were then sent by Modbus RTU over RS-485 to the cloud server, displayed on the web page, and used for warnings on the Line application. First, we measured the background PM2.5 and checked whether it showed up on the web page we had created; a small PM2.5 value was found, as expected.
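The measure-every-second-and-warn loop described above can be sketched as follows. Reading the real sensor goes through the Modbus RTU link over RS-485, so a stub function stands in here; the 50 µg/m³ threshold is the standard level named later in this abstract, and the sample reading is a hypothetical value.

```python
PM25_LIMIT = 50.0  # µg/m³, the standard level used for warnings in this work

def read_pm25() -> float:
    """Stub for the sensor read; a real implementation would query the
    ET-Modbus RTU PM2.5 Box over RS-485 via the Modbus RTU protocol."""
    return 68.8  # hypothetical reading in µg/m³

def poll_once() -> tuple:
    """Read one PM2.5 value; the boolean flags whether to send a warning."""
    value = read_pm25()
    return value, value > PM25_LIMIT  # True -> push a Line warning

value, warn = poll_once()
print(value, warn)  # -> 68.8 True
# A real deployment would repeat this once per second in a loop
# and forward each reading through Node-RED to Firebase.
```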
The experiment therefore started by measuring the background PM2.5 for 3 minutes, then measuring PM2.5 from cigarette smoking for 3 minutes, saving the values, and spending another 5 minutes resetting the sensor; the same procedure was followed for the diesel-engine vehicle and the gasoline-engine vehicle, as shown in Table 1.

Table 1: Experimental steps and conditions

Condition      Background measured    Starting    Reset sensor
1. Smoking     3 min                  3 min       5 min
2. Diesel      3 min                  3 min       5 min
3. Gasoline    3 min                  3 min       5 min
Results
A website was created that shows the PM2.5 concentration on the web page and sends a warning on the Line application when the PM2.5 concentration exceeds the standard level (50 µg/m³). The measured PM2.5 concentrations are illustrated below.
Figure 4: Real-time PM2.5 on Web page and Warning PM2.5 value on Line Application
Figure 5: Measured PM2.5 (µg/m³) over time from the smoking, diesel-engine, and gasoline-engine sources
PM2.5 was measured for the background (the first 3 minutes) and during testing with cigarette smoke and the exhaust from gasoline and diesel vehicle combustion engines, as shown in Figure 5. The measured PM2.5 data from the different emission sources are shown in Table 2. It was found that the average background PM2.5 was 68.8 µg/m³, while cigarette smoke averaged 511.5 µg/m³, the diesel engine averaged 96.7 µg/m³, and the gasoline engine had
an average of 66.5 µg/m³. The PM2.5 from cigarette smoking was thus 5.3 times higher than that from the diesel engine and 7.7 times higher than that from the gasoline engine. After measuring the PM2.5 concentration during emission, it was found that some PM2.5 remained in the chamber. This residual PM2.5 can accumulate to higher concentrations and cause more health problems than expected, since the accumulated PM2.5 exceeds the safety standard limit for PM2.5 (50 µg/m³).
Table 2: Average measured values of PM2.5 (µg/m³) from different sources of emission

Condition      Background    During experiment    After experiment
1. Smoking     63.4          511.5                65.5
2. Diesel      76.6          96.7                 68.1
3. Gasoline    66.6          66.5                 58.8
Conclusion
The air quality monitoring system developed in this project is inexpensive, so it can be installed in many more areas. This will help people avoid hazardous areas where exposure to high PM concentrations may cause health problems. Apart from its low cost and its ability to measure PM2.5 concentration comparably to commercial monitors, the system developed in this work is more convenient because it can be viewed in real time over the Internet on a mobile phone, with a warning when the PM2.5 concentration exceeds the standard level of Thailand.
Acknowledgment
This project was supported by Science Classroom in University Affiliated School (SCiUS). The funding of
SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation. This extended abstract
is not for citation.
Title: A Smart Cap: Obstacle Detection for the Visually Impaired using Ultrasonic Sensors OT2_14_01
Field: Technology and Computer
Author: Mr. Pongsit Tongtiang and Ms. Chananchida Poltam
School: Paphayompittayakom School, Thaksin University
Advisor: Dr. Naphat Keawpibal, Thaksin University
Asst. Prof. Dr. Noppamas Pukkhem, Thaksin University
Abstract
The objective of this research is to develop a smart cap using multiple ultrasonic sensors that detect obstacles, give visually impaired people a sense of security, lessen the number of accidents, and allow them to live more easily in their daily lives. Wearing the smart cap helps them monitor the presence of objects around them and alerts them when objects at a high level are detected or when something in front of them is moving. In our proposed system, an ESP32 microcontroller controls the ultrasonic sensors and the buzzer module on the smart cap. Three ultrasonic sensors are mounted at different positions on the cap (left, front, and right) to detect obstacles and calculate the distance between the visually impaired person and the obstacle using the principle of wave reflection. Furthermore, when the person starts using the smart cap, it sends a notification to their assistant through the Line application, and it warns the wearer with sound from a passive buzzer module when an impediment is detected. The smart cap can detect obstacles within 1 meter with an alarm and continues to detect obstacles in real time. In our experiments, the distance-detection tolerance of the three sensors was measured at three ranges: 20 cm, 50 cm, and 100 cm. The results show that the average error of distance measurement is 0.83 cm, 0.67 cm, and 0.35 cm for the left, front, and right positions, respectively. The errors are therefore less than 1 cm for all sensors, which is a satisfactory result in detecting obstacles for our proposed smart cap.
Keywords: smart cap, obstacle detection, visually impaired person, ultrasonic sensor
Introduction
Currently, the number of visually impaired people is increasing in the global population over 18 years of age, as reported in the WHO Global Report. The number of blind people worldwide increased from 5.3 million in 1990 to 8 million in 2020 and tends to increase to 10.3 million by 2050. In Thailand particularly, there are over 200,000 visually impaired people, as reported by the Department of Empowerment of Persons with Disabilities in 2018. This disability may be caused either by congenital disability or by accidents. It affects
many activities in daily life, for example, reading, writing, and especially travelling, which can lead to unexpected accidents for a visually impaired person.
Recently, many state-of-the-art technologies have been developed to support the mobility of visually impaired people. These devices are mainly equipped with a microcontroller board, sensors, and a power supply, and are designed to be worn easily as part of the body. They operate much like a radar system, using ultrasonic waves to detect objects around the wearer; the distance of an object can be measured from the travel time of the wave in the air.
In this work, the smart cap was developed by connecting three HC-SR04 ultrasonic sensors to a cap in three different positions. The sensors send the reflected signals to the ESP32 microcontroller, which identifies the distance of any object present. If an object is detected within a specific range of a sensor, the ESP32 sends a signal to the buzzer to alert the wearer. Furthermore, when the smart cap is started, it sends a notification to the Line application of the wearer's assistant. The person can therefore just wear the cap without needing a cane; it is cheap and easy to use in a real-time environment.
Objective: To develop an obstacle-warning device for visually impaired people that helps them live more safely and comfortably. When enabled, a notification message is sent to the assistant. The use of three ultrasonic sensors, mounted on the cap at the left, front, and right, provides more precise obstacle detection, and the smart cap is affordable and cost-effective.
Methodology
Experimental design consists of 3 parts as follows:
Part 1: The circuit connection of the sensors on the smart cap is shown in Figure 1. The smart cap consists of six main devices: three ultrasonic sensors, one buzzer module, one microcontroller, and one power supply. The three ultrasonic sensors were positioned at an angle of 135 degrees apart, as shown in Figure 2, to prevent the high-frequency waves of one ultrasonic sensor from being reflected back to the others.
Figure 1: Circuit connection of the sensors inside the smart cap
Figure 2: The position angle of the ultrasonic sensors
Part 2: The ESP32 microcontroller serves as the smart cap's operation controller, programmed through the Arduino IDE. Figure 3 illustrates the proposed smart cap. Three HC-SR04 ultrasonic sensors are positioned on the cap at the left, front, and right, responsible for detecting objects within 1 meter. The passive buzzer module is a small speaker with a single tone. When an object is detected, the buzzer is
responsible for alerting the visually impaired person. The power bank supplies power to the smart cap. The alarm
pattern is different for each ultrasonic sensor, each of which detects obstacles within 1 meter: if the left sensor
detects an obstacle, the passive buzzer sounds “Beep”; if the middle sensor detects one, the buzzer sounds “Beep
Beep”; and if the right sensor detects one, the buzzer sounds “Beep Beep Beep”. These operations continuously
detect nearby objects in real time.
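The alert logic above maps each sensor position to a distinct beep count. A minimal sketch of that mapping in Python (the real cap implements this in Arduino code on the ESP32; the function and dictionary names here are illustrative):

```python
# Each sensor position gets a distinct beep count: left = 1, front = 2, right = 3.
BEEPS = {"left": 1, "front": 2, "right": 3}

def alert_pattern(distances_cm, threshold_cm=100):
    """Return the buzzer pattern for every sensor that sees an obstacle
    within threshold_cm (1 meter by default)."""
    return {
        pos: " ".join(["Beep"] * n)
        for pos, n in BEEPS.items()
        if distances_cm.get(pos, float("inf")) <= threshold_cm
    }
```

For example, with an obstacle 50 cm to the left and 90 cm to the right, `alert_pattern({"left": 50, "front": 150, "right": 90})` yields one “Beep” for the left sensor and three for the right.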
Figure 3: Smart cap
Figure 4: Line notification from the smart cap
Part 3: The overview of the smart cap's operation is shown in Figure 4. After the power bank is turned on, the
microcontroller starts working and attempts to connect to the Internet via a Wi-Fi hotspot. Once the ESP32 is
connected to the Internet, it sends an alert via Line Notify to the visually impaired person's assistant.
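The notification step in Part 3 can be sketched with Python's standard library. This assumes the Line Notify HTTP endpoint (a POST with a bearer token), so treat the URL and header as assumptions rather than the cap's exact firmware code, which runs on the ESP32:

```python
import urllib.parse
import urllib.request

# Assumed Line Notify endpoint; the firmware would POST the same fields over HTTPS.
LINE_NOTIFY_URL = "https://notify-api.line.me/api/notify"

def build_line_notify_request(token, message):
    """Build the HTTP POST that tells the assistant the smart cap has started."""
    data = urllib.parse.urlencode({"message": message}).encode()
    return urllib.request.Request(
        LINE_NOTIFY_URL,
        data=data,
        headers={"Authorization": "Bearer " + token},
    )
```

Sending is then one `urllib.request.urlopen(req)` call once a real access token is supplied.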
Result Discussion and Conclusion
The experiments measured the distance precision of the three ultrasonic sensors at different ranges: 20 cm,
50 cm, and 100 cm. Each sensor position on the smart cap was measured 10 times at every distance, read from the
serial monitor in the Arduino IDE, and the average was calculated. Table 1 shows the average distance measurements
and error percentages of all ultrasonic sensors on the smart cap. The error percentage of a distance measurement is
calculated with the following equation:

Error (%) = \frac{|d_{measured} - d_{actual}|}{d_{actual}} \times 100

The HC-SR04 ultrasonic sensor itself computes distance from the echo time as d = \frac{t \times v_{sound}}{2},
where t is the round-trip time of the ultrasonic pulse and v_{sound} \approx 343 m/s is the speed of sound in air.
Table 1: The results of the average distance measurements and their error percentages

Position of          Average measured distance (cm)   Error percentage (%)
ultrasonic sensor    at 20.00 / 50.00 / 100.00 cm     at 20.00 / 50.00 / 100.00 cm
Left                 20.30 / 48.80 / 99.00            1.5 / 2.4 / 1.0
Front                20.00 / 49.00 / 99.00            0.0 / 2.0 / 1.0
Right                20.50 / 49.50 / 99.40            2.5 / 1.0 / 0.6
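The error percentages in Table 1 follow directly from the error-percentage equation; a quick check in Python:

```python
def pct_error(measured_cm, actual_cm):
    """Error percentage = |measured - actual| / actual * 100."""
    return abs(measured_cm - actual_cm) / actual_cm * 100
```

For instance, the left sensor at the 20 cm range: `pct_error(20.30, 20.00)` gives 1.5, matching the table.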
From the results, the average errors of the distance measurements are 0.83 cm, 0.67 cm, and 0.35 cm for the left,
front, and right sensors, respectively. These errors can arise from the different sizes and heights of the
objects.
The detection performance of the ultrasonic sensors was tested together with the alarm of the passive buzzer
module. Each of the three sensors (left, front, and right) detected a wall obstacle within 1 meter 20 times; the
number of notifications from the passive buzzer module on each side was recorded, and the detection accuracy was
checked through the Serial Monitor. Figure 5 shows the notification message sent via Line Notify when the smart
cap is started.
Figure 5: Line Notify notification result on the visually impaired assistant's mobile phone
The results of the smart cap obstruction alert test showed that the smart cap could accurately detect
obstacles and report accurate distances, but it has limitations: obstacles can only be detected at head height,
and the cap lacks aesthetics and is not very convenient to use because its components were chosen for low cost.
It may therefore detect obstacles less effectively than other obstacle detection devices that have been available
for some time. Future development will extend the scope of obstacle detection and improve the notifications via
Line Notify; additional functions such as GPS may be added, making the device practical, attractive, and
convenient for visually impaired users.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS). The funding
of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation. This extended abstract
is not for citation.
References
[1] World Health Organization. World Report on Vision. (2019)
[2] Md. Wahidur Rahman, Saima Siddique Tashfia, Rahabul Islam, et al. The architectural design of smart blind
assistant using IoT with deep learning paradigm. Journal Pre-proof. (2020)
Title : Salem the Third : Rogue-lite RPG game development using Godot engine OT2_15_01
Field : Technology
Author : Mr. Puwis Na Pibul
Mr. Veerapaj Rajsakij
School : PSU. Wittayanusorn school, Prince of Songkla University
Advisor : Asst. Prof. Dr. Chinnapong Angsuchotmetee and Mrs. Wareerat Pumitummarat
Abstract
The main objective of this project is to develop a Rogue-lite RPG game designed for teens aged 12 or
above. The game that we developed in this project is named "Salem the Third". The main concept of this game
is inspired by "Salem Witch Trials". This game combines the concept of Christianity and Science Fiction to make
the main storyline exciting for players. The game is developed using Godot Engine, compiled, and deployed in
HTML5 game mode such that players can play this game online through their web browser without having to
install the game. Assets of this game are made in a 2D pixel art style, all hand-drawn by ourselves using
Aseprite. The main prototype of this game is playable online. Our prototype, though already playable, still
requires thorough game testing before it can be published commercially; that will be our future work on this
project.
Keywords : game, godot engine, rogue-lite
Introduction
Nowadays, the global game industry has grown exponentially as people have become stressed due to
the COVID-19 pandemic, and they use games to entertain themselves and fulfill their needs, which can reduce their
stress in this situation.
In addition, the Thai game industry is also growing significantly and is likely to grow more. This will
make investors interested in the game industry, and there will be higher competition in the market, so we think
that having game development skills might be helpful in the future. However, because of resource and time
limitations, we decided to focus on the replayability of the game with low development resources, to make the
game as effective as possible within the limited development time. The game in this project is a rogue-lite RPG
game based on the concepts of witch hunts and time traveling, called Salem the Third.
The main objective of this project is to learn about the overall game development process and to make a
game that can entertain players and reduce their stress during the COVID-19 pandemic.
Methodology
In this project, we use the Godot engine to develop the game and Aseprite to make assets. There are 3 parts
of development, as follows:
Part 1: Game designing
1.1 Researching for game theories
1.2 Learning about the ability of Godot engine
1.3 Writing a game design document
Figure 1 : QR code to access the game design document
Part 2: Finding and making assets
2.1 Drawing assets on Aseprite
2.2 Finding soundtracks on dig.ccmixter.org
Part 3: Programming, testing, and publishing
3.1 Developing game systems on Godot engine
3.2 Testing the results and fixing the problems
3.3 Publishing the complete version of the game on itch.io
Results
Figure 2 shows the overall game system in this project. When
the player opens the game for the first time, they will see the game
menu interface that has options ‘Play’ to start the game and ‘Quit’ to
close the game. If players press ‘Play’, the prologue cutscene will start.
After that, the player will start the game at the ‘Town’ scene. The main
objective of this game is to find and defeat the final boss, called ‘Dyson
Sphere’. If the player completes the main objective, the ending scene will
show up; but if the player is killed by any enemy, the death scene
will show and the player has to choose whether to play the game
again or quit the game.
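The flow described above amounts to a small scene state machine. A sketch in Python (scene and event names are illustrative; the actual game wires these transitions between Godot scenes):

```python
# Transitions of the overall game flow: menu -> prologue -> town -> ending/death.
TRANSITIONS = {
    ("menu", "play"): "prologue",
    ("menu", "quit"): "closed",
    ("prologue", "finished"): "town",
    ("town", "boss_defeated"): "ending",
    ("town", "player_died"): "death",
    ("death", "play_again"): "town",
    ("death", "quit"): "closed",
}

def next_scene(scene, event):
    """Return the next scene; stay in the current scene on an unknown event."""
    return TRANSITIONS.get((scene, event), scene)
```

Keeping the flow in one table like this makes it easy to check that every death leads back to ‘Town’ or out of the game.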
Players can access the game from the QR code in Figure 3 or
using the itch.io link below the QR code. The game is only
playable on the PC platform. Figure 2 : Overall game system flowchart
Figure 3 : QR code to play the game. Figure 4 : Gameplay footage
(https://salemthethird.itch.io/salemthethird-beta)
Conclusion
Salem the Third is published and playable on itch.io. The game also accomplished the two main
objectives: to learn the overall game development process and to make a game that players will have fun playing.
However, there may be more details and game systems to develop, which is our future plan for this
project.
Acknowledgments
This project was supported by Science Classroom in University Affiliated School (SCiUS) under
Prince of Songkla University and PSU. Wittayanusorn School. The funding of SCiUS is provided by the Ministry of
Higher Education, Science, Research and Innovation. This extended abstract is not for citation.
References
Brooke J. The Black Death and its Aftermath | Origins [Internet]. Origins. 2020 [cited 2022 March 29]. Available
from: https://origins.osu.edu/connecting-history/covid-black-death-plaguelessons?language_content_entity=en
Çakır A. Block Universe Theory: Flow of the Time - Predict - Medium [Internet]. Medium. Predict; 2020 [cited
2022 March 31]. Available from: https://medium.com/predict/block-universe-theory-flow-of-the-time-
6f13eafcbd1c
Falk D. A Debate Over the Physics of Time | Quanta Magazine [Internet]. Quanta Magazine. 2016 [cited 2022
March 31]. Available from: https://www.quantamagazine.org/a-debate-over-the-physics-of-time-20160719/
Huya-Kouadio F. Godot Docs – 3.4 branch [Internet]. Godot Engine documentation. 2022 [cited 2021 December
7]. Available from: https://docs.godotengine.org/en/stable/
Ippoodom, T. [When those in power join the “witch hunt” at Salem: a lesson in a society's mutual paranoia; in Thai]. [Internet]. The MATTER. 2020 [cited
2022 March 29]. Available from: https://thematter.co/social/salem-witch-3/107658
Ovartsatit, P. [Growth factors of the game industry at the national and global levels; in Thai]. [Internet]. Depa.or.th. 2020 [cited 2022 March 29].
Available from: https://www.depa.or.th/en/article-view/growth-factor-gaming-industry
The Digital Economy Partnership Agreement. [Digital content industry, 2020; in Thai]. [Internet]. Depa.or.th. 2020 [cited 2022
March 28]. Available from: https://manage.depa.or.th/storage/app/media/file/publication-Digital-content-63.pdf
Tyler D. How to Use Game Theory in Game Development [Internet]. Video Game Design and Development. 2017
[cited 2022 March 30]. Available from: https://www.gamedesigning.org/learn/game-theory/
Wallenfeldt J. Salem witch trials | History, Summary, Location, Causes, Victims, & Facts | Britannica. In:
Encyclopædia Britannica [Internet]. 2022 [cited 2022 March 29]. Available from:
https://www.britannica.com/event/Salem-witch-trials
Zavada J. The Nativity of Jesus: An 800-year-old Christmas Tradition [Internet]. Learn Religions. 2020 [cited
2022 March 31]. Available from: https://www.learnreligions.com/what-is-the-nativity-of-jesus-700743
Title : Developing a robotic arm for chemical container handling in laboratory using artificial intelligence OT2_18_02
Field : Technology and Computer
Author : Mr. Piyamin Sripho
Mr. Apirak Santiweerawong
Mr. Suphavich Kittichaisarot
School : Surawiwat school, Suranaree University of Technology
Advisor : Assoc. Prof. Dr. Jiraphon Srisertpol, Suranaree University of Technology
Abstract
This paper presents the early stages of work towards a complete system to assist users
in a chemistry laboratory. With this aim, the system has to be capable of determining reagent bottles'
positions, identifying their types, gripping a specific one and, once it is found, delivering it to the user.
To achieve this objective, the system integrates the Google Cloud Vision API to analyze images captured by a
webcam mounted on top of a mechanical arm. Further, this project includes software-based databases used for
correlating user requirements.
Finally, after successfully developing the preliminary robot algorithm, the results are
demonstrated in three ways. Object and text detection through the Google Cloud Vision API reaches 92.14%
accuracy. Object and text detection under a moving camera (moved by hauling instead of the robotic arm) reaches
94.44% accuracy; both experiments used the same 6 different chemicals: Copper (II) sulfate, Hydrochloric acid
12 M, Sodium Hydroxide Pellets, Sodium Hydroxide 50%, Hexane, and Ethanol/Ethyl Alcohol 95%. The accuracy over
10 different chemicals (adding N-Pentanol, Cyclohexane, Propan-2-ol and Butan-1-ol), captured by a
higher-resolution camera (iPad Pro 2021) compared with the former camera, is 96% and 89.4%, respectively.
Keywords : Robotic arm, Reagent bottle, Google Cloud Vision API
Introduction
Nowadays, robots have advanced massively to work on different tasks, helping humans manufacture
in certain industries. In the laboratory we face issues such as picking the wrong chemical and a lack of usage
records; these can cause chemical operations to fail and make those chemicals challenging to track.
Our objective is to create an algorithm for a robot arm to assist users of chemical laboratories in identifying
and picking up chemical bottles, making unfamiliar users more comfortable and less prone to errors when
using the laboratory.
We chose to test our algorithm's efficiency in a simulated laboratory environment. The first and second
experiments were tested with 6 chemical bottle samples, namely Copper (II) sulfate, Hydrochloric acid 12 M,
Sodium Hydroxide Pellets, Sodium Hydroxide 50%, Hexane, and Ethanol/Ethyl Alcohol 95%. The third experiment
was tested with 10 chemical bottle samples: the same 6 bottles from the previous experiments plus bottles of
N-Pentanol, Cyclohexane, Propan-2-ol and Butan-1-ol.
Methodology and Experimental details
We started by researching the code required for the algorithm. Next, we coded the algorithm and tested
it with test samples in three different experiments. Then we debugged the algorithm and ran more tests, and
lastly we analyzed and concluded the results.
Our algorithm can be split into 4 main parts: Request Chemical, Scan Chemical, Found Chemical, and Grab
the Chemical to the User.
Fig 1. Diagram showing the overall system
1. “Request Chemical” - The user requests a chemical from the user interface in Fig 2.
2. “Scan Chemical” - The algorithm will identify the objects and texts in picture by;
2.1 The webcam captures an image and sends it to the algorithm
2.2 The Google Cloud Vision API analyzes the image for objects and texts, using autocorrect to fix
small mistakes
2.3 Filter out all the unnecessary text
3. “Found Chemical” - The algorithm compares the detected chemicals with the requested one. If there is no
match, the algorithm moves the robot arm to the next position and goes back to “Scan Chemical”.
4. “Grab the Chemical to User” - If there is a match in “Found Chemical”, the robot arm grabs the
chemical and brings it to the user.
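Steps 2 to 4 above form a scan-and-match loop. A simplified sketch in Python, where `scan` is a hypothetical helper standing in for the capture + Cloud Vision + filtering pipeline, not the project's actual code:

```python
def find_chemical(requested, positions, scan):
    """Move through arm positions until the requested chemical's label is seen.

    scan(position) -> list of chemical labels detected at that position
    (hypothetical stand-in for webcam capture + Google Cloud Vision + filtering).
    Returns the position holding the chemical, or None if it is never found.
    """
    for position in positions:
        if requested in scan(position):
            return position  # "Found Chemical": hand off to the grab step
    return None
```

For example, with a shelf `{0: ["Hexane"], 1: ["Sodium Hydroxide 50%", "Ethanol"]}`, `find_chemical("Ethanol", [0, 1], lambda p: shelf[p])` returns position 1.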
Fig 2. User Interface
Experiment 1: Testing the accuracy of identifying text and chemical bottles. For this experiment, the Google
Vision API part of the algorithm was tested with 53 sample images containing random numbers of
chemical bottles, ranging from 1 to 6 bottles (Fig. 3). The result is the accuracy in detecting the correct
number of bottles and in correctly identifying the chemical bottles.
Experiment 2: Test the algorithm with a simulated user case. For this experiment, the algorithm was tested
with 36 test cases. Each test case has 6 chemical bottles randomly arranged where the camera can only see 3
bottles at a time.
Experiment 3: Comparing the accuracy of identifying text and chemical bottles with different cameras. For
this experiment, we used the same methods as Experiment 1, where we only test the Google Vision API part
of the algorithm. We test both of the cameras with 50 test samples that have the same setups but taken with
different cameras.
Results, Discussion and Conclusion
Result of Experiment 1: The algorithm detected the quantity of chemical bottles in the test cases perfectly
(100%), while the average identification accuracy of the algorithm is 92.14%.
Result of Experiment 2: This experiment gives the same accuracy as Experiment 1 in the quantity test
and a higher label detection accuracy of 94.44%.
Result of Experiment 3: Comparing the webcam's performance with the camera of the iPad Pro 2021,
the latter achieves better accuracy at 96%, whereas only 89.4% is obtained with the webcam.
From the initial experiments' results on the label detection tests, using still images and a moving camera,
it can be concluded that our algorithm performs efficiently and accurately. Regarding the effect of the
webcam's resolution on accuracy, the results show that accuracy depends directly on resolution. In the
meantime, we are also developing a user interface for chemical requests. Finally, as a long-term objective, we
plan to connect the robotic arm with this algorithm and test the complete system in the laboratory.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS).
The funding of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation.
This extended abstract is not for citation. Furthermore, we appreciate our advisors, Assoc. Prof. Dr. Jiraphon
Srisertpol, Mr. Siripong Pawako and Mr. Anupon Suwannatrai, for their guidance, valuable advice and
support for our work.
References
1. Hosseini H, Xiao B, Poovendran R, Google’s Cloud Vision API is Not Robust To Noise. IEEE
International Conference on Machine Learning and Applications; 2017.
2. Ramos-Garijo R, Prats M, Sanz J, Del Pobil P, An autonomous assistant robot for book manipulation in a
library. IEEE International Conference on Systems, Man and Cybernetics; 2003.
Title : The study and comparison of controllers
for self-driving cars
OT2_09_04
Field : Technology and Computer
Authors : Mr. Phidipok Makcharoenchai
Mr. Pachara Wongthanakarn
Mr. Sorravit Leeprasertsuk
School : Darunsikkhalai School (SCiUS project), KMUTT
Advisor : Assoc. Prof. Dr. Benjamas Panomruttanarug, Department of Control System and Instrumentation
Engineering, KMUTT
Mr. Jiravatt Rewrujirek, Office of Engineering Science Classroom, KMUTT
Mrs. Chanakan Grosseau, Office of Engineering Science Classroom, KMUTT
Abstract
The popularity of self-driving cars has been increasing at the present time. However, the technology
is still not fully trusted because of errors that might occur. This project was created to study and
compare 3 types of controllers: PID (Proportional Integral Derivative), Stanley, and MPC (Model
Predictive Control). The study used Python programs of each controller's mathematical model to
simulate each controller driving a self-driving car along a lane line. The controllers were simulated
at velocity parameters of 1.5, 2.5 and 3.5 wheelbase lengths per second. The total times used and the
MSEs (Mean Square Error), calculated from heading errors and cross-track errors, were then compared
to find the most efficient controller. The results showed that MPC has the least MSE at all speeds,
as MPC predicts the path with the least cost function as opposed to the state-by-state calculation used
by PID and Stanley. However, MPC also has the slowest computational speed of all controllers at more
than 0.2 seconds per step. PID and Stanley have similar MSE values, with PID having slightly lower
MSEs. Both controllers have a computational speed of fewer than 0.001 seconds per step. Despite
the similar performance, the constant values in PID are sensitive to every change in state and need to be
constantly tuned to follow the path correctly. In conclusion, the most efficient controllers are Stanley
and MPC, which outperform PID. MPC is ideal for vehicles with high computational power that
require high precision, while Stanley is more suitable for vehicles with simple processing units that
don't require much precision.
Keywords : controller, self-driving cars
Introduction
The popularity of self-driving cars has been increasing, with global sales reaching
31 million units in 2019 and predicted to reach 54 million units by 2024. However, there is still
skepticism towards self-driving cars with regard to errors. When an error occurs, the system will
disengage and the driver is expected to take over and drive manually, leaving the driver
vulnerable during the takeover. Thus, further improvements are needed to lower the possible errors.
This project aims to study and compare the efficiency of 3 types of controllers commonly used
in vehicle industries; PID (Proportional Integral Derivative), Stanley, and MPC (Model Predictive
Control). The study used Python programs of each controller's mathematical model to simulate
each controller driving a self-driving car along a lane line. The efficiency was measured by comparing
each controller’s MSE (Mean square error) of cross-track errors (displacement between the lane and
position of the car) and heading errors (the offset angle from the line parallel to the road). Additionally,
the computational speed and controller tuning difficulty were also considered.
Methodology
The project was divided into 2 parts as follows:
Part 1: Programming of controller models
1.1 The program of the physics model was scripted according to the following mathematical model:

\dot{x}_t^g = v_t \cos(\psi_t^g + \beta_t)
\dot{y}_t^g = v_t \sin(\psi_t^g + \beta_t)
\dot{\psi}_t^g = \frac{v_t \cos\beta_t \tan\delta_t}{l_f + l_r}
\dot{v}_t = a_t
\beta_t = \tan^{-1}\!\left(\frac{l_r}{l_f + l_r}\tan\delta_t\right)
whereas \dot{x}_t^g is the rate of change of the car's position in the x plane at time t, \dot{y}_t^g is the
rate of change of the car's position in the y plane at time t, \dot{\psi}_t^g is the rate of change of the
car's trajectory angle at time t, \dot{v}_t is the rate of change of the car's velocity at time t, a_t is the
acceleration at time t, \delta_t is the steering angle at time t, l_f is the displacement between the car's
position and the front axle, l_r is the displacement between the car's position and the rear axle, and \beta_t
is the angle between the trajectory of the velocity and the trajectory of the car at time t. The model tracks
the position, trajectory, and velocity of the car along the 2D plane of the ground, using the car's center of
gravity as the position.
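A discrete (Euler) step of this kinematic bicycle model can be sketched in Python. Parameter names mirror the symbols above, and the defaults dt = 0.1 s and l_f = l_r = 0.5 match the simulation parameters used in this project; the function itself is an illustrative sketch, not the project's exact script:

```python
import math

def bicycle_step(x, y, psi, v, a, delta, lf=0.5, lr=0.5, dt=0.1):
    """One Euler step of the kinematic bicycle model tracked at the car's
    center of gravity; delta is the steering angle, a the acceleration."""
    beta = math.atan(lr / (lf + lr) * math.tan(delta))  # slip angle beta_t
    x += v * math.cos(psi + beta) * dt
    y += v * math.sin(psi + beta) * dt
    psi += v * math.cos(beta) * math.tan(delta) / (lf + lr) * dt
    v += a * dt
    return x, y, psi, v
```

Driving straight (delta = 0) at v = 1 simply advances x by v * dt per step, which is a quick sanity check on the model.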
1.2 The program of the PID controller was scripted according to the following mathematical model:

\delta_t = K_p\,cte_t + K_i \int_0^t cte_t\,dt + K_d \frac{d}{dt}cte_t
OT2_09_04/2
121
12th SCiUS Forum
whereas \delta_t is the steering angle at the time of calculation, K_p is the proportional gain value, K_i is
the integral gain value, K_d is the derivative gain value, and cte_t is the cross-track error at the time of
calculation. The PID controller calculates the optimal steering angle by weighting the 3 gain values multiplied
by the cross-track error at the state of calculation.
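A direct Python transcription of this PID steering law, with the integral accumulated and the derivative taken as a finite difference over the simulation interval dt (the gain values passed in are placeholders, not the project's tuned constants):

```python
class PIDSteering:
    """delta_t = Kp*cte + Ki * integral(cte dt) + Kd * d(cte)/dt."""

    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_cte = None  # no derivative term on the first step

    def steer(self, cte):
        self.integral += cte * self.dt
        derivative = 0.0 if self.prev_cte is None else (cte - self.prev_cte) / self.dt
        self.prev_cte = cte
        return self.kp * cte + self.ki * self.integral + self.kd * derivative
```

The controller is stateful (integral and previous error), which is why its gains are sensitive to changes in state, as noted in the abstract.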
1.3 The program of the Stanley controller was scripted according to the following mathematical model:

\delta_t = e\psi_t + \tan^{-1}\!\left(\frac{K \cdot cte_t}{K_s + K_d \cdot v_t}\right)
whereas \delta_t is the steering angle at the time of calculation, K is the cross-track error gain value, K_s is
a special gain value that prevents the divisor from reaching 0, K_d is the speed gain value, e\psi_t is the
heading error at the time of calculation, and cte_t is the cross-track error at the time of calculation. The
Stanley controller uses 2 terms to correct the path: the first term positions the car parallel to the
lane, while the second term steers the car back to the lane.
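Unlike PID, the Stanley law is stateless, so it reduces to a single expression in Python (the default gains are placeholders, not the tuned values):

```python
import math

def stanley_steer(heading_error, cte, v, k=1.0, ks=1e-3, kd=0.0):
    """delta = heading_error + atan(k*cte / (ks + kd*v)); ks keeps the
    divisor away from zero at standstill."""
    return heading_error + math.atan(k * cte / (ks + kd * v))
```

With zero cross-track error the output is just the heading error (the car steers parallel to the lane), matching the two-term description above.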
1.4 The program of the MPC controller was scripted using the following mathematical model:

J = \sum_{t=1}^{N}\left[w_{cte}(cte_t)^2 + w_{e\psi}(e\psi_t)^2\right], \qquad \delta \in [-\delta_{max}, \delta_{max}]
whereas J is the cost function, w_{cte} and w_{e\psi} are the weight values for the cross-track error and
heading error respectively, N is the number of steps MPC is instructed to calculate, and \delta is the steering
angle. The objective of MPC is to minimize the cost function of the car N steps forward at every interval of dt,
within the physics model and steering angle constraints.
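MPC's core step, evaluating the cost J over an N-step prediction for each candidate steering angle and keeping the cheapest, can be sketched as a brute-force search. Here `predict` is a hypothetical helper standing in for rolling the physics model forward; real MPC solvers use a numerical optimizer rather than candidate enumeration, so this is only an illustration of the cost structure (the default weights 1 and 3 match the MSE weights used in the simulation):

```python
def mpc_steer(predict, candidates, w_cte=1.0, w_epsi=3.0):
    """Pick the steering angle among `candidates` (all assumed to lie within
    [-delta_max, delta_max]) minimizing J over the predicted horizon.

    predict(delta) -> (list of cte_t, list of epsi_t) for t = 1..N
    (hypothetical stand-in for simulating the physics model N steps forward).
    """
    def cost(delta):
        ctes, epsis = predict(delta)
        return sum(w_cte * c * c + w_epsi * e * e for c, e in zip(ctes, epsis))
    return min(candidates, key=cost)
```

Because every candidate requires an N-step simulation, the per-step cost of MPC is far higher than PID or Stanley, consistent with the >0.2 s per step reported later.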
Part 2: Simulation of controller models
2.1 The parameters for the simulation were defined as follows: the initial position was defined
as x = 20, y = 25, ψ was defined as 0, the lane line was defined with the parabola function
y = 0.06x^2 + x − 1
the weight values of MSE were defined as 1 and 3 for cte and eψ respectively, lf and lr were defined as
0.5, the constraints for δ were limited between −π/3 to π/3 radians, dt value for PID and Stanley were
defined as 0.1 second, and v is defined as 1.5, 2.5, and 3.5 wheel base range per second respectively.
2.2 Each controller was simulated at the 3 velocities for 75 steps and tuned until the optimal path was
obtained for each velocity. The controllers were tuned using the following methods: PID was tuned using the
twiddle algorithm and manual tuning; Stanley was tuned using a phase portrait for one continuous gain value and
final manual tuning for the different velocities; and MPC was tuned by adjusting the N and dt values.
2.3 The MSE values from each simulation were calculated and weighted according to the weight values.
The resulting MSE values and the computing times used were then graphed for comparison. The difficulty of
tuning was also included in the consideration.
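The twiddle algorithm used to tune PID is a simple coordinate descent over the gain vector. A sketch, assuming `objective(gains)` runs a simulation and returns the error to minimize (the step and tolerance values are illustrative):

```python
def twiddle(objective, gains, step=1.0, tol=1e-3):
    """Coordinate-descent gain tuning (twiddle): grow a gain's probe step by
    1.1 on improvement, otherwise probe the opposite direction, else shrink
    the step by 0.9; stop when all probe steps are tiny."""
    deltas = [step] * len(gains)
    best = objective(gains)
    while sum(deltas) > tol:
        for i in range(len(gains)):
            gains[i] += deltas[i]
            err = objective(gains)
            if err < best:
                best, deltas[i] = err, deltas[i] * 1.1
            else:
                gains[i] -= 2 * deltas[i]      # probe the other direction
                err = objective(gains)
                if err < best:
                    best, deltas[i] = err, deltas[i] * 1.1
                else:
                    gains[i] += deltas[i]      # revert and shrink the step
                    deltas[i] *= 0.9
    return gains
```

On a toy one-dimensional objective like (g - 3)^2 the routine converges to g ≈ 3, which is a cheap way to verify the implementation before pointing it at a full lane-following simulation.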
Results
The results showed that a higher velocity can slightly lower the MSE, because the controller can return
to the lane faster while still controlling the direction properly. MPC has the lowest MSE
values at 1.87430, 1.64071, and 1.65914, respectively. However, it also used the highest computing
time, at more than 0.2 seconds per step. PID has slightly lower MSE values than Stanley at
3.76359, 2.60767, and 2.08784, respectively, while Stanley has MSE values of 3.83407, 2.60818,
and 2.12164, respectively. Both controllers used considerably less computing time than MPC, at less
than 0.001 seconds per step.
Conclusions
In conclusion, the most efficient controllers are Stanley, which has low computing time, and
MPC, which has high precision. Although PID has a slightly higher precision than Stanley with
similar computing time, PID is more difficult to tune. MPC is ideal for cars with high computational
power that require high precision, while Stanley is more suitable for cars with simple processing
units that don't require much precision.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS)
under the King Mongkut's University of Technology Thonburi and Darunsikkhalai school Engineering
Science Classroom. The funding of SCiUS is provided by the Ministry of Higher Education,
Science, Research, and Innovation, which is highly appreciated. This extended abstract is not for
citation.
References
1. Farag W. Track Maneuvering using PID Control for Self-driving Cars. EEENG. 2020; 13(1):
91–100. Available from: http://dx.doi.org/10.2174/2352096512666190118161122
2. Farag W, Saleh Z. MPC Track Follower for Self-Driving Cars. 2nd Smart Cities Symposium
(SCS 2019). 2019; Available from: http://dx.doi.org/10.1049/cp.2019.0192
3. Törő O, Bécsi T, Aradi S. Design of Lane Keeping Algorithm of Autonomous Vehicle. Period.
Polytech. Transp. Eng. 2016; 44(1): 60-68. Available from: http://dx.doi.org/10.3311/pptr.8177
Title : 12th SCiUS Forum
Field : Salem the Third : Rogue-lite RPG game development using Godot OT2_15_01
Author :
engine
School : Technology
Advisor : Mr. Puwis Na Pibul
Mr. Veerapaj Rajsakij
PSU. Wittayanusorn school, Prince of Songkla University
Asst. Prof. Dr. Chinnapong Angsuchotmetee and Mrs. Wareerat Pumitummarat
Abstract
The main objective of this project is to develop a Rogue-lite RPG game designated for teens aged 12 or
above. The game that we developed in this project is named "Salem the Third". The main concept of this game
is inspired by "Salem Witch Trials". This game combines the concept of Christianity and Science Fiction to make
the main storyline exciting for players. The game is developed using Godot Engine, compiled, and deployed in
HTML5 game mode such that players can play this game online through their web browser without having to
install the game. Assets of this game are made in 2D pixel art styles. All are hand-drawn on our own using
Aseprite. The main prototype of this game is playable online. Our prototype, though it is already playable, is still
required a thorough game testing before it can be published commercially. That will be our future work on this
project.
Keywords : game, godot engine, rogue-lite
Introduction
Nowadays, the global game industry has grown exponentially because people are getting stressed due to
the COVID-19 pandemic. So they use games to entertain themself and fulfill their needs which can reduce their
stress in this situation.
In addition, the Thai game industry is also growing significantly and is likely to grow more. This will
make the investors interested in the game industry and there will be higher competition in the market. So we think
that having game development skills might be helpful in the future. However, because of the resources and time
limitation, we decided to focus on the replayability of the game with low development resources for the most
efficient of the game in time limit development. The game in this project is a rogue-lite RPG game in the concept
of witch hunt and time traveling, called Salem the Third.
OOTT22__1155__0011//11
124
12th SCiUS Forum
The main objective of this project is to learn about the overall game development process and to make a
game that can entertain players and reduce their stress during the COVID-19 pandemic.
Methodology
In this project, we use Godot engine to develop the game and Aseprite to make assets. There are 3 parts
of development as follows,
Part 1: Game designing
1.1 Researching for game theories
1.2 Learning about the ability of Godot engine
1.3 Writing a game design document
Figure 1 : QR code to access the game design document
Part 2: Finding and making assets
2.1 Drawing assets on Aseprite
2.2 Finding soundtracks on dig.ccmixter.org
Part 3: Programming, testing, and publishing
3.1 Developing game systems on Godot engine
3.2 Testing the results and fixing the problems
3.3 Publishing the complete version of the game on itch.io
OOTT22__1155__0011//21
125
12th SCiUS Forum
Results
Figure 1 shows the overall game system in this project. When
the player opens the game for the first time, they will see the game
menu interface that has options ‘Play’ to start the game and ‘Quit’ to
close the game. If players press ‘Play’, the prologue cutscene will start.
After that, the player will start the game at the ‘Town’ scene. The main
objective of this game is to find and win the final boss called ‘Dyson
Sphere’. If the player finished the main objective, the ending scene will
show up. But if the player got killed by any enemies, the death scene
will show and the player has to select the choices to play the game
again or quit the game.
Players can access the game from the QR code in figure 2 or
using the itch.io link below the QR code. The game can only be
playable on PC platform. Figure 2 : Overall game system flowchart
Figure 312 : QR code to play the game. Figure 4 : Gameplay footage
(https://salemthethird.itch.io/salemthethird-beta)
Conclusion
Salem the Third is published and can be playable on itch.io. The game also accomplished the two main
objectives, to learn the overall game development process and to make a game that players will have fun playing
it. However, there might be some more details and game systems to develop which is our future plan for this
project.
OOTT22__1155__0011//31
126
12th SCiUS Forum
Acknowledgments
This project was supported by Science Classroom in University Affiliated School (SCiUS) under
Prince of Songkhla University and PSU. Wittayanusorn School. The funding of SCiUS is provided by Ministry of
Higher Education, Science, Research and Innovation. This extended abstract is not for citation.
Title : Office syndrome monitoring and detection system OT2_15_02
Field : Technology and computer
Author : Miss Pinjutha Srisawat
Miss Enika Thaikul
School : PSU.Wittayanusorn School, Prince of Songkla University
Advisor : Asst. Prof. Nithi Thanon, Mr. Weerawat Wongmek
Abstract
“Office Syndrome monitoring & detection system” is a program for detecting, notifying about, and following up on the user's posture, to prevent injury from incorrect positions and to improve the user's quality of life. The program is suitable for working people, pupils, students, and anyone who has to work with a computer for a long time.
“Office Syndrome monitoring & detection system” contains real-time ergonomic detection and a health-risk score assessment with three levels: Level 0 is displayed in green, Level 1 in yellow, and Level 2 in red. The program then reports this information daily, weekly, and monthly.
The programs used are Sublime Text and Command Prompt; the libraries used are OpenCV and MediaPipe, all written in Python.
After testing the program with 10 volunteers, the satisfaction score for utilization was at the highest level (x̅ = 4.67, S.D. = 0.56), and the satisfaction score for design was at the satisfied level (x̅ = 4.13, S.D. = 0.83).
Keywords : Office syndrome, Webcam, Posture, Monitoring, MediaPipe
Introduction
Due to the coronavirus 2019 pandemic, the government asked agencies to work from home to reduce the risk of spreading the disease, so many people had to work online and use a computer for several hours a day, which led to symptoms such as shoulder pain, back pain, etc.
Office syndrome is muscle pain caused by using the same muscles repeatedly over a long period, which can develop into chronic pain, including numbness in the arms and hands.
In this study, the developers explored technology to monitor sitting posture, and a webcam is one option. Webcams have the advantage of being portable and convenient, and they are economically priced. The system was developed using Python.
The system analyzes the user's posture with pose estimation, then reports on the user's health daily after use, and also suggests different ways to relax the muscles for better health.
Methodology
There are two main compartments of the Application are creating user by register system and Posture
detection which combine with posture check system. Correct or incorrect posture and there are twist and pitch
for incorrect posture. The prolong check system check how much the time user use for prolonged sitting. After
that, the data’ll calculated for health risk score to notify and advise user to get exercise and kept it in user’s
device. The data’ll be used to monitoring the risk score in daily by pie chart.
Health risk assessment
There are 3 levels of health risk:
• Level 0 displayed in green
• Level 1 displayed in yellow
• Level 2 displayed in red
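The classification and scoring logic above can be sketched in pure Python. All thresholds below (angle limits, incorrect-posture ratio cutoffs, and sitting-time limits) are illustrative assumptions; the report does not state the exact values used.

```python
# A minimal sketch of the posture classification and health-risk scoring
# described above. All thresholds are illustrative assumptions, not the
# project's actual values.

def classify_posture(twist_deg, pitch_deg, twist_limit=15.0, pitch_limit=20.0):
    """Label one posture sample as 'correct', 'twist', or 'pitch'."""
    if abs(twist_deg) > twist_limit:
        return "twist"
    if abs(pitch_deg) > pitch_limit:
        return "pitch"
    return "correct"

def health_risk_level(incorrect_ratio, sitting_minutes):
    """Map the incorrect-posture share and prolonged-sitting time to levels
    0 (green), 1 (yellow), or 2 (red)."""
    if incorrect_ratio > 0.5 or sitting_minutes > 120:
        return 2
    if incorrect_ratio > 0.2 or sitting_minutes > 60:
        return 1
    return 0

# Example session: four samples, all within the assumed limits.
samples = [classify_posture(t, p) for t, p in [(3, 5), (2, 8), (10, 0), (1, 2)]]
ratio = sum(s != "correct" for s in samples) / len(samples)
level = health_risk_level(ratio, sitting_minutes=45)   # level 0: green
```

In a full system, the twist and pitch angles would come from MediaPipe pose landmarks rather than being supplied directly.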
Result
The result of the application is illustrated by screenshots of the “Posture” application: the daily report and the correct-, twist-, and pitch-posture views.
The result of the satisfaction survey
Table 1 summarizes the results of the utilization satisfaction assessment.
From Table 1, the respondents' overall satisfaction was at the highest level (x̅ = 4.67, S.D. = 0.56). Satisfaction was at the highest level for effectiveness, appropriateness of the suggested exercises, and user alerts from the level warnings.
Table 2 summarizes the results of the design satisfaction assessment.
From Table 2, the respondents' overall satisfaction was at the satisfied level (x̅ = 4.13, S.D. = 0.83). Satisfaction was at an extremely high level for attractiveness and interest, the formatting of the application, and convenience and ease of use.
Conclusion
The application was able to detect posture and notify users about incorrect posture. In addition, it was able to suggest exercises to relax the user and produce a daily report as a pie chart.
From the assessment by a total of 10 people, the satisfaction scores for both utilization and design were at a very satisfying level.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS). The
funding of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation. This
extended abstract is not for citation.
Title : Automatic swab machine for testing COVID-19 OT2_03_03
Field : Technology and computer
Author : Mr. Kittisak Rotong
Mr. Phunuwat Boonkeard
School : Naresuan University Secondary Demonstration School
Advisor : Assoc. Prof. Somchai Kritpolwiwattana (Naresuan University)
Abstract
The purpose of this project is to design and construct a prototype automated swab machine. Robots are very important in human lives, especially in medical activity, so the researchers invented a robot for collecting COVID-19 test samples by the swab method, to reduce the exposure of medical personnel to examinees. An Arduino Uno microcontroller board controls the machine: code written in the Arduino IDE drives the gear motor that moves the swab-rod holder pedestal. A microswitch determines the precision of the movement distance, and a micro servo motor rotates the swab rod. A web camera allows a supervisor to monitor and control the machine at all times. Operation starts by inserting the swab rod into the handle; the patient then places their chin on the base, and the supervisor adjusts the position of the examinee's nostrils to match the specified position. The swab rod is moved into the posterior nasal cavity to a depth of 8 cm, and the micro servo motor rotates the swab rod for 10 cycles. After the process finishes, the machine moves back to the starting position. The machine was tested on a dummy head. The test results show that the machine can move and rotate the swab probe precisely, and that it is safe to use.
Keywords: Arduino Uno board, Arduino IDE, Gear motor, Microswitch, Micro Servo Motor
Introduction
Nowadays, robots play a role in human daily life in medicine, for example robots that doctors control remotely to assist in surgery. The project researchers therefore see the benefits of utilizing robots in the medical field.
The coronavirus crisis involves a virus that can infect both humans and animals and that spreads so quickly it was given the official name “COVID-19.” It has caused many problems, for example the difficulty of testing for COVID-19 and medical personnel's risk of becoming infected.
The researchers therefore realized the benefit of introducing a robot for detecting the COVID-19 virus by the swab method, and accordingly wanted to develop a swab machine for COVID-19 testing. In particular, the swab machine would reduce contact between medical personnel and examinees, save time, and be more accurate in detecting the virus.
Methodology
The experiments were divided into 3 parts as follows.
Part 1: Work planning process, in which we analyzed the related problems.
1.1 When medical personnel performed COVID-19 tests on examinees, they were at risk of becoming infected.
1.2 Examinees testing themselves with an ATK may not follow the instructions correctly, which could lead to mistakes in the COVID-19 detection result.
1.3 Define the objectives.
1.4 Procure the relevant equipment and study its working principles, including studying the structure of the nasal cavity.
Table 1: Materials and equipment used.
Number  Name of materials and equipment                          Material and equipment version
1       Arduino board                                            Arduino Uno R3
2       Arduino shield                                           Arduino Sensor Shield V5.0
3       Micro servo motor                                        Tower Pro MG90S micro servo motor
4       Gear motor                                               Gear motor 3 V - 12 V
5       Motor driver                                             L298N motor driver
6       Jumper cables (male-male, male-female, female-female)    Flat cable 40C
7       Laser cutter                                             Hylax HY-1390s
8       Acrylic sheet                                            Cast acrylic sheet
9       3D printer                                               Flashforge Finder
10      Soldering iron                                           MV-730
11      Lead solder                                              ULTRA CORE 60/40 0.8 mm
12      Acrylic bonding agent                                    TOTO
1.5 Design the prototype of the swab machine at this stage, then make a schematic diagram of the machine.
1.6 Plan all of the equipment needed for the machine's structure, i.e., which devices and materials would be used to construct the swab machine.
Part 2: Implementation process, in which the researchers followed the structure laid out previously.
2.1 Craft a rectangular box from acrylic sheets.
2.2 Use a 3D printer to create the grip used for holding the swab rod.
Figure 1: Design of the whole structure of the machine. Figure 2: The machine when all crafting is done.
2.3 Write code in the Arduino IDE, using the Arduino board to control the operation of the machine.
2.4 Install all devices together according to the designed layout.
Part 3: Trial and improvement process
3.1 Test the operating code of the whole machine, written with the Arduino IDE.
3.2 Modify the machine's operating code as needed.
3.3 Test the functionality and accuracy of the entire swab-machine system.
3.4 Record the results and make corrections by testing with a dummy head.
Figure 3: Flow chart for how the swab machine works.
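The control sequence in the flow chart (advance the swab rod to 8 cm, rotate it for 10 cycles, then retract) can be sketched as a simple simulation. Hardware calls are replaced by log strings; the function names and the 1 cm step size are illustrative assumptions, not the authors' actual Arduino code.

```python
# A simulation sketch of the swab sequence: advance to 8 cm, rotate 10 cycles,
# retract to the start. Log strings stand in for motor commands.

SWAB_DEPTH_CM = 8.0
ROTATION_CYCLES = 10

def run_swab_sequence(step_cm=1.0):
    log = []
    position = 0.0
    # Advance the holder (gear motor) until the 8 cm target depth is reached;
    # in the real machine a microswitch bounds the travel distance.
    while position < SWAB_DEPTH_CM:
        position = min(position + step_cm, SWAB_DEPTH_CM)
        log.append("advance to %.1f cm" % position)
    # Rotate the swab rod with the micro servo motor.
    for cycle in range(1, ROTATION_CYCLES + 1):
        log.append("rotate cycle %d" % cycle)
    # Retract to the starting position.
    while position > 0.0:
        position = max(position - step_cm, 0.0)
        log.append("retract to %.1f cm" % position)
    return log

events = run_swab_sequence()   # 8 advance + 10 rotate + 8 retract entries
```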
Results
The results were summarized from 10 tests of the efficacy of the swab machine, each consisting of 3 trials. The test results are shown in Table 2.
Table 2: Results of the automatic COVID-19 swab machine tested on a dummy 10 times.
The result was that the swab apparatus was able to enter the posterior nasal cavity to a depth of 8 cm. In the test, we measured the distance with a ruler, with an error of ±1 mm.
Conclusion
The machine was tested on a dummy head. The test results show that the machine can move and rotate the swab probe precisely, and that it is safe to use.
Acknowledgments
This project was supported by Science Classroom in University Affiliated School (SCiUS). The
funding of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation. This
extended abstract is not for citation.
Title: Thermal Insulation Board from Plastic Waste OT2_19_01
Field: Technology and computer
Author: Ms. Nurfatihah Arwae
Ms. Pimmada Useng
School: Islamic Sciences Demonstration School, Prince of Songkla University
Advisor: Asst. Dr. Preecha Kasikamphaiboon
Asst. Dr. Uraiwan Khunjan
Abstract:
The objective of this research was to study and produce insulation from plastic waste, to reduce the amount of plastic waste that goes into landfills, and to suggest a way to reuse plastic most usefully. High-density polyethylene (HDPE) mixed with polypropylene (PP) was used as the insulator, coated with multi-layer plastic to reflect heat radiation; all of these are non-recyclable types. This research studied three HDPE:PP ratios: 40:60, 50:50, and 60:40. The materials were mixed with an internal mixer and then molded with a compression molding machine, with the temperature of both mixing and molding controlled at 165 °C. The boards were then tested for tensile strength, thermal conductivity, and moisture absorption based on ASTM standards, and for flammability based on the UL-94 standard. Of the three HDPE:PP ratios, the best for an insulation board was 60:40. It was found that an increased PP ratio results in less thermal conductivity and moisture absorption, but the burning rates of all formulas were insignificantly different, and the tensile strength of the 40:60 ratio was statistically higher than that of the other ratios. This study of producing insulation from plastic waste makes waste that would otherwise be disposed of in landfills useful and valuable, and should be a way to reduce the problem of plastic waste in the environment.
Keywords: Plastic Waste, Non-Recyclable, Landfill, Insulation board
Introduction
In 1950 the world produced only 2 million tons of plastic per year. Since then, annual production has increased nearly 200-fold, reaching 381 million tons in 2015. Plastic production thus has an annually increasing trend, and by 2050 around 12 billion tons of plastic is expected to accumulate in landfills and the environment. Landfill has many disadvantages: it requires a lot of space, and that area can no longer be used for farming because the waste takes a long time to decompose, or some types cannot decompose at all; it also causes odor pollution. Thailand has the third-highest proportion of plastic waste in general waste, with a per-person usage rate of 17.6% per year. General waste is disposed of by landfill because it is not recyclable or has a complicated recycling process; for example, food bags, foam, and snack sachets are difficult to recycle because the packaging is contaminated with food. Packaging is mainly primary plastics, and packaging generates the largest amount of waste, about 150 million tons per year. The five most common plastics are polyvinyl chloride (PVC), polystyrene (PS), polypropylene (PP), high-density polyethylene (HDPE), and polyethylene terephthalate (PET). All of these plastics affect the environment and human and animal health, and they also contribute to the greenhouse effect and microplastics. Plastics are a source of waste gases in the air, which can increase further with the growth of plastic production year after year, so these gases accumulate in the environment. Most of this increase in plastic waste is general plastic waste, which must be disposed of in landfills rather than recycled.
The thermal insulation board from plastic waste mixes plastics of two different specifications to find the best ratio, so that it can insulate well and pass the standard quality tests. The board's outstanding features are light weight and low water absorption. Aluminum foil from multi-layer plastic helps reflect heat radiation.
Methodology
The experiments were divided into 3 parts as follows.
Part 1: Materials preparation.
1.1 Prepare the multi-layer plastic waste using snack sachets of the same brand. Cut the sachets to a size of 20 × 30 cm, rinse off the dirt, and drain until completely dry.
1.2 Prepare the plastic for the middle layer using two types of plastic waste, PP and HDPE: rinse off the dirt, drain until completely dry, and then cut into small pieces.
Part 2: Insulation board molding.
Mix the two types of plastic waste, PP and HDPE, using the formulas shown in Table 1. Mix the plastic waste with an internal mixer at a temperature of 165 °C and a rotor speed of 60 rpm for 15 min. Then extrude the mixed plastic with a compression molding machine at 165 °C for 10 min and cool for 10 min.
Once the mixed plastic board has been extruded, place a multi-layer sheet on the template with the aluminum-foil side facing down, followed by the mixed plastic board, and cover with another multi-layer sheet with the aluminum-foil side facing up. Then compress with the compression molding machine at 165 °C for 3 min and cool for 10 min. The final size of the insulation board is 20 × 20 × 0.3 cm.
Table 1: Mixing proportions of insulation from plastic waste
Formula  HDPE  PP
1        40    60
2        50    50
3        60    40
Part 3: Testing the properties of the insulating board.
First, scan a specimen with a scanning electron microscope (SEM) to examine the external structure and surface. Then test the properties of the insulation board as follows:
3.1 Physical properties of the insulating board: moisture absorption test based on ASTM D570, bulk density test based on ASTM D792, and flammability test based on UL-94.
3.2 Mechanical property of the insulating board: tensile strength test based on ASTM D638.
3.3 Thermal property of the insulating board: thermal conductivity test based on ASTM C518.
Results and Discussion
The surfaces of all formulas of the insulation board from plastic waste are smooth and shiny. The external structure of the specimens was scanned by scanning electron microscope (SEM), as shown in Figure 1.
Figure 1: Thermal insulation board under SEM at 100x magnification; A = 40:60, B = 50:50, C = 60:40.
The results of the physical property tests (moisture absorption, bulk density, and flammability), the mechanical property test (tensile strength), and the thermal property test (thermal conductivity) for all formulas of the thermal insulation board from plastic waste are shown in Table 2.
Table 2: Comparison of the properties of all formulas of thermal insulation board
Ratio   Moisture absorption (%)   Burning rate (mm/min)   Tensile strength (MPa)   Thermal conductivity (W/mK)
40:60   0.9986                    21.61                   31.8                     0.843
50:50   0.9552                    20.87                   11.0                     0.792
60:40   0.8709                    20.25                   9.0                      0.654
The properties of the insulation board from plastic waste were then compared with Thermo Flex EPDM FR insulation from AA Plus Group (shorturl.at/ivCT8), as shown in Table 3.
Table 3: Comparison of the insulation board from plastic waste with a commercial insulation
Properties                    Thermo Flex EPDM FR insulation   Insulation board from plastic waste
Moisture absorption (%)       < 5% by weight                   0.87%
Burning rate (mm/min)         Class V-0                        Class HB
Tensile strength (MPa)        3.2 MPa                          9.0 MPa
Thermal conductivity (W/mK)   0.035 W/mK                       0.654 W/mK
Conclusion
From the extrusion of the thermal insulation board according to the 3 formulas, the specimens had the same appearance: smooth and shiny. The study of the properties of the thermal insulation board from plastic waste found that an increased PP ratio results in less thermal conductivity and moisture absorption, but the burning rates of all formulas are insignificantly different, and the tensile strength of the 40:60 ratio was statistically higher than that of the other ratios.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS). The funding of SCiUS is provided by the Ministry of Higher Education, Science, Research and Innovation. This extended abstract is not for citation.
Title: Monitoring of electrical quantity detection based on NodeMCU ESP8266 OT2_09_05
Field: Technology and Computer
Author: Varinthorn Chatburapachai
Suchaya Panturasri
Vessapoo Sawangwong
Advisor: Mr. Supakit Rongam (Engineering Science Classroom)
Dr. Supapong Nutwong (Department of Electrical Engineering, King Mongkut's University of Technology Thonburi)
Abstract
Electrical energy is indispensable in daily life because it is required for the operation of all appliances. Conventionally, electrical energy is measured by the watt-hour meter, an analog instrument. Friction and heat in its mechanical parts may cause errors in the measured quantity, and human error can occur while collecting and reporting the electricity bill. Therefore, an improved method of electrical energy measurement is presented in this work. It is based on digital instruments, which provide accuracy and precision, and on an online monitoring system. The PZEM-004T (an AC digital power/energy meter module) is adopted for measuring electrical quantities, including voltage, current, energy, and power. A microcontroller (NodeMCU ESP8266) is also introduced to monitor the measured quantities online. Compared with a high-quality digital clamp power meter (HIOKI CM3286), the presented measurement method achieves accuracy and precision of 98.02 ± 0.02% and 99.91 ± 0.01%, respectively. As for the monitoring system, the real-time measured electrical quantities can not only be monitored simply through a mobile application, namely Blynk, but the system also sends notifications about the energy-consumption limit and the hardware status. This provides reliability and convenience to the electricity user through the online monitoring system.
Keywords: Application Blynk / Electric quantity detection / Electric quantity monitoring / Human error /
PZEM-004T
Introduction
Currently, electrical energy contributes to progress in all aspects of the country, especially in industry and technology (Metropolitan Electricity Authority, 2560). The process of monitoring electricity consumption requires an MEA employee to write down the numbers from the electricity meter (Sompassorn U, 2020), which can cause errors when informing electricity users about the payment of their electricity bills; this is a problem caused by human error (Thai PBS News, 2020). Consumers can check for errors from the electric meter itself, or check the electricity consumption at any time from a display screen via an application. This is based on foreign research called "IoT Based Smart Energy Management System using PZEM 004T Sensor & Node MCU," which used the PZEM-004T to measure daily power consumption and a Node MCU to transmit the data online; the prominent point of that project is the ability to display the data through online monitoring with Blynk (Sabu George, 2021). Hence, online monitoring makes it convenient to interpret the users' power consumption: current, voltage, power, and energy. The mobile application can also be used to recheck the electricity consumption and confirm that it agrees with the conventional meter.
Methodology
The procedures were divided into 2 parts as follows:
Part 1: Examination of electric quantity detection
1.1 Electric quantity detection assembly
To examine electric quantity detection using the NodeMCU ESP8266 board and the PZEM-004T, the electrical measurements were compared with a digital clamp meter (HIOKI CM3286) in 3 circuits with different power factors: a fan (PF = 0.50), a capacitor (PF = 0.76), and a light bulb (PF = 1.00). Figure 1 is the diagram of the experiment.
Figure 1: Diagram of electric quantity detection
1.2 Working principle
After connecting the extension lead to a 220 VAC/50 Hz power supply, the electric quantity data are detected and gathered from the digital clamp meter (high-precision electrical measuring equipment) and the clamp of the PZEM-004T. The detected quantities are displayed on the Serial Monitor and on the digital clamp meter screen, using the circuits in Figures 2, 3, and 4 below.
Figure 2: Circuit (fan). Figure 3: Circuit (light bulb). Figure 4: Circuit (capacitor).
1.3 Data collection and analysis
Data were gathered from the two electric quantity detectors every 5 seconds, for a total of 20 data points. The analysis uses the collected data to determine the accuracy and precision compared with the digital clamp meter. Two equations were used for these calculations. First, the accuracy can be calculated with:

Accuracy = (1 - |x̄ - x_t| / x_t) × 100%

In addition, the spread behind the precision figure is the mean absolute deviation of the readings:

MAD = (Σ_{i=1}^{20} |x_i - x̄|) / 20

where x̄ is the average of the detected data, x_i is a detected data point, and x_t is the actual value detected with the reference multimeter.
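Assuming the accuracy and deviation terms are defined as above, the calculation can be sketched as follows. The sample readings are made up for illustration; they are not the project's measured data.

```python
# A sketch of the accuracy and deviation calculations applied to 20 samples
# taken every 5 s. Sample values are illustrative only.

def accuracy_percent(readings, reference):
    """Accuracy = (1 - |mean - reference| / reference) * 100%."""
    mean = sum(readings) / len(readings)
    return (1.0 - abs(mean - reference) / reference) * 100.0

def mean_abs_deviation(readings):
    """Average absolute deviation of the readings from their own mean,
    the spread term behind the precision figure."""
    mean = sum(readings) / len(readings)
    return sum(abs(x - mean) for x in readings) / len(readings)

readings = [219.8, 220.1, 220.0, 219.9, 220.2] * 4   # 20 voltage samples
acc = accuracy_percent(readings, reference=224.0)     # reference meter value
mad = mean_abs_deviation(readings)
```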
Part 2: Blynk application monitoring
After connecting the NodeMCU ESP8266 to Blynk through a WiFi connection, the necessary widget boxes, such as "SuperChart" and "Labeled Value", were set up to illustrate the data. The PZEM-004T collects the data, which are stored on the NodeMCU ESP8266 and then sent for display in the Blynk application. The displayed electric quantities consist of current, voltage, power, and energy.
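The notification behaviour described in the abstract (alerting when the hardware is offline or the energy consumed exceeds the limit) can be sketched as simple threshold logic. The limit and timeout values below are assumptions for illustration, not the project's settings.

```python
# A sketch of the alert logic behind the "Project offline" and "Over limit"
# notifications. Both constants are assumed values for illustration.

ENERGY_LIMIT_KWH = 100.0     # assumed energy-consumption limit
HEARTBEAT_TIMEOUT_S = 30.0   # assumed silence before "offline" is declared

def notifications(energy_kwh, seconds_since_last_report):
    """Return the list of alert messages for the current state."""
    alerts = []
    if seconds_since_last_report > HEARTBEAT_TIMEOUT_S:
        alerts.append("Project offline")
    if energy_kwh > ENERGY_LIMIT_KWH:
        alerts.append("Over limit")
    return alerts
```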
Results
Procedures 1.1 and 1.3, assembling the electric quantity detector and analyzing the data, resulted in the accuracy and precision shown in Figures 5 and 6 below. The bar charts illustrate the percentage accuracy and precision of the electric detector for each electric quantity. The experiment found that the average accuracy and precision of the electric detector were 98.02% and 99.91%.
Figure 5: Accuracy chart. Figure 6: Precision chart. Figure 7: The equipment.
Figures 7, 8, and 9 show the electric quantity detector, which was designed for convenient use. In Figure 10, the "SuperChart" and "Labeled Value" widgets illustrate the electric quantities (voltage, current, power, and energy) over time. A notification alarms when the hardware is offline or the energy consumed is over the limit, as in Figures 11 and 12.
Figure 8: Use with a consumer unit. Figure 9: LCD display. Figure 10: Blynk display.
Figure 11 Notify “Project offline” Figure 12 Notify “Over limit”
Conclusion
The experiment with each type of load gave different accuracy and precision. However, the average accuracy and precision of the electric detector, compared with the digital clamp meter, were 98.02% and 99.91%. In addition, monitoring the electric quantities in the Blynk application made it convenient for users to inspect, and be notified about, their electricity use during daily routines through a WiFi connection.
Acknowledgments
This project was supported by Science Classroom in University Affiliated School (SCiUS). The
funding of SCiUS is provided by Ministry of Higher Education, Science, Research and Innovation. This
extended abstract is not for citation.
References
George S. IoT Based Smart Energy Management System using PZEM-004T Sensor & Node MCU. In: International Journal of Engineering Research & Technology (IJERT). Vol. 9, Kottayam: 2021. p. 45-48.
Johnson L. How to Calculate Precision [Internet]. Maryland: Leaf Group; 2018 [cited 2022 March 16]. Available from: https://sciencing.com/difference-between-systematic-random-errors-8254711.html.
Manan K. Accuracy Formula [Internet]. New Delhi: Cuemath; 2020 [cited 2022 March 16]. Available from: https://www.cuemath.com/accuracy-formula/.
Sompassorn U. Withīkānchekmitœ̄ faifāwādư̄ annīchaipaikīnūai [How to check how many units the electricity meter has used this month] [Internet]. Bangkok: True ID; 2020 [cited 2021 June 23]. Available from: https://intrend.trueid.net//article/วิธีการเช็คมิเตอร์ไฟฟ้า-ว่าเดือนนี้ใช้ไปกี่หน่วย-trueidintrend_130993.
Thai PBS News. Rēngtrūatsōppomčhotnūaifaifāmaitrongmitœ̄ [Check if the power unit does not match the meter] [Internet]. Bangkok: Thai PBS; 2020 [cited 2021 June 23]. Available from: https://news.thaipbs.or.th/content/291495.
Title : Developing of a PV Monitoring System by Using IoT OT2_03_01
Field : Technology and Computer
Author : Miss Kamolchat Jadyangtone
Miss Nutchaya Kaewdaeng
School : Naresuan University Demonstration School, Naresuan University
Advisor : Assoc. Prof. Somchai Kritpolwiwattana (Naresuan University)
Abstract
Monitoring systems are essential for recognizing the photovoltaic energy available on-site, evaluating electrical conversion efficiency, and detecting failures. This work proposes a real-time Internet of Things system for micro and mini photovoltaic generation systems that can continuously monitor data such as solar intensity, voltage, current, ambient temperature, cell temperature, and the amount of dust in the air. The proposed system measures all relevant meteorological variables and takes photovoltaic generation data directly from the PV module (not from the inverter). To build the monitoring system, we selected suitable sensors and other components. The wireless PV monitoring system uses the ESP32 WROOM DevKit microcontroller board, which has built-in WiFi. This ESP32-based datalogger collects and stores all monitored parameters on the Blynk cloud and displays them on a smartphone through the Blynk application in real time for efficient monitoring. The accuracy of the constructed device was ascertained by comparing the measured parameters with those of conventional standard measuring instruments, which showed good agreement, with an R-squared in the range 0.9648 to 0.9989.
Keywords : Photovoltaic, Internet of Things, Monitoring System
Introduction
The Internet of Things (IoT) is a technology that has recently become part of daily life. The Internet of Things refers to a network of objects, equipment, vehicles, buildings, and other things that have embedded electronic circuits, software, sensors, and network connections, enabling those objects to store and exchange data. The Internet of Things allows objects to be aware of their surroundings and to be controlled remotely through existing network infrastructure. It allows us to integrate the physical world with computer systems more closely. The result is increased efficiency, accuracy, and economic benefit, as the IoT is complemented by sensors and actuators that can change their mechanical characteristics based on actuation, forming a system generally classified as a cyber-physical system that can operate over existing internet infrastructure. Solar energy is a renewable energy source that can sustainably generate electricity with photovoltaic cells. It is a clean energy source that reduces the emission of gases that destroy the ozone layer and of greenhouse gases. In addition, the advantages that make it increasingly popular to
use solar cells are that they are easier to use, lower in cost, and easy to install and maintain. This gives photovoltaic power generation systems high potential; in particular, systems installed in remote areas are very suitable for general use. Increasing the generation potential of a photovoltaic system depends on important variables such as the amount of electrical energy that can be produced, the price per unit of electricity produced, the payback rate, and the investment budget; where possible, every solar cell system should be made to work at high efficiency in actual use.
Methodology
The experiments were divided into 5 parts as follows:
Part 1 : Study theory and background information on developing a PV monitoring system using IoT.
Part 2 : Design and build a solar photovoltaic measurement system using the Internet of Things.
The Internet of Things system, which can display data in real time, consists of three main components: the first is the sensor part; the second is the wireless network connection, which sends the stored data to the cloud; and the third is the application part, which exchanges data between the user and the cloud platform for remote access.
[Figure: system diagram. Section 1 is the measurement section (sensors); Section 2 is the connection to the wireless network (WiFi); Section 3 is the application and cloud (Blynk).]
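To make the three-part data flow concrete, the sketch below packages one sampling cycle into a pin-to-value payload of the kind a datalogger would push to a cloud platform. This is a hedged illustration only: the virtual-pin names and the `build_payload` helper are assumptions for this sketch, not taken from the report, and the actual Blynk pin assignments used by the authors are unknown.

```python
import json

# Hypothetical mapping of each monitored parameter to a Blynk virtual pin;
# these pin assignments are assumptions, not taken from the report.
VIRTUAL_PINS = {
    "irradiance_w_m2": "V0",
    "voltage_v": "V1",
    "current_a": "V2",
    "ambient_temp_c": "V3",
    "cell_temp_c": "V4",
    "dust_ug_m3": "V5",
}

def build_payload(readings):
    """Package one sampling cycle as virtual-pin -> value for the cloud push."""
    return {VIRTUAL_PINS[name]: round(value, 2) for name, value in readings.items()}

# Invented sensor values for one sampling cycle
sample = {"irradiance_w_m2": 812.4, "voltage_v": 18.6, "current_a": 3.21,
          "ambient_temp_c": 33.5, "cell_temp_c": 47.8, "dust_ug_m3": 42.0}
print(json.dumps(build_payload(sample)))
```

On the real device this mapping step would run on the ESP32 before each WiFi transmission, with the Blynk application reading the same virtual pins on the phone side.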
A diagram of the photovoltaic monitoring system using the Internet of Things is shown in the figure, which illustrates the components of the system. The pyranometer probe measures the total solar radiation incident on the solar panel surface, expressed in W/m². Measurements of the solar panel environment include an ambient air temperature probe, since ambient temperature affects the temperature of the solar panel, and a dust-in-air probe, since dust accumulating on the panel surface, if present in large amounts, reduces the amount of sunlight reaching the solar cell. The temperature probe behind the solar panel reports the temperature of the solar cell, which affects the performance of the panel. Voltage and current sensors display the voltage and current from the solar panel, respectively. The microcontroller is an ESP32 WROOM DevKit board, which receives and processes the data and sends it to the cloud platform via the WiFi module for immediate display of the measured values. The user can then retrieve the data on a mobile phone using the Blynk application, which fetches the data from the cloud platform and displays the values on the phone.
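The voltage, current, and irradiance measurements described above are exactly what is needed to derive panel output power and electrical conversion efficiency. The following is a minimal sketch of that calculation; the readings and the 0.5 m² panel area are invented assumptions, not values from the report.

```python
def pv_power(voltage_v, current_a):
    """Instantaneous electrical output of the panel in watts."""
    return voltage_v * current_a

def conversion_efficiency(power_w, irradiance_w_m2, panel_area_m2):
    """Electrical conversion efficiency: output power over incident solar power."""
    return power_w / (irradiance_w_m2 * panel_area_m2)

# Invented readings: 18.6 V and 3.2 A from the panel sensors, 850 W/m² from
# the pyranometer, and an assumed panel area of 0.5 m².
p = pv_power(18.6, 3.2)
eff = conversion_efficiency(p, 850.0, 0.5)
print(round(p, 2), round(eff * 100, 1))  # → 59.52 W output, 14.0 % efficiency
```

Combining the directly measured module data with the pyranometer reading in this way is what lets the system evaluate conversion efficiency on-site rather than relying on inverter-reported figures.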