Part 3 : Test the values of the photovoltaic measurement system using the Internet of Things against the actual instruments.

    Value measured by the IoT system    Comparative measuring device
    Solar radiation                     Kipp & Zonen CMP3 and calibration graph
    Dust density                        Calibration graph
    Ambient temperature                 Thermometer
    PV temperature                      Thermometer
    Voltage                             Multimeter
    Current                             Multimeter
Part 4 : Test the operation of the solar photovoltaic measurement system using the Internet of Things.
Part 5 : Summary of experimental results.
Results
As a result of the research, the sensors were connected to an ESP32 WROOM DevKit microcontroller board, and a program to control the board and process the measured data was written in the Arduino IDE. A solar radiation probe was made from an LDR photoresistor module and calibrated against a KIPP&ZONEN model CMP3 pyranometer. The voltage and current probes were calibrated against a SANWA model CD800a digital multimeter, and the ambient-air and solar-panel back-temperature probes against a SATO glass rod thermometer. The air dust probe uses the calibration curve provided by its manufacturer to determine airborne dust content. The measured values are displayed in the BLYNK application.
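To illustrate the calibration step, the sketch below fits a straight line that maps raw LDR readings to pyranometer irradiance. It is a minimal sketch with hypothetical sample values; the abstract reports the calibration but gives neither the paired data nor the code.

```python
# Sketch: calibrate the LDR solar probe against the CMP3 pyranometer by
# fitting a straight line to paired readings. All values are hypothetical.
import numpy as np

ldr_adc = np.array([310, 520, 760, 980, 1450, 1890])            # raw ADC counts (hypothetical)
cmp3_wm2 = np.array([95.0, 180.0, 271.0, 355.0, 540.0, 705.0])  # CMP3 W/m^2 (hypothetical)

# Least-squares line: irradiance ~ slope * adc + intercept
slope, intercept = np.polyfit(ldr_adc, cmp3_wm2, 1)

def adc_to_irradiance(adc):
    """Convert a raw LDR ADC reading to irradiance in W/m^2."""
    return slope * adc + intercept

print(round(adc_to_irradiance(1000), 1))  # estimated irradiance at 1000 counts
```

The same straight-line approach applies to the voltage, current, and temperature probes calibrated against the multimeter and glass rod thermometer.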
Conclusion
From the test, when the assembled device was used to take measurements and the results were compared with the reference instruments, the displayed values were close or equal. This shows that the device measures values to the standard of conventional measuring equipment. Field testing further showed that the developed device can be used in real life: the displayed values are reliable and match the standard instruments.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS). The
funding of SCiUS is provided by the Ministry of Higher Education, Science, Research, and Innovation. This
extended abstract is not for citation.
OT2_14_02
Title : The Development of Rice Disease Classifier from Rice Leaf Image
Field : Technology and Computer
Author : Miss Chawisa Nuilek
Miss Phitchayapha Rungruang
School : Paphayompittayakom School, Thaksin University
Advisor : Asst.Prof.Dr.Noppamas Pukkhem
Abstract :
The objective of this research is to develop a rice disease classifier from rice leaf images using deep learning techniques. The process consists of 5 steps: 1) rice leaf image acquisition; 2) data preparation; 3) rice disease classifier development; 4) classification model evaluation; and 5) classification model deployment. In our approach, Teachable Machine, which provides a deep learning tool environment, is used to create the rice disease classifiers. It was found that the parameters giving the highest model accuracy were a batch size of 32, a learning rate of 0.01, and 20 epochs, with an accuracy of 96.83%.
Keywords : Deep learning, Rice disease, Image classification
Introduction
Rice is a cereal widely consumed by the world's population and is one of the important economic crops of Thailand, both for domestic consumption and for the country's economy.
Rice cultivation at present is made difficult by rice diseases, and local farmers often lack knowledge of a disease or of the appearance, color, or shape of diseased rice. Diagnosis is generally made by farmers through external observation, which can cause errors. Scientific instruments are now more available, but they still involve time and cost issues.
The researchers therefore wanted to develop a diagnostic system based on photographs to help farmers who lack knowledge of rice diseases. Image recognition and classification were performed with trained models, and rice diseases were identified from the photographs.
Methodology
The process for developing the rice disease classifier consists of 5 steps:
1. Data collection
This research collected data from websites related to rice disease. Data samples were collected from 3 different sources, covering three rice diseases: 43 images of rice blast, 25 images of narrow brown spot, and 34 images of brown spot.
2. Data preparation
The collected images were augmented by adjusting values such as rotation, contrast, noise, and blur to obtain varied image data, thereby increasing the amount of data.
3. Rice disease classifier development
In the modeling section there are two stages: the learning process to create the model, and the model testing stage. Because this research uses Teachable Machine to create the model, the data is divided into 85% training data and 15% test data per class. For model training, the batch size and learning rate affect the model's performance, so appropriate parameters should be determined before training the model. Experiments were conducted on the model parameters: batch sizes of 16, 32, and 64; learning rates of 0.1, 0.01, 0.001, and 0.0001; and epochs (number of training cycles) of 10, 20, and 30.
4. Classification model evaluation. In this study, the researchers measured the model's efficiency using accuracy and the confusion matrix; precision and recall follow from the confusion-matrix counts, as in the sketch after this list.
5. Classification model deployment. The classifiers from this research can be developed into a smartphone application to make them easier to use. It can receive images from a smartphone camera and quickly show the results of the disease prediction.
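As a check on the evaluation step, the sketch below recomputes per-class precision and recall from the TP/FP/FN counts reported for the 20-round test (Table 3); the class names and counts are taken from that table.

```python
# Recompute precision and recall from the confusion-matrix counts of the
# 20-round test (Table 3): precision = TP/(TP+FP), recall = TP/(TP+FN).
counts = {
    "Rice Blast Disease":        {"tp": 21, "fp": 1, "fn": 0},
    "Narrow Brown Spot Disease": {"tp": 21, "fp": 1, "fn": 0},
    "Brown Spot Disease":        {"tp": 19, "fp": 0, "fn": 2},
}

precisions, recalls = [], []
for name, c in counts.items():
    p = c["tp"] / (c["tp"] + c["fp"])   # correct finds / all finds
    r = c["tp"] / (c["tp"] + c["fn"])   # correct finds / all actual items
    precisions.append(p)
    recalls.append(r)
    print(f"{name}: precision {p:.2%}, recall {r:.2%}")

# Macro averages match the totals row: precision 96.97%, recall 96.83%
print(f"total: precision {sum(precisions)/3:.2%}, recall {sum(recalls)/3:.2%}")
```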
Results
Table 1 shows the results of the model's accuracy.

Table 1 Model accuracy for each batch size and learning rate
    Batch size    Learning rate 0.0001    Learning rate 0.001    Learning rate 0.01
    16            86.98%                  93.01%                 89.52%
    32            82.22%                  93.02%                 94.29%
    64            72.70%                  93.65%                 86.67%
Table 1 shows that, among the tested parameters, a batch size of 32 with a learning rate of 0.01 gave the highest accuracy of 94.29% at 10 training rounds, and these values were therefore chosen as the modeling parameters.
Table 2 Results of model performance testing, 10 rounds
    Class                        Images    TP    FP    FN    Precision    Recall
    Rice Blast Disease           140       18    2     3     90.00%       85.71%
    Narrow Brown Spot Disease    140       20    1     1     95.24%       95.23%
    Brown Spot Disease           140       19    3     2     90.48%       90.47%
    Total                        420       57    6     6     91.91%

The model was created with a batch size of 32 and a learning rate of 0.01 for 10 training rounds; precision and recall were then calculated.
Table 3 Results of model performance testing, 20 rounds
    Class                        Images    TP    FP    FN    Precision    Recall
    Rice Blast Disease           140       21    1     0     95.45%       100.00%
    Narrow Brown Spot Disease    140       21    1     0     95.45%       100.00%
    Brown Spot Disease           140       19    0     2     100.00%      90.48%
    Total                        420       61    2     2     96.97%       96.83%

The model was created with a batch size of 32 and a learning rate of 0.01 for 20 training rounds; precision and recall were then calculated.
Table 4 Results of model performance testing, 30 rounds
    Class                        Images    TP    FP    FN    Precision    Recall
    Rice Blast Disease           140       21    0     0     100.00%      100.00%
    Narrow Brown Spot Disease    140       20    2     1     90.91%       95.24%
    Brown Spot Disease           140       19    1     2     95.00%       90.48%
    Total                        420       60    3     3     95.30%

The model was created with a batch size of 32 and a learning rate of 0.01 for 30 training rounds; precision and recall were then calculated.
Conclusion
The development of rice disease classifiers from rice leaf photographs using machine learning techniques can be divided into 5 main steps. Step 1 is image data collection and Step 2 is data preparation, after which the data is divided into 2 datasets: 1) training data and 2) testing data. Training data is used in Step 3 to create the classification model, and testing data is used in Step 4, the classification model performance test. A high-precision model is then selected for use in Step 5, prediction on actual data. Data samples were collected from 3 different sources covering three rice diseases: 43 images of rice blast, 25 images of narrow brown spot, and 34 images of brown spot. Augmentation was performed using Cira Core to add data, and 140 images per disease were used to create the classification model. Using Teachable Machine and experimenting with parameter selection to optimize the model, it was found that the parameters that made the model most accurate were a batch size of 32, a learning rate of 0.01, and 20 epochs. The highest accuracy was 96.83%, indicating the efficiency of the model for rice disease classification.
Acknowledgments
This project was supported by Science Classroom in University Affiliated School (SCiUS) under
Thaksin University and Paphayompittayakom School. The funding of SCiUS is provided by the Ministry of
Higher Education, Science, Research, and Innovation, which is highly appreciated. This extended abstract is
not for citation.
OT2_17_02
Title : Image Classification and Data Transmission of Weeding Detection Using ESP32CAM with LoRaWAN Technology
Field : Technology and Computer
Author : Mr. Kasidate Jaroensiripun
School : PSU.Wittayanusorn Surat Thani School, Prince of Songkla University, Surat Thani Campus
Advisor : Mr. Weerachai Sornprom
Asst. Prof. Dr. Nathaphon Boonnam (Prince of Songkla University, Surat Thani Campus)
Miss Oatchima Sinchim (PSU.Wittayanusorn Surat Thani School)
Abstract
One of the problems in agriculture is that weeds absorb nutrients that crops need to grow. As a result, farmers are burdened with more work, which can cause mental health problems. Bio-fermented water is one weed-killing alternative with less environmental impact than the commonly used chemicals. This research therefore aims to develop a weed detection system, study data transmission with LoRaWAN technology, and study the relationship between the data collected in the experimental area and the presence of weeds. Our system can send messages via LINE Notify from the ESP32CAM microcontroller, which identifies weeds and green oak lettuce. It uses the Edge Impulse platform to build deep learning models based on the MobileNetV1 neural network architecture with a 96x96 input size and 0.1 dropout (no final dense layer), via the TensorFlow library. A total of 5 models were trained to classify weeds among green oak lettuce at ages of 5-7, 10-12, 17-20, 25-28, and 37-40 days, with accuracies of 51.2%, 57.5%, 97.0%, 63.4%, and 68.3%, respectively. The system also shows the values obtained from data collection in the experimental area and issues orders to turn the bio-fermented water pipes for weeding on and off through a web application. The LoRaWAN link between the receiver and transmitter spans 70 meters. The result is a system for monitoring conditions in the experimental area, controlled through Internet of Things technology, which makes it very convenient for users.
Keywords : LoRa, Wireless Area Network, Image Classification
Introduction
Agriculture is the main source of income for the Thai population. For agriculture to create quality products that meet the needs of consumers, such as vegetables and fruits, it must adopt methods that produce safe, additive-free goods. However, there is a barrier that slows crop growth: weeds, which absorb the nutrients that plants use to produce crops. Removing them by human labor burdens farmers both physically and mentally. Furthermore, weeds are plants that damage the environment; they grow and propagate well and quickly, making
them difficult to control and remove. Common approaches to weeding include plucking manually, using a lawn mower, or spraying chemicals. One method that is highly popular today is the use of bio-fermented water. The researchers found that bio-fermented water can help resist and reduce the number of weeds, and it can be adapted in a variety of ways to suit agriculture.
The Internet of Things (IoT) is a system in which various devices are linked to one another and to the Internet, allowing humans to control equipment, such as agricultural tools, via the Internet. It is combined here with the network technology LoRaWAN, from 'Long Range (LoRa)', meaning long distance, and 'WAN', meaning wide area network. LoRaWAN is a communication network using radio signals designed for low-power wireless transmission, capable of carrying signals over long distances. It is built to support IoT and M2M (machine-to-machine) applications. LoRaWAN is integrated with a microcontroller, which is the key part connecting to the database that stores the information.
Therefore, this study operates a weed control and measurement system. It can be adapted to IoT devices supported by LoRaWAN through a database management system for agriculture, linked to IoT devices and applications that process and analyze the data obtained from field measurement devices. We test for relations between the measured data and the presence of weeds in the area, and the system receives orders from the application to release bio-fermented water to get rid of weeds when the criteria for weed incidence are met.
Methodology
Part 1: Requirements survey and system analysis
Weed measurement results are referenced from the ESP32-CAM module at the experimental site. When the image-processing classification detects weeds, the system notifies users via LINE and sends the data stored in the database from the various sensors to be displayed on the web application page. It is then up to the user to turn the bio-fermentation pipe on or off to send bio-fermented water to destroy the weeds; the on-off status of the pipe is displayed as well. After the system has detected weeds, it uses this information to find the relationship between the data and the working features, as shown in Figure 1.
Figure 1 : Use case diagram
Part 2: Create Classification model
We built a weed measurement model using Edge Impulse, a classification learning modeling platform, to create a library that is compatible with the Arduino IDE. We collected the datasets by surveying small weeds in the vicinity of the experimental area and placing them in a virtual weed test area together with green oak lettuce at different ages: 5-7, 10-12, 17-20, 25-28, and 37-40 days. For each model, the first label was the lettuce of that age range with weeds and the second label was the lettuce of the same age range without weeds, for a total of 5 models. Each image was reduced to a 96 x 96 input size, and grayscale color depth was selected to reduce the size of the learning model. The ratio in each label of train set : test set : validation set is approximately 64 : 20 : 16, as shown in Table 1.
Figure 2 : Top view of green oak's growth
Table 1: Division of data in each model growing green oak
    Model    Label                        Train set    Test set    Validation set
    1        5 – 7 days with weeds        85.00%       24.00%      21.00%
             5 – 7 days without weed      76.00%       25.00%      20.00%
    2        10 – 12 days with weeds      82.00%       28.00%      19.00%
             10 – 12 days without weed    76.00%       25.00%      13.00%
    3        17 – 20 days with weeds      50.00%       19.00%      20.00%
             17 – 20 days without weed    78.00%       22.00%      21.00%
    4        25 – 28 days with weeds      80.00%       27.00%      20.00%
             25 – 28 days without weed    84.00%       19.00%      19.00%
    5        37 – 40 days with weeds      83.00%       26.00%      20.00%
             37 – 40 days without weed    81.00%       28.00%      21.00%
    Average                               77.50 ± 9.64 %   24.30 ± 3.16 %   19.40 ± 2.25 %
Because the dataset is not very large, transfer learning is used to help the model training process. The number of training cycles is set to 20, the learning rate to 0.0005 (to reduce overfitting), and the validation set is taken as 20% of the training data. Additionally, the auto-balance dataset option is used to handle imbalance in the dataset. Data augmentation increases the number of training images by taking images from the training set and randomly rotating and flipping them left and right.
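Edge Impulse generates this training pipeline automatically; the sketch below is a rough Keras equivalent of the described setup (MobileNetV1 backbone, 96x96 input, 0.1 dropout, no extra dense layer, learning rate 0.0005, 20 cycles). The alpha value and the 3-channel input are assumptions; this is not the platform's actual generated code.

```python
# Rough Keras equivalent of the Edge Impulse transfer-learning setup.
# Assumptions: alpha=0.25 backbone and 3-channel input (the project feeds
# 96x96 grayscale images, which Edge Impulse handles internally).
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    input_shape=(96, 96, 3), alpha=0.25,
    include_top=False, weights="imagenet")   # transfer learning: reuse pretrained features
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.1),                    # 0.1 dropout, as in the abstract
    tf.keras.layers.Dense(2, activation="softmax"),  # labels: with weeds / without weed
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),  # low rate to reduce overfitting
    loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # 20 training cycles
```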
Part 3: Set up data transmission
The DHT22, soil moisture, soil temperature, and analog pH sensors are connected to the Dragino LoRa Shield on an Arduino board, which acts as the transmitter of the collected sensor data over the LoRaWAN network, with the spreading factor set to 7 and the signal bandwidth to 250E3. A TTGO LoRa board serves as the receiver of the data from the Dragino LoRa Shield, using the same settings as the transmitter, and sends the data over Wi-Fi to the database (managed via phpMyAdmin) to collect the sensor readings.
Part 4: Create a database and dashboard
We developed a database built into Webhost that serves to embed dashboards on the Internet. The back-end comprises phpMyAdmin, which serves as the storage area for sensor data; ThingSpeak, which takes sensor data and displays results as graphs; and Visual Studio Code, the dashboard development tool. The dashboard retrieves data from the database and displays it as graphs using the ThingSpeak API, and it enables or disables the bio-fermented water pipeline.
Part 5: Analyze the relationship between the data
We took data from a database that collects sensor readings (soil moisture, air humidity, soil temperature, air temperature, and acid-base) from the simulation area. We then found the relationships among the variables with the SPSS program using multiple regression analysis.
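The SPSS step is a standard multiple linear regression; the sketch below shows an equivalent fit in Python with statsmodels, assuming the sensor database has been exported to a CSV. The file and column names are hypothetical.

```python
# Equivalent of the SPSS analysis: regress weed incidence on acid-base (pH),
# soil temperature, and soil moisture. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sensor_log.csv")  # exported sensor data

X = sm.add_constant(df[["acid_base", "soil_temp", "soil_moist"]])
y = df["weed_incidence"]

fit = sm.OLS(y, X).fit()
print(fit.summary())   # B, Std. Error, t, and Sig., as in Table 3
print(fit.rsquared)    # R squared, as in Table 4 (0.906 for the full model)
```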
Results and Discussion
As mentioned above, the performance of the 5 models for the different ages was calculated using the confusion matrix, as shown in Table 2. The test set was evaluated with the confidence criterion set at 0.5.
Table 2: Model performance
    Model    Accuracy    Recall    F1 Score    Precision
    1        0.51        0.74      0.84        0.94
    2        0.58        0.59      0.68        0.77
    3        0.97        0.49      0.65        0.81
    4        0.63        0.64      0.63        0.56
    5        0.68        0.60      0.58        0.61
We found that the 17-20-day model (with and without weeds) is the most appropriate for image classification, since its accuracy is obviously the highest. We also considered the environmental factors influencing weed incidence, including acid-base, soil temperature, and soil moisture. Weeds grow more as the soil temperature rises and decrease as it falls, so soil temperature is directly proportional to weed incidence, while weed incidence increases as acid-base or soil moisture decreases, as shown in Table 3.
Table 3: Coefficients
    Model              Unstandardized Coefficients    Standardized Coefficients    t          Sig.
                       B          Std. Error          Beta
    1   (Constant)     6.465      0.075                                            85.775     0.000
        Acid-base      -0.696     0.010               -0.950                       -69.501    0.000
    2   (Constant)     5.883      0.175                                            33.690     0.000
        Acid-base      -0.693     0.010               -0.945                       -69.738    0.000
        Soil Temp.     0.024      0.006               0.050                        3.681      0.000
    3   (Constant)     5.890      0.173                                            34.011     0.000
        Acid-base      -0.679     0.011               -0.927                       -63.606    0.000
        Soil Temp.     0.022      0.006               0.047                        3.471      0.001
        Soil Moist.    -0.209     0.066               -0.046                       -3.153     0.002
Therefore, with acid-base, soil temperature, and soil moisture as predictors of weed incidence among mixed vegetables, the weed incidence equation is
Y = 5.890 - 0.679X1 + 0.022X2 - 0.209X3
where X1 denotes acid-base, X2 soil temperature, and X3 soil moisture. In addition, we found that the power to predict the incidence of weeds was 90.6%, as shown in Table 4.
Table 4: Model summary
    Model    R        R Squared    Adjusted R Squared    Std. Error of the Estimate
    1        0.950    0.902        0.902                 0.135
    2        0.951    0.904        0.904                 0.133
    3        0.952    0.906        0.906                 0.132
Oral presentation
Technology and Computer Group 2
Sunday August 28, 2022

1. OT2_15_04 Forecasting model for Dogecoins marketing. Authors: Mr. Khanchayakrit Meekaew, Mr. Akarachai Yongsuwankul. School: PSU.Wittayanusorn School.
2. OT2_18_04 Modified and built an antenna to receive more than 2 kilometers of LoRa signal. Authors: Mr. Chawakorn Phongpansanga, Mr. Sakchai Thanaphasawat, Mr. Insee Thaopech. School: Surawiwat School, Suranaree University of Technology.
3. OT2_11_02 Portable Device for Detecting Microplastic in Water. Authors: Mr. Konlathad Dusadeekulchai, Mr. Kittitat Kanta-Anantaporn, Mr. Panat Suwanasing. School: Suankularbwittayalai Rangsit School.
4. OT2_15_13 flaxibility: Drag and Drop Factory Managing Puzzle Game. Authors: Mr. Phuwit Puthipairoj, Mr. Chetsada Kanatong. School: PSU.Wittayanusorn School.
5. OT2_15_09 Robot Following Human. Authors: Mr. Krissanakorn Pundanguem, Mr. Kreetat Duangkura. School: PSU.Wittayanusorn School.
6. OT2_09_07 Qubit Visualization with VR. Authors: Mr. Thitibhat Rittikulsittichai, Mr. Sippapas Charoenkul. School: Engineering Science Classrooms (Darunsikkhalai School).
7. OT2_01_01 Text Generation using Artificial Intelligence. Authors: Miss Woradee Chonnapastid, Mr. Kampanat Yingseree, Miss Kanjanapond Sukonthachart, Mr. Chananan Chaichanan. School: Chiang Mai University Demonstration School.
8. OT2_15_07 Reducing Snoring Pillow. Authors: Mr. Kritabhas Nuprakob, Mr. Denphum Thoungtakuk. School: PSU.Wittayanusorn School.
OT2_15_04
Title: Forecasting model for Dogecoins Marketing
Field: Technology and Computer
Author: Mr. Khanchayakrit Meekaew
Mr. Akarachai Yongsuwankul
School: PSU.Wittayanusorn School, Prince of Songkla University
Advisor: Asst. Prof. Dr. Kitsiri Chochiang, Mr. Winai Rattanapol
Abstract
Cryptocurrency investment is a new form of investment that has been in the mainstream for the past 3 years. A new investor faces many choices: which coins to invest in, which indicators to use, and so on. The topic of greatest interest to investors remains cryptocurrency price prediction. For a good or successful investment, investors should try every way to know the future of a coin, looking not only at the closing price but at everything about it, and an effective prediction should have supporting information. Nowadays, technology has developed far enough that recurrent neural networks (RNN) and Long Short-Term Memory (LSTM) networks can predict cryptocurrency market indices. Dogecoin is one cryptocurrency that became a phenomenon despite being an almost satirical coin, which means the Dogecoin market is not easy to predict.
As the result of this study, we found that our model has only a 4.91% error. Building on this research, the model could be compared with other models regarding efficiency and corrections.
Keywords: Dogecoins, Recurrent neural network (RNN), Long Short-Term Memory (LSTM)
Introduction
These past 3 years have been like a golden time for investing in cryptocurrency. Not only have popular coins such as Bitcoin, Ethereum, and Cardano grown dramatically, but new currencies have also rocketed in value and price. Nowadays, we use cryptocurrency in many ways, such as investing, transferring currency, and buying goods.
There are many ways to invest in cryptocurrency, but the most common is to buy cryptocurrency directly, which is the usual way to add crypto exposure to a portfolio. There are many indicators; some are easy to use, and some are too complicated for new investors, requiring skill to read or to draw guide lines on a chart. Nowadays, with developed technology, many people instead use recurrent neural networks (RNN) to build models that can process data sequentially.
OT2_15_04/1
157
12th SCiUS Forum
LSTM (Long Short-Term Memory) is one of the most successful RNN architectures. The LSTM model introduces a memory cell, a unit of computation that replaces the traditional artificial neurons in the hidden layer. With memory cells, networks are able to effectively associate memories across time.
Technology comforts us in many ways in everyday life, and one function that can help keep our lives safe and sound is forecasting. A forecast shows the possibility of what we want to predict, backed by evidence. If we apply this technology to cryptocurrency, it will bring a lot of benefits to investors.
Methodology
In this section, we will discuss the methodology of our system. There are several stages, as follows.
• Stage 1: Raw Data:
In this stage, we collected the historical Dogecoin data from https://finance.yahoo.com/quote/DOGE-USD/history/ ; this historical data is used for the prediction of future prices.
• Stage 2: Data Preprocessing:
Before training the model with the dataset, we have to analyze the data first, for instance to find out which time period to use for training, to delete blank records, etc. After analyzing the data, we separate it into a training dataset and a testing dataset; in our project the ratio between the training and testing datasets is 80:20.
• Stage 3: Training models:
In this stage, the data is fed to the neural network and trained for prediction. Our LSTM model is composed of a sequential input layer with ReLU activation followed by a dense output layer with a linear activation function (a sketch of this pipeline follows the list).
• Stage 4: Output Generation:
In this stage, the output becomes a graph that compares the dataset with the model prediction, along with the error percentage. Afterwards, we analyze the output and refine the model.
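A rough sketch of Stages 2 and 3 follows: scale the Close prices, window them into sequences, split 80:20, and train a small LSTM with a ReLU input stage and a linear dense output. The window length and layer width are assumptions; the abstract fixes only the split ratio, the activations, and the epoch counts of 1/10/50/100.

```python
# Sketch of Stages 2-3: preprocess the Close column, then train an LSTM.
# Window length (60 days) and layer width (50 units) are assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

close = pd.read_csv("DOGE-USD.csv")["Close"].dropna().to_numpy().reshape(-1, 1)
close = MinMaxScaler().fit_transform(close)      # scale prices into [0, 1]

window = 60
X = np.array([close[i:i + window] for i in range(len(close) - window)])
y = close[window:]                               # next-day price for each window

split = int(len(X) * 0.8)                        # 80:20 train/test split
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = keras.Sequential([
    keras.layers.LSTM(50, activation="relu", input_shape=(window, 1)),
    keras.layers.Dense(1, activation="linear"),  # linear output for the price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=100, batch_size=32)  # 100 epochs gave the 4.91% error
```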
Result
The figures below compare the error values obtained when changing the training parameters and epoch values.
OT2_15_04/2
158
12th SCiUS Forum
[Figures: model predictions trained on Close data with 1, 10, 50, and 100 epochs]
[Figures: model predictions trained on Close + shuffle data with 1, 10, 50, and 100 epochs]
Conclusion
The study found that the Dogecoin prediction model we developed predicted values best when trained on Close data with an epoch value of 100 cycles, giving a minimum error of 4.910%.
Acknowledgement
This project was supported by Science Classroom in University Affiliated School (SCiUS). The funding of SCiUS is provided by the Ministry of Higher Education, Science, Research and Innovation. This extended abstract is not for citation.
References
Roondiwala M, Patel H, Varma S. Predicting Stock Prices Using LSTM. International Journal of
Science and Research (IJSR). 2021 Apr;6(4):1754–6.
OT2_18_04
Title : Modified and built an antenna to receive more than 2 kilometers of LoRa signal
Field : Technology and Computer
Author : Mr. Chawakorn Phongpansanga
Mr. Sakchai Thanaphasawat
Mr. Insee Thaopech
School : Surawiwat School, Suranaree University of Technology
Advisor : Assoc. Prof. Dr. Rangsan Wongsan, School of Environmental Engineering, Suranaree University of Technology
Abstract
An antenna is a device used for radiating electromagnetic waves, or for radiating the power of electromagnetic waves and, vice versa, receiving it. The antenna serves as a connection between a guiding device for the waves, such as a transmission line or waveguide, and free space. The objective of this study was to design and build an antenna to receive LoRa signal data. The antenna design was simulated with the CST Microwave Studio Suite software using a Yagi-Uda antenna at 920–925 MHz, and the actual antenna was built according to the resulting model. The parameters obtained from the actual measurements were compared with the results obtained from the design in CST Microwave Studio Suite. The Yagi-Uda antenna experiment with the network analyzer found that the return loss at 922.5 MHz was about -16 dB, less than -10 dB, meaning the antenna had good performance and could be used. The antenna has a gain of 18.35 dBi. In the distance test, the Yagi-Uda antenna covers a distance of 2 km in urban use.
Keywords : Antenna, LoRa technology, Yagi–Uda antenna
Introduction
Wireless communication is one of the most effective ways to communicate between two or more devices, as it reduces the time and distance barriers of wired communication. Wireless communication technology transmits data using electromagnetic waves such as radio frequency, infrared, and satellite instead of cables. LoRa is a widely used wireless connection technology suitable for Internet of Things (IoT) applications where the primary data comes from sensors of various types. LoRa technology has the advantages of communication distance and energy saving due to its relatively low power consumption over the transmission range. Implementation of LoRa technology requires zoning considerations or country-specific requirements, so that LoRa devices operate at the frequency specified by each country. For Thailand, the operating frequencies lie in the range of 920 - 925 MHz, which is considered an unlicensed band. The transmit power (EIRP) must not exceed 20 dBm (not more than 100 mW). However, in general, LoRa signals used in cities tend to have a
usable distance of fewer than 2 kilometers. An antenna is a device for transmitting and receiving radio frequencies. It converts electrical energy into electromagnetic waves and, vice versa, converts electromagnetic waves into electrical energy. Antennas come in a variety of sizes and designs depending on the use case. The Yagi-Uda antenna is one of the best choices, as it is easy to build and has a high gain, generally greater than 10 dB. Yagi-Uda antennas typically operate in the HF to UHF bands (approximately 3 MHz to 3 GHz). The bandwidth of the antenna is typically small, about 2%-3% of the center frequency. The goal of this project is therefore to extend the range of the LoRa receiver by combining a Yagi-Uda antenna with it, thanks to the antenna's high gain and its simple layout that is easy to modify and build. To receive signals over a distance of more than 2 kilometers, the antenna will be modified to have a high gain and to support the operating frequency of LoRa devices. The antenna will be constructed from materials that are readily available locally.
Methodology
Part 1: Design and simulate a Yagi-Uda antenna in CST Microwave Studio Suite
1.1 Calculate the length of the Dipole.
1.2 Define the number of elements.
1.3 Calculate the length of Directors and Reflectors.
1.4 Calculate the distance of each element.
1.5 Create an antenna in CST Microwave Studio Suite.
1.6 Simulate the antenna and analyze the Return Loss value.
1.7 Modify the length and spacing between elements until the Return Loss value is appropriate.
Part 2: Build the prototype antenna as designed
Figure 1: Structure of the Yagi-Uda Antenna
Part 3: Measure the Return Loss value and modify the antenna
3.1 Determine the prototype antenna's Return Loss value.
3.2 Modify the length of each element until the Return Loss value is appropriate.
3.3 Adjust the distance between each element until the Return Loss value is appropriate.
Part 4: Antenna Gain Test
4.1 Measure the wave transmittance of the Yagi-Uda antenna.
4.2 Calculate the gain by substituting the measured transmittance values into the Friis equation, as sketched below.
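The gain calculation in 4.2 rearranges the Friis transmission equation, Gr = (Pr - Pt) + FSPL - Gt, where FSPL is the free-space path loss. A rough sketch follows; taking the transmitting monopole's gain as about 0 dBi is an assumption, not a value given in the text.

```python
# Receive-antenna gain from the measured transmittance, via the Friis
# equation rearranged in dB: Gr = S21 + FSPL - Gt.
import math

def receiver_gain_db(s21_db, dist_m, freq_hz, gt_db=0.0):
    """Gain of the receive antenna; gt_db is the transmit antenna gain (assumed ~0 dBi)."""
    wavelength = 299_792_458.0 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)  # free-space path loss
    return s21_db + fspl_db - gt_db

# Test values from this project: S21 = -23.3 dB at 3 m and 922.5 MHz.
print(round(receiver_gain_db(-23.3, 3.0, 922.5e6), 2))  # ~18 dB, near the reported 18.35 dBi
```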
Part 5: Antenna Range Test
5.1 Install the transmit antenna (Heltec ESP32 WiFi LoRa OLED V.2 module connected to its original monopole antenna).
5.2 Receive data with the receiver antenna (Heltec ESP32 WiFi LoRa OLED V.2 module connected to the Yagi-Uda antenna, and to the module's original monopole antenna) along the specified path.
5.3 Compare the distances over which the two antennas can receive.
Results
1. Measure the Return Loss value of the antenna and modify the antenna
According to the graph, the antenna is non-resonant at a frequency of 922.5 MHz: at that frequency the antenna has a Return Loss value of about -9 dB, which is greater than -10 dB and is considered unusable. This may be caused by discrepancies introduced during the construction of the prototype antenna. The prototype antenna was modified to obtain an appropriate value by adjusting the length of each element.
Figure 2 : The Return Loss value graph of the 1st test
According to the graph, the antenna is now resonant at a frequency of 922.5 MHz: the Return Loss reaches an acceptable value of about -16 dB, but the bandwidth between 920 MHz and 925 MHz is at -8 dB, which does not cover the LoRa band. The prototype antenna was modified to an optimal value by changing the distance between the elements.
Figure 3 : Comparison of the Return Loss value graphs of the 1st and 2nd tests
According to the graph, the antenna is now better matched: at a frequency of 922.5 MHz the Return Loss has an acceptable value of about -16 dB, and the bandwidth between 920 MHz and 925 MHz is at -10 dB, which is considered to cover the LoRa band. The antenna can be applied to devices that require antennas with specific propagation directions.
Figure 4 : Comparison of the Return Loss value graphs of the 2nd and 3rd tests
2. Antenna gain test
To calculate the gain of the prototype antenna with the Friis equation, a monopole antenna operating in the 922.5 MHz frequency band was used as the transmitter and the Yagi-Uda antenna as the receiver, placed at a distance of 3 meters. When the measured wave transmittance of -23.3 dB is substituted into the equation, the gain of the antenna is 18.35 dB.
3. Antenna Range Test
As a result of testing, the Yagi-Uda antenna covers a distance of approximately 2.0 km, longer than the original monopole antenna of the Heltec ESP32 WiFi LoRa OLED V.2 module, which covers a distance of 1.2 km.
Discussion and Conclusion
The Yagi-Uda antenna was built and can be used in practice. The theoretical values differed from those obtained from the test of the prototype antenna. The prototype antenna was initially non-resonant at 922.5 MHz, which may be due to discrepancies introduced during its construction. The prototype was therefore modified to an appropriate value by adjusting the length and spacing of the elements. After that, the antenna resonates at 922.5 MHz: at that frequency the Return Loss has an acceptable value of about -16 dB, and the bandwidth between 920 MHz and 925 MHz is at -10 dB, covering the LoRa band. The Yagi-Uda antenna can be used with devices that need an antenna with a specific wave-propagation direction. In the gain test, the antenna gain was 18.35 dB, which is greater than in the CST Microwave Studio Suite simulation, and the antenna covered a distance of 2 km in urban use, serving its purpose.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS).
The funding of SCiUS is provided by the Ministry of Higher Education, Science, Research and Innovation.
This extended abstract is not for citation.
OT2_11_02
Title : Portable Device for Detecting Microplastic in Water
Field : Technology and Computer
Author : Mr. Konlathad Dusadeekulchai
Mr. Kittitat Kanta-anantaporn
Mr. Panat Suwanasing
School : Suankularbwittayalai Rangsit School, Thammasat University
Advisor : Asst. Prof. Dr. Rawat Jaisuti, Department of Physics, Faculty of Science and Technology, Thammasat University
Abstract
Problems regarding microplastic pollution in water have been increasing rapidly in recent years; current methods of detecting microplastic are often costly and ill-suited to field expeditions. The objective of this research was to create a low-cost, portable device for detecting microplastic. The researchers used a light sensor and a camera with a convolutional neural network to find microplastic in the pictures taken. Two methods were tested: measuring light reflected from an LED, and measuring laser light transmitted through the microplastic piece; the reflection method and the neural network model were tested together, while the laser method was tested separately. When testing the model's results and the measurements of the reflected LED signal, the experimental sets contained five or fifteen pieces of microplastic; the plastics used were LDPE and PET of 0.5, 1.0, and 5.0 millimeters. When testing the laser measurements, the signal transmitted through the sample was measured; the same plastic types were used, but each test set contained only one piece. It was found that in both the LED and laser experiments the amount of light reflected from LDPE plastics increased with the size of the microplastic; however, in the five-piece sets, the difference between sizes was not very significant. This relationship appears when testing PET in the laser experiments but not in the reflective experiments. The model was able to account for the majority of the plastic, though it would often miss pieces in positions of bright light or mistake background noise for microplastic; this indicated that the model needed a dataset better suited to its environment. The optical sensor and LED setup need to be changed to converge the light, or the laser should become the main light source, since it has proven effective in separating the types and sizes of individual pieces. The researchers should also add a GUI to the device and modify its structure for easier field usage.
Keywords: microplastics, sensor, computer vision, microprocessor
Introduction
The problem of microplastic contamination in water is increasing rapidly, especially in natural water sources; a 2019 report by the World Health Organization (WHO) stated that microplastics affect a wide range of aquatic life, such as fish, through their digestive systems.
Microplastics are plastics smaller than 5 millimeters, and measuring microplastics in natural waters is complicated. Microplastic measurement methods such as Raman spectroscopy and Fourier Transform Infrared Spectroscopy (FTIR) are bulky and have high costs, making them unsuitable for field
measurements. Later, light sensors were applied to the study of microplastic measurements; however, the data obtained often have less detail and accuracy, so applying other methods along with them is often preferable.
In this project, the researchers wanted to develop a portable microplastic detector using a light sensor coupled with a camera and a Raspberry Pi, at low cost and usable in the field.
Methodology
The researchers used a light sensor and camera with a convolutional neural network to find
microplastic in the pictures taken. Two methods were tested: measuring reflected light from LED and
measuring laser light transmitted through the microplastic piece; whilst the reflection method and neural
network model were tested together, the laser method was tested separately.
Part 1 : Training the Object Detection Model.
For training the model, the dataset consisted of one hundred pictures each of 5 mm PET and LDPE plastics, and 200 pictures each of 1 mm and 0.5 mm PET and LDPE plastics; these pictures were augmented by adjusting saturation, hue, and brightness. The architecture used was YOLOv5, a popular neural-network object-detection model (a sketch of loading and running such a model follows).
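As a rough sketch of how a trained YOLOv5 model of this kind is typically loaded and run through the PyTorch Hub interface that the ultralytics/yolov5 repository provides; the weight and image file names are hypothetical.

```python
# Load custom-trained YOLOv5 weights and run detection on one camera image.
# "best.pt" and "sample_dish.jpg" are hypothetical file names.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.25                          # confidence threshold for reported finds

results = model("sample_dish.jpg")         # picture of the sample dish
detections = results.pandas().xyxy[0]      # one row per detected microplastic piece
print(detections[["name", "confidence"]])  # e.g. PET / LDPE classes with confidence
```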
Part 2 : Defining the Test Set for Evaluation.
When testing the model's results and the measurements of the reflected LED signal, the experimental sets contained five or fifteen pieces of microplastic; the plastics used were LDPE and PET of 0.5, 1.0, and 5.0 millimeters. When testing the laser measurements, the signal transmitted through the sample was measured; the same plastic types were used, but each test set contained only one piece.
Part 3 : Device Setup
The device was designed in a U-shape, 15 cm tall, 14 cm long, and 10 cm wide. The camera is attached on top, with a holder in the middle for the dish. Below are the light sensor and LED for testing the reflective qualities of a sample. There is also a laser attached to the upper left of the device to test individual pieces of plastic. The device connects to the Raspberry Pi through a LAN cable or Wi-Fi.
Part 4 : Evaluating Results.
In both the reflective and laser experiments, each test set was tested five times; the light-signal results were averaged and recorded. From the model's results, two variables are calculated: precision, the number of correct finds divided by the number of all finds, and recall, the number of correct finds divided by the number of items present.
Results
Figure 1: Averaged Graph of Laser Results
It was found that in both the LED and laser experiments, the amount of light received by the light sensor when testing LDPE plastics increased with the size of the microplastic; however, in the five-piece sets, the difference between sizes was not very significant. This relationship appears when testing PET in the laser experiments but not in the reflective experiments.
Figure 2: Result of 5-piece sets
Figure 3: Result of 15-piece sets
Regarding the model, it was able to account for the majority of the plastic, though it would often miss pieces in positions of bright light or mistake background noise for microplastic. Since the dataset was collected in a different environment, it can be inferred that the model was unfamiliar with the test environment, leading to some false finds and blind spots.
Conclusion
Overall, the device needs slight improvement. The model was able to account for the majority of the plastic, but its failure in certain areas indicated that it needed a dataset better suited to its environment. The optical sensor and LED setup need to be changed to converge the light, or the laser should become the primary light source, since it has proven effective in separating the types and sizes of individual pieces. The researchers should also add a GUI to the device and modify its structure for easier field usage.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS) under
Thammasat University and Suankularb Wittayalai Rangsit School. The funding of SCiUS is provided by the
Ministry of Higher Education, Science, Research and Innovation, which is highly appreciated. This extended
abstract is not for citation.
References
1. Asamoah B. O., Kanyathare B., Roussey M., Peiponen K.-E. A prototype of a portable optical
sensor for the detection of transparent and translucent microplastics in freshwater. Chemosphere
[Internet]. 2019 Sep [cited 2021 July 20]; 231, 161-167. Available from:
https://www.sciencedirect.com/science/article/pii/S0045653519310203?via%3Dihub
DOI: 10.1016/j.chemosphere.2019.05.114
2. Asamoah, B.O.; Uurasjärvi, E.; Räty, J.; Koistinen, A.; Roussey, M.; Peiponen, K.-E. Towards the
Development of Portable and In Situ Optical Devices for Detection of Micro-and Nanoplastics in
Water: A Review on the Current Status. Polymers [Internet]. 2021 Feb [cited 2021 Aug 5]; 13(5),
730. Available from: https://www.mdpi.com/2073-4360/13/5/730 DOI: 10.3390/polym13050730
3. Elprocus. Optical Sensor Basics and Applications [Internet]. 2021 [updated 2021; cited 2021 Aug
13]. Available from: https://www.elprocus.com/optical-sensors-types-basics-and-applications/
4. Glenn Jocher, Alex Stoken, Ayush Chaurasia, Jirka Borovec, NanoCode012, TaoXie, et al.
ultralytics/yolov5: v6.0 - YOLOv5n 'Nano' models, Roboflow integration, TensorFlow export,
OpenCV DNN support. Zenodo [Internet]. 2021 Oct [cited 2021 Dec 21]; Available from:
https://zenodo.org/record/5563715#.YlPmKMjP1PY DOI: 10.5281/zenodo.3908559
5. Raspberry Pi Foundation. Raspberry Pi Documentation [Internet]. 2021 [updated 2021; cited 2021
Aug 18]. Available from: https://www.raspberrypi.org/documentation/computers/
6. World Health Organization. Microplastics in drinking-water [Internet]. Switzerland: WHO
publication; 2019. 8-48 p.
OT2_15_13
Title: flaxibility: Drag and Drop Factory Managing Puzzle Game
Field: Technologies and Computers
Author: 1. Mr. Jasada Kanatong
2. Mr. Phuwit Puthipairoj
School: PSU.Wittayanusorn School, Prince of Songkhla University
Advisor: Mr. Winai Rattanapol
Prof. Dr. Chinnapong Angsuchotmetee
Abstract
flaxibility: Drag and Drop Factory Managing Puzzle Game is a game developed as a science project in the technology and computers field. How we think, or in other words our algorithms, is often overlooked by most people. That is why we are trying to address that problem in the format of a video game, an interactive form of entertainment in which players take an active role. This project aims to analyze, design, and develop a game that helps improve how players think while they have fun. It is a game about thinking, designing, and executing a plan by building factories, developed using the Godot game engine, Inkscape, and other technologies. Players take the role of a humble entrepreneur suffering from poverty after a previous business failed terribly and went bankrupt. They have to act fast, so it is time to crack on with another venture; this time they work with clothing and decide to open a textile factory. However, our entrepreneur is very cheap and focused on short-term growth, so the factory must be rethought and redesigned over and over, and that is where the player comes in to help think about and design the factory.
Keywords: algorithms, puzzle, factory builder, management, video game
Introduction
The quality and efficiency of one's thoughts and the execution of one's plans play a role in the completeness of one's objectives. To obtain better outcomes, systematic thinking toward an efficient, future-proof solution is a required talent (Thawatchai, 2020).
More often than not, people use video games only for entertainment. We intend to change that by making a video game provide not just entertainment but, we hope, an improvement in the player's thinking skills, using the format of a video game, which is not only attractive and easily accessible to players but also a rapidly growing industry (Phitchakorn, 2020; NALISA, 2021). We analyze, design, and develop a game whose purpose is to help improve how the player thinks while remaining enjoyable to play.
Methodology
Tools and Equipment
1. Git
2. GitHub
3. Godot Game Engine
4. Visual Studio Code
5. Inkscape
6. After Effects
7. FFmpeg
8. Animation Recorder
9. ImageMagick
10. Freesound.org
11. Audacity

Operation Plan
1. Designing
a. Game Mechanics
• How does the game work?
• How will the player interact with the game?
• What is the win/lose condition?
• What is the challenge?
b. Look and Feel
• Sprites
• Sound/Music
• Animation
c. Designing Levels
• How many levels should the game have?
• Difficulty
• Teaching the players how the game works
• Win condition for each level
• Available resources
d. Backstory/Plot
2. Developing
3. Play Testing/Fixing Glitches
Results, Discussion, and Conclusion
Game Mechanics
Players interact with the game using their mouse to drag and drop objects such as the machines; this is intuitive and easy to understand for players with and without gaming experience. The drag-and-drop mechanics are modified heavily from the git repository "godot_tutorial_content" by the user bramreth on GitHub. It works by detecting mouse clicks on the collision shape of every object that can be dragged and dropped. After a click, the game moves the object's position to the mouse position every frame, until the player releases the mouse button, at which point the game snaps the object to the nearest rest zone (a place where a machine can snap to or be dragged from), using linear interpolation (lerp) to get a smooth animation.
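The game implements this in GDScript inside Godot; the snippet below restates the described logic in Python purely for illustration: follow the cursor while the button is held, then lerp toward the nearest rest zone each frame.

```python
# Python restatement of the drag-and-drop snap logic described above
# (illustrative only; the game's version is GDScript).

def lerp(a, b, t):
    """Linear interpolation: move fraction t of the way from a to b."""
    return a + (b - a) * t

def update_machine(obj, mouse, rest_zones, dragging, t=0.25):
    """Per-frame update for one draggable machine."""
    if dragging:
        obj["x"], obj["y"] = mouse           # the object sticks to the cursor
    else:
        # After release, ease toward the nearest rest zone for a smooth snap.
        zx, zy = min(rest_zones,
                     key=lambda z: (z[0] - obj["x"]) ** 2 + (z[1] - obj["y"]) ** 2)
        obj["x"] = lerp(obj["x"], zx, t)
        obj["y"] = lerp(obj["y"], zy, t)
```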
Look and Feel
Sprites are designed to be simplistic and easy to recognize. They are made as vector art in the Scalable Vector Graphics (SVG) format using the program Inkscape.
The animation takes disassembled sprites and manipulates their position, scale, and rotation in a 1-second loop at 60 frames per second. Then we make a sprite sheet out of every frame of one loop of the animation, to further improve loading efficiency and compatibility.
Most sound effects are taken from Freesound.org; some are edited using Audacity, and all are then converted to the Vorbis sound format using FFmpeg.
We commissioned 4 soundtracks for the game from the user ricardofunk via Fiverr. Three of them are used as background music, randomly selected to play in levels; the other is used as the boot-up music and for the game trailer.
Levels
Each level can be divided into 3 parts: HUD, drawer, and grid.
The HUD displays all available resources in every level, including money. It is also where the player controls the game: starting, resetting, and exiting the level.
The drawer is where the player can buy machines; if you change your mind, you can return one for a full refund. The machines available differ from one level to another.
The grid is your factory floor on which to build your factory. It is arranged as a 6-by-8 grid, with 2 tiles providing input and output to and from the warehouse (it supplies the player with resources and receives the finished product), totaling 46 usable spaces for machines.
The grid stores every rest node (a place where a machine can snap to or be dragged from) in a 2-dimensional array, to abstract the positioning of each node and help development.
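A minimal sketch of that rest-node array follows: a 6-by-8 grid of snap positions with two tiles reserved for warehouse input and output, leaving the 46 usable spaces mentioned above. The tile size and the I/O tile positions are assumptions.

```python
# Rest-node grid for the factory floor: 6 columns x 8 rows, two tiles
# reserved for warehouse I/O. Tile size and I/O positions are assumed.
TILE = 64                    # pixels per tile (assumed)
COLS, ROWS = 6, 8
IO_TILES = {(0, 0), (5, 7)}  # warehouse input and output (hypothetical positions)

# 2D array of rest nodes: usable cells hold the pixel position a machine can
# snap to; the I/O cells hold None and cannot take a machine.
grid = [[None if (c, r) in IO_TILES else (c * TILE, r * TILE)
         for c in range(COLS)] for r in range(ROWS)]

usable = sum(node is not None for row in grid for node in row)
print(usable)  # 48 tiles - 2 I/O tiles = 46 usable spaces, as in the game
```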
Conclusion
While developing the game over about 4 months, we found that the game still has some problems and that the direction of the project has steered away from our original vision. However, it could be further developed to meet other target audiences. Other than that, it has met its objective and could help players develop their algorithmic thinking skills.
Discussion
This project was heavily restricted by time constraints, so we had to lower our expectations and the scope of the game, which made the game diverge from our original vision.
Although we developed a standalone game, we could have developed a puzzle mod for another game, for example Minecraft. This might have made development smoother and faster, by requiring less boilerplate code and providing libraries that handle cumbersome tasks we would otherwise have to do ourselves.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS). The
funding of SCiUS is provided by Ministry of Higher Education, Science, Research, and Innovation. This
extended abstract is not for citation.
References
1. Juan L, Ariel M, the Godot community. Godot Engine - Free and open source 2D and 3D game
engine [Internet]. Godot Engine. 2022 [cited 2022 Mar 25]. Available from: https://godotengine.org
2. Juan L, Ariel M, the Godot community. Godot Docs – 3.4 branch [Internet]. Godot Engine
documentation. 2022 [cited 2022 Mar 25]. Available from: https://docs.godotengine.org/en/stable/
3. Linehan C, Bellord G, Kirman B, Morford ZH, Roche B. Learning curves: Analysing pace and
challenge in four successful puzzle games. In: Proceedings of the first ACM SIGCHI annual
symposium on Computer-human interaction in play. New York, NY, USA: ACM; 2014.
4. Mora-Cantallops M. Transhistorical perspective of the puzzle video game genre. In: Proceedings of
the 13th International Conference on the Foundations of Digital Games. New York, NY, USA: ACM;
2018.
5. Nalisa. The 2021 game market: why it is a business opportunity for entertainment media in the digital era (analysis) [in Thai] [Internet]. Marketeer Online. 2021 [cited 2022 Mar 25]. Available from: https://marketeeronline.co/archives/209418
6. Phitchakorn C. Growth factors of the game industry at the national and global levels [in Thai] [Internet]. Digital Economy Promotion Agency. [cited 2022 Mar 25]. Available from: https://www.depa.or.th/en/article-view/growth-factor-gaming-industry
OT2_15_09
Title : Robot Following Human
Field : Technology and Computer
Authors : Mr. Kreetat Duangkura
Mr. Krissanakorn Punsangium
School : PSU.Wittayanusorn School, Prince of Songkhla University
Advisor : Mr. Winai Rattanapol
Ms. Chouvanee Srivisal
Abstract
We built a robot that follows a user holding a remote, using Bluetooth modules. A Bluetooth module enables hardware to send data to, or receive data from, other hardware for further processing or calculation. Code written in the Arduino IDE makes the Arduino board on the remote send values to the Arduino board on the robot for use in calculating the robot's left-right turns. Further code reads an ultrasonic sensor that measures the distance to the person holding the remote, to determine whether the robot's (BO) motors rotate forward or backward. The Bluetooth modules are configured so that one acts as a transmitter (sender) and the other as a receiver (control), sending values from the compass module on the remote to the robot, which compares the angle values so that it faces the direction corresponding to the user's location.
Keywords : Bluetooth module, Compass module
Introduction
Today, robots are a very interesting innovation for humans in an era where technology plays a huge
role.It cannot be said that today's technology has an effect on everyday life. The robot itself is considered a
technology that is very advanced. It will be a robot that helps facilitate various aspects. It is very important to
human life, such as the replacement of labor.It is used by humans to reduce the risk of workplace hazards. used
to reduce working time, so students are interested in studying and developing automatic robot which will be one
of the devices that will help us facilitate
This human-tracking robot is controlled by a remote that transmits data to the robot. It was developed with Arduino UNO microcontroller boards programmed in the Arduino IDE. An ultrasonic sensor controls the drive motors according to the user's distance, while the Bluetooth module on the remote sends (Sender) the compass module's readings to a second Arduino UNO board on the robot, whose own Bluetooth module waits to receive those values (Control) and steers the motors of the mecanum wheels. A hypothetical sketch of the sender side follows.
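As a rough sketch of the sender side described above, the following hypothetical Arduino-style C++ code (our illustration, not the project's actual program) reads a compass heading and streams it over a Bluetooth module wired to the hardware serial port; readCompassHeading() is a stand-in for the real compass-module driver.

// Remote (sender) side: read the compass heading and stream it to the
// robot through a Bluetooth module on the hardware serial port.
float readCompassHeading() {
  return 0.0f;  // stand-in; replace with the compass module's driver call
}

void setup() {
  Serial.begin(9600);  // common default baud rate for hobby Bluetooth modules
}

void loop() {
  Serial.println(readCompassHeading());  // one heading value per line
  delay(100);                            // about 10 updates per second
}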
The aim of this remote-guided, walk-behind robot project is to reduce accidents in various situations; the robot can also serve as a guide and is convenient in other areas as well.
Methodology
1) Write the code in the Arduino IDE. The first code we wrote reads the compass module, which reports the orientation along the x, y, and z axes. We then designed the robot and the remote, mounting a compass module on each.
2) Write the code for the Bluetooth modules, which send the readings of the compass module on the remote (Master) to the robot so that it can calculate the angle through which it must rotate; the Bluetooth module on the robot (Slave) receives this information from the remote.
Figure 1: Diagram of the overall robot operation.
3) Write the code that drives the motors forward, backward, or into left and right turns. We chose mecanum wheels because they can move in all directions, as the figure shows.
4) Write the code for the ultrasonic sensor that orders the robot to move forward or backward; we set the target distance between the user and the robot to 1 m. A minimal sketch of the resulting control loop follows the figure below.
Figure 2: All possible wheel movements.
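As a rough illustration of how the four steps above fit together on the robot side, here is a hypothetical Arduino-style C++ sketch; all pin numbers, thresholds, and the readOwnHeading() stand-in are assumptions for illustration, not the project's actual code.

// Robot (receiver) side: parse the remote's heading from Bluetooth,
// compare it with the robot's own heading to choose a turn direction,
// and use the ultrasonic sensor to hold about 1 m from the user.
const int TRIG_PIN = 9;        // ultrasonic trigger (assumed wiring)
const int ECHO_PIN = 10;       // ultrasonic echo (assumed wiring)
const float TARGET_CM = 100.0; // keep 1 m between user and robot

float readOwnHeading() {
  return 0.0f;  // stand-in for the robot's compass-module driver
}

// Signed smallest angle from the robot's heading to the remote's, in (-180, 180].
float headingError(float robotDeg, float remoteDeg) {
  float d = remoteDeg - robotDeg;
  while (d > 180.0f)   d -= 360.0f;
  while (d <= -180.0f) d += 360.0f;
  return d;
}

float readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  // Echo time (us) to distance: sound travels ~0.034 cm/us, halved for the round trip.
  return pulseIn(ECHO_PIN, HIGH) * 0.034f / 2.0f;
}

void setup() {
  Serial.begin(9600);  // Bluetooth link from the remote
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  if (Serial.available()) {
    float err = headingError(readOwnHeading(), Serial.parseFloat());
    if (err > 10.0f)       { /* turn right toward the user */ }
    else if (err < -10.0f) { /* turn left toward the user */ }
  }
  float d = readDistanceCm();
  if (d > TARGET_CM + 10.0f)      { /* drive the BO motors forward */ }
  else if (d < TARGET_CM - 10.0f) { /* drive the BO motors backward */ }
}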
Results
1) Robot movement
1.1) Turning left and right
This experiment measured the width of the angle swept as the robot makes a turn, comparing the wheel angle at the starting point with the angle to which the wheel turned, using a pulse-width-modulation duty cycle of 6.7 % fed into the servo motor.
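As a side note on the duty-cycle value, the fragment below shows one plausible way (our assumption, not the project's code) to convert a duty cycle in percent, such as the 6.7 % used here, to the 0-255 scale of Arduino's analogWrite(); the pin number is hypothetical.

const int MOTOR_PWM_PIN = 5;  // assumed PWM-capable pin

// Convert a duty cycle in percent to analogWrite()'s 0-255 range.
int dutyToPwm(float dutyPercent) {
  return (int)(dutyPercent * 255.0f / 100.0f + 0.5f);
}

void setup() {
  pinMode(MOTOR_PWM_PIN, OUTPUT);
  analogWrite(MOTOR_PWM_PIN, dutyToPwm(6.7f));  // about 17 of 255
}

void loop() {}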
Figure 3: Chart of left and right turns.
1.2) Ultrasonic sensor distance measurement
We then tested how accurately the ultrasonic sensor measures distance. Each of five set distances (60 cm, 100 cm, 150 cm, 200 cm, and 250 cm) was measured 10 times.
Figure 4: Chart of ultrasonic sensor distance measurements.
2) Human-tracking experiments
Figure 5: Chart of the robot's human-tracking experiments.
Conclusion
The robot moves forward and backward according to the user's distance exactly as expected, and its left-right turning works satisfactorily. However, the robot sometimes cannot turn toward the user because the Bluetooth module occasionally drops its connection on its own; this may need to be resolved by turning the system off and on again so that Bluetooth reconnects, or by using a more stable module.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS) under Prince of Songkhla University and PSU.Wittayanusorn School. The funding of SCiUS is provided by the Ministry of Higher Education, Science, Research and Innovation, which is highly appreciated. This extended abstract is not for citation.
References
Pornchalermpong, P., 2022. Ultrasonic sensor (sound-type / ultrasonic sensor) - Food Wiki | Food Network Solution. [online] Foodnetworksolution.com. Available at: <http://www.foodnetworksolution.com/wiki/word/4348/ultrasonic-sensor-เซนเซอร์ชนิดใช้เสียง-หรือเซนเซอร์ชนิดอัลตราโซนิก> [Accessed 25 December 2021].
Title : Qubit Visualization with VR OT2_09_07
Field : Technology and Computer
Author : Mr. Sippapas Charoenkul
Mr. Thitibhat Rittikulsittichai
Ms. Woradee Chonnapastid
School : Darunsikkhalai School, KMUTT (King Mongkut’s University of Technology Thonburi)
Advisor : Assoc. Prof. Dr. Siam Charoenseange
Dr. Tanapat Deesuwan
Dr. Ekapong Hirunsirisawat
Abstract :
The development of quantum computing technology is advancing rapidly and has the potential to play a
significant role in the future due to its performance in numerical calculations theoretically predicted to be several
times faster than classical computers. Quantum computers may be used in solving many major problems, for
instance, molecular structure simulation of medicine or economic analysis for the most efficient and effective
business strategy. Moreover, quantum theory is fundamental to the present technologies around us. Despite its importance, understanding of quantum principles has been hindered by their difficulty and counter-intuitive phenomena. Therefore, we developed learning materials on the basics of quantum theory and quantum bit operations
using virtual reality (VR) technology. The goal is to increase people's interest in and understanding of quantum
computing technology. Our VR-based learning media has been developed using Unity to create the virtual
environment and Blender to design objects for stimulating interactive engagement such as the Bloch sphere and
quantum gate. To measure the learning efficiency from playing the VR for visualizing qubits, a well-designed set
of pretest and posttest is done by participants, and then the collected data are analyzed to evaluate their
improvements in comprehension and satisfaction. These results can help us to develop further the learning media.
Keywords : virtual reality, Bloch sphere, quantum computer, quantum bit, learning media
Introduction
At present, humans can meet many of their needs by using classical computers, but there are some problems that
classical computers cannot solve efficiently. Therefore, scientists have been developing quantum computers to
make processing faster. This new type of technology relies on quantum mechanics to process data, estimated to
have much higher processing power than classical computers. A quantum computer differs from a classical one in
the basic unit of information. Bits are the basic unit of information in classical computers, and they can only be in
one state at a certain time, whereas qubits can be in a superposition of states, which changes how information is stored. A qubit state is represented on the Bloch sphere in the form [1]
$$|\psi\rangle = \cos\!\left(\tfrac{\theta}{2}\right)|0\rangle + e^{i\varphi}\sin\!\left(\tfrac{\theta}{2}\right)|1\rangle .$$
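As a small numerical illustration of this parametrization (our addition, not part of the original abstract), the C++ fragment below computes the two amplitudes and the measurement probabilities for given Bloch angles theta and phi:

#include <cmath>
#include <complex>
#include <cstdio>

int main() {
  using cd = std::complex<double>;
  const double PI = std::acos(-1.0);
  double theta = PI / 2.0, phi = 0.0;             // a point on the Bloch sphere equator
  cd a0 = std::cos(theta / 2.0);                  // amplitude of |0>
  cd a1 = std::polar(std::sin(theta / 2.0), phi); // e^{i*phi} sin(theta/2), amplitude of |1>
  // Measurement probabilities are the squared magnitudes of the amplitudes.
  std::printf("P(0) = %.3f, P(1) = %.3f\n", std::norm(a0), std::norm(a1));
  return 0;
}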
In addition, the operation of quantum computing is completely different from classical computers. Classical
computers can only process one possibility at a time, but quantum computers can process all possibilities at the
same time. Quantum computers can be much faster than classical computers, and they have the potential to change
technology in today's world dramatically. Those who understand qubits and how quantum computers function will be able to create opportunities for themselves. In this work, we used virtual reality (VR) as a learning material to explain how qubits work, allowing users to interact with the content and understand it more quickly, as well as preparing them for this technology in the near future.
Methodology
Our project is divided into 3 main parts.
1. Studying and searching data
We studied content related to
• Bloch sphere [1]
• Qubit [2]
• Superposition [3]
• Quantum gate [4]
• Quantum computer [5]
• Research using VR to present information [6 - 9]
2. Program Design and Development
We started by designing the user interface. This includes separating our content into 3 main learning areas
consisting of quantum computer, superposition, and quantum gate, as shown in Figure 1. Next, we refined the content that we collected so that the participants can easily understand it.
Figure 1: The main room, where the user chooses a topic to study and is directed to the corresponding room.
The three-dimensional illustration of a quantum computer was included for the participants to view along with
reading the content, as shown in Figure 2.
Figure 2: Room 1 presents the basics of quantum physics as well as quantum computer content, with illustrations.
Participants can interact with the quantum bit model to understand how superposition and measurement change a quantum bit, as shown in Figure 3.
Figure 3: Room 2 presents quantum superposition together with the Bloch sphere model.
Simulations of quantum gates are displayed with a puzzle that participants solve using quantum gates and quantum bits, helping them understand more effectively how quantum gates transform quantum bits, as shown in Figure 4.
Figure 4: Room 3 presents the quantum gate content and shows how qubits change as they pass through a quantum gate.
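To make the gate operation concrete, here is a minimal stand-alone C++ sketch of the underlying mathematics (our illustration; the project itself presents this visually in Unity): a quantum gate is a 2x2 unitary matrix, here the Hadamard gate, applied to a qubit that starts in |0>.

#include <cmath>
#include <complex>
#include <cstdio>

using cd = std::complex<double>;
struct Qubit { cd a0, a1; };  // amplitudes of |0> and |1>

// Applying a gate multiplies the amplitude vector by a 2x2 unitary matrix.
Qubit apply(const cd g[2][2], const Qubit& q) {
  return { g[0][0] * q.a0 + g[0][1] * q.a1,
           g[1][0] * q.a0 + g[1][1] * q.a1 };
}

int main() {
  const double s = 1.0 / std::sqrt(2.0);
  cd H[2][2] = {{s, s}, {s, -s}};  // Hadamard gate
  Qubit q{1.0, 0.0};               // start in |0>
  q = apply(H, q);                 // now an equal superposition of |0> and |1>
  std::printf("P(0) = %.2f, P(1) = %.2f\n", std::norm(q.a0), std::norm(q.a1));
  return 0;
}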
3. Evaluating the functionality of the application
After the final review of the VR, participants evaluated the application with 3 sets of tests: a comprehension test before using the VR, a comprehension test after using the VR, and a satisfaction test. The application's success was assessed by comparing the scores obtained on the two comprehension tests.
Results
A sample of 10 high school students aged 15 to 18 performed the pre-test and post-test. The average pre-test score was about 4.1 points and the average post-test score about 6.3 points, an increase of (6.3 − 4.1)/4.1 ≈ 53.66%. Looking at the scores of each part, the largest increase was in the quantum gate section (66.67%), followed by the superposition part (about 64.5%); the smallest increase was in the basics of quantum physics (about 29.41%).
Conclusion
We developed a VR application to be used as learning material on the basics of quantum theory. Our VR
learning material is designed to be interactive to increase participants’ understanding and enjoyment. Participants
can choose a room where they want to learn by entering the main room. The materials are divided into 3 rooms
consisting of the basics of quantum theory, superposition, and quantum gate. Each room has its unique features
such as a quantum computer 3D model and Bloch sphere to help participants comprehend the contents. Our project
used a sample of 10 high school students aged 15 to 18, but, due to the Covid-19 situation, we could not have participants test on the VR device directly. To virtually investigate the learning efficacy of
experiencing our media, participants did the comprehension test on qubits before and after reading the application’s
contents to make sure the contents we wrote are understandable. We hope to continue our project and directly test
participants on VR devices for more accurate results.
Acknowledgment
This project was supported by Science Classroom in University Affiliated School (SCiUS) under King Mongkut's University of Technology Thonburi and Darunsikkhalai School. The funding of SCiUS is provided by the Ministry of Higher Education, Science, Research, and Innovation, which is highly appreciated. This extended
abstract is not for citation.
References
[1] Alain Michaud, 2004, Bloch sphere [Online], Available: https://en.wikipedia.org/wiki/Bloch_sphere/. [2021, 10 July]
[2] Jeffrey Yepez, 2013, Lecture notes: Qubit representations and rotations [Online], Available: https://www.phys.hawaii.edu/~yepez/Spring2013/lectures/Lecture1_Qubits_Note.pdf. [2021, 30 May]
[3] Michal Hardy, 2019, Quantum Superposition [Online], Available: https://en.wikipedia.org/w/index.php?title=Quantum_superposition&action=history. [2021, 14 June]
[4] Charles Matthews, 2004, Quantum Gate [Online], Available: https://en.wikipedia.org/wiki/Quantum_logic_gate. [2021, 19 June]
[5] The Jupyter Book Community, n.d., Learn Quantum Computation using Qiskit [Online], Available: https://qiskit.org/textbook/ch-labs/Lab08_QEC1.html. [2021, 30 May]
[6] Jessica Francis, et al., 2019, "Augmented Versus Virtual Reality in Education", ResearchGate [Online], pp. 1-5, Available: https://www.researchgate.net/profile/Kuo-Ting-Huang/publication/330490113_Augmented_Versus_Virtual_Reality_in_Education_An_Exploratory_Study_Examining_Science_Knowledge_Retention_When_Using_Augmented_RealityVirtual_Reality_Mobile_Applications/links/6005d96192851c13fe1f2724/Augmented-Versus-Virtual-Reality-in-Education-An-Exploratory-Study-Examining-Science-Knowledge-Retention-When-Using-Augmented-Reality-Virtual-Reality-Mobile-Applications.pdf?origin=publication_detail. [2021, 4 August]
[8] Devon Allcoat and Adrian von Muhlenen, 2018, "Learning in virtual reality", Association for Learning Technology [Online], Vol. 26, No. 1, pp. 1-8, Available: https://www.researchgate.net/publication/329292469_Learning_in_virtual_reality_Effects_on_performance_emotion_and_engagement. [2021, 3 August]
[9] Nattapong Pralabraksa, 2016, Development of multimedia in virtual reality (VR) format to publicize tourist attractions in Maha Sarakham Province [Online], Available: http://research.rmu.ac.th/rdi-mis//upload/fullreport/1615018193.pdf. [2021, 24 July]
Title : Text Generation by Artificial Intelligence OT2_01_01
Field : Technology and Computer
Author : Mr.Chananan Chaichanan
Mr.Kampanat Yingseree
Ms.Kanjanapond Sukonthachart
School : Chiang Mai University Demonstration School, Chiang Mai University
Advisor : Asst. Prof. Dr. Jakarin Chawachat
Department of Computer Science, Chiang Mai University
Abstract
Language is an art. It is a tool we use to communicate with each other. In addition, different
language use experiences result in a variety in writing. Our team foresees that artificial intelligence models
may be able to create literature, so in this project, we aim to experiment with creating an artificial intelligence
model that can generate text using Sequence to Sequence model.
In this experiment, we used ‘The Project Gutenberg eBook of Grimms’ Fairy Tales’ by Jacob
Grimm and Wilhelm Grimm as the dataset to teach three classes of AI models: GRU with no word embedding,
GRU with word embedding (including such models with no attention, with Dot Product attention, and with
Bahdanau attention), and Transformer. Then, results were collected by testing 50 sentences to see what each model outputs and by measuring the outputs with the perplexity value.
From the experiment with 50 sample input sentences, it was found that the sentence
characteristics obtained from the GRU (no word embedding) model could be read as a sentence, albeit with
some words misspelled. On the other hand, the GRU with Bahdanau attention and Dot Product attention result
in repetitions in the output. Additionally, the GRU with no attention model’s result can be read as sentences.
However, the product of the Transformer model is the most readable of all.
The mean perplexity values of the GRU with no word embedding, GRU with Bahdanau attention, GRU with Dot Product attention, GRU with no attention, and Transformer models are 5.2654, 5.7908, 6.0645, 6.1444, and 5.2447, respectively. The numbers illustrate that the GRU (No Embedding) and
Transformer models are approximately the same, indicating that they can produce syntax-like outputs in the
Language Model's dataset. However, perplexity is not a measure of cohesion. When considering the connection of the story in a human-readable way, the Transformer model results in more cohesion than the GRU (No Embedding) model, as the Transformer model includes attention and word embedding, which help it represent word meaning better than the GRU (No Embedding) model. However, none of the three classes of models was able to create sentences more complex than those in the dataset, a point that should be studied and developed further.
Keyword : Text Generation, GRU, Sequence to Sequence, Transformer, Attention
Introduction
Language is an art. It is a tool we use to communicate with each other. Sometimes, in creating
sentences with the same meaning, each human uses different words. Different language use experiences, thus,
result in a variety in writing.
Artificial intelligence is a tool that mimics the human mind, but it is difficult for computers to understand human language. We foresee that if computers acquire this skill in the future, they will be able to produce computer-generated literary works, which may increase market competitiveness, since demand for writing among consumers is growing while creating literary works remains a time-consuming task.
As mentioned above, it was interesting to experiment with computers to create sentences.
Therefore, the purpose of this project is to experiment with creating a model that can generate text following
what the user enters. This is so that one day it may be further developed into a model that can create writings
in the future. The GRU and Transformer models are used in this project.
Methodology
This experiment uses the Python programming language and comprises the following steps:
1. Choosing the dataset for model training: for this experiment ‘The Project Gutenberg eBook of Grimms’
Fairy Tales’ by Jacob Grimm and Wilhelm Grimm was selected.
2. Data cleansing
3. AI model creation, including GRU with no word embedding, GRU with word embedding (with no
attention, with Dot Product attention, with Bahdanau attention), and Transformer model.
4. Collecting 50 sample outputs from each model and measuring their perplexity value.
To measure the perplexity value, we first need a language model; in this experiment we use the 3-gram LibriSpeech language model from OpenSLR.
Perplexity is computed from the joint probability of the sentence: the probability of each word given its n-gram context (the preceding word(s)) is multiplied over the sentence, and the result is normalized by the sentence length. The n-gram language model provides the probability distribution over all groups of n adjacent words in the dataset on which it was trained. The calculations may be smoothed in cases where a probability is 0. (A code sketch of this computation follows the list below.)
The perplexity is calculated as follows:
$$PP(W) = \left[\,\prod_{i=1}^{N} P\!\left(w_i \mid w_{i-n+1},\ldots,w_{i-1}\right)\right]^{-1/N}$$
in which $N$ is the number of words in the sentence and $w_i$ is the $i$-th word in the sentence.
5. Result analysis, summary, and discussion
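Assuming the per-word log10 probabilities have already been obtained from the n-gram language model, a minimal C++ sketch of the perplexity computation above could look as follows; the probability values in main() are toy numbers for illustration, not project data.

#include <cmath>
#include <cstdio>
#include <vector>

// Perplexity from the log10 probabilities P(w_i | context) that an
// n-gram language model assigns to each word of a sentence:
// PP(W) = 10^( -(1/N) * sum_i log10 P(w_i | context) )
double perplexity(const std::vector<double>& log10P) {
  double sum = 0.0;
  for (double lp : log10P) sum += lp;
  return std::pow(10.0, -sum / (double)log10P.size());
}

int main() {
  std::vector<double> lp = {-0.70, -1.20, -0.90, -1.50};  // toy values
  std::printf("perplexity = %.4f\n", perplexity(lp));
  return 0;
}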
Results
All models were given the same initial text: "An honest king has picked up some money".

MODEL: GRU (no word embedding) — OUTPUT'S PERPLEXITY: 5.0421
OUTPUT: An honest king has picked up some money for his. However, they all set out on their journey together, and when he awoke the next morning he observed and asked of his money. ‘No, you are to cell that he did,’ said the sparrow, ‘you shall have some more if you will; so come with me.’ Then Goet all the same, flew away

MODEL: GRU with Bahdanau attention — OUTPUT'S PERPLEXITY: 4.7206
OUTPUT: An honest king has picked up some money at at at at at at at at at at at at at at at at at at at at … at

MODEL: GRU with Dot Product attention — OUTPUT'S PERPLEXITY: 5.8669
OUTPUT: An honest king has picked up some money mind of his three years . but one at last he was roaming long of long as long as

MODEL: GRU with no attention — OUTPUT'S PERPLEXITY: 5.8669
OUTPUT: An honest king has picked up some money i am king ? the king replied the king s daughter again called to him that one of the

MODEL: Transformer — OUTPUT'S PERPLEXITY: 5.1950
OUTPUT: An honest king has picked up some money could not earn enough to live upon and at last all he had in the world wheels of wood , at last it was all in a wood . at last it was a ! for the mouse , a soft voice cried come ! oh , yes , he has taken tom !

Figure 1: Example results obtained from each model and the output perplexity.
         GRU (No      GRU with       GRU with Dot   GRU with no   Transformer
         Embedding)   Bahdanau att.  Product att.   attention
x̄        5.2654       5.7908         6.0645         6.1444        5.2447
S.D.     0.4235       0.5830         0.4067         0.3188        0.1149
C.V.     8.04%        10.07%         6.71%          5.19%         2.19%
Min      4.8509       4.7206         4.7785         5.7819        5.1512
Max      7.4045       7.4287         7.0007         7.1277        5.6971
Figure 2: The table displays the average, standard deviation, coefficient of variation, and minimum and maximum perplexity values measured in the sample output of each model.
The output of the GRU (no word embedding) model could be read as a sentence, albeit with some words misspelled. On the other hand, the GRU with Bahdanau attention and the GRU with Dot Product attention produce repetitions in the output. Additionally, the GRU with no attention model's result can be read as sentences. However, the output of the Transformer model is the most readable of all.
Discussion and Conclusion
From the results, it can be observed that the mean perplexity of the GRU(No Embedding) model
and the Transformer model are 5.2654 and 5.2447, respectively, which are marginally different, meaning that
the two models can generate an output with similar grammatical structures to the dataset language model
trained. However, when considering cohesion, the Transformer model outperforms the other models, as it uses attention and word embedding, which help the computer understand the meaning of words better than the GRU (No Embedding) model, which uses neither attention nor word embedding.
At the same time, from 50 sample sentences, one sentence output in the GRU with Bahdanau
attention model measures perplexity of 4.7240, which is a small value compared to other values. When the
sentence “The countryman held his leathern cap to milk” is input, the output is “The countryman held his
leathern cap to milk to be to be to be to be to be to be to be to be to be to be to be to be to be to be to be to be
to be to be to be to be … to be”. From the human perspective, this shows repetition and incohesion, but the
reason the perplexity is low is the trigram model: the model considers sets of three consecutive words and assigns high probability to patterns that repeat in the language model dataset. For example, "to be to" and "be to be" both exist in the data. Perplexity, thus, does not indicate sentence cohesion; rather, it tells how similar the grammar of the output is to that of the language model dataset.
Acknowledgements
This project was supported by Science Classroom in University Affiliated School (SCiUS)
under Chiang Mai University Demonstration School. The funding of SCiUS is provided by the Ministry of Higher Education, Science, Research and Innovation. In addition, we would like to express our sincerest
gratitude to Asst. Prof. Dr. Jakarin Chawachat for his kind advice and guidance. The extended abstract is not
for citation.
References
1. Text generation with no attention. (2021, June 26). Text generate with RNN.
http://www.tensorflow.org/text/tutorials/text_generation
2. Sequence to Sequence. (2021, October 3). Neural machine translation with attention. http://www.tensorflow.org/text/tutorials/nmt_with_attention
3. Text generation with Transformer. (2021, December 19). Transformer model for language understanding. http://www.tensorflow.org/text/tutorials/transformer
4. Language modelling resources. (2022, May 21). For use with the LibriSpeech ASR corpus. https://www.openslr.org
Title : Reducing Snoring Pillow OT2_15_07
Field : Technology and Computer
Authors : Mr.Kritabhas Nuprakob
Mr.Denphum Thongtakuk
School : PSU.Wittayanusorn School, Prince of Songkla University
Advisors : Asst. Prof. Dr. Pattara Aiyarak, Faculty of Science
T.Wareerat Pumitummarat
Abstract:
Reducing Snoring Pillow is a science project in the field of technology and computers. Its objective is to create a pillow that reduces a sleeper's snoring by combining several technologies. Snoring is caused by the slackening of the muscles around the pharynx and the back wall of the throat during sleep; this partially obstructs the respiratory tract, making the tissue vibrate and produce the snoring sound.
We decided to use the noise and the vibration at the nape of the neck to determine whether the sleeper is snoring. Pressure-sensing devices determine where the user is lying on the pillow before air is pumped into an airbag installed under the pillow, so that the pillow gradually rises and the sleeper changes head position while lying down. This opens the structure of the sleeper's airway, and the snoring sound disappears without waking the sleeper.
The project found that, for volunteers with normal snoring problems, comparing sleep with and without the Reducing Snoring Pillow showed a decrease in the frequency of snoring and a tendency to improve, without causing the sleeper to wake. The device can also be used with pillows of other shapes and materials without affecting its performance or compromising the sleeper's comfort, making the Reducing Snoring Pillow suitable for snorers of all ages and usable with all types of pillows.
Keywords: Reducing Snoring Pillow, Sleep quality, Snoring, Normal snoring
Introduction:
Sleep is no less important to living beings than eating or breathing fresh air: humans spend up to a third of our lives sleeping. Sleep is when the body's organs, especially the cardiovascular system, rest; while humans sleep, the body also repairs wear and tear and rebalances its various chemicals.
Studies have shown that people who sleep insufficiently for a long time suffer negative effects on their health and immune system, raising the risk of illness or death above normal.
Snoring is one of the most common sleep problems in families. According to 2019 statistics from the Thai Health Promotion Foundation, about 25% of Thai people snore regularly, and about 5%, or 3 million people, also have sleep apnea. The snoring sound is caused by the vibration of breath passing through a narrowed and reshaped upper airway; early-stage snoring is not dangerous but can lead to sleep apnea.
For these reasons, we became interested in applying technology to help solve this sleep problem: a device that can be shaped to fit inside a pillow. It adjusts the sleeper's posture, helping open the narrowed airway so that the snoring sound disappears, which improves sleep quality for the sleeper and for those who sleep nearby.
Methodology
This process is divided into 3 parts, as follows.
Part 1: Study physiology and general sleeping habits to determine the location for installing the device.
Figure 1: Example of placing your head on a pillow.
From the pictures and information above, one can see that sleepers generally hold a similar head position: the head rests in the middle of the pillow, while the nape of the neck, where snoring originates, lies at the lower edge of the pillow.
We therefore placed the devices that receive the various signals in the area expected to pick up the strongest values from the sleeper. The area is divided into 4 zones, as follows.
Figure 2: Shows the working location of the device.
Part 2: Write the code to get the device working properly.
2.1 Overview of the pillow.
Figure 3: An overview outline of the pillow.
Part 1: The devices outside the pillow. These are all mounted in a box covered with sound-absorbing material to reduce noise while the system is running.
Part 2: The devices inside the pillow. These collect the data (sound, vibration, and pressure) and send it to the MCU, where code distinguishes whether the sleeper is snoring and determines where on the pillow the sleeper is lying, then acts case by case.
Figure 4: Outline of the pillow's working steps.
2.2 The pillow's operation is divided into 4 steps, as follows (a code sketch follows the list).
Step 1: Determine whether the sleeper is snoring.
Step 2: Determine where the sleeper's head lies on the pillow.
Step 3: Process the data values through the code written in the Arduino MCU.
Step 4: Blow air through the air pump into the airbag under the pillow.
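The following hypothetical Arduino-style C++ sketch illustrates this four-step logic; all pin numbers, thresholds, and timings are assumptions for illustration, not the project's actual code.

// Detect snoring from sound + vibration, locate the head from the four
// pressure zones, then run the air pump to inflate the airbag slowly.
const int SOUND_PIN = A0;
const int VIBRATION_PIN = A1;
const int PRESSURE_PINS[4] = {A2, A3, A4, A5};  // the four pillow zones
const int PUMP_PIN = 7;                         // relay driving the air pump

const int SNORE_SOUND_LEVEL = 500;  // assumed analog thresholds
const int SNORE_VIB_LEVEL = 300;
const int HEAD_PRESSURE = 200;

void setup() {
  pinMode(PUMP_PIN, OUTPUT);
}

// Step 1: sound and vibration together indicate snoring.
bool isSnoring() {
  return analogRead(SOUND_PIN) > SNORE_SOUND_LEVEL &&
         analogRead(VIBRATION_PIN) > SNORE_VIB_LEVEL;
}

// Step 2: the zone with the highest pressure is where the head lies.
int headZone() {
  int best = -1, bestVal = HEAD_PRESSURE;
  for (int i = 0; i < 4; i++) {
    int v = analogRead(PRESSURE_PINS[i]);
    if (v > bestVal) { bestVal = v; best = i; }
  }
  return best;  // -1 if no head is detected
}

void loop() {
  // Steps 3-4: if the sleeper snores and the head zone is known,
  // inflate gradually so the sleeper shifts position without waking.
  if (isSnoring() && headZone() >= 0) {
    digitalWrite(PUMP_PIN, HIGH);
    delay(3000);  // inflate for a few seconds
    digitalWrite(PUMP_PIN, LOW);
  }
  delay(1000);
}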
Part 3: Test the pillow with users and collect data from the experiment.
The pillow was tested on sleepers with a normal stage of snoring for one night, and the sessions were videotaped to assess the pillow's effectiveness in reducing the sleepers' snoring problems.
Figure 5: QR code showing the actual pillow in operation.
Results
The results of the test are divided into 2 parts as follows.
Part 1: System Operations.
Over the one-night test, the system processed the sensor data and accurately determined where the user was lying on the pillow.
Part 2: The working of the airbag.
As the airbag inflates, the sleeper's head is slowly pushed up. This encourages the sleeper to move the head and change sleeping position on their own, so the snoring sound disappears without disturbing the sleeper.
Conclusion
From the analysis and design of the snoring-reduction pillow, based on studies of the causes of snoring and of the most common sleeping postures and head positions, and from a one-night test with users who have snoring problems, we found that the Reducing Snoring Pillow can help reduce users' snoring problems and lower their rate of snoring. The device can be used by sleepers weighing up to 100 kg and does not make the sleeper feel pain after use.
In addition, the snoring-reduction device can also be installed and used in pillows of other shapes and materials without affecting the performance of the pillow and equipment or compromising the sleeper's comfort.
Acknowledgments
This project was supported by Science Classroom in University Affiliated School (SCiUS) under Prince
of Songkhla University and PSU.Wittayanusorn School. The funding of SCiUS is provided by the Ministry of
Higher Education, Science, Research and Innovation. This extended abstract is not for citation.
References
Dr. Worawat Suwannaruxa. What causes snoring, is it dangerous, and which hospital should treat it [Internet]. 2017 [cited 2021 Aug. 26]. Available from: https://www.nksleepcenter.com/snoring-sleep-apnea/
SnoreLab. Sleeping Position and Snoring [Internet]. 2018 [cited 2021 Aug. 26]. Available from: https://www.snorelab.com/sleeping-position-and-snoring/
Dr. Worawat Suwannaruxa. Getting to know sleep levels and the sleep cycle [Internet]. 2020 [cited 2021 Oct. 15]. Available from: https://www.nksleepcenter.com/sleep-cycle/
Paraya Assanasen. Snoring and sleep apnea [Internet]. 2005 [cited 2021 Aug. 26]. Available from: https://www.siphhospital.com/th/news/article/share/obstructive-sleep-apnea
Kanlaya Panjapornphon. Assorted sleep problems [Internet]. 2016 [cited 2021 Aug. 26]. Available from: http://sasuksure.anamai.moph.go.th/file/6d7f625e-c5bc-41e3-a302-ff851a5d9619/preview
Nanta Maranetr. 108 problems of people who snore [Internet]. 2013 [cited 2021 Aug. 26]. Available from: http://excellent.med.cmu.ac.th/meccmu/wp-content/uploads/2020/07/108-ปัญหาของคนนอนกรน.pdf
ORGANIZED BY
Faculty of Science, Thaksin University