Thailand – Japan Student ICT Fair 2022
“Seeding Innovations Through Fostering Thailand-Japan Youth Friendship”
21 – 23 December 2022
Thailand - Japan Student ICT Fair 2022
Princess Chulabhorn Science High Schools are proud to host the Thailand-Japan Student
ICT Fair 2022 (TJ-SIF 2022) from December 21 to 23, 2022 at Princess Chulabhorn Science
High School Chiang Rai.
This collaborative international event is held under the TJ-SIF 2022 motto, “Seeding
Innovations Through Fostering Thailand-Japan Youth Friendship,” and will be graciously
presided over by the Deputy Minister of Education, who has been the driving force in
promoting education for all Thai youths, especially the learning of science and technology.
The goal of this Thailand - Japan Student ICT Fair 2022 is to bring together groups of talented
students in science and technology, from Thailand and Japan, to share and exchange their
research findings and ICT projects to build closer and stronger collaboration between the two
countries.
Thirty-seven schools from Thailand, seventeen Super Science High Schools, and eleven
KOSEN colleges will participate in TJ-SIF 2022.
The Thailand-Japan Student ICT Fair 2022 will strengthen the cordial bilateral relations
between Japan and Thailand. The event will not only provide a platform for exchange in
science and technology between these like-minded young scientists, but will also act as a
springboard for the sustainable development and promotion of many more 21st-century skills,
ranging from communication and collaboration to long-lasting friendships among youths.
Thank you for being part of our dreams. We truly hope that the Thailand - Japan Student ICT
Fair 2022 will be a rewarding challenge for students and will foster stronger friendship among
the new generation of citizens of Japan and Thailand.
Welcome to the Thailand - Japan Student ICT Fair 2022. We hope you are ready to share,
learn, and enjoy!
Table of Contents
Thailand - Japan Student ICT Fair 2022 I
Table of Contents II
Congratulatory Message from the Ambassador of Japan to Thailand III
Congratulatory Message from Minister of Education of Thailand IV
Congratulatory Message from Deputy Minister of Education of Thailand V
Congratulatory Message from Secretary General of Office of Basic Education Commission VI
Keynote Speakers VII
Abstract of Contributed Paper: Oral and Poster Presentation 1
Contents 2
Section A: Artificial Intelligence and Machine Learning 12
Section B: Games and Virtual Reality 40
Section C: Intelligent Devices Robot and IoT 66
Section D: Software and Application 94
Section E: Other Topics in Information Technology 122
Author Index 150
Contributors for TJ-SIF 2022 156
Congratulatory Message from the Ambassador of Japan to Thailand
On behalf of the Government of Japan, it is a great pleasure to see the Thailand-Japan Student
ICT Fair 2022 (TJ-SIF 2022) held once again, an event that confirms the close and strong bond
between Thailand and Japan and marks another step forward in science and technology
education. It is obvious that we have already made significant progress through the gathering
of students and teachers from the Super Science High Schools of Japan, the KOSEN Institutes
of Japan, and the Princess Chulabhorn Science High Schools of Thailand at this event, the
Thailand – Japan Student ICT Fair 2022.
I would like to congratulate all students from both Japan and Thailand on the creativity shown
in their ICT projects. I sincerely believe that this new generation of students will play a vital
role in developing their countries in the future.
I hope TJ-SIF 2022, which is based on cooperative work and new ways of diffusing knowledge
through science and technology, will further motivate both Japanese and Thai students to
expand their golden opportunities in higher education and contribute to the development of
relevant fields.
Finally, may I express my appreciation for the excellent arrangement and hospitality of the
Thailand - Japan Student ICT Fair 2022 (TJ-SIF 2022) at Princess Chulabhorn Science High
School Chiang Rai. Last but not least, I would like to express my great respect to all concerned
with this great event.
H.E. Mr. NASHIDA Kazuya
Ambassador of Japan to Thailand
Congratulatory Message from Minister of Education of Thailand
I am pleased to extend my heartfelt congratulations to everyone here today at the Thailand –
Japan Student ICT Fair 2022, held between Japan and the Princess Chulabhorn Science High
Schools.
The Princess Chulabhorn Science High Schools have cooperated and developed relationships
with many Super Science High Schools, KOSEN institutes, and other academic agencies in
Japan. The goal has been to encourage Thai and Japanese students to enhance their science,
mathematics, and technology skills, and to prepare them for the challenges of the 21st century.
The event will also provide opportunities for students and teachers to demonstrate their
potential in technology, giving students the chance to present their ICT projects. The support
from academic agencies in Japan, including MEXT, JST, the SSHs, the EOJ, the KOSEN
Institutes, JICA, and the Japan Foundation, has helped to inspire students and sustain them as
our future generation. I believe that science and technology are the key driving forces for
economic and social development. The cooperation among the new generations of both our
countries will enhance the prosperity of our societies.
On a final note, I strongly believe our cooperation and friendship are what have contributed to
our past successes, and I hope this success will continue. Rest assured that I am fully committed
to providing my full ongoing support. Thank you to everyone for all your hard work, as this
program could not have succeeded without it. I am confident that the continued collaboration
between our two countries will lead to educational sustainability for generations to come.
Miss Trinuch Tienthong
Minister of Education of Thailand
Congratulatory Message from Deputy Minister of Education of Thailand
It is my great pleasure to celebrate the Thailand – Japan Student ICT Fair 2022, held between
Japan and the Princess Chulabhorn Science High Schools.
I would like to express my deepest admiration for the contribution to the collaboration between
Thailand and Japan, which promotes the fostering of the next young generation of students
through various activities. The cooperation programs between the Princess Chulabhorn
Science High Schools and the Super Science High Schools and KOSEN in Japan show what
an achievement they are and how much they empower the students of both countries.
Information and communication technologies (ICT) play a significant role in all aspects of
digital society. The Thailand-Japan Student ICT Fair (TJ-SIF) ensures collaborative teaching
and learning with an emphasis on student ICT projects. The event has helped strengthen
students’ scientific and technological competences. The activities included in the event
encourage students to be creative, analytical, and critical thinkers who will ultimately
contribute significantly to both countries. The students were fortunate to participate in various
activities supported by MEXT, JST, the SSHs, the EOJ, the KOSEN Institutes, JICA, the Japan
Foundation, and other Japanese agencies.
I wholeheartedly wish that the Thailand – Japan Student ICT Fair will continue to flourish for
many more years to come and help our two nations advance further in the future.
Khunying Kalaya Sophonpanich
Deputy Minister of Education of Thailand
Congratulatory Message from Secretary General of Office of
Basic Education Commission
On behalf of the Office of the Basic Education Commission, it is an honor to extend my sincere
congratulations on the Thailand – Japan Student ICT Fair 2022.
I am pleased to see the cooperation mature into something above and beyond our expectations.
Many students and teachers in both our countries comment on the unique experiences and
scientific knowledge they gained. It was not easy for all the Princess Chulabhorn Science High
Schools to maintain and sustain their vision. However, with the perseverance of the students,
who represent the younger generation, I am confident they will improve their creative,
analytical, and critical thinking skills for the 21st century, including those needed to achieve a
sustainable future in line with the Sustainable Development Goals.
It is that potential future and realization of the SDGs that makes me truly appreciate the
cooperation across the Thailand – Japan Student ICT Fairs, now held for the third time through
the collaboration between the Princess Chulabhorn Science High Schools, the Super Science
High Schools of Japan, and the National Institute of Technology (KOSEN). Students gained
valuable experiences related to their projects on Information and Communication Technology
(ICT). The event also provides opportunities for students and teachers to demonstrate their
potential in technology, giving students the chance to present their ICT projects. The topics
cover Automatic Control, Robotics and Artificial Intelligence, Software and Smart Electronics,
Virtual Reality, and IoT Applications. In addition, “The 1st Thailand-Japan Educational
Leaders Symposium: Technology for Education (TJ-ELS 2022)” is a stage for teachers to
exchange their knowledge and experiences and share best practices that benefit students. A
programming hackathon competition is also held for Thai and Japanese students at this event.
All of these activities have not only served the goals of the collaboration program but also
created a deep relationship and network amongst students, teachers, schools, and our two
nations. All the support and opportunities are created by academic agencies in Japan, providing
a great chance for our students to explore the scientific world and learn about Japanese culture.
Once again, I would like to take the opportunity to wish the academic cooperation continued
success in the future, with many more brilliant milestones to come.
Dr. Amporn Pinasa
Secretary General of the Office of the Basic Education Commission
Keynote Speaker
Dr. Naito TOMOYUKI
Vice President
of Kobe Institute of Computing (KIC)
Tomoyuki (Tomo) Naito is Vice President and Professor at the Graduate School of
Information Technology, Kobe Institute of Computing, Japan. Over his professional career of
more than 25 years, he has worked with clients on digital economy acceleration policy and
strategy formulation as well as its implementation for effective development, in particular
leapfrogging practices in ICT use in developing countries.
His professional interests include digital economy, distance learning, ICT innovation
ecosystem, Internet of Things, FabLabs, Mobile Big Data solution and other related areas.
Prior to assuming his current position as a graduate school professor, he was Senior Advisor in
charge of ICT and Innovation for the development field at Japan International Cooperation
Agency (JICA). Previously, he was Program Manager at the World Bank in charge of the
Tokyo Development Learning Center, Director of Planning as well as Director of
Transportation and ICT at JICA headquarters.
He has served as a designated member of several public advisory committees, including
the Global Steering Committee of the “Internet for All” project at the World Economic Forum
(2016–2019), the Regional Governing Committee of the Global Development Learning
Network Asia-Pacific (2011–2021), the Global Strategy Working Group under the Minister for
Internal Affairs and Communications of the Government of Japan (2018–2019), and others.
His recent paper “Redefining the Smart City for Sustainable Development” is contained
in the Brookings Institution’s book “Breakthrough (2021).” The paper “Role of ICT in
education redefined by COVID-19” is contained in the book “SDGs and International
Contribution under the Pandemic Era (2021: in Japanese).” Another paper “Indispensable ICT
for achieving SDGs” is contained in the book “International Contribution and Realization of
SDGs (2019: in Japanese).”
He has also been a registered first-class architect in Japan since 1997. He holds a Master
of Arts in International Relations from the Graduate School of Asia-Pacific Studies, Waseda
University, Japan.
Keynote Speaker
Asst. Prof. Dr. Anucha Promwungkwa
Chiang Mai University
Expert’s contact information:
E-mail: [email protected]
Phone: 66 5394 4146 Ext. 962, 66 89 4333209
Education:
• 1998 Ph.D. in Mechanical Engineering, Virginia Polytechnic Institute and State University, U.S.A.
• 1989 M.Eng. in Energy, Asian Institute of Technology, Thailand
• 1986 B.Eng. in Mechanical Engineering, Chiang Mai University, Thailand
Work Experiences:
2013 – present
Head of Mechanical Engineering Department, Faculty of Engineering, Chiang Mai University
2007 – 2011
Deputy Director of Energy Research and Development Institute - Nakornping, Chiang Mai University
Adequacy for the Assignment:
Detailed Tasks Assigned on Consultant’s Team of Experts:
• Study and propose methodology for Specific Energy Consumption (SEC) analysis of the
DF&Bs
• Grouping of the DF&Bs – 11 Sectors
• Analyze the energy data and SEC of the target DF&Bs
• IoT system for Smart Energy & Smart City
Reference to Prior Work/Assignments that Best Illustrates Capability to Handle the Assigned
Tasks
Consultant for the development of an energy database to analyze and plan the national energy
strategic plan, a three-phase project under the Office of the Permanent Secretary, Ministry of Energy
● Designed guidelines for energy data collection at the province level
● Analyzed the data for use in directing the national energy plan
Keynote Speaker
Dr. Pattaramon Vuttipittayamongkol
Mae Fah Luang University
Contact information:
[email protected]
66-53916762, 66-918549559
LinkedIn: pattaramonv
Education:
● 2017 – 2020 Ph.D. in Computer Science (Data Science), Robert Gordon University, UK
● 2011 – 2013 M.Sc. in Electrical Engineering, University of Southern California, USA
● 2007 – 2011 B.Sc. in Electrical Engineering, University of Illinois at Urbana-Champaign, USA
Work Experiences:
● 2016 – 2017, 2022 – Present
Assistant to the dean, Mae Fah Luang University, Thailand
● 2013 – Present
Full-time faculty member at Mae Fah Luang University, Thailand
Courses taught: Machine learning, Electrical and electronic circuits, Introduction to computer
engineering, Mathematics for information technology
Data science projects: a machine learning-based tool for predicting depression in college students;
oil and gas offshore infrastructure decommissioning option prediction using machine learning;
predicting personality type (MBTI) from Twitter posts; verification of COVID-19 patient
discharge protocols; the psychosocial impact of an eight-week COVID-19 quarantine on parents
and their children; predicting factors of university students’ academic performance; an IT support
system promoting accessibility to health services for hill tribe children; etc.
● 2018 - 2020
Part-time course demonstrator at Robert Gordon University, UK
Courses taught: Business analytics, Big data analytics, Advanced data science
Honors and Awards:
● PhD Thesis Award from the National Research Council of Thailand
● Fully funded PhD studentship from Robert Gordon University
● Full scholarship for bachelor’s and master’s degrees from the Thai Government
Research Expertise:
● Data Science
● Machine Learning
● Imbalanced Data Classification
Abstract of Contributed Papers
Oral and Poster Presentation
Section A: Artificial Intelligence and Machine Learning
Section B: Games and Virtual Reality
Section C: Intelligent Devices Robot and IoT
Section D: Software and Application
Section E: Other Topics in Information Technology
Contents
Section A: Artificial Intelligence and Machine Learning Page
AI - 01 Research on Autonomous Navigation of Mobile Robots Using Color Recognition 14
Sota Chubachi, Shimma Tanaka, Fuminobu Imaizumi, Akira Okada, Katsumi Hirata
AI - 02 AI-Assisted Facial Exercise Software 15
Tanakrit Saiphan, Jaturapat Roothanavuth, Atthaphon Lorphan, Art-ong Mukngoen
AI - 03 Smart shopping cart with machine learning 16
Thepbordin Jaiinsom, Ratchanon Mookkaew, Supachok Butdeekhan, Nonthapan Wongkanha, Phanawat Boontasarn
AI - 04 The application for analyzing Thai and Japanese traditional food dishes using Machine learning 17
Thanachot Thetkan, Chayakorn Charoensri, Naruepat Sreerattana, Ratthaphoom Songkrow, Chanichkorn Sornsongkram, Tanatat Mokmongkolkul, Aoi Ozeki, Thanabordee Luangtongbai, Yukari Wada, Chutharat Chaingam, Takayuki Fukuda, Naeem Binibroheng
AI - 05 Developing a Simple Web Application to Identify Dangerous Diseases in Cassava Using Deep Learning Techniques 18
Kunanon Seehamat, Nachanok Kladsamniang, Khunthong Klaythong
AI - 06 Classification of leukemia from cell images processed by deep learning 19
Rachata Malaisri, Siwakorn Seesuwan, Usanee Noisri, Todsapor Fuangrod, Thoranin Intharajak
AI - 07 Development of Artificial Intelligence in Cloud Classification for Weather Forecasting using Image Processing 20
Pannawit Wantae, Pongsapat Suporn, Patompong Oupapong, Songkran Buttawong, Suthut Butchanon, Pathapong Pongpatrakant
AI - 08 Development of A Model for Predicting Base Sequence Mutation In COVID-19 Using Transformer Techniques 21
Phongit Khanthawisood, Pawat Rattanasom, Nattawat Tosutja, Teerasak E-kobon
AI - 09 Transfer Learning for Image Classification of Monkeypox with Convolutional Neural Networks 22
Phakkhaphon Artburai, Natthawee Naewkumpol, Ekkachai Wattanacha
AI - 10 Using Deep Learning for Differentiating Chest X-ray of Covid Pneumonia Patients from Other Diseases 23
Panida Rumriankit, Passakorn Pornsornsaeng, Tamonwan Nawacharoen
AI - 11 Development of Gesture Detection to Translate Sign Language 24
Sirasit Tanajirawat, Ratcharit Ngaoda, Yupaporn Premkamol
AI - 12 Thai sound speech emotion recognition with deep learning 25
Pongsakon Kaewjaidee, Nattachai Chujeen, Thapanawat Chooklin
AI - 13 Sorting Parcels with Text Recognition 26
Wanwisa Boonkong, Saksorn Yoothong-in, Taweewat Srisuwan
AI - 14 Automatic Face Mask Detects and Collects Bin Robot by Object Recognition System 27
Chachchay Sang-arsa, Nichpol Tuecomesopakul, Nuttawooth Maikaen
AI - 15 iSupskin, Artificial Intelligence Program for CNN-Image classify 28
Panrapee Pandum, Natsinee Ruengsiri, Wichai Buaniaw
AI - 16 The sweet marmalade melon’s quality grading system by using deep learning techniques 29
Montuch Klaytae, Weranan Hemrungrot, Natpassorn Laonet, Tanyaporn Kraiwongruean, Rattapoom Waranusast
AI - 17 System for Classify Character Style of People by Artificial Intelligence 30
Santipab Tongchan, Pichayut Sridara, Ekkachai Watanachai, Pera Boncharat
AI - 18 Chok-Anan Mango's Sweetness Prediction System using Color-based and AI 31
Nuttacha Puttrong, Thinnaphat Kanchina, Parunchai Keawkhampa, Jirawat Varophas, Manatchanok Tamwong
AI - 19 Earthquake Analysis for Future Earthquake Prediction in Japan 32
Srikokcharoen Phongwit, Suzuki Shota, Limsupaputtikun Kantapisit, Zhu Lin
AI - 20 Classroom Attendance System Using Face Recognition Development 33
Warisara Naebsamran, Thanachote Wattanamanikul, Traimit Roopsai, Kanokon Phasikai
AI - 21 Wireless Motion capture suit using IMU Sensors 34
Pakkawat Kaolim, Wasin Jitmana, Aut Kongthong, Tipanan Pothagan, Prasit Nakhonrat
AI - 22 Psychological Advice System from Analysis of Emotional Tendency using Thai Text and NLP 35
Chanikan Katti, Khanidthee Singkul, Nantirat Toontaisong, Siriporn Thongu, Manatchanok Tamwong, Surapol Vorapatratorn
AI - 23 A Document Transportation Model between School Buildings 36
Pimmada Makheaw, Suwichada Na Lamphun, Natthakamon Ungboriboonpisan
AI - 24 Automatic Garbage Separation Machine 37
Chise Ito, Yumeka Murakami, Kyoji Komatsu
AI - 25 Maker-less Motion Capture for Human Body Using Azure Kinect 38
Kokoro Emi, Nanami Okada, Takafumi Yamada
Section B: Games and Virtual Reality Page
GVR - 01 Real Time Robot Control by Motion Capture 42
Arin Suvun, Tanuson Deachaboonchana, Phanuwat Thongwol, Thapanawat Chooklin
GVR - 02 Developing a Prototype Remotely Artificial hand 43
Thanat Varasajja, Pongsapuk Meekuson, Vichien Donram
GVR - 03 Signlent: Sign Language Translator 44
Rujrada Lourith, Pethcharat Taweeratthanandorn, Peera Bunchalad
GVR - 04 Development of Game to Improve Socialize Skill for Children with Autism 45
Kawinpob Pattankul, Tharinya Jongko, Weerayuth Ninlaor, Kittisak Keawninprasert
GVR - 05 THE DEVELOPMENT OF DIGITAL BOARD GAME TO ENHANCED DIGITAL CITIZENSHIP 46
Pheerawit Talikan, Thitichaya Panphrom, Nanthakran Pinpech, Wachirawit Eiamwilai, Jirapha Chotchun
GVR - 06 A Game for finding personal working aptitude by Myers–Briggs Type Indicator (MBTI) 47
Thiwat Thajumpu, Pattanit Buatip, Chanokpon Jantapoon, Manatchanok Tamwong, Satit Thamkhanta, Nilubon Kurubanjerdjit
GVR - 07 Teaching Basic Programming with Online Game through HTML5 Website 48
Titi Tiyachaipanich, Phana Sarayam, Usanee Noisree
GVR - 08 The sign-language translator glove for helping people with hearing disabilities 49
Thanat Ayarangsaridkul, Phudis Tansakul, Wattana Rumma-ed
GVR - 09 One-Handed Joystick 50
Nateepat Kamkong, Nipitpon Nunthana, Aut Kongthong, Thummaros Rugthum
GVR - 10 Game-based computer aided assistant for learning the Celestial Sphere VR 51
Kongphop Kongkiatjaroen, Phurinut Sangob, Adisorn Parama, Manatchanok Tamwong, Sunee Yamee, Pimphaphon Boonsanga, Nut Kortakulsin
GVR - 11 The project on Computer Games for Developing Perspectives and Educating About Depression for General People (KEEP) 52
Thanakon Raksawong, Rakphumthai Chaiton, Kan Jaroansook, Manatchamok Tamwong, Teanjit Sutthaluang
GVR - 12 Computer Game for Introducing Dangers and Methods for Preventing The Covid-19 (THE COVID TALE) 53
Nawaphol Tunprasert, Poramud Ponya, Pattarapon Srisombat, Manatchanok Tamwong, Satit Thamkhanta, Nontawat Thongsibsong
GVR - 13 Application for caring for babies up to 1 year old 54
Chamaiporn Aiadrat, Buntita Tuntisivaku, Taweewat Sesuwan, Sisira Janchanapon
GVR - 14 Providing a Comfortable Environment for Pets: Pet Ownership Simulation Using AR Technology 55
Shutaro Watanabe, Ryo Sugiura, Ryoma Sato, Yuta Komori, Jintaro Yagi, Yuta Arai, Chika Kondo
GVR - 15 Rehabilitative tool for muscle weakness in arms 56
Anothai Thammakultheerakij, Punyaporn Pannoi, Srisuda Wongon, Suthut Bhutchanon, Pathapong Pongpatrakant
GVR - 16 Influence of the human skin color on the temperature distribution of the human eye during the Intense pulsed light (IPL) therapy: Computer simulation 57
Kunyarat Torsuwan, Rachadhorn Ungpakorn, Wutipong Preechapolakul
GVR - 17 Creation And Research of Digital Makeup 58
SATO Rin, ITO Momo, TSUZUKI Keita, EGUCHI Keiko
GVR - 18 Drone System for Exercise and Fun 59
SHIMIZU Yoshito, UETA Sota, IGAMI Shunji, TSUZUKI Keita, HIRANO Manabu, EGUCHI Keiko
GVR - 19 Periodic Table Game 60
Supanat Kampapan, Chayapat Kruthnim, Thapanawat Chooklin
GVR - 20 Development of real-time projected makeup system 61
ITO Yuta, ASAKURA Yui, TSUZUKI Keita, EGUCHI Keiko
GVR - 21 Strategy for Holey LightsOut Games: A Quest for Finding the Optimal Solution 62
Rui Nishii, Kyosuke Neishi, Shotaro Kubo, Yuya Tokunaga, Otoya Murakami, Tatsunori Murakami, Yusuke Shogimen
GVR - 22 The Effect of the Transition Between Reality and Virtual Reality on Recall Accuracy 63
Miyu Hirota, Takato Mizuho, Kurato Tominaga, Takuji Narumi
GVR - 23 Robots improve language and communication development in children with autism 64
Bunyaporn Seephathing, Tanapat Namsomboon, Suthut Budchanon, Daranee Chaiveij, Pathapong Pongpatrakant
Section C: Intelligent Devices Robot and IoT Page
IOT - 01 Smart Shoe Cleaning Box 68
Pattarisa Rerganan, Mai Nemoto, Himari Onuki, Himeno Kubo, Ririko Funyu, Abbas Alshehabi, Koh Ikeda
IOT - 02 Smart money sterilizer 69
Chisanupong Pareewiwat, Phurinat Sangphai, Atcha Chaloeyjit, Phana Sarayam
IOT - 03 RFID-based usage control system for disabled parking space with notification via LINE Notify 70
Jirasita Taecho, Jirawan Aekchat, Sophon Klomkliang, Wenika Rodlunda, Songchai Jitpakdeebodin
IOT - 04 Air Environment Observation and Control System 71
Minato ABE, Shion SAITO, Ai YACHIDATE, Susumu TOYOSHIMA
IOT - 05 Distance medicine unit robot 72
Nonthapad Sirianancharoen, Atid Thongleung, Mingkwan Khaodee, Rattachai Wongtanawijit
IOT - 06 Development and Public of Stair Climbing Robot by Subcrawler 73
Yuki Uemura, Shohei Fujieda
IOT - 07 Development of Robotic Arm 74
Yuto Kuriyama, Koki Uno
IOT - 08 Automatic Metal and Plastic Separator 75
Banchakhan Apaisorn, Kittaport Onkhong, Thitayapa Sawarok, Chawannut Prompak, Phattarawadee Suwannadee, Pathapong Pongpatrakant
IOT - 09 Programming an Autonomous Stair Climbing Robot 76
Ikuto Shimazu, Yuta Tokiwa
IOT - 10 Medicine Sending Machine 77
Saris Boonfruang, Chanasuek Chaiyasorn, Pira Boonchalad, Sakolkeit Khantong
IOT - 11 Smart Door Opening / Closing System 78
Chadaporn Kantasin, Sagepaween Sanbho, Korrachai Sanchai, Chawanwit Warichotphuminon, Rachanok Tangjai
IOT - 12 Creating Remote-Controlled Robotic Fingers Using Arduino 79
Yuki Hosoi, Shotaro Baba
IOT - 13 AN AUTOMATED GUIDED VEHICLE ROBOT FOR TRANSPORT IN CONFINED PLACE 80
Nonphakon Kantakad, Thanpisit Peekaew, Sarit Phromthep
IOT - 14 A Current Detection from Muscles to Control Wheelchair’s Movement 81
Buraporn Ruangchaem, Chayada Pakpoomkamonlert, Kasamaphorn Sutapak, Theparak V Palma
IOT - 15 Anemometer by KidBright board 82
Natthanicha Chirapornpisit, Panitnan Phanitnantho, Suwanlee Binsalay
IOT - 16 The study of RF module and sound system for innovate tool that guide blind in closed area 83
Chitisak Mongkhonkheha, Patiphan Chomhairat, Siridonnaya Kunsen, Phitphasin Aintharasak, Soontree Montrisri, Pathapong Pongpatrakant
IOT - 17 Electronic Large Gong Circle 84
Saroot Areerattanawetch, Chittapan Phahongsa, Natpassorn Laonet, Sirirat Prom-In, Rattapoom Waranusast
IOT - 18 Sensors to prevent loss from abandoning children in the car 85
Rujikarn Tanachotboonpun, Natchanon Promchan, Pracha Khamphakdi, Aut Khongthong
IOT - 19 Smart Bed for Bedridden Patients 86
Prushyapong Manosilapakorn, Nanfa Khuntawee, Mahaphan Jantakun, Tanyaporn Kraiwongruean, Jiraporn Pooksook
IOT - 20 The system to save elderly people from heatstroke by controlling air conditioner 87
Souta Kanno, Keiichiro Kikuchi, Masahiro Takahashi
IOT - 21 Home electrical control system devices via IOT 88
Teeranan Pimthep, Chanatip Suwanamorchat, Suchanan Putthaponpitak, Phitphasin Aintharasak, Jirayus Arundechachai, Pathapong Pongpatrakant
IOT - 22 Small Temporary Vaccine Storage Box with Dry Ice 89
Faosan Muensman, Asreema Hayeewang, Wichai Buaniaw
IOT - 23 Fine Motor Exercise Equipment 90
Nonthaya Phetkong, Chanikarn Leauheem, Jirayut Sangsin
IOT - 24 Development of Assistive Devices for the Disabled Using Small Devices 91
Keiju Iwasaki, Takuya Uemura, Koshi Kikuchi
IOT - 25 Fire alarm with Localized Mesh Technology and Auditorial announcement system utilizing the ESP32 development board (F.L.A.M.E.) 92
Chayapon Iemsonthi, Tanabodhi Mukura, Ratsameetip Wita, Peerapong Tuptim
Section D: Software and Application Page
SWA - 01 Automatic Drugs Dispenser with Distress Call System via Line for Elderly Persons 96
Pantita Yamplien, Rungnapa Homkeanchan, Likit Thoppadids
SWA - 02 Negative Filter: browser plugin for Thai twitter sentiment analysis 97
Nattgarni Aphimookkul, Tossaporn Saengja
SWA - 03 The Development of an Application to Inspect Sweet Young Coconut Meat to Identify the Estimated Harvest Period 98
Teerapad Pipadboonyarat, Taksin Kaewwongsa, Kawao Promchat, Putthapond Inorn, Tanyaporn Kraiwongruean
SWA - 04 One way to protect children from Internet crime 〜Handwritten character recognition using AI〜 99
Chika Kudo, Hina Ishiwata, Yuta Takizawa
SWA - 05 Korie Application for Learning Korean in 4 Weeks 100
Nattanan Rewreab, Siriyakorn Sucharitjivavongse, Anek Tanompol, Thanathip Limna
SWA - 06 Demonstration for The little researcher 101
Pimmada Wetchayun, Phraeorung Muangpool, Anek Tanompol, Thanathip Limna
SWA - 07 Obstacle Detecting Helmet for The Visually Impaired 102
Chalisa Thipjuntha, Yanin Khetrattana, Suwimon Tanompol, Rattachai Wongtanawijit
SWA - 08 Automatic Foot Length Pedal with Program to Calculate Shoe Size, Shirt Size, and Chest Length by Mathematical Relationship 103
Sirawich Yimsuan, Sorawit Boonnee, Thanapong Limpajeerawong
SWA - 09 The Chat Application Facilitates the Visually Impaired with Speech Recognition and Speech Synthesis 104
Warisara Phoprasat, Wanwisa Rungrueng, Chutikarn Boonprasert, Jirawan Sawangnam
SWA - 10 Expansion of Open Calculation 105
Hayato Kato, Ayane Ishio, Chiaki Sugimoto, Hana Hattori, Kaho Furuhashi, Sayaka Mizuno, Yuna Misu
SWA - 11 NAHSNS — a new chat system 106
Akira Harada, Takuto Akai, Namaizawa Sogo, Yoshiyuki Konishi
SWA - 12 Mathematics Creates Beauty: Calculating the Best Makeup 107
Azumi Hashimoto, Matsuri Fujii, Keisuke Sakai, Aoi Takahashi, Mitsugu Hashimoto
SWA - 13 TUR-TIEAW: smart tourist guide web application for senior tourism 108
Primwari Phuengrit, Manlika Phomsre, Kamonphop Aramrak, Sivarak Chituthas, Jirawan Sawangnam
SWA - 14 Application to Distinguish Varieties of Para Rubber (Hevea brasiliensis) for the Benefit of Proper Care of Trees 109
Patsagorn Yuenyong, Phubest Srikoon, Pira Boonchalad, Ekkachai Wattanachai
SWA - 15 Color Blindness Assessment Applications for Children Ages 3-7 years in Android 110
Pratiparn Kongkumnert, Natnicha Danprakron, Khunthong Klaythong
SWA - 16 Basic health and symptom checkup application (D.examinate) 111
Theerahpath Kittisarapong, Thanongsak Naksuk, Yupaporn Premkamol
SWA - 17 Educational Document Request System Project 112
Raphiphat Hwangsuk, Nanthanat Charoensuk, Ronnakit Saenprom
SWA - 18 Developed information systems through applications Khanom-Sichon Tourism Nakhon Si Thammarat 113
Nuttida Kongthong, Kanyanat Wiriyasomphob, Prinkarat Srimai, Krisanadej Jaroensutasinee
SWA - 19 Voice control system for home lighting and appliances using Google Assistant 114
Weeraphat Kangnikorn, Kantapon Khunpiluk, Wenika Rodlunda, Sophon Klomkliang, Songchai Jitpakdeebodin
SWA - 20 A Mobile Application for Nutrition Suggested in a Day (Good Health Good Life) 115
Natthapong Chaiwong, Rachata Kawila, Suphakorn Insee, Sunee Yamee, Soontarin Nupap
SWA - 21 Verification of Difference in Memorability of the Application’s Different Background and Text Colors 116
Suthinee Watcharalertvanich, Akie Azuma, Hyodo Momoka, Sakai Kazuki, Uchida Yuki, Tamura Fumihiro
SWA - 22 Borrowing and Returning Machine and Website for Checking Status 117
Thanyanop Sriwanis, Siwakorn Chutiporn, Araya Florence, Aut Kongthong, Khanchai Wongsit
SWA - 23 Flood Notifier 118
Nalin Pijitkamnerd, Krittaphas Sakorntrakul, Kitsada Doungjitjaroen
SWA - 24 Target of Interest System's Third body observation via Transiting Exoplanet Survey Satellite database 119
Kittituch Suratanachaiboonlert, Pathit Techataweekul, Vichien Donram, Supachai Awiphan
SWA - 25 “Life Time Machine” Computer Assisted Instruction 120
Thanaporn Artidtayamontol, Papavee Feaungfung, Kamonpan Musikapan, Suwimon Tanompol, Thanathip Limna
Section E: Other Topics in Information Technology Page
SWA - 01 Fruit Calorie Scale for diabetics 124
  Phatchara Chanthamat, Pajeeranan Pingkhunthod, Thanopong Limpajeerawong
SWA - 02 NEW GEN IV Stand 125
  Pitiwat Chimplee, Thanaphum Masayamas, Supatra Thomya, Adirek Wilai
SWA - 03 Production of a Curved Mirror for an Omnidirectional Camera 126
  Haruki Kunishima, Iori Fujiwara, Shinji Nitta
SWA - 04 Improving the performance of the athletic club at school 127
  Naho Oki, Shotaro Baba
SWA - 05 Over Compaction Detection System 128
  Kaho Seki, Haruna Kunimitsu
SWA - 06 The Study and Development of a Novel CubeSat for Managed Space Debris 129
  Naruephat Yajai, Narakorn Tanikkul, Khunthong Klaythong
SWA - 07 Parabola-Shaped Solar Tracking Power Generators Compact Size 130
  Taisuke Miyazaki, Syotaro Baba
SWA - 08 Forest Spontaneous Combustion Detector 131
  Sasikan Unyasit, Pipat Hopet, Weerabong Banjong
SWA - 09 Web Application for managing Online Learning of Thasilabumrungrat School under the Circumstance of the Coronavirus Disease (COVID-19) Epidemic 132
  Atittaya Raksaboh, Anyanee Kobmaeng, Sawat Sawalang
SWA - 10 Let Me Lend 133
  Fitdao Heempong, Lada Limoh, Rungnapha Boontham
SWA - 11 Represent the Language with Our Hands for Everyone 134
  Namito Hayakawa, Sawa Matsuda, Wataru Ohno, Keita Tsuzuki, Keiko Eguchi
SWA - 12 Eyesight beside you 135
  Nattakorn Klangkhong, Piyakorn Rattanamoosik
SWA - 13 Application of Image Processing Techniques to Assess Nitrogen Deficiency Status in Rice Leaves 136
  Gridtagon Klangprapan, Pasathon Prasomsri, Theerawut Chantapan, Chavalit Buaprom, Bongkoj Sookananta
SWA - 14 Snuff faded 137
  Nicha Wongmatcha, Jiranun Sungketjai, Dootkamon Wangpo, Prakit Phatijirachote
SWA - 15 Development of the System Monitoring Weather in Construction Sites 138
  Tomoki Tabei, Uryu Okada
SWA - 16 A new programming method using Node-graph 139
  Nemoto Kouki, Itabashi Yukei, Yamaki Naohiro
SWA - 17 A student-friendly information retrieval system for "Self-directed research" 140
  Sumire TAKITA, Kotoha FURUDATE, Kotomi NAKAMURA, Yoshimichi NAKAMURA
SWA - 18 The equipment for taking students’ school attendance 141
  Tayida Malila, Sunisa Yoodeeram
SWA - 19 System for checking the status of location data 142
  Lertdilok Saelok, Rattiwat Jantawong, Duangnapa Sompong, Mayura Thongheng, Jiraporn Pooksook
SWA - 20 The Making of a New Texture of Milk Jelly through High Methoxyl Pectin 143
  MUNETOMO Haruka, NISHIKAWA Yusuzu, YANAGIHARA Rika, ABE Shiori, IKEDA Shinya
SWA - 21 Paper-based Electrochemical Sensor for Cadmium and Lead Detection in Food Samples 144
  Pimchanok Charoensin, Akkrawat Aniwattapong, Pemika Natekuekool, Prasongporn Ruengpirasiri, Abdulhadee Yakoh
SWA - 22 A Mathematical Study on the Sustain Phenomenon of Overtone in Violin Flageolet Harmonics 145
  Shodai Tanaka
SWA - 23 The Effect of Tannin Extracts from Malabar Leaves and Wood Vinegar on Preventing and Inhibiting Acidovorax avenae subsp. citrulli causing Bacterial Fruit Blotch in the Cucurbitaceae Family 146
  Kanokwan Pintadong, Naree Metheekittikhun, Parichat Mongkonrodkun, Sumarin Niroj
SWA - 24 Composting of split chopsticks using wood rotting fungi 147
  Shio Toma, Yuki Tomioka, Khosuke Sekine
SWA - 25 An Increase in the Number of Chlorophyll in Yamatomana by Hot-water Treatment 148
  Sakura Nakagawa, Kazuma Takahara, Kotono Inui, Ruisei Fujiwara, Nao Matsushima, Yoriko Ikuta
Section A: Artificial Intelligence and Machine Learning
Research on Autonomous Navigation of Mobile Robots Using Color Recognition
Sota Chubachi1, Shimma Tanaka1
Advisors: Fuminobu Imaizumi1, Akira Okada1, Katsumi Hirata1
1National Institute of Technology (KOSEN) Oyama College
Study:
• Map creation using LiDAR
• Autonomous navigation of the robot with the created maps
• Color recognition with a camera
• Control of the mobile robot by color recognition
• Distinguishing RGB color intensity
[Graphical abstract: a LiDAR-equipped mobile robot (Lightrover) recognizes a colored target, compares RGB color intensity, and selects a drive command (forward, stop, back, left turn, right turn).]
Abstract
Thanks to recent advances in mechanical engineering, in fields such as automobiles and airplanes, our lives have become more convenient than ever. Despite these benefits, traffic accidents still occur frequently, caused by traffic jams, dangerous or careless driving, and so on. Possible solutions are urgently needed to reduce the number of traffic accidents. In addition, the Japanese government recommends that senior citizens return their driver's licenses, because many traffic accidents are attributed to their declining driving skills. However, many senior citizens in Japan find daily life inconvenient without a car. With these situations in mind, we have focused on autonomous navigation and dedicated ourselves to studying it as a way to address these problems. So far, five experiments have been conducted. First, we created a map using LiDAR (Light Detection and Ranging) attached to a mobile robot. Second, we moved the robot autonomously using the maps we had created. Third, we built a system that recognizes colors with a camera mounted on the mobile robot. Fourth, we built a system that recognizes each of the RGB colors and steers the mobile robot according to differences in color. Fifth, we built a system that uses the camera to distinguish the intensity of RGB colors. In the future, we hope these systems will be used in autonomous navigation.
Keywords: Autonomous Navigation, Color Recognition
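The fourth and fifth steps, steering the robot from the RGB intensities seen by the camera, can be sketched in a few lines of plain Python; the pixel values, region, and command names are illustrative stand-ins for the robot's real camera frames and motor commands:

```python
# Sketch of the color-recognition steering step (assumption: the real system
# uses camera frames and motor commands; a small RGB pixel grid stands in).

def mean_rgb(pixels):
    """Average the R, G, B channels over a region of pixels."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return r, g, b

def steer_command(pixels, target="red"):
    """Drive toward the target color: move forward only when the region's
    dominant channel matches the target, otherwise stop."""
    r, g, b = mean_rgb(pixels)
    dominant = max((("red", r), ("green", g), ("blue", b)), key=lambda c: c[1])[0]
    return "forward" if dominant == target else "stop"

region = [(200, 30, 40), (180, 20, 35), (220, 25, 30)]  # mostly red pixels
print(steer_command(region, target="red"))  # → forward
```

In the actual robot this decision would run per frame, with the left/right turn commands chosen from where the target color sits in the image.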
14
AI-Assisted Facial Exercise Software
Tanakrit Saiphan1, Jaturapat Roothanavuth1
Advisors: Atthaphon Lorphan1, Art-ong Mukngoen1
1Chitralada school
Abstract
Facial exercises are becoming tremendously trendy nowadays, with people across ages
indulging in the practice. This growing popularity is certainly related to the wide range of
benefits attached to the activity. Among these benefits, one can mention muscle stimulation,
which assures the firmness of the skin while slowing down the ageing process. In fact, by
lifting and reinforcing the muscles under the skin, facial exercises smooth the lines and
wrinkles on the face. Facial exercises also help to improve blood circulation. However, if done improperly, without proper guidance, facial exercises can harm the appearance instead. To prevent this, facial exercise software has been developed using Facial Landmark Detection, a computer vision technique for detecting and tracking key points on a human face. The software detects sixty-eight landmarks (facial feature points) on the face to locate facial components. These components are then compared with the correct facial feature points by the artificial intelligence to check whether an exercise is performed properly. The results of the study indicate that the software classifies facial exercises from facial feature points with 89 percent accuracy.
Keywords: Facial Exercises, Facial Landmark Detection, Artificial intelligence, Facial feature points, Computer
vision
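A comparison of detected landmarks against reference feature points can be illustrated as follows; the normalization, the tolerance value, and the four-point shapes are hypothetical, since the abstract does not publish the project's exact matching rule (the 68-point layout itself follows the common dlib convention):

```python
import math

# Illustrative check of detected landmarks against a reference pose.

def normalize(landmarks):
    """Translate landmarks so their centroid is at the origin and scale
    them to unit spread, making the comparison pose-independent."""
    cx = sum(x for x, y in landmarks) / len(landmarks)
    cy = sum(y for x, y in landmarks) / len(landmarks)
    shifted = [(x - cx, y - cy) for x, y in landmarks]
    scale = math.sqrt(sum(x * x + y * y for x, y in shifted) / len(shifted))
    return [(x / scale, y / scale) for x, y in shifted]

def exercise_ok(detected, reference, tol=0.1):
    """True when the mean point-to-point distance is within tolerance."""
    d = normalize(detected)
    r = normalize(reference)
    err = sum(math.dist(p, q) for p, q in zip(d, r)) / len(d)
    return err <= tol

ref = [(0, 0), (2, 0), (1, 2), (1, 1)]           # reference pose (4 of 68 points)
same = [(10, 10), (14, 10), (12, 14), (12, 12)]  # same shape, shifted and scaled
print(exercise_ok(same, ref))  # → True
```

Normalizing first means the check responds to the shape of the pose, not to where the face sits in the frame.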
Smart shopping cart with machine learning
Thepbordin Jaiinsom1, Ratchanon Mookkaew1, Supachok Butdeekhan1,
Nonthapan Wongkanha1
Advisor: Phanawat Boontasarn1
1Yupparaj wittayalai school
Abstract
Time is crucial in today's world, so time management is important. The authors therefore developed a STEM invention project, a smart shopping cart for Freshmart, with the following objectives: 1. To give users access to a shopping cart that makes their interaction faster and more convenient. 2. To create a back-end system that allows shops or service providers to manage inventories and alert users to critical events. 3. To assess the system by replicating a retail environment. 4. To improve the system based on the test results. The fundamental idea is as follows: when an item is added to the cart, artificial intelligence (machine learning) and an Internet of Things (IoT) network help determine the item's price. The cart is equipped with a camera that works with a weight sensor and a machine learning model to process photos, so the system knows what kind of goods is in the cart, calculates the price based on weight or the number of pieces, and helps the seller manage stock. Through the LINE API, the system alerts the seller when a product is sold out or has a special requirement. All of this is powered by the Streamlit library, which is used to develop quick and effective web apps. Users and service providers will find this convenient.
Keywords: Machine learning, Object detection, Shopping cart, Python, Raspberry pi
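The pricing step described, combining the recognized item class with the weight-sensor reading, might look like this sketch; the product names, prices, and the per-kg/per-piece split are invented for illustration:

```python
# Sketch of the cart's pricing step (assumption: the real system gets the
# item label from an object-detection model and the weight from a load cell;
# the price table here is made up).

PRICE_TABLE = {
    "apple": {"per_kg": 60.0},      # priced by weight (currency units per kg)
    "milk":  {"per_piece": 25.0},   # priced per piece
}

def item_price(label, weight_kg=None, pieces=1):
    """Price an item recognized in the cart by weight or by piece count."""
    entry = PRICE_TABLE[label]
    if "per_kg" in entry:
        return entry["per_kg"] * weight_kg
    return entry["per_piece"] * pieces

total = item_price("apple", weight_kg=0.5) + item_price("milk", pieces=2)
print(total)  # → 80.0
```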
The application for analyzing Thai and Japanese traditional food dishes
using Machine learning
Thanachot Thetkan1, Chayakorn Charoensri1, Naruepat Sreerattana1,
Ratthaphoom Songkrow1, Chanichkorn Sornsongkram1, Tanatat Mokmongkolkul1,
Aoi Ozeki2, Thanabordee Luangtongbai1, Yukari Wada2
Advisors: Chutharat Chaingam1, Takayuki Fukuda2, Naeem Binibroheng1
1Princess Chulabhorn Science High School Pathum Thani
2Ritsumeikan Keisho Senior High School
Abstract
One of the identities that can be associated with a group of people is food. Food reflects culture and location, so it plays a dominant role at tourist spots. Foreigners may learn about famous dishes from the media, but they can be confused by the similarities among Asian foods. There are various types of Thai and Japanese food, so it is interesting to identify what a dish is, and it would be great to encourage tourists to pay attention to and enjoy Thai and Japanese traditional food. This program is a tool for classifying food using machine learning. First, famous traditional Thai and Japanese foods were selected for identification. Food pictures were collected from the internet or taken by the authors, at least 20 photos per kind of food, and uploaded to a database on Google Drive. The database was then transferred from Google Drive to Microsoft Azure, and machine learning on Microsoft Azure was used to train the artificial intelligence (A.I.). The A.I. was tested with photos outside the database to check the efficiency and accuracy of the program, and the database was expanded to improve it. Retraining and rechecking were repeated until the accuracy was higher than 90%. Moreover, various Thai and Japanese foods have been added to the database for more diversity. This project results in an artificial intelligence that can differentiate different kinds of Thai and Japanese traditional food. The foods used in this database should be unique and have a dominant appearance. To increase the accuracy of the A.I. further, a larger number of photos and more training time should be used.
Keywords: Machine learning, Traditional food, A.I.
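The train-test-expand cycle described above can be sketched as follows; `train_model`, the photo identifiers, and the labels are placeholder stand-ins, since the actual training ran on the Microsoft Azure cloud service rather than in local code:

```python
# Sketch of the evaluate-and-expand loop: retrain and grow the database
# until held-out accuracy exceeds 90% (hypothetical stand-ins throughout).

def train_model(dataset):
    """Placeholder trainer: a lookup keyed on the exact training examples."""
    return dict(dataset)

def accuracy(model, test_set):
    hits = sum(1 for photo, label in test_set if model.get(photo) == label)
    return hits / len(test_set)

dataset = [("tomyum_1", "tom yum"), ("sushi_1", "sushi")]
held_out = [("tomyum_1", "tom yum"), ("sushi_1", "sushi"), ("ramen_1", "ramen")]

model = train_model(dataset)
while accuracy(model, held_out) <= 0.90:          # recheck until accuracy > 90%
    # expand the database with the examples the model got wrong, retrain
    dataset += [ex for ex in held_out if model.get(ex[0]) != ex[1]]
    model = train_model(dataset)
print(round(accuracy(model, held_out), 2))  # → 1.0
```

The real project expanded the Google Drive/Azure database with new photos at each round; the loop structure is the same even though the trainer here is a toy.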
Developing a Simple Web Application to Identify Dangerous Diseases in Cassava
Using Deep Learning Techniques
Kunanon Seehamat1, Nachanok Kladsamniang1
Advisor: Khunthong Klaythong1
1Princess Chulabhorn Science High School Pathum Thani
Abstract
Cassava is an important economic crop of Thailand, with export sales increasing every year. It is popular among farmers because of its many advantages: it is easy to grow and grows well even in areas with low soil fertility. The main obstacle in cassava cultivation, however, is disease, which lowers growth rates and yields. When an outbreak occurs, farmers need to diagnose the type of disease, which requires specialists or skill with specialized tools; many farmers have neither the expertise to use the tools nor the funds to employ specialists. The authors therefore developed a web application for cassava disease diagnosis using deep learning techniques. It covers three diseases: Cassava Mosaic Disease, Cassava Bacterial Blight, and Cassava Brown Streak. The experiments found that the most efficient architecture for the cassava dataset was EfficientNetB6, with an F1-score of 0.75. Using the 4_Block layer configuration, the F1-score rose to 0.79, the highest value, and among the optimizers studied, Adam gave the highest F1-score at 0.83. The resulting model was deployed to the web application for farmers to use.
Keywords: Cassava Disease, Image Classification, Deep Learning, Web Application, Convolutional Neural Network
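The F1-score used to compare architectures above combines precision and recall; a minimal computation from per-class confusion counts (the counts here are made up) looks like:

```python
# F1 = 2PR/(P+R), computed from hypothetical confusion counts for one class:
# tp = true positives, fp = false positives, fn = false negatives.
# The paper's scores came from the real cassava validation set.

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)   # of the predicted positives, how many were right
    recall = tp / (tp + fn)      # of the actual positives, how many were found
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(tp=75, fp=25, fn=25), 2))  # → 0.75
```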
Classification of leukemia from cell images processed by deep learning
Rachata Malaisri1, Siwakorn Seesuwan1
Advisors: Usanee Noisri1, Todsapor Fuangrod1, Thoranin Intharajak2
1Princess Chulabhorn Science High School Lopburi
2Princess Srisavangavadhana College of Medicine
Abstract
Leukemia is a treatable disease that requires a long course of treatment. It can be diagnosed in many different ways, one of which is observing white blood cell abnormalities; this requires a number of healthcare professionals and takes considerable time. Based on these factors, the authors developed a project to detect leukemia from cell images processed with deep learning, using a large database of white blood cell images that are identified and classified with artificial intelligence. The process begins with image processing of the cell images. The processed images are then analyzed to differentiate groups of abnormal white blood cells. After that, deep learning is used to determine whether the identified white blood cells indicate leukemia. This research can effectively and accurately identify patients with leukemia. Diagnosing leukemia with artificial intelligence instead of human personnel saves diagnosis time and can allow patients to receive treatment in a timely and accurate manner.
Keywords: Leukemia, Image Processing, Deep Learning, AI, Classification
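A first image-processing step of the kind described, separating cell regions from background before classification, can be illustrated with simple intensity thresholding; the abstract does not specify the project's actual preprocessing, and a tiny grayscale grid stands in for a micrograph:

```python
# Toy intensity thresholding: mark pixels darker than a cutoff as "cell"
# (stained nuclei are darker than background in typical blood smears).

def threshold_mask(image, cutoff):
    """Return a binary mask: 1 where the pixel is darker than the cutoff."""
    return [[1 if px < cutoff else 0 for px in row] for row in image]

def cell_area(mask):
    """Count the pixels flagged as cell regions."""
    return sum(sum(row) for row in mask)

smear = [
    [250, 240, 90, 250],   # made-up 0-255 grayscale values
    [245, 80, 70, 240],
    [250, 250, 85, 245],
]
mask = threshold_mask(smear, cutoff=128)
print(cell_area(mask))  # → 4
```

A real pipeline would follow this with shape and size features per region before the deep learning classifier sees them.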
Development of Artificial Intelligence in Cloud Classification for Weather Forecasting
using Image Processing
Pannawit Wantae1, Pongsapat Suporn1, Patompong Oupapong1
Advisors: Songkran Buttawong1, Suthut Butchanon1, Pathapong Pongpatrakant1
1Princess Chulabhorn Science High School Loei
Abstract
Weather forecasting is very important to human life, and weather observation requires various types of equipment. Most quantities can be measured automatically, except the cloud genus. This project developed an artificial intelligence for cloud classification, using transfer learning in conjunction with a Convolutional Neural Network (CNN), so that cloud genus data can be combined with other weather measurements for more accurate and automatic forecasting. The methodology starts with collecting cloud genus images, broken down into 10 classes (Altocumulus, Altostratus, Cirrocumulus, Cirrostratus, Cirrus, Cumulonimbus, Cumulus, Nimbostratus, Stratocumulus, and Stratus), with approximately 200 pictures of each class from the internet, and processing the data to train the models. Next, classification models were built using transfer learning with three pre-trained models, ResNet50, MobileNet v3 small, and VGG-16 with batch normalization, finding the best learning rate for each, and the models were evaluated using a confusion matrix. Accuracy was 56%, 39%, and 49% respectively, and the precision, recall, and F1-score of all 10 classes were within acceptable limits. In conclusion, the models classify clouds with a far greater chance of being correct than a random guess, which for 10 classes would be right only 10% of the time. The resulting cloud genus data can therefore be combined with other meteorological measurements for more accurate and automatic weather forecasting.
Keywords: Cloud Classification, Transfer Learning, Convolutional Neural Network, Weather Forecasting
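The confusion-matrix evaluation and the comparison against a random guess can be sketched as follows; a 3-class matrix with invented counts stands in for the paper's 10-class one:

```python
# Accuracy from a confusion matrix, compared with the 1/num_classes random
# baseline mentioned above (counts are made up for illustration).

def accuracy_from_confusion(matrix):
    """Diagonal (correct predictions) over the total number of samples."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

cm = [
    [56, 30, 14],   # rows: true class, columns: predicted class
    [25, 40, 35],
    [20, 31, 49],
]
acc = accuracy_from_confusion(cm)
baseline = 1 / len(cm)            # random-guess accuracy for equal classes
print(round(acc, 2), acc > baseline)  # → 0.48 True
```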
Development of A Model for Predicting Base Sequence Mutation In COVID-19
Using Transformer Techniques
Phongit Khanthawisood1, Pawat Rattanasom1
Advisors: Nattawat Tosutja1, Teerasak E-kobon2
1Princess Chulabhorn Science High School Pathum Thani
2Department of Genetics, Faculty of Science, Kasetsart University
Abstract
COVID-19 is a serious epidemic disease caused by the SARS-CoV-2 virus and is transmitted by droplets from coughing. More than 224.61 million people have been infected and more than 4.62 million have died worldwide. The symptoms range from none at all, through common flu-like symptoms, to severe lung infection that can be fatal. As a result, vaccines have been produced around the world to help reduce the chance of contracting the virus. However, all viruses can mutate, and some variants survive and multiply, spread more easily, and evade the body's immune system better. This makes it necessary to monitor the mutation of the virus to cope with the situation. Viral mutations are small changes in the genetic material, that is, in the base sequence of the virus itself. This gave the authors the idea of applying knowledge and technology to develop a model for predicting mutated genetic material in the virus that causes COVID-19. The method starts with preparing COVID-19 sequencing data from the NCBI and GISAID databases and pre-training a model with self-supervised learning, measured with perplexity, until the model is ready for experimentation. The first study examined the efficacy of the BERT architecture with Masked Language Modeling in predicting mutations. The second experiment studied mutation models with Next Sentence Prediction, and the third experiment studied the relationship of the genetic material to external factors. The resulting relationships will be used to help predict the situation of the virus in medicine and public health.
Keywords: Covid-19, Mutation, Transformer Model, BERT, Masked Language Modeling
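Masked Language Modeling on a base sequence hides tokens and trains the model to restore them; tokenizing into overlapping k-mers and masking a fraction can be sketched as below (the 3-mer tokenization and 15% mask rate are common conventions, not details given by the authors):

```python
import random

# Sketch of MLM data preparation for a base sequence: k-mer tokenization
# plus random masking. The recovery model itself (BERT) is out of scope here.

def kmer_tokens(seq, k=3):
    """Split a base sequence into overlapping k-mer tokens."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def mask_tokens(tokens, rate=0.15, seed=0):
    """Replace a fraction of tokens with [MASK]; return masked list + answers."""
    rng = random.Random(seed)
    masked, answers = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            masked.append("[MASK]")
            answers[i] = tok      # the model is trained to recover these
        else:
            masked.append(tok)
    return masked, answers

tokens = kmer_tokens("ATGGCGTACGTT")
masked, answers = mask_tokens(tokens)
print(len(tokens), masked.count("[MASK]") == len(answers))
```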
Transfer Learning for Image Classification of Monkeypox
with Convolutional Neural Networks
Phakkhaphon Artburai1, Natthawee Naewkumpol1
Advisor: Ekkachai Wattanacha1
1Princess Chulabhorn Science High School Buriram
Abstract
Image processing techniques are widely used in many different sciences today, and healthcare is one of them. Image classification is a problem that deep learning can answer well, and from a technical standpoint transfer learning is a rapid method for resolving many image classification problems. Furthermore, the World Health Organization recently declared monkeypox a public health emergency of international concern, and due to the rapid rise in illnesses, Thailand's Department of Disease Control is monitoring the situation cautiously. To classify images as monkeypox or non-monkeypox, the developers propose training a large convolutional network model with transfer learning techniques. Among three pre-trained models, ConvNeXt Small, RegNet Y 16GF, and Wide ResNet50-2, RegNet Y 16GF produced the best results. After fine-tuning, RegNet Y 16GF with the AdamW optimizer was compared against the other optimizer choices for the model and had the best accuracy (95.21%) and best loss (0.0957). Consequently, using convolutional neural networks with transfer learning to classify images of monkeypox can be highly effective, and it could be enhanced by including more image data and applying it in healthcare situations.
Keywords: Monkeypox, Convolutional neural network, Transfer learning, Optimizer fine-tuning model
Using Deep Learning for Differentiating Chest X-ray of Covid Pneumonia Patients
from Other Diseases
Panida Rumriankit1, Passakorn Pornsornsaeng1, Tamonwan Nawacharoen1
1Princess Chulabhorn Science High School Chonburi
Abstract
This project aims to study and develop a web application for differentiating chest X-rays of COVID pneumonia patients from 12 other diseases. The target groups are doctors and hospitals. The methodology is as follows: image data from the NIH Chest X-rays dataset are used to create a model, and a deep learning program is written in the Python programming language on Kaggle, using the Keras library for modeling. After this step, a folder is created in Visual Studio Code to store the web pages, the requirements file, and the model file. The web pages for using the model are written in HTML and CSS, with four pages: instructions, main program, references, and information. The web application is served with the Flask framework, and the files in the folder are uploaded to GitHub for public use. From the recorded accuracy tests, the model's test accuracy is 88.89%, and web application processing took an average of 1.4 seconds per image.
Keywords: Covid, Deep Learning, Visual Studio Code, Web Application, HTML
Development of Gesture Detection to Translate Sign Language
Sirasit Tanajirawat1, Ratcharit Ngaoda1
Advisor: Yupaporn Premkamol1
1Princess Chulabhorn Science High School Chonburi
Abstract
In interviews by the BBC Thai news agency with hearing-impaired students who use eyesight and sign language to learn, at the Deaf Studies branch of Ratchasuda College, Mahidol University, the students told a reporter that deaf people want a sign language interpreter: an interpreter is a medium that allows them to convey what they want and eliminates obstacles, and if deaf people learn or train with a sign language interpreter to help translate the content, they can learn it in its entirety and convey what they want to convey. This project's purpose is to translate sign language into written language, so that people who lack this channel of communication can communicate easily with hearing people. By writing code in Jupyter together with the TensorFlow, OpenCV, MediaPipe, sklearn, and matplotlib modules, we obtain software that can translate sign language into written language. We take advantage of these modules to create software that captures human gestures in real time, building a database or dataset of basic sign language in the form of videos and training it with the code we have created to produce a written-language result. The software considers various sign language gestures in real time; when a gesture is within the range of our trained videos, the code processes it and outputs the written result that corresponds to the gesture.
Keywords: Sign language, Deaf people, Software, Translate, Modules
Thai sound speech emotion recognition with deep learning
Pongsakon Kaewjaidee1, Nattachai Chujeen1
Advisor: Thapanawat Chooklin1
1Princess Chulabhorn Science High School Nakhon Si Thammarat
Abstract
This project concerns emotion recognition of Thai-language speech by deep learning, classified by 3 models, namely CNNLSTM, CNNBLSTM, and CSABLSTM. The 3 models differ in structure and can be ordered from most to least complex as CSABLSTM, CNNBLSTM, CNNLSTM. All 3 models are trained on the THAISER dataset, which requires feature extraction before learning. We want to study which feature settings have the greatest effect on accuracy by examining three factors: 1. the number of mel features (num mel): 15, 30, and 60; 2. the length of the sound used in feature extraction (max length): 2, 5, and 10; 3. the frame shift of feature extraction (frame length): 25, 50, and 100. After training, each model is tested for performance against data it has not yet encountered, the test set. The standard configuration used for comparison is the CNNLSTM model with num mel of 60, max length of 2, frame length of 50, and accuracy of 74.97 ± 1.19. The model and features with the highest recorded accuracy are CNNLSTM with num mel of 60, max length of 10, and frame length of 50, with accuracy of 77.17 ± 0.86, so this model and feature combination is the most efficient for the THAISER dataset.
Keywords: CNNLSTM, CNNBLSTM, CSABLSTM, Model, THAISER, Feature extraction, Num mel, Max length, Frame length, Feature, Accuracy
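The three factors studied determine the shape of the feature matrix fed to each network; assuming max length is in seconds and frame length is a frame shift in milliseconds (units the abstract does not state), the frame count follows directly:

```python
# Spectrogram shape implied by the three factors (units assumed:
# max_length_s in seconds, frame_shift_ms in milliseconds).

def feature_shape(num_mel, max_length_s, frame_shift_ms):
    """(frames, mel bins) of the feature matrix fed to the network."""
    frames = int(max_length_s * 1000 / frame_shift_ms)
    return frames, num_mel

# Best configuration reported above: num mel 60, max length 10, frame length 50.
print(feature_shape(num_mel=60, max_length_s=10, frame_shift_ms=50))  # → (200, 60)
```

This is why a longer max length with the same frame shift gives the recurrent layers more time steps to work with.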
Sorting Parcels with Text Recognition
Wanwisa Boonkong1, Saksorn Yoothong-in1
Advisor: Taweewat Srisuwan1
1Princess Chulabhorn Science High School Nakhon Si Thammarat
Abstract
At present, consumption of goods and services through online channels has grown exponentially over the past two years, which leads to large volumes of parcels being transported and delivered to recipients, among them the students and staff at Princess Chulabhorn Science High School Nakhon Si Thammarat. When the number of parcels is large, it is important to organize them. The authors therefore wanted to create a program that converts image data into character data and stores it in a database, by studying the functionality of Text Recognition (Optical Character Recognition) and applying it to the reading of parcel labels. The program facilitates reading the images and storing the data in a database, helping with parcel management and distribution. The authors hope that this program will facilitate and reduce this work in small to large organizations.
Keywords: Text Recognition technology, Optical Character Recognition (OCR)
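The convert-and-store pipeline can be sketched with Python's standard library; the label layout, the regex, and the table schema are assumptions, since the program's actual OCR output format is not specified:

```python
import re
import sqlite3

# Sketch of the store-to-database step: pull a recipient name out of OCR'd
# label text and record it (hypothetical label format; a real pipeline would
# get ocr_text from an OCR engine such as Tesseract).

def parse_recipient(ocr_text):
    """Extract the name following 'To:' on a parcel label."""
    m = re.search(r"To:\s*(.+)", ocr_text)
    return m.group(1).strip() if m else None

conn = sqlite3.connect(":memory:")   # in-memory database for the sketch
conn.execute("CREATE TABLE parcels (recipient TEXT)")

label = "To: Somchai J.\nRoom 204, Dorm B"
name = parse_recipient(label)
conn.execute("INSERT INTO parcels VALUES (?)", (name,))

count, = conn.execute("SELECT COUNT(*) FROM parcels").fetchone()
print(name, count)  # → Somchai J. 1
```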
Automatic Face Mask Detects and Collects Bin Robot by Object Recognition System
Chachchay Sang-arsa1, Nichpol Tuecomesopakul1
Advisor: Nuttawooth Maikaen1
1Princess Chulabhorn Science High School Phetchaburi
Abstract
The main goal of this project is to create a bin robot that uses object recognition technology to find face masks on the floor and pick them up, so that cleaners do not come into close contact with used masks. The robot recognizes face masks using artificial intelligence. Before building it, we divided the work into two parts. The first is the robot's structure and movement: we drafted the robot and then built it from scratch. Its frame is made of plastic, and its two 12-volt direct-current wheel motors are powered by a battery and a Raspberry Pi 4; a camera on top provides the robot's vision, and a brush sweeps up and stores the masks as they are collected. The second part is the face mask detection system, which is trained to identify face masks by categorizing photographs of face masks that serve as the base data for building a detection model. After combining the two parts, we discovered a slight issue: our battery could not supply enough voltage to power the robot, so we switched to a battery with a higher output voltage. Finally, using the object recognition system, we were able to build a fully autonomous face mask detecting and collecting bin robot.
Keywords: Robot, Object recognition, Artificial Intelligence
iSupskin, Artificial Intelligence Program for CNN-Image classify
Panrapee Pandum1, Natsinee Ruengsiri1
Advisor: Wichai Buaniaw1
1Princess Chulabhorn Science High School Satun
Abstract
The computer project titled iSupskin, an artificial intelligence program for analyzing basic skin diseases with CNN image classification, was prepared with four objectives: 1) to study the operation of an artificial intelligence system in the machine learning process using a CNN image classification algorithm developed with Python; 2) to develop iSupskin, an artificial intelligence program that analyzes basic skin diseases with CNN image classification; 3) to test the image-analysis efficiency of the iSupskin program; and 4) to study user satisfaction with the iSupskin program. The program was developed in the Python language in two parts, the model training part and the GUI/interface part, using the Visual Studio Code editor and a CNN image classification algorithm to train the model on the program's images and test the program's performance. In the efficacy test, iSupskin was able to detect primary skin conditions, including normal skin, eczema, tinea, inflammatory skin disease, and psoriasis. The analysis accuracy was 84%, the analysis error was 16%, and the user satisfaction rating was 4, which is very satisfactory.
Keywords: Artificial Intelligence, Python, CNN-Image classify
The Sweet Marmalade melon’s quality grading system
by using deep learning techniques
Montuch Klaytae1, Weranan Hemrungrot1
Advisors: Natpassorn Laonet1, Tanyaporn Kraiwongruean1, Rattapoom Waranusast1
1Princess Chulabhorn Science High School Phitsanulok
Abstract
Melon is a profitable crop and categorized as an expensive fruit. Sorting the grade of melon
necessitates precision and accuracy. This project aims to devise a more accurate method for
sorting and grading Sweet Marmalade melons than human classification and to develop an
Artificial Intelligence (AI) for classifying photos by deep learning to assess Sweet
Marmalade melon quality. In this study, a melon-grading model was constructed with the CNN algorithm; the model yielded 50.95% accuracy using 1277 images of grade A melons and 1244 images of grade B melons. The model was then
developed into a prototype system for determining the Sweet Marmalade melon quality. The
texture of each sweet marmalade melon was photographed and analyzed according to the
models. The results showed that the average classification success rate for grade A image
tests was 72.67%, whereas the average classification success rate for grade B image tests
was 60.00%. In the melon quality analysis web application test, the classification success rate for grade A image tests was 84.00 ± 9.17%, while grade B image tests had a classification success rate of 64.00 ± 6.93%. The image tests of grade B melons had a lower
classification success rate than those of Grade A because several images of Grade B Sweet
Marmalade melons resembled those of grade A. It can be concluded that deep learning
algorithms were capable of accurately identifying melon quality based on the Sweet
Marmalade melon rind textures.
Keywords: Artificial intelligence, Sweet marmalade melon, CNN algorithm
System for Classifying the Character Style of People by Artificial Intelligence
Santipab Tongchan1, Pichayut Sridara1
Advisors: Ekkachai Watanachai1, Pera Boncharat2
1Princess Chulabhorn Science High School Buriram
2Buriram Rajabhat University
Abstract
Online websites often carry announcements about missing people. Sometimes the matter is small and takes only a few hours to resolve; at other times it becomes a large-scale case, such as the disappearance of an elderly man with Alzheimer's disease in a famous department store, where no one saw or noticed him as he walked into the staff area, could not find a way out, and died. Because of the many problems related to searching for missing people and to basic security in such areas, the authors' idea was to address these problems by using artificial intelligence on the CiRA Core platform to detect passersby through their characteristics, using Kaggle's outerwear datasets, and to record the walking time and external appearance through Google Sheets, for business use and for the safety of people in different places.
Keywords: Artificial intelligence, Safety
Chok-Anan Mango's Sweetness Prediction System using Color-based and AI
Nuttacha Puttrong1, Thinnaphat Kanchina1, Parunchai Keawkhampa1
Advisors: Jirawat Varophas1, Manatchanok Tamwong1
1Princess Chulabhorn Science High School Chiang Rai
Abstract
This research aims to create a LINE bot for estimating the sweetness of Chok-Anan mango from its surface color. Chok-Anan mangoes (Mangifera indica L.) with different colors and sweetness levels were used to collect data, and four machine learning models, Support Vector Machine (SVM), Logistic Regression, K-nearest neighbors (KNN), and Decision Tree, were compared in learning the relationship between the mango images and the mango sweetness levels. Prediction accuracy was determined by comparing the actual mango sweetness, measured with a Brix refractometer, and the sweetness predicted by the AI model. The experiment found that mango sweetness was significantly correlated with surface color. The SVM model reached an average accuracy of 93.22% (tested with data it had never been trained on). In conclusion, the sweetness level of Chok-Anan mango can be predicted using a Support Vector Machine and can be applied in a LINE bot for easier use.
Keywords: Image processing, Machine learning, RGB, Chokanan mango, Sweetness
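The pipeline above pairs simple color features with off-the-shelf classifiers. As a minimal sketch of the idea, assuming mean RGB values of the rind as the feature (the abstract does not state the exact features), here is a K-nearest-neighbors classifier, one of the four models compared, in pure Python with made-up training samples:

```python
import math

# Hypothetical training data: mean (R, G, B) of a mango image -> sweetness class.
# Real features and labels would come from the collected images and Brix readings.
TRAIN = [
    ((60, 140, 40), "low"),      # mostly green rind
    ((120, 150, 50), "medium"),  # yellow-green rind
    ((210, 180, 60), "high"),    # yellow rind
]

def predict_sweetness(rgb, k=1):
    """Classify a mean-RGB feature by its k nearest training samples."""
    dists = sorted((math.dist(rgb, feat), label) for feat, label in TRAIN)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

print(predict_sweetness((200, 175, 65)))  # a yellow rind -> "high"
```

The project's best model was an SVM rather than KNN; KNN is shown here only because it can be written in a few self-contained lines.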
Earthquake Analysis for Future Earthquake Prediction in Japan
Srikokcharoen Phongwit1, Suzuki Shota1, Limsupaputtikun Kantapisit1
Advisor: Zhu Lin1
1National Institute of Technology, Sendai College, Hirose Campus
Abstract
In Japan, earthquakes occur frequently. We analyzed earthquake data and created prediction
models with various methods. We first visualized the data in three ways and arrived at
several conclusions. Plotting all earthquakes onto a map showed that most of them lie on
the Ring of Fire in the Pacific Ocean. A histogram of earthquakes per year suggested that
all large earthquakes must accumulate energy before occurring. Plotting the number of
earthquakes for each magnitude, following the Gutenberg-Richter law, gave the relationship
between the number of earthquakes (N) and magnitude (M) as N = 10^(-0.758737M + 7.1146147).
We then created an AI-based prediction model: a neural network with timestamp, latitude,
and longitude as inputs, and magnitude and depth as outputs. The model consists of two
layers of 16 neurons with alterable activation functions, and a third layer of 2 neurons with
a Softmax activation function. Testing different settings, we found that the ReLU activation
function with the SGD optimizer works best for this model. Lastly, we used an existing
smoothed-seismicity model, the Relative Intensity (RI) model. The RI model measures past
seismic activity in each area and uses it to estimate the potential of a new earthquake. We
first compute the RI score, which ranges from 0 to 1. Using the Receiver Operating
Characteristic (ROC) and Peirce's Skill Score (PSS), we then find the threshold for the RI
score that best fits the past data. Any area whose RI score exceeds the threshold is listed as
having earthquake potential. Using the methods above, we were able to find these
relationships and predict earthquakes in Japan with significant accuracy.
Keywords: Gutenberg-Richter Law, Neural Network, Relative Intensity, Receiver Operating Characteristic,
Peirce Skill Score
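The Gutenberg-Richter relationship N = 10^(-0.758737M + 7.1146147) is a straight line in log10(N) versus M, so its coefficients can be recovered with an ordinary least-squares fit. A minimal sketch in pure Python, using synthetic counts generated from the reported coefficients rather than the actual earthquake catalog:

```python
import math

# Reported fit: N = 10**(a - b*M) with b = 0.758737, a = 7.1146147.
B_TRUE, A_TRUE = 0.758737, 7.1146147

# Synthetic (magnitude, count) pairs generated from that relationship;
# a real analysis would count catalog earthquakes per magnitude bin.
mags = [4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0]
counts = [10 ** (A_TRUE - B_TRUE * m) for m in mags]

def fit_gutenberg_richter(mags, counts):
    """Least-squares fit of log10(N) = a - b*M, returning (a, b)."""
    ys = [math.log10(n) for n in counts]
    n = len(mags)
    mx, my = sum(mags) / n, sum(ys) / n
    slope = (sum((m - mx) * (y - my) for m, y in zip(mags, ys))
             / sum((m - mx) ** 2 for m in mags))
    return my - slope * mx, -slope  # intercept a, b-value

a, b = fit_gutenberg_richter(mags, counts)
print(f"a = {a:.4f}, b = {b:.4f}")  # recovers the reported coefficients
```

Because the synthetic points lie exactly on the line, the fit reproduces the coefficients; with real binned counts the residuals would be nonzero.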
Development of a Classroom Attendance System Using Face Recognition
Warisara Naebsamran1, Thanachote Wattanamanikul1
Advisors: Traimit Roopsai1, Kanokon Phasikai1
1Uttaraditdaruni School
Abstract
In recent years, Covid-19 has affected human ways of living in many aspects. People have
been told to keep social distance and to avoid touching high-contact surfaces. However,
young people still need to be educated in schools, which are challenging places for avoiding
infection. The researchers tried to find a way to facilitate living in the new normal by
developing a face-recognition classroom attendance system, which requires no surface
contact or gathering of people in small spaces. This study aimed to develop a system for
checking student attendance using face recognition. The process comprises three parts: 1)
Face recognition: for each student, we collect 15 pictures from the laptop's camera and
name the folder with the student's identification number; face detection with Haar Cascades
is then applied to detect faces and identify students. 2) Database: MySQL is used to generate
the database, which contains two tables. The first is the student table, assembled from the
window where students fill in their information. The second, called "time in," records the
attendance timestamps. 3) User interface: Tkinter is used to create the application interface,
which displays the attendance results, runs face recognition on classroom attendees, and
stores student data. The results showed that the system is usable and has the potential to be
used for school attendance checking while avoiding common touchpoints in the era of
pandemics.
Keywords: Class attendance, Face recognition, Haar Cascades
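The two-table design in part 2, a student table plus a "time in" table of attendance timestamps, can be sketched with Python's standard-library sqlite3 module. The project itself uses MySQL, so the table and column names below are illustrative assumptions, not the project's actual schema:

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")  # MySQL in the real system
conn.execute("CREATE TABLE student (id TEXT PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE time_in (student_id TEXT, stamp TEXT)")

def register_student(student_id, name):
    """Insert a row as the Tkinter registration window would."""
    conn.execute("INSERT INTO student VALUES (?, ?)", (student_id, name))

def record_attendance(student_id):
    """Store a timestamp once the face recognizer matches a student."""
    conn.execute("INSERT INTO time_in VALUES (?, ?)",
                 (student_id, datetime.now().isoformat()))

register_student("6501", "Warisara")
record_attendance("6501")
rows = conn.execute("SELECT student_id FROM time_in").fetchall()
print(rows)  # [('6501',)]
```

Keeping identification (the folder named by student ID) separate from the attendance log mirrors the abstract's split between recognition and the "time in" table.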
Wireless Motion Capture Suit Using IMU Sensors
Pakkawat Kaolim1, Wasin Jitmana1
Advisors: Aut Kongthong1, Tipanan Pothagan1, Prasit Nakhonrat2
1Princess Chulabhorn Science High School Mukdahan
2Faculty of Engineering, Ubon Ratchathani University
Abstract
The suit places an ESP8266 and an IMU sensor, consisting of an accelerometer and a
gyroscope module, over the body to track posture and capture three-axis data. The data is
transferred to a real-time database on Firebase through Wi-Fi, then retrieved from cloud
storage and applied to all joints of the simulator model in the Unity program. The Wireless
Motion Capture Suit Using IMU Sensors has three main objectives: firstly, to build an
affordable and workable motion capture suit; secondly, to adapt the suit for further benefits
in other sciences; and thirdly, to test the suit's performance against professional suits on the
market.
Keywords: Motion-capture suit, Wireless
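A common way to turn the accelerometer and gyroscope readings described above into a joint angle is a complementary filter. This is a standard IMU-fusion technique, sketched here under our own assumptions; the abstract does not say which fusion method the suit uses:

```python
import math

def complementary_filter(angle, gyro_rate, accel, dt, alpha=0.98):
    """Fuse a gyro rate (deg/s) with an accelerometer tilt estimate.

    angle: previous angle estimate in degrees
    accel: (ay, az) components of gravity from the accelerometer
    """
    # Tilt implied by gravity alone: noisy but drift-free.
    accel_angle = math.degrees(math.atan2(accel[0], accel[1]))
    # Integrated gyro is smooth but drifts; blend the two estimates.
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# With the sensor at rest and level, a wrong initial estimate of 10 degrees
# decays toward the accelerometer's 0-degree reading.
angle = 10.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.0, accel=(0.0, 1.0), dt=0.01)
print(round(angle, 3))  # small residual near 0
```

The blend factor alpha trades gyro smoothness against accelerometer correction; 0.98 is a typical illustrative value, not one taken from the project.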
Psychological Advice System from Analysis of Emotional Tendency
using Thai Text and NLP
Chanikan Katti1, Khanidthee Singkul1, Nantirat Toontaisong1
Advisors: Siriporn Thongu1, Manatchanok Tamwong1, Surapol Vorapatratorn2
1Princess Chulabhorn Science High School Chiang Rai
2School of Science, Mae Fah Luang University
Abstract
This research aims to create a web application that provides psychological advice based on
analyzing emotional tendencies in Thai text input with NLP techniques. In this study, we
used a sample of 20 students from Princess Chulabhorn Science High School Chiang Rai,
Academic Year 2021, and collected accounts of their daily lives as Thai text. The program
then runs the AI for Thai sentiment-analysis code that we adopted for NLP to classify the
text into five emotions following the basic emotions. Across 50 tests of overall system
accuracy, our web application reached 86 percent. The satisfaction result was at a good
level, with an average score of 4.40 out of 5. Finally, we demonstrated that the web
application provides advice based on analyzing emotional tendencies with NLP, is accurate
in analyzing user emotions, and helps users feel better and more relaxed after use.
Keywords: Natural Language Processing, NLP, Emotion analysis, Basic emotions
A Document Transportation Model between School Buildings
Pimmada Makheaw1, Suwichada Na Lamphun1
Advisor: Natthakamon Ungboriboonpisan1
1Chak Kham Khanathon School
Abstract
A document transportation model between school buildings was designed and constructed
based on the IPST-WiFi and Arduino UNO microcontrollers. The model included a
collecting function and measurement of the document transport speed between school
buildings, analyzed by comparing the number of documents against the transportation
speed. The security system was tested using a servo motor working together with an RFID
card reader, and the design could be widely applied to other transportation systems. The
operation was divided into two parts: the rail system and the document transfer box. The
rail system was controlled by a motor and the IPST-WiFi microcontroller. The box was
controlled by an Arduino UNO driving a servo motor that closed the box, while the RFID
card reader provided security. The data was sent to Google Sheets to record the time and
the school building at which the box arrived, and all functions were controlled
automatically. The results of the study showed that the document transportation model was
effective: it could send documents between the school buildings to the right location, the
box remained secure, and the data could be collected in Google Sheets. The model could
be applied to various transport systems.
Keywords: RFID, Pulley system, Servo Motor, IPST-WiFi, IFTTT
Automatic Garbage Separation Machine
Chise Ito1, Yumeka Murakami1
Advisor: Kyoji Komatsu1
1National Institute of Technology (KOSEN), Sendai College
Abstract
“A trash can changes the world.” We built a trash can that people want to put their garbage
into, as a first step toward changing the world. The purpose of this project is to separate
garbage: a trash can with a sorting function lowers the barrier to separation. To explain why
we added this function, first consider the transition of energy (entropy) during sorting.
Entropy is a measure of randomness (disorder), and in the natural environment entropy
always increases; energy is needed to restore an organized state. The same applies to the
issue of garbage: energy is needed to separate mixed garbage for recycling, and this is a
major obstacle to achieving a sustainable society. Therefore, we decided to develop a trash
can that separates trash in advance, before it gets mixed. In addition, we shaped it in a way
that makes people want to put their trash in, aiming at more active trash collection, namely
resource recovery. The trash can uses pressure sensors and light sensors to sort bottles,
cans, and plastic bottles, and it makes a sound if there are too many leftovers. This trash
can encourages people to throw away their garbage properly; the amount of separated
garbage increases, and that leads to a sustainable society.
Keywords: Separation, Environment, Sensing
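The sorting logic, pressure sensors to separate heavier glass from light plastic and light sensors to judge transparency, can be sketched as a simple decision rule. The thresholds and categories below are invented for illustration; the abstract does not give the actual values:

```python
def classify_item(weight_g, light_transmission):
    """Guess the container type from a pressure (weight) reading in grams
    and a light-sensor transmission reading in [0, 1].

    Thresholds are illustrative assumptions, not measured project values.
    """
    if light_transmission > 0.8:  # very transparent item
        return "plastic bottle" if weight_g < 50 else "glass bottle"
    return "can" if weight_g < 100 else "glass bottle"

print(classify_item(25, 0.9))   # light and transparent -> plastic bottle
print(classify_item(15, 0.1))   # light and opaque -> can
```

In the real device such rules would be tuned against measured sensor readings for each container type.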
Marker-less Motion Capture for the Human Body Using Azure Kinect
Kokoro Emi1, Nanami Okada1, Takafumi Yamada1
1National Institute of Technology, Tsuyama College
Abstract
Recently, high-accuracy depth cameras have been widely used in 3D face recognition or
automatic driving technology. Azure Kinect is the latest version of the ToF-based depth
camera developed by Microsoft, which was originally released as a motion controller
accessory for Xbox 360 game devices. Previously, large and expensive equipment was
required for motion capture or human recognition, but now small and low-cost Kinect can
be used for this purpose, and its adoption is being considered not only for gesture-based
machine control, but also for medical and nursing care applications. Depth cameras using
ToF (Time of Flight) method, which Kinect employs, take an image and simultaneously
measure the time required a round trip by infrared rays fired at the object to calculate the
distance for each pixel. Unlike conventional stereo photography, this method enables the
simultaneous capturing of an image and detailed pixel-by-pixel depth information with a
single camera. Similar ToF sensors are installed in Apple's iPhone 12 Pro series or iPad Pro
from 4th generation or later, and are used for face recognition and autofocus control. In this
research, high-precision depth data from Azure Kinect is used to measure the movement of
the human body with sub-millimeter accuracy. Based on this measurement data, the results
will be discussed and compared with conventional examination results and diagnostic
methods, and the significance of this data will be examined.
Keywords: Motion Capture, Marker-less, Azure Kinect
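The per-pixel ToF calculation described above reduces to halving the round-trip time of light. A minimal sketch, with a made-up timing value:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Distance to the object for one pixel: light covers the path twice,
    so the one-way distance is half of speed times round-trip time."""
    return C * round_trip_seconds / 2.0

# An object 1 m away returns the pulse after roughly 6.67 nanoseconds.
t = 2.0 / C
print(tof_distance(t))
```

The nanosecond scale of these round trips is why ToF sensors need fast modulated illumination rather than a simple stopwatch per pixel.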