
Computing and Information Technology (1)

Published by mintra.s, 2023-12-18 01:14:51


Thailand – Japan Student Science Fair 2023 “Seeding Innovations through Fostering Thailand – Japan Youth Friendship”


Thailand – Japan Student Science Fair 2023 “Seeding Innovations Through Fostering Thailand – Japan Youth Friendship” 20 – 23 December 2023


Thailand – Japan Student Science Fair 2023

Princess Chulabhorn Science High Schools are proud to organize the Thailand – Japan Student Science Fair 2023 (TJ-SSF 2023) on December 19–23, 2023 at Princess Chulabhorn Science High School Loei. This collaborative international event is held under the TJ-SSF 2023 motto, “Seeding Innovations Through Fostering Thailand – Japan Youth Friendship”, and will be graciously presided over by Her Royal Highness Princess Maha Chakri Sirindhorn, who has been the driving force in the promotion of education for all Thai youths, especially in the learning of science, mathematics, and technology. The goal of the Thailand – Japan Student Science Fair 2023 is to bring together talented students in science and mathematics from Thailand and Japan to share and exchange their research findings and to build closer and stronger collaboration between the two countries. Participating in TJ-SSF 2023 will be 36 schools from Thailand, 18 Super Science High Schools, and 12 KOSEN colleges from Japan. The Thailand – Japan Student Science Fair 2023 will strengthen the cordial bilateral relations between Japan and Thailand. The event will not only provide a platform for the exchange of scientific knowledge between these like-minded young scientists but will also act as a springboard for sustainable development and for the promotion of many more 21st-century skills, ranging from communication and collaboration to long-lasting friendships among youths. Thank you all for participating in this auspicious cooperation. We hope you will have a pleasant stay in Loei, Thailand.


Table of Contents

Congratulatory Message from Minister of Education of Thailand ... 2
Congratulatory Message from the Ambassador of Japan to Thailand ... 3
Congratulatory Message from Deputy Minister of Education of Thailand ... 4
Congratulatory Message from Secretary General of the Office of the Basic Education Commission ... 5
Keynote Speaker ... 6
Contents: Computing and Information Technology ... 11
Contributors for TJ-SSF 2023 ... 135
Contact Persons of TJ-SSF 2023 ... 136


Congratulatory Message from Minister of Education of Thailand

I would like to extend my heartfelt congratulations to everyone concerned for their efforts in organising the Thailand – Japan Student Science Fair 2023 (TJ-SSF 2023). Building on the successes of previous years, the 2023 event will not only foster creativity, innovation and science awareness but also bring our two countries even closer through future-oriented collaborative projects. Since their establishment in 1993, the Princess Chulabhorn Science High Schools, which nurture outstanding scientific achievement in Thailand, have been proactive in developing cooperation and building relationships with many Super Science High Schools and National Institutes of Technology (KOSEN) in Japan, with support from other academic agencies. The shared goal is to enhance the teaching and learning of science, mathematics and technology among gifted students, and to gear curricula towards addressing the challenges of the 21st century. Students and teachers participating in the Science Fair benefit from opportunities to showcase their talents and express their potential in science through project presentations. Since science and technology are the key driving forces for economic and social development, cooperation among new generations from our respective countries can only enhance the prosperity and well-being of our societies. On behalf of the Thai government, I would like to reaffirm our continuing commitment and support for inspirational events of this kind. I would like to offer my sincere thanks to everyone who has contributed to making this year’s Science Fair a success. My special thanks go to the Embassy of Japan, Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Science and Technology Agency (JST), participating Super Science High Schools (SSHs) and KOSEN Institutes, as well as the Japan International Cooperation Agency (JICA) and the Japan Foundation.
I am confident that the continued collaboration between our two countries will lead to educational sustainability for generations to come.

Police General Permpoon Chidchob
Minister of Education of Thailand


Congratulatory Message from the Ambassador of Japan to Thailand

On behalf of the Government of Japan, it is a great pleasure to see the Thailand – Japan Student Science Fair 2023 (TJ-SSF 2023) again, an event that confirms the close and strong bond between Thailand and Japan and marks another step forward in science and technology education. It is obvious that we have already made significant progress with the students and teachers gathering from the Super Science High Schools of Japan, the KOSEN Institutes of Japan and the Princess Chulabhorn Science High Schools of Thailand at this event, the Thailand – Japan Student Science Fair 2023. I would like to congratulate all students from both Japan and Thailand on the creative capacities shown in their science projects. I sincerely believe that these new-generation students will play a vital role in developing their countries in the future. I hope that TJ-SSF 2023, which is based on cooperative work and new ways of diffusing knowledge using science and technology, will further motivate both Japanese and Thai students to pursue golden opportunities in their higher education and contribute to the development of relevant fields. Finally, may I express my appreciation for the excellent arrangements and the hospitality of the Thailand – Japan Student Science Fair 2023 (TJ-SSF 2023) at Princess Chulabhorn Science High School Loei. Last but not least, I would like to express my great respect to all concerned with this great event.

H.E. Mr. NASHIDA Kazuya
Ambassador of Japan to Thailand


Congratulatory Message from Deputy Minister of Education of Thailand

It is my great pleasure to praise the Thailand – Japan Student Science Fair 2023 (TJ-SSF 2023), held between Japan and the Princess Chulabhorn Science High Schools. I would like to express my deepest admiration for the contribution to the collaboration between Thailand and Japan, which promotes the fostering of the next young generation of students through various activities. The cooperation programs between the Princess Chulabhorn Science High Schools, the Super Science High Schools and KOSEN in Japan show how much of an achievement they are and how much they empower students of both countries. Science is one of the most important channels of knowledge. It has a specific role, as well as a variety of functions, for the benefit of our society: creating new knowledge, improving education, and increasing the quality of our lives. The Thailand – Japan Student Science Fair 2023 (TJ-SSF 2023) ensures collaborative teaching and learning with an emphasis on students’ science projects. The event has helped strengthen students’ scientific and technological competences. The activities included in the event encourage students to be creative, analytical and critical thinkers who will ultimately contribute significantly to both countries. The students were fortunate to participate in various activities supported by MEXT, JST, SSHs, EOJ, KOSEN Institutes, JICA, the Japan Foundation and other Japanese agencies. I wholeheartedly wish that the Thailand – Japan Student Science Fair will continue to flourish for many more years to come and make our two nations ever more advanced in the future.

Mr. Surasak Phancharoenworakul
Deputy Minister of Education of Thailand


Congratulatory Message from the Secretary-General of the Office of the Basic Education Commission

On behalf of the Office of the Basic Education Commission, it is an honor to extend my sincere congratulations on the Thailand – Japan Student Science Fair 2023 (TJ-SSF 2023). I am pleased to see the cooperation mature into something above and beyond our expectations. Many of the students and teachers of both our countries comment on the unique experiences and scientific knowledge they gained. It was not easy for all Princess Chulabhorn Science High Schools to maintain and sustain their vision. However, with the perseverance of the students, who represent the younger generation, I am confident they will improve their creative, analytical, and critical thinking skills for the 21st century, which includes achieving a sustainable future in line with the Sustainable Development Goals. It is that potential future and realization of the SDGs that makes me truly appreciate the cooperation shown over the three editions of the Thailand – Japan Student Science Fair, a collaboration between the Princess Chulabhorn Science High Schools, the Super Science High Schools of Japan and the National Institute of Technology (KOSEN). Students gained valuable experiences related to their science projects. The event also provides opportunities for students and teachers to demonstrate their potential in science, as students have the chance to present their science projects. I believe that this memorable science fair will greatly inspire our students’ passion for science, provide valuable learning opportunities for our students, promote cross-cultural exchange, enhance language skills, and cultivate lifelong friendships between our students.
In addition, “The 2nd Thailand – Japan Educational Leaders Symposium: Science Education for Sustainability (TJ-ELS 2023)” is a stage for teachers to exchange their knowledge and experiences and share the best practices that benefit students. All of these activities have not only advanced the goal of the collaboration program but also created deep relationships and networks among students, teachers, schools and our two nations. All of this support and these opportunities are created by academic agencies in Japan, providing a great chance for our students to explore the scientific world and learn about Japanese culture. Once again, I would like to take the opportunity to wish the academic cooperation continued success and many more brilliant milestones to come.

Acting Sub Lt. Thanu Vongjinda
Secretary-General of the Vocational Education Commission,
Acting Secretary-General of the Basic Education Commission


Keynote Speaker

Dr. Tomoyuki Naito
Vice President of Kobe Institute of Computing (KIC)

Tomoyuki (Tomo) Naito is Vice President and Professor at the Graduate School of Information Technology, Kobe Institute of Computing, Japan. In his professional career of over 25 years, he has worked with clients on digital-economy acceleration policy and strategy formulation as well as its implementation for effective development, in particular leapfrogging practices in ICT use in developing countries. His professional interests include the digital economy, distance learning, ICT innovation ecosystems, the Internet of Things, FabLabs, mobile big-data solutions and other related areas. Prior to assuming his current position as a graduate school professor, he was Senior Advisor in charge of ICT and Innovation for the development field at the Japan International Cooperation Agency (JICA). Previously, he was Program Manager at the World Bank in charge of the Tokyo Development Learning Center, Director of Planning, and Director of Transportation and ICT at JICA headquarters. He has served as a designated member of several public advisory committees, including the Global Steering Committee of the “Internet for All” project at the World Economic Forum (2016–2019), the Regional Governing Committee of the Global Development Learning Network Asia-Pacific (2011–2021), and the Global Strategy Working Group under the Minister for Internal Affairs and Communications of the Government of Japan (2018–2019). His recent paper “Redefining the Smart City for Sustainable Development” appears in the Brookings Institution’s book “Breakthrough” (2021). The paper “Role of ICT in education redefined by COVID-19” appears in the book “SDGs and International Contribution under the Pandemic Era” (2021, in Japanese), and another paper, “Indispensable ICT for achieving SDGs”, appears in the book “International Contribution and Realization of SDGs” (2019, in Japanese). He has also been a registered first-class architect in Japan since 1997. He holds a Master of Arts in International Relations from the Graduate School of Asia-Pacific Studies, Waseda University, Japan.


Keynote Speaker

Prof. Dr. Nonglak Meethong
Khon Kaen University
Contact information: E-mail: [email protected], Phone: 043-009700 Ext. 50660

Education:
- B.Sc. (Honors) in Ceramic Engineering, Alfred University, New York, USA
- Ph.D. in Structural and Environmental Materials, Massachusetts Institute of Technology, Massachusetts, USA

Work Experiences:
- Director, Battery and New Energy Science and Technology Factory, Khon Kaen University, Thailand (2022–present)
- Chair of the Battery and New Energy Science Program, Department of Physics, Faculty of Science, Khon Kaen University, Thailand (2022–present)
- Vice President & Committee Member of the Thailand Energy Storage Technology Association (TESTA), https://www.testa.or.th/ (2021–present)
- Professor, Department of Physics, Faculty of Science, Khon Kaen University, Thailand (since March 2023)
- Associate Professor, Department of Physics, Faculty of Science, Khon Kaen University, Thailand (2020–2023)
- Assistant Professor, Department of Physics, Faculty of Science, Khon Kaen University, Thailand (2013–2019)
- Chair of the Materials Science and Nanotechnology Program, Department of Physics, Faculty of Science, Khon Kaen University, Thailand (2014–2017)
- Member of several national-level committees for the Thai government and of advisory boards of several leading energy-related companies in Thailand

Selected Honors and Awards:
- National Innovation Award 2023 in the field of Society and Environment (2023)
- WIIPA Grand Prize for Commercial Potential Award, Kaohsiung International Invention & Design Expo (2022)
- Special Recognition Award, National Research Council of Thailand (2020)
- Thailand New Gen Innovators Award (2020)
- Thailand Young Scientist Award from the Foundation for the Promotion of Science and Technology under the Patronage of His Majesty the King (2015)
- Excellent Ph.D. Thesis Award from the National Research Council of Thailand (2014)
- Winner, Recipient of the H.R.H. Princess Maha Chakri Sirindhorn Cup from the 4th National Competition on Innovative Nanotechnology (2012)
- Special Recognition Award, Thai Rice Foundation under the Patronage of His Majesty the King and the National Innovation Association of Thailand (2014)

Research Expertise:
- Synthesis of energy storage materials
- Characterization of nanomaterials by advanced techniques
- Pilot-scale production of materials for batteries


Keynote Speaker

Dr. Hattapark Dejakaisaya
Princess Srisavangavadhana College of Medicine
Contact information: E-mail: [email protected], Phone: 061-115-5991

Work Experiences
February 2022 – Present: Princess Srisavangavadhana College of Medicine
Position: Neuroscientist, Lecturer
May – August 2017: Chulabhorn Research Institute
Position: Short-term Researcher
Project title: Sesame and cancer
Supervisors: A/Prof. Jutamaad Satayavivad & Dr. Nuchanart Rangkadilok
July – August 2014: Chulabhorn Research Institute
Position: Trainee Researcher in the Laboratory of Pharmacology

Qualifications
2018 – 2021: PhD in Neuroscience, Monash University, Melbourne, Australia
Supervisors: Prof. Patrick Kwan & A/Prof. Nigel Jones
Thesis title: The role of glutamate in the pathogenesis of epilepsy in Alzheimer’s disease
Main techniques:
- Advanced animal handling and euthanisation skills (mouse)
- Mouse brain surgery, electrode implantation, kindling-induced seizure & EEG recording
- Experience in metabolomic analysis using liquid chromatography–mass spectrometry
- Trained in various molecular biology techniques (western blot, immunohistochemistry)
2015 – 2016: Master's by Research in Biological Sciences, University of Leeds, Leeds, United Kingdom
Thesis title: Molecular Pharmacology of the Slo2.2 Potassium Channel
Supervisors: Dr. Jonathan Lippiat & Dr. Stephen Muench
*Graduated with Distinction / First Class Honours*
Main techniques:
- Trained in two-electrode voltage clamp electrophysiology techniques
- Plasmid construction and point mutation insertion
- Polymerase chain reaction
2011 – 2015: Bachelor of Science in Pharmacology, University of Leeds, Leeds, UK
*Graduated with 2:1 / Second Class Honours*
2007 – 2011: Harrow International School, Bangkok


Achievements
2021
- Data-Blitz oral presentation at ‘Epilepsy Melbourne Symposium 2021’, Melbourne
2020
- Poster presented at ‘Epilepsy Society of Australia Annual Scientific Meeting 2020’, Hobart
- Poster presented at ‘Epilepsy Melbourne Symposium 2020’, Melbourne
2019
- Won ‘People’s Choice Award’ at the Monash University Central Clinical School ‘3 Minute Thesis’ competition, Melbourne
- Poster presented at ‘Epilepsy Society of Australia Annual Scientific Meeting 2019’, Sydney
- Oral presentation at ‘Student of Brain Research Symposium 2019’, Melbourne
- Poster presentation at ‘CNS Diseases 2019’, Melbourne
- Competed in the ‘2019 Translational Research Symposium Poster Competition’, Melbourne
2018
- Recipient of ‘The Scholarship in Commemoration of HM King Bhumibol Adulyadej’s 90th Birthday Anniversary’ from Chulabhorn Royal Academy, Bangkok, Thailand

Positions of Responsibility
2022 – Present:
1. Course coordinator of “Mechanism of Drug Action” for Year 3 Medicine
2. Course coordinator of “Neuroscience & Behaviour” for Year 3 Medicine
3. Student Affairs committee member


Contents: Computing and Information Technology

CO-01T Web application for displaying data and controlling IoT systems ... 11
CO-02T Serverless URL Shortener Using Firebase Dynamic Link ... 14
CO-03T Development of Thai Handwritten Character Dataset with MNIST-inspired Style using Data Augmentation and GANs ... 17
CO-04T VannameiVision: An Optimized Probabilistic Deep Learning for Susceptible Shrimp Larvae Detection ... 21
CO-05T Web application for Thai to Thai sign language translation ... 26
CO-06T The development of an AI-assisted web application for detecting Mycobacterium tuberculosis (M. tuberculosis) from sputum with the Acid-Fast Bacillus (AFB) method ... 30
CO-07T An application to read the drug label for the elderly ... 34
CO-08T Machine Learning Assisted Biomarker Discovery for Lung Cancer Diagnosis Based on Multi-omics Data ... 37
CO-09T The Environmental Controller of Photosynthesis Bacteria (PSB) Insemination ... 41
CO-10T Obstacle Detection-Guided Robot Prototype for the Blind with GPS Positioning ... 43
CO-11T Study and development system of the robot arm ... 47
CO-12T Wan Wann Land: Amazing Thai Dessert ... 50
CO-13T FAPPTHY: Facial Muscles Exercises Using Computer Vision ... 55
CO-14T Combining Scientific Theories in MMORPG Open World Game Using Core Engine (Dragon’s Ends) ... 58
CO-15T Algorithm to Detect Minutiae on Latent Fingerprint and Apply in Forensic Science ... 61
CO-16T A model of the automatic lettuce planting system for a block farm ... 64
CO-17T Artificial Intelligence for Drum Note Translations ... 68
CO-18T Development of Titration Equivalence Point Warning Software by Measuring the Colors of the Acid–Base Indicator Solutions ... 71
CO-19T Home automation cooling system using the KidBright board ... 74
CO-20T The trigonometry distance calculator machine ... 76
CO-21J Bocchi Talk: A New Conversation Tool with ChatGPT ... 80
CO-22J TIGER BEETLE: Reverse Driving Prevention System with DSRC ... 84
CO-23J Development of Mobile Application for Learning Thai and Japanese for Learners ... 88
CO-24J Eliminated Dementia Camera for Assisting Dementia Patients ... 92
CO-25J Measurement Support System for One Leg with Eyes Open ... 96
CO-26J Handwritten Character Recognition Model for Foreigners' Handwritten Japanese ... 99
CO-27J TREASURE on FIND: Development of an application to support freely facility tours ... 104
CO-28J Oshi no Map: Web Application to Find New Friends with the Same Interests ... 108
CO-29J Infer and control emotions from information expressed by different sensory functions ... 111
CO-30J Visually impaired people assistance by a Smart White Cane ... 115
CO-31J Development of Swizzle-type Traveling Mechanism for Legged Robot with Freewheel ... 118
CO-32J Manufacturing of an inverted pendulum-type opposite dual-wheel robot ... 122
CO-33J Development and Supporting of an Educational Accessibility Switch Controller ... 126
CO-34J Functional sleeping bag with temperature control (DENEBU) ... 131
CO-35J Improve concentration with smell ... 133


CO-01T Web application for displaying data and controlling IoT systems

Kazumasa Inoue1 and Wayupuk Sommuang1
Advisors: Jirayus Arundechachai2 and Soontree Montrisri2
Special Advisors: Parkpoom Chaisiriprasert3 and Papangkorn Pidchayathanakorn3
1,2 Princess Chulabhorn Science High School Loei, That, Chiangkhan, Loei, 42110, Thailand
3 Rangsit University, Lak-Hok, Muang, Pathumthani, 12000, Thailand

Abstract
The web application for displaying data and controlling IoT systems aims to: 1) create an efficient web application for displaying data and controlling an IoT system; 2) study C, PHP and MySQL; and 3) study the operation of the ESP8266. The operation of this web application begins with selecting the card of the device to be controlled on the web application interface. The system updates the data in the database, and the controller retrieves the data from the database and processes it. The processed data is then sent back to the web application interface, allowing the web application to retrieve the output data from the sensor connected to the controller. The objective in developing the web application for data display and IoT system control is to create an efficient web application for displaying data and controlling an IoT system. The project was evaluated by 100 evaluators to determine the average satisfaction score. The evaluation criteria are: 1) ease of use; 2) proper arrangement of the various data on the screen; 3) modern design; 4) ability to control devices; and 5) proper data management. The average evaluation score from all evaluators is 4.77, which falls within the highly satisfactory range according to the evaluation criteria.

Keywords: IoT, Web Application, ESP8266, NodeMCU, Database

Introduction
Currently, technology plays a significant role in our daily lives through devices such as computers and smartphones.
Most of the technology we use is connected to the internet, which enhances communication and efficiency. For example, we can control lights through our mobile phones or view CCTV cameras remotely. Recognizing the importance and benefits of utilizing technology to enhance convenience for users, the creators have developed a web application for displaying data and controlling an IoT system. This application consists of a database, a web application server, and clients. A client operates in two modes: 1) receiving data from sensors, and 2) displaying data received from the server. The system is built using Arduino code for communication between the server and the client; PHP is used for the web application interface and for converting data from the database into JSON format; phpMyAdmin manages the database; and MySQL handles the database operations. To use the system, users access the web application and choose between two modes: 1) monitor mode, to view data from sensors, and 2) control mode, for managing the devices in use.

Materials and Methods
Materials
1. ESP8266
2. Jumper wires
3. Micro-USB cable
4. Computer or laptop with the following programs:


4.1 XAMPP
4.2 Visual Studio Code
4.3 Arduino IDE

Methods
Part 1: Database management
1. Open XAMPP and click Start on the Apache and MySQL modules.
2. Create a database named “dbstatus” with two tables, “box_func” and “command”.
3. In box_func, create the columns “ID”, “label”, “board”, “type”, “pin” and “IO”.
4. In command, create the columns “id”, “board”, “d0”, “d1”, “d2”, “d3”, “d4” and “A0”.

Part 2: Client module
1. Navigate to C:\xampp\htdocs. Create a folder named “NodeMCU_Get_Database”, then create a folder “data” inside it.
2. In the “data” folder, create a new file named “database.php”.
3. Right-click the file “database.php” and open it with Visual Studio Code.
4. Write the program in “database.php”.
5. Create the file “GetTest.php” and write its program.
6. Open the Arduino IDE and name a sketch “Client1.ino”.
7. Create a file named “status1.php” and write its program.

Part 3: Create the web application
1. Navigate to the folder “NodeMCU_Get_Database” and create a file named “index.php”.
2. Create a file named “insertcode.php”.

Operation
Figure 1. Flowchart for the controller
Figure 2. Flowchart for the web application

Results and Discussion
Results
1. Results of the performance evaluation of the web application: the web application for data display and control of an IoT system was evaluated by 100 evaluators to find the average satisfaction score.
2. Results of evaluating the efficiency of the web application: the efficiency of the web application for displaying data and controlling IoT systems was evaluated on computers and phones by examining the responsiveness of the devices.

Table 1: Evaluation scores from the 100 evaluators for each criterion
Table 2: Evaluation of the efficiency of the web application
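The database layout from Part 1 and the control flow described in the abstract can be sketched as follows. This is a minimal illustration rather than the project's actual code: it uses Python's sqlite3 in place of MySQL/phpMyAdmin, and the toggle/poll helpers, the board name 'node1', and the column types are assumptions; only the table and column names come from the report.

```python
import sqlite3

# Sketch of the "dbstatus" schema from Part 1, using SQLite so the idea
# can be tried without a XAMPP install. Column names follow the report.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE box_func (          -- one row per device card on the web UI
    ID    INTEGER PRIMARY KEY,
    label TEXT,                  -- name shown on the card
    board TEXT,                  -- which NodeMCU the device is wired to
    type  TEXT,                  -- e.g. 'switch' or 'sensor' (assumed)
    pin   TEXT,                  -- e.g. 'd1'
    IO    INTEGER                -- desired output state (0/1)
);
CREATE TABLE command (           -- one row per board: current pin states
    id    INTEGER PRIMARY KEY,
    board TEXT,
    d0 INTEGER, d1 INTEGER, d2 INTEGER, d3 INTEGER, d4 INTEGER,
    A0 INTEGER                   -- analog value reported by the sensor
);
""")

def toggle(device_id: int, state: int) -> None:
    """What the web app does when a card is clicked: store the new state."""
    conn.execute("UPDATE box_func SET IO = ? WHERE ID = ?", (state, device_id))
    # Mirror the state into the per-board command row that the client polls.
    board, pin = conn.execute(
        "SELECT board, pin FROM box_func WHERE ID = ?", (device_id,)).fetchone()
    conn.execute(f"UPDATE command SET {pin} = ? WHERE board = ?", (state, board))

def poll(board: str) -> dict:
    """What the ESP8266 client does: fetch its pin states from the database."""
    row = conn.execute(
        "SELECT d0, d1, d2, d3, d4, A0 FROM command WHERE board = ?",
        (board,)).fetchone()
    return dict(zip(("d0", "d1", "d2", "d3", "d4", "A0"), row))

# Seed one lamp on board 'node1', pin d1, initially off.
conn.execute("INSERT INTO box_func VALUES (1, 'lamp', 'node1', 'switch', 'd1', 0)")
conn.execute("INSERT INTO command VALUES (1, 'node1', 0, 0, 0, 0, 0, 0)")

toggle(1, 1)                      # a user clicks the lamp card on the web app
print(poll("node1")["d1"])        # prints 1: the client now drives pin d1 high
```

In the real system the same round trip runs over HTTP: index.php writes the state, and Client1.ino on the ESP8266 requests status1.php, which returns the command row as JSON.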


Table 2 panels: (A) smartphone; (B) computer

Conclusions
Data was collected from users of the web application and their satisfaction was evaluated. The average result was 4.77, which is considered very satisfactory.

Acknowledgments
This project has been successfully completed, and I would like to express my gratitude to Mr. Gittichai Gruaythong, Director of Princess Chulabhorn Science High School Loei, for his support and encouragement throughout the project. I would also like to thank Mr. Jirayus Arundechachai and Ms. Soontree Montrisri, the project advisors, for their valuable knowledge, assistance, guidance, and insightful feedback, which greatly contributed to the development of the project. It is my sincere hope that this project report will be beneficial for those who wish to further study and explore this topic.

References
[1] Zhihong Yang, Yingzhao Yue, Yu Yang, Yufeng Peng, Xiaobo Wang and Wenji Liu, "Study and application on the architecture and key technologies for IOT," 2011 International Conference on Multimedia Technology, Hangzhou, 2011, pp. 747-751, doi: 10.1109/ICMT.2011.6002149.
[2] Tao Liu and Dongxin Lu, "The application and development of IOT," 2012 International Symposium on Information Technologies in Medicine and Education, Hakodate, Hokkaido, 2012, pp. 991-994, doi: 10.1109/ITiME.2012.6291468.
[3] Qian Zhi-hong and Wang Yi-jun, "IoT Technology and Application," Acta Electronica Sinica, 2012, 40(5): 1023-1029, doi: 10.3969/j.issn.0372-2112.2012.05.026.
[4] J. Mesquita, D. Guimarães, C. Pereira, F. Santos and L. Almeida, "Assessing the ESP8266 WiFi module for the Internet of Things," 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), Turin, Italy, 2018, pp. 784-791, doi: 10.1109/ETFA.2018.8502562.


CO-02T Serverless URL Shortener Using Firebase Dynamic Link

Patsagorn Yuenyong1* and Phubest Srikoon1, Ekkachai Watthanachai2
1,2 Princess Chulabhorn Science High School Buriram, Satuek, Satuek, Buriram, Thailand 31150
*Email: [email protected]

Abstract
This project aims to develop a cost-effective and sustainable URL shortening web application, leveraging existing services and technologies. By selecting Firebase Dynamic Link as the URL shortener service and utilizing the SvelteKit meta-framework for frontend development, the application achieves a user-friendly interface without the need for a dedicated backend. The use of short URLs enhances user perception of the brand and adds credibility to the links, as they are created after logging in. Additionally, the application caters not only to content creators but also offers a flexible foundation for users with specific requirements. Deployment on the Vercel serverless platform ensures low-latency access worldwide. The successful implementation of this project underscores its potential as an accessible and valuable solution for a diverse user base.

Keywords: URL Shortener, SvelteKit, Firebase Dynamic Link, Serverless computing

Introduction
Online communication has become an integral part of people's lives, encompassing various mediums such as chat, social media, and calls. When it comes to transmitting information, URLs serve as identifiers and addresses for online documents and pages. In formal contexts, such as when official organizations post on their Facebook accounts, using a URL link may result in excessively long text [1]. URL shortening technology addresses this issue by creating shorter links that redirect to longer, pre-existing ones. These shortened links have proven to be more appealing to readers, as they offer click-through analytics, allow for customized text that users can easily input on their devices, and prove advantageous on platforms with character limitations, like Twitter [1], [2].
However, despite their benefits, shortened links also present security weaknesses for visitors. Users may not be fully aware of the actual destination of the link, increasing the risk of phishing attempts. Additionally, certain platforms, such as Wikipedia, prohibit the use of short URLs to prevent abuse [2]–[5], or such URLs may cease to function [4]. This project focuses on the development of a web application that facilitates URL shortening, offers statistical insights, and requires user login to mitigate abuse and enable logging. The authors are inclined to donate this work to the school and release it as open- source software. Ensuring a reasonable cost and reliable service are crucial considerations for its implementation. Materials and Methods Our methodology is as follow: Phase 1: Selection of URL shortener service As a result of budgetary restrictions and time constraints, our strategy entails utilizing an established URL shortener service that is readily Table 1: Comparison of each URL Shortener service


Thailand – Japan Student Science Fair 2023 “Seeding Innovations through Fostering Thailand – Japan Youth Friendship” 15 accessible in the market. To enable seamless communication with the service, we will create a frontend wrapper that interfaces with its API. Our selection of the service will prioritize minimal charges, ensuring cost-effectiveness in the implementation of the project. Phase 2: Development of user-facing app We will proceed with the development of a web application using the SvekteKit meta framework. This application will be designed to interact with the URL shortener service selected in Phase 1. Phase 3: Deployment To deploy the web application, we have opted for Vercel, which offers seamless zeroconfiguration support and aligns with the authors' familiarity with the platform. Additionally, the application will be made publicly available as an open-source project on GitHub. Specification In order to meet the desired specifications for our application, the following requirements must be fulfilled: 1. Cost- Effective URL Shortening Service: The selected URL shortening service should offer predictable, minimal, or no charges to ensure long-term sustainability. 2. Reliability and Uptime: The service must exhibit high reliability and maintain good uptime, allowing users to access it without interruptions at any time. 3. Customized Shortened Links: The shortened links generated by the service should be domain-customized, providing a sense of trust to end users. 4. User Authentication: To prevent abuse and ensure security, user authentication is required for accessing the URL shortening features [14]. 5. Blacklist/Whitelist Functionality: The application must incorporate the ability to blacklist or whitelist specific links submitted by users, affording better control over link accessibility. 6. Link Click Statistics: Users should have access to the number of clicks their shortened links receive, allowing them to monitor engagement. 7. 
QR Code Generation: The web application should be capable of creating QR codes within the app, enhancing user convenience.

Results and Discussion

Throughout each phase of the project, we achieved the following results, which are discussed in detail below.

Phase 1: Selection of URL shortener service
Based on the requirements in the project's specification, we conducted a comparative analysis of various URL shortening services. Table 1 presents the evaluation of each service, considering factors such as custom domain support, charging conditions, link visit count tracking, and blacklist/whitelist configuration. After careful consideration, we chose Firebase Dynamic Links as the most suitable service for our needs.

Phase 2: Development of the user-facing app
To optimize resource allocation and minimize costs, the authors opted to create a web application without a dedicated backend. We proceeded with frontend development using the SvelteKit meta framework, chosen for the team's familiarity and experience with it.

Phase 3: Deployment
For the deployment of the web application, Vercel was selected as the hosting platform. Its free tier and serverless architecture powered by AWS Lambda [15]–[17] offer low-latency access from anywhere on Earth. Additionally, we published the source code on GitHub under the MIT license, promoting openness and collaboration.

Conclusions

In conclusion, this project successfully integrated existing services and technologies with innovative solutions to create a cost-effective and sustainable URL shortening application. By leveraging Firebase Dynamic Links and the SvelteKit meta framework, we achieved a user-friendly and budget-friendly web application without the need for a dedicated backend. The project's benefits extend beyond content creators, as it also serves as a valuable base for those seeking to fulfill their specific requirements easily.
This versatility enhances the application's potential impact and usefulness.
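To make the Phase 1 frontend-wrapper idea concrete, here is a hedged sketch of building a shorten request. The endpoint and payload shape follow Firebase's public Dynamic Links REST API; the project's actual wrapper is written in SvelteKit, and the function name and values here are illustrative only.

```python
import json

FDL_ENDPOINT = "https://firebasedynamiclinks.googleapis.com/v1/shortLinks"

def build_shorten_request(long_url: str, domain_prefix: str, api_key: str):
    """Build the endpoint URL and JSON body for a Dynamic Links shorten call.

    domain_prefix is the customized domain (spec item 3); the "SHORT" suffix
    option requests a compact, unguessable path component.
    """
    endpoint = f"{FDL_ENDPOINT}?key={api_key}"
    body = {
        "dynamicLinkInfo": {
            "domainUriPrefix": domain_prefix,  # e.g. "https://example.page.link"
            "link": long_url,                  # the destination to shorten
        },
        "suffix": {"option": "SHORT"},
    }
    return endpoint, json.dumps(body)
```

The wrapper would POST this body to the endpoint and read the short link from the JSON response; error handling and authentication of the logged-in user are omitted here.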


Acknowledgments

We extend our heartfelt gratitude to Princess Chulabhorn Science High School Buriram for their invaluable support, providing the necessary equipment and overall assistance throughout the duration of this project.

References

[1] D. Antoniades et al., "we.b: the web of short urls," in Proceedings of the 20th International Conference on World Wide Web, WWW '11. New York, NY, USA: Association for Computing Machinery, 2011, pp. 715–724. doi: 10.1145/1963405.1963505.
[2] GeeksforGeeks, "Pros and Cons of URL Shorteners," GeeksforGeeks. Accessed: Jul. 27, 2023. [Online]. Available: https://www.geeksforgeeks.org/pros-and-cons-of-url-shorteners/
[3] J. Henry, "What Are the Benefits of a URL Shortener?," Go WordPress. Accessed: Jul. 27, 2023. [Online]. Available: https://wordpress.com/go/digital-marketing/what-are-the-benefits-of-a-url-shortener/
[4] A. Neumann, J. Barnickel, and U. Meyer, "Security and Privacy Implications of URL Shortening Services," 2010. Accessed: Jul. 28, 2023. [Online]. Available: https://www.semanticscholar.org/paper/Security-and-Privacy-Implications-of-URL-Shortening-Neumann-Barnickel/7320f130967ada26a7c54505d553a1c7cc8b4b2d
[5] N. Nikiforakis et al., "Stranger danger: exploring the ecosystem of ad-based URL shortening services," in Proceedings of the 23rd International Conference on World Wide Web, WWW '14. New York, NY, USA: Association for Computing Machinery, 2014, pp. 51–62. doi: 10.1145/2566486.2567983.
[6] C. Ashlock, "Firebase is Shutting Down Dynamic Links. Branch Has You Covered.," Branch. Accessed: Jul. 27, 2023. [Online]. Available: https://www.branch.io/resources/blog/firebase-dynamic-links-shutting-down/
[7] "Dynamic Links Deprecation FAQ," Firebase. Accessed: Jul. 27, 2023. [Online]. Available: https://firebase.google.com/support/dynamic-links-faq
[8] Firebase, "Firebase Dynamic Links | Firebase Documentation." Accessed: Jul. 27, 2023. [Online]. Available: https://firebase.google.com/docs/dynamic-links
[9] D. Stevenson, "Answer to 'Do Firebase Dynamic Links have a usage quota?,'" Stack Overflow. Accessed: Jul. 27, 2023. [Online]. Available: https://stackoverflow.com/a/55272319
[10] "Explore Deep Linking & Mobile Attribution Pricing," Branch. Accessed: Jul. 27, 2023. [Online]. Available: https://www.branch.io/pricing/
[11] S. Khloyan, "Understanding the CPM Pricing Model: Concept & Benefits." Accessed: Jul. 27, 2023. [Online]. Available: https://www.aarki.com/insights/understanding-the-cpm-pricing-model
[12] "Get Started," Kochava. Accessed: Jul. 27, 2023. [Online]. Available: https://www.kochava.com/get-started/
[13] "Choose your plan! | Adjust Pricing, Products & Features | Adjust." Accessed: Jul. 27, 2023. [Online]. Available: https://www.adjust.com/pricing/
[14] "URL Shortening System Design," System Design. Accessed: Jul. 27, 2023. [Online]. Available: https://systemdesign.one/url-shortening-system-design/
[15] I. Baldini et al., "Serverless Computing: Current Trends and Open Problems," in Research Advances in Cloud Computing, S. Chaudhary, G. Somani, and R. Buyya, Eds., Singapore: Springer, 2017, pp. 1–20. doi: 10.1007/978-981-10-5026-8_1.
[16] L. Hallie, "Streaming from serverless Node.js and Edge Runtime on Vercel," Vercel. Accessed: Jul. 27, 2023. [Online]. Available: https://vercel.com/blog/streaming-for-serverless-node-js-and-edge-runtimes-with-vercel-functions
[17] G. McGrath and P. R. Brenner, "Serverless Computing: Design, Implementation, and Performance," in 2017 IEEE 37th International Conference on Distributed Computing Systems Workshops (ICDCSW), Jun. 2017, pp. 405–410. doi: 10.1109/ICDCSW.2017.36.


Thailand – Japan Student Science Fair 2023 "Seeding Innovations through Fostering Thailand – Japan Youth Friendship"

Development of Thai Handwritten Character Dataset with MNIST-inspired Style using Data Augmentation and GANs

Supawit Marayat1
Advisor: Satit Thamkhanta1
Special Advisor: Theerasit Issaranon2
1Princess Chulabhorn Science High School Chiang Rai, 345 Rop Wiang Sub-district, Mueang Chiang Rai District, Chiang Rai 57000, Thailand
2National Electronics and Computer Technology Center, 111 Thailand Science Park, Phahonyothin Road, Khlong Nueng, Khlong Luang, Pathum Thani 12120, Thailand

Abstract

In this project, we aim to create a dataset of 54 Thai handwritten characters, including the Thai letters Kor Kai to Ho Nokhuk and the Thai numbers 0 to 9. Our method proceeds in three steps: 1) collect handwriting from volunteers using paper forms and an extraction program; 2) apply basic data augmentation techniques to generate more data; and 3) investigate various Generative Adversarial Network (GAN) algorithms to synthesize additional data, including Conditional GAN (CGAN), Auxiliary Classifier GAN (ACGAN), Semi-Supervised GAN (SGAN), and Information Maximizing GAN (InfoGAN). The best algorithm, according to our evaluation using Inception Score (IS) and Fréchet Inception Distance (FID), is CGAN, with an IS of 2.5 and an FID of 29.1.

Keywords: Thai handwritten, Dataset, Data Augmentation, Generative Adversarial Networks, Image synthesis

Introduction

The MNIST (Modified National Institute of Standards and Technology) dataset is one of the most popular datasets in computer vision. It consists of classes indexed 0 to 9, with a training set of around 60,000 images; the images are centered and grayscale at 28 × 28 pixels.

Figure 1. EMNIST dataset

While the success of datasets like MNIST has led to the creation of alternative versions such as EMNIST for English alphabets and Devanagari MNIST for the Hindi language, a Thai version seems to be missing.
In this case, we aim to create a dataset of 54 Thai handwritten characters, including the Thai letters Kor Kai to Ho Nokhuk and the Thai numbers 0 to 9, with all the features of MNIST. Furthermore, the utilization of modern generative AI, specifically Generative Adversarial Networks (GANs), can serve as a crucial tool for data augmentation, thereby enabling the creation of a more extensive image dataset. However, due to the significant role that classes play, the use of ordinary GANs may not suffice. To address this, we propose the implementation of various GAN models such as Conditional GAN (CGAN), Auxiliary Classifier GAN (ACGAN), Semi-Supervised GAN (SGAN), and Information Maximizing GAN (InfoGAN). The most effective GAN architecture can be determined by using metrics such as Inception Score (IS) and Fréchet Inception Distance (FID).
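The Inception Score mentioned above can be illustrated with a small sketch. The real metric feeds generated images through an Inception-v3 classifier and computes IS = exp(E_x[KL(p(y|x) || p(y))]); here, as an assumption-laden toy version, the per-image class distributions p(y|x) are supplied directly as lists rather than produced by a network.

```python
import math

def inception_score(pyx):
    """Toy Inception Score from a list of per-image class distributions p(y|x).

    IS = exp( mean over images of KL( p(y|x) || p(y) ) ), where p(y) is the
    marginal class distribution averaged over the whole generated set.
    Confident, diverse conditionals yield a high score; uniform ones yield 1.
    """
    n = len(pyx)
    k = len(pyx[0])
    # Marginal class distribution p(y).
    py = [sum(p[c] for p in pyx) / n for c in range(k)]
    # Mean KL divergence between each conditional and the marginal.
    kl = 0.0
    for p in pyx:
        kl += sum(p[c] * math.log(p[c] / py[c]) for c in range(k) if p[c] > 0)
    return math.exp(kl / n)
```

For example, two perfectly confident images spread over two classes give a score of 2.0, while uniform conditionals give 1.0, the minimum.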


Our goal is to establish a comprehensive ThaiMNIST dataset comprising 54 classes, each consisting of over 500 images. Subsequently, we will utilize this curated dataset to train various GAN models, enabling us to determine the most effective approach.

Materials and Methods

Materials
1. Python 3.11
2. PyTorch
4. Form papers
5. Epson ES-50

1. Handwritten data collection
1.1. Our volunteers will manually transcribe Thai alphabets onto paper forms.
1.1.1. 500 volunteers
1.1.2. Centered images

Figure 2. The form that was printed on paper.

2. Image processing
2.1. Scan the form image, invert the image colors, and binarize the image.
2.2. Process the image with a threshold of 210, a contrast of 2.0, and a brightness of 1.7.
2.3. Crop the image to 128 × 128 pixels in each box for each class.
2.4. Apply a Gaussian filter with σ = 1.
2.5. Extract the region around the actual digit.
2.6. Preserving the aspect ratio, pad the region of interest with a 2-pixel border.
2.7. Down-sample the image to 28 × 28 pixels.

3. Training process
3.1. Convert the images to matrices with the attached class using the PyTorch image loader.
3.2. Train each architecture with the prepared data, adhering to the following constants:
3.2.1. 200 epochs
3.2.2. Batch size of 32
3.2.3. Learning rate set to 0.0001
3.2.4. Latent dimension of 100
3.3. Generate an image dataset for each architecture.
3.4. Evaluate the models using the IS and FID score metrics.

4. Evaluation process
4.1. Calculate IS and FID based on pseudocode.
Figure 4. IS score.
4.2. Calculate the scores for every one of the 54 classes.
Figure 5. FID score.

5. Dataset synthesis process
5.1. Create noise controlled by the latent dimension, for grayscale 28 × 28-pixel output.
5.2. Generate images from the noise with the best generator and store them in class-indexed folders.

Results and Discussion

Data collection process
The table below shows the structure and organization of the ThaiMNIST dataset before GAN augmentation.


Table 1: Structure and organization of the ThaiMNIST dataset.

Title       Classes   Total
Overall     54        27,000
By letters  44        22,000
By digit    10        5,000

Evaluation process
Once training is completed for all four architectures, we extract data from each of the four models and summarize it.

Table 2: Min, max, and average IS scores from the evaluation process.

Architecture   Average   Min   Max
CGAN           2.5       2.0   3.0
ACGAN          2.0       1.5   2.5
SGAN           1.8       1.2   2.4
InfoGAN        2.2       1.7   2.7

Table 3: Min, max, and average FID scores from the evaluation process.

Architecture   Average   Min   Max
CGAN           29.1      26.8  31.7
ACGAN          34.2      30.5  38.7
SGAN           36.8      33.5  39.6
InfoGAN        31.5      28.9  34.7

Dataset synthesis process
Finally, after determining the best architecture, we add the generated items to the dataset.

Table 4: Structure and organization of the ThaiMNIST dataset, extended with images from GANs.

Title       Classes   Total
Overall     54        54,000
By letters  44        44,000
By digit    10        10,000

Conclusions

Following an extensive process encompassing data collection, image processing, training, and evaluation, we have determined that the most effective architecture is the Conditional GAN, which achieved an Inception Score (IS) of 2.5 and a Fréchet Inception Distance (FID) of 29.1. The final dataset comprises a total of 54 classes, with 44,000 instances distributed across 44 letter classes and 10,000 instances across 10 digit classes.

Acknowledgments

This project was accomplished through the help of many parties. I would like to thank our more than 300 volunteers. I would also like to thank Teacher Satit Thamkhanta and Teacher Nawamin Wongchai for consulting on the project, and the IPU team, AINRG Group, National Electronics and Computer Technology Center, the project advisors who provided academic information and corrections as well as advice.

References

1. Acharya, S., Pant, A. K., & Gyawali, P. K. (2015). Deep learning based large scale handwritten Devanagari character recognition. Paper presented at the 2015 9th International Conference on Software, Knowledge, Information Management and Applications (SKIMA).
2. Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein generative adversarial networks. Paper presented at the International Conference on Machine Learning.
3. Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., & Abbeel, P. (2016). InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. Advances in Neural Information Processing Systems, 29.
4. Cohen, G., Afshar, S., Tapson, J., & Van Schaik, A. (2017). EMNIST: Extending MNIST to handwritten letters. Paper presented at the 2017 International Joint Conference on Neural Networks (IJCNN).
5. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). GANs trained by a


two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30.
6. Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint.
7. Odena, A., Olah, C., & Shlens, J. (2017). Conditional image synthesis with auxiliary classifier GANs. Paper presented at the International Conference on Machine Learning.
8. Odena, A. (2016). Semi-supervised learning with generative adversarial networks. arXiv preprint.
9. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training GANs. Advances in Neural Information Processing Systems, 29.
10. Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6(1), 1–48.


VannameiVision: An Optimized Probabilistic Deep Learning for Susceptible Shrimp Larvae Detection

Patipond Tiyapunjanit1, Thinnaphat Siammai1
Advisor: Khunthong Klaythong1
Special Advisors: Natthinee Munkongwongsiri2, Chanati Jantrachotechatchawan3, and Kobchai Duangrattanalert4
1Princess Chulabhorn Science High School Pathumthani, 2National Center for Genetic Engineering and Biotechnology, 3Mahidol University, 4Chulalongkorn University

Abstract

Detecting susceptible shrimp larvae presents a significant challenge that requires dedicated effort, skill, and expertise. In this study, we introduce an advanced approach that combines probabilistic deep learning with transfer and deep metric learning using a triplet loss function. Employing 5-fold cross-validation, we rigorously compared various model variations. Results indicate that integrating transfer and deep metric learning significantly improves the system. Specifically, DenseNet121, when combined with these techniques, achieved 92% accuracy, 87% sensitivity, and 97% specificity. After fine-tuning, the model consistently exceeded 90% accuracy on diverse backgrounds. These findings highlight the effectiveness of our method in accurately identifying vulnerable shrimp larvae.

Keywords: shrimp larvae, deep learning, probabilistic deep learning, transfer learning, deep metric learning

Introduction

Shrimp farming, a crucial segment of the aquaculture industry, has significantly bolstered global seafood production. Yet, challenges persist, especially in the early detection of vulnerable shrimp larvae [1]-[3]. These susceptible larvae exhibit altered attributes such as a dull appearance and reduced activity, indicators of increased vulnerability to various stressors. Timely detection facilitates interventions like habitat enhancement [4] and specialized feeding [5], promoting shrimp health.
Deep learning has notably advanced many industries, including aquaculture [6]-[8]. Notably, a system utilizing convolutional neural networks measured fish biomass with stereo cameras, showing minimal error for seabream and seabass [9]. Furthermore, an enhanced YOLOv5 network detected abnormal fish behaviors with over 99% precision [10]. This technology underscores transformative potential in aquaculture practices. Our study proposes a method to identify vulnerable shrimp larvae, integrating probabilistic deep learning with transfer and deep metric learning. Data sourced from shrimp cultures underwent 5-fold cross-validation, with DenseNet121 showing the most promising results. Post-modeling, we rigorously analyzed performance for optimal accuracy and robustness.

Materials and Methods

A. Dataset
1) Shrimp culture
We obtained 2,000 PL1 Pacific white shrimp (Litopenaeus vannamei) from the National Center for Genetic Engineering and Biotechnology (BIOTEC), Thailand, and divided them into two equal groups, S1 (robust) and S2 (susceptible). Initially, both groups were housed in 5 liters of water with a salinity level of 30 ppt, gradually reducing salinity by 5 ppt as they advanced through post-larval stages.
2) Susceptibility induction
S1 was given four daily meals of 0.50 grams of pellets at 7:00 a.m., 11:00 a.m., 3:00 p.m., and 7:00 p.m. for 12 days. S2 had the same schedule but received 0.25 grams per meal. During this period, we monitored water quality and parameters like NH4+ and NO2− for optimal shrimp larvae growth.


Subsequent tests determined shrimp robustness or susceptibility.

Durability test: Shrimp from each group, divided into three sets of 10 larvae, were left undisturbed for 5, 10, and 15 minutes in petri dishes. Afterward, their swimming abilities were observed in a saline solution with a 10 ppt salinity level for 1 minute. We conducted the test three times, confirming S1 larvae as robust and S2 larvae as susceptible as per national criteria.

Osmotic shock test: We split shrimp from each group into two sets of 20 larvae and exposed them to 100 ml of 0 ppt salinity water for 30 minutes. After observing their swimming abilities for 1 minute, we repeated the test thrice. Averaged results confirmed that S1 larvae were robust and S2 were susceptible as per national criteria.

Chemical shock test: We separated shrimp into two groups of 20 larvae each. These groups were exposed to 100 ppm formalin solution for 30 minutes, followed by a 1-minute observation of their swimming abilities. This test was repeated three times, confirming S1 larvae as robust and S2 larvae as susceptible according to national criteria.

Table 1 Sample partition of the three datasets.

3) Data collection
We took pictures of the shrimp using a Redmi Note 8 Pro smartphone with a 10X macro lens. We positioned the camera approximately 5 cm away from the larvae to maintain consistent capturing conditions. We created three datasets, each with different background colors and textures (see Table 1).

4) Data processing
In this section, we applied logarithmic correction to enhance the visual clarity of the images. This correction is computed by (1):

O = c · log(1 + I)    (1)

where O represents the output image and I denotes the input image. Next, to ensure uniformity and maintain the original aspect ratio, we added black patches symmetrically to the image, resulting in a square shape. Finally, we resized the image to dimensions of 224 by 224, which is common in various image processing applications.

B.
Model development
We designed a multi-layer neural network model (Figure 1) for processing RGB images of size (224, 224, 3). The base model extracts features, resulting in a linear vector through global average pooling. Dense layers (128 units each) with Mish activation and L1L2 regularization (0.001 lambda) mitigate overfitting. Subsequent layers perform variational mapping via reparameterization. We chose base models with <10 million parameters, MobileNetV2, DenseNet121, and EfficientNetV2 B0, B1, and B2, to improve data efficiency.

C. Training strategies
Addressing the data asymmetry, where we have more data on robust shrimp larvae than susceptible ones, we adopted group sampling. This ensures balanced representation of both classes in the training data. We added visual augmentations including random brightness (values between 0 and 1), image flipping, and random rotation (-360 to 360 degrees). The model training utilized an Adam optimizer with a 1×10^-5 learning rate and a binary focal loss function (alpha 0.25, gamma 2.0) to address class imbalance. Performance evaluation metrics are accuracy, sensitivity at ≥ 0.85 specificity, and

Figure 1 Architecture of the model.


specificity at ≥ 0.85 sensitivity. We applied a triplet loss function with a 1.0 margin to boost class discrimination. The model was trained for 50 epochs during cross-validation and 250 epochs for the final model.

D. 5-fold cross-validation
In this study, we divided the main dataset into seven sections. The first five sections (K1-5) are reserved for cross-validation, and the other two sections (Holdout 1 and 2) are reserved for independent evaluation during cross-validation and final model training, respectively (see Table 2). This approach ensures that our models are comprehensively evaluated for both their accuracy and their efficiency in handling unseen data.

Table 2 Example of dividing the main dataset.

Results and Discussion

CROSS-VALIDATION RESULTS
When probabilistic deep learning is combined with transfer learning and deep metric learning, significant improvements in model performance can be observed. This combination proved extremely effective in discriminating between healthy and weak shrimp larvae. The results are shown in Figure 2.

Figure 2 Results from 5-fold cross-validation of the model trained with combinations of probabilistic learning, transfer learning, and deep metric learning. Mean and standard error over the five subsets of focal loss (top left), accuracy at a threshold of 0.50 (top right), sensitivity when specificity ≥ 0.85, and specificity when sensitivity ≥ 0.85 were calculated and presented in the bar charts.

MODEL PERFORMANCE AND ANALYSIS
After performing 5-fold cross-validation, we trained DenseNet121 with the three techniques on a larger dataset (K1-5). We used Holdout 1 for validation and Holdout 2 for evaluation. To assess prediction uncertainty, we calculated prediction variances. Predictions close to 1 or 0 had low uncertainty, indicating high confidence.
However, some confident predictions were inaccurate, causing false positives and false negatives. We identified samples with frailty scores around 0.5 and hypothesized that limitations of population-level susceptibility examination contributed to this (Figure 3). As a result, we adjusted the threshold from 0.5 to 0.3, leading to the improved results in Table 3 and enhanced overall model performance.

Figure 3 Scatter plot showing prediction uncertainty.
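The training objectives and the threshold analysis described above can be sketched in miniature. These are scalar, pure-Python stand-ins, not the authors' implementation: a binary focal loss with the stated alpha 0.25 and gamma 2.0, a triplet loss with the stated 1.0 margin, and sensitivity/specificity at an adjustable decision threshold, which illustrates why lowering the threshold from 0.5 to 0.3 raises sensitivity at the cost of specificity.

```python
import math

def binary_focal_loss(y_true, p, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction p = P(susceptible):
    down-weights easy, confident examples via the (1 - p)**gamma factor."""
    if y_true == 1:
        return -alpha * (1 - p) ** gamma * math.log(p)
    return -(1 - alpha) * p ** gamma * math.log(1 - p)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss on embedding vectors: pulls same-class pairs together
    and pushes different-class pairs at least `margin` apart."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return max(dist(anchor, positive) - dist(anchor, negative) + margin, 0.0)

def sensitivity_specificity(scores, labels, threshold=0.5):
    """Sensitivity and specificity of thresholded scores (label 1 = susceptible)."""
    tp = fp = tn = fn = 0
    for s, y in zip(scores, labels):
        pred = 1 if s >= threshold else 0
        if pred and y: tp += 1
        elif pred and not y: fp += 1
        elif not pred and not y: tn += 1
        else: fn += 1
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec
```

With made-up scores [0.9, 0.4, 0.35, 0.1] and labels [1, 1, 0, 0], moving the threshold from 0.5 to 0.3 raises sensitivity from 0.5 to 1.0 while specificity falls from 1.0 to 0.5, mirroring the trade-off exploited in the analysis.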


Table 3 Performance results of the model at different thresholds.

Table 4 Performance results of the model on independent datasets.

ASSESSING MODEL GENERALIZABILITY
In this project, we performed additional tests to evaluate the generalizability of the model to new data. The model was faced with Independent datasets 1 and 2, each of which has background and texture characteristics different from the main dataset. The purpose of this test was to determine whether the model could accurately distinguish susceptible shrimp larvae under these new conditions. The results were quite good: the model achieved an accuracy rate of more than 90%. This indicates that the model has learned meaningful features that are not overly dependent on the specific image context, which may help it perform well in real-world situations where shrimp may appear in different environments or under different conditions.

Conclusions

In conclusion, our research has developed an advanced deep learning model that efficiently detects susceptible shrimp larvae. We successfully integrated DenseNet121-based probabilistic deep learning with transfer learning and deep metric learning, enhancing both accuracy and adaptability. Our work holds significant promise for the future of susceptibility detection in aquaculture, transforming early detection and intervention methods. Looking ahead, we aim to expand these techniques to other species and domains, further improving detection accuracy and potentially paving the way for more active and efficient aquaculture health management systems.

Acknowledgments

We sincerely thank the 25th Young Scientist Project Competition (Young Science Competition 2023) for providing funding to complete this project.

References

[1] S. I. Islam, M. J. Mou, S. Sanjida, and S. Mahfuj, "A review on molecular detection techniques of white spot syndrome virus: Perspectives of problems and solutions in shrimp farming," Veterinary Medicine and Science, vol.
9, no. 2, Wiley, pp. 778–801, Oct. 25, 2022.
[2] N. Callac, C. Giraud, V. Boulo, N. Wabete, and D. Pham, "Microbial biomarker detection in shrimp larvae rearing water as putative bio-surveillance proxies in shrimp aquaculture," PeerJ, vol. 11, PeerJ, Jan. 01, 2023.
[3] S. R. Major, M. J. Harke, R. Cruz-Flores, A. K. Dhar, A. G. Bodnar, and S. A. Wanamaker, "Rapid Detection of DNA and RNA Shrimp Viruses Using CRISPR-Based Diagnostics," Applied and Environmental Microbiology, American Society for Microbiology, May 23, 2023.
[4] V. Boonyawiwat et al., "Impact of farm management on expression of early mortality syndrome/acute hepatopancreatic necrosis disease (EMS/AHPND) on penaeid shrimp farms in Thailand," Journal of Fish Diseases, vol. 40, no. 5, Wiley, pp. 649–659, Sep. 05, 2016.
[5] P. Lavens and P. Sorgeloos, "Experiences on importance of diet for shrimp postlarval quality," Aquaculture, vol. 191, no. 1–3, Elsevier, pp. 169–176, Nov. 01, 2000.
[6] W. Vásquez-Quispesivana, M. Inga, and I. Betalleluz-Pallardel, "Artificial intelligence in aquaculture: basis, applications, and future perspectives," Scientia Agropecuaria, vol. 13, no. 1, Universidad Nacional de Trujillo, pp. 79–96, Mar. 28, 2022.
[7] X. Yang, S. Zhang, J. Liu, Q. Gao, S. Dong, and C. Zhou, "Deep learning for smart fish farming: applications, opportunities and challenges," Reviews in Aquaculture, vol. 13, no. 1, Wiley, pp. 66–90, Jun. 29, 2020.
[8] G. Kaur et al., "Recent Advancements in Deep Learning Frameworks for Precision Fish Farming Opportunities, Challenges, and Applications," Journal of Food Quality,


vol. 2023, SAGE Publishing, pp. 1–11, Feb. 07, 2023.
[9] D. Voskakis, A. Makris, and N. Papandroulakis, "Deep learning based fish length estimation. An application for the Mediterranean aquaculture," Sep. 2021.
[10] H. Wang, S. Zhang, S. Zhao, Q. Wang, D. Li, and R. Zhao, "Real-time detection and tracking of fish abnormal behavior based on improved YOLOV5 and SiamRPN++," Computers and Electronics in Agriculture, vol. 192, Elsevier, Jan. 01, 2022.


Web Application for Thai to Thai Sign Language Translation

Kijjanat Yingyong1, Bunyawit Pittayakun1, and Atthachai Wongkrut1, Yupaporn Premkamol2
1,2 95 Moo 3 Nong Chak, Ban Bueng District, Chon Buri 20170, Thailand, [email protected]

Abstract

Effective communication is crucial for everyone, but it poses particular challenges for individuals with hearing impairments. As sign language is not widely used as a primary means of communication, the organizing committee recognizes the need for a web application that translates Thai text into Thai sign language. The main goal is to increase awareness and understanding of sign language among those with normal hearing. The project targets grade 12 students at Princess Chulabhorn Science High School Chonburi. The development of this project involves coding in Sublime Text using Python, HTML, CSS, and JavaScript. The website is built using the Django framework, while the PyThaiNLP module handles the processing of Thai language data. Blender, in turn, is utilized to create the Thai sign language animations. The website operates by receiving text input from users. This text is processed using the PyThaiNLP module, which breaks it down into individual words. These words are compared against the Thai sign language animation database to find corresponding sign language animations. If a match is found, the relevant Thai Sign Language animation is displayed. In cases where a word is missing from the database, it is segmented into consonants, vowels, and tones. This segmented information is then combined with the previously processed words, enabling the presentation of appropriate Thai Sign Language animations. Furthermore, the website also offers the functionality to translate English text into American finger spelling. Based on satisfaction statistics, the content and design of the website received an average score of 4, indicating high satisfaction.
Users also expressed a high level of satisfaction with the utility of the website. However, suggestions were made to improve the clarity and engagement of the website, as well as to expand the sign language vocabulary.

Keywords: Thai Sign Language, Deaf, PyThaiNLP, Application

Introduction

In Thailand, hearing impairment affects approximately 405,920 individuals, constituting 18.6187% of the disabled population, and these individuals often face communication difficulties. Deaf individuals, with more severe hearing loss, rely on sign language for communication, which is not widely used in mainstream communication. To address these communication challenges, a web application was developed to translate Thai into Thai sign language using the PyThaiNLP library. This application provides 3D sign language animations for basic words, consonants, vowels, and intonation marks. It aims to assist both hearing-impaired and hearing individuals in learning and understanding Thai sign language more effectively, emphasizing the importance of sign language as a communication tool. Users can input text for processing into 3D Thai sign language animations, displayed as individual words. This application leverages the accessibility of the internet to enable users to access and engage with sign language media content from anywhere and at any time, promoting better understanding and communication in Thai sign language. Furthermore, the application also offers the functionality to


translate English text into American finger spelling.

Materials and Methods

Materials
1. Python
1.1 Python libraries
1.1.1 PyThaiNLP
1.1.2 Django
2. Sublime Text
3. HTML
4. JavaScript
5. CSS
6. Computer

Methods
Techniques and technologies used:
1.) Django is software that enables the efficient and effective development of web applications. As shown in Figure 1, it was employed in the creation of the ThaiSL project, generating forms by preparing and structuring data for presentation.

Figure 1. Using Django to create the ThaiSL project: (A) Sub-figure 1, (B) Sub-figure 2.

2.) PyThaiNLP is a Python library used for segmenting text into words. In Figure 2, when users submit text, the system separates the text into words using the 'word_tokenize' function from the PyThaiNLP library. It then checks each word against the animation file names for a match. If a word matches a file name, it is added to 'filtered_text'. Words that do not match are segmented into consonants, vowels, and diacritics and then added to 'filtered_text'. Finally, the function returns 'animation.html' to display 'filtered_text'.

Figure 2. Using PyThaiNLP to segment a message into words.

3.) HTML is utilized to define the structure of the webpage, both the header and the content, as depicted in Figure 3. The 'body' tag is used with the 'id' attribute set to 'bg', and it includes a 'style' attribute that specifies the background image as the file 'background.jpg'.

Figure 3. An example of using HTML to define the structure of a webpage.

4.) CSS is utilized to enhance the aesthetic characteristics of the HTML. As seen in Figure 4, CSS is employed to configure font families and sizes, center the navigation bar and other components, set background colors and images, and create hover effects on image components. Additionally, it incorporates a popup container with video playback functionality.
Figure 4. An example of using CSS in a web application that translates Thai text into Thai sign language
5) JavaScript: when used in a web browser, JavaScript can respond to user interactions and dynamically alter the content structure of web pages. As depicted in Figure 5, JavaScript is employed to play the videos continuously, one word at a time, highlighting the currently spoken word in a repeating loop. It also includes a function for temporarily pausing playback to indicate the most recent word spoken.


Figure 5. An example of using JavaScript, defining two functions: play and temporary pause
6) Blender: the animations were created by studying the characteristics of sign language within the scope of the research and using those characteristics to produce animations. The primary focus is on hand gestures, and the result is rendered as animated clips.
Figure 6. Comparison of a real image with an animation while signing the word 'เธอ' in sign language
Software structure
Figure 7 illustrates the initial interface of the web application for translating Thai text into Thai sign language. Operation commences when the user selects one of the three options available on the homepage: the main page, the text translation page (Thai and English), or the sign language archive.
Figure 7. The initial structure of the homepage of the Thai-to-Thai sign language translation web application
If the user selects the 'Text Translation' page, Figure 8 depicts the working process of this translation page. Users can input text, which is processed with the PyThaiNLP module to segment it into words. These words are then checked against the animation names. If a match is found, the word is forwarded to the text queue awaiting the user. If no matching word is found, the text is segmented into consonants, vowels, and diacritics, which are processed and combined into the text queue. Finally, the result is presented as a Thai sign language animation corresponding to the queued text.
Figure 8. The structure of the text translation page within the Thai-to-Thai sign language translation web application
If the user selects the 'Sign Language Dictionary', Figure 9 presents the operational steps of the sign language dictionary. The website displays the available sign language categories, allowing users to choose a category and view the signs within it.
When a user selects a sign, the website displays the corresponding Thai sign language animation for that sign.
Figure 9. The structure of the sign language repository page within the Thai-to-Thai sign language translation web application
Finally, if the user selects the homepage, the website returns to the main page and awaits the user's choice of other options.
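The matching step described above (tokenize, look up whole-word animations, and fall back to character-by-character finger spelling) can be sketched in Python. This is a minimal sketch, not the project's actual code: the real system uses PyThaiNLP's `word_tokenize`, while here the tokenizer is passed in as a parameter, and `known_signs` stands in for the set of animation file names.

```python
def to_sign_sequence(text, tokenize, known_signs):
    """Map input text to a sequence of sign-animation names.

    tokenize    -- word segmenter (the project uses pythainlp.word_tokenize)
    known_signs -- set of words that have a matching animation file
    """
    filtered_text = []
    for word in tokenize(text):
        if word in known_signs:
            # A whole-word animation exists: use it directly.
            filtered_text.append(word)
        else:
            # No match: fall back to spelling the word out
            # character by character (consonants, vowels, diacritics).
            filtered_text.extend(word)
    return filtered_text

# Example with a trivial whitespace tokenizer standing in for word_tokenize.
signs = {"hello", "you"}
print(to_sign_sequence("hello to you", str.split, signs))
# → ['hello', 't', 'o', 'you']
```

The resulting list is what the `animation.html` page would iterate over, playing one animation clip per entry.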


Results and Discussion
Based on an experiment conducted with 143 grade 12 students at Princess Chulabhorn Science High School Chonburi, the web application received positive feedback. The evaluation of content yielded an average rating of 4.47, indicating a high level of satisfaction. The content was organized into six categories: pronouns, verbs, emotions, commonly used words, places, and family. The ratings were consistent (standard deviation 0.37), making the content evaluation results easy to summarize. Regarding design, the application received an average rating of 4.33, with students finding it visually appealing and user-friendly. The standard deviation was 0.63, indicating slightly more varied responses but still suitable for analysis. In terms of utility, the web application received an average rating of 4.20, indicating a high level of satisfaction. Users found it convenient on both mobile phones and computers, with a standard deviation of 0.88, indicating more variability but still consistent enough for evaluating satisfaction. Overall, the web application received an average rating of 4.36, reflecting a high level of satisfaction. Users found it interesting, well-organized, and suitable for various contexts. However, 10.49% of students with previous interactions with individuals with hearing impairments suggested improvements in visual appeal, vocabulary diversity, and animation fluidity.
Conclusions
In developing our web application for translating Thai text into Thai sign language, we used an array of tools and technologies, including Django, PyThaiNLP, HTML, CSS, JavaScript, and Blender. The platform allows users to input text, which is then processed to generate Thai sign language animations.
These animations break the input text into individual words, making language accessible to the hearing-impaired community. For instance, when the user inputs "I like you," the application displays Thai sign language animations for each word: "I," "like," and "you." This feature fosters learning and comprehension, bridging the linguistic gap through the medium of sign language. Furthermore, our application offers people with normal hearing an opportunity to explore Thai sign language: through sign language animations generated from input sentences, anyone can engage in cultural and linguistic exploration. An experiment involving 143 grade 12 students at Princess Chulabhorn Science High School Chonburi yielded high satisfaction. Users commended the content, the design, and the educational benefits offered by the system. We also value the input received from our users: their suggestions to expand the sign language vocabulary, improve the clarity of hand gestures, introduce more expressive facial animations, and refine the overall web design have been noted. These refinements promise to make the Thai sign language learning system a more enriching experience for all.
Acknowledgments
The successful realization of this web application for translating Thai into Thai Sign Language owes much to Mrs. Yupaporn Premkamol, a professional educator in the computer department, who served as the project's principal advisor, offering valuable guidance, encouragement, and support throughout the project's preparation. The unwavering support of the developers' parents, along with that of other sponsors not mentioned, has been pivotal in the project's successful development.
References
[1] Thai Natural Language Processing. (n.d.). Thai NLP. https://thainlp.github.io/
[2] Department of Empowerment of Persons with Disabilities. (2023, March 31). Situation of persons with disabilities, 31 March 2023 (quarterly). Retrieved May 16, 2023, from https://shorturl.at/jGWZ4
[3] ราษฎร์บุญญา. (2008). Sign language: the language of the deaf. https://rs.mahidol.ac.th/rs-journal/vol.4/v.4-1-005.pdf
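The averages and standard deviations reported in the Results section can be illustrated with Python's standard `statistics` module. This is a sketch with made-up ratings, not the survey data; whether the study used the population or sample standard deviation is an assumption (population form shown here).

```python
from statistics import mean, pstdev

# Made-up example ratings on the 1-5 scale used in the survey.
ratings = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5]

average = mean(ratings)       # e.g. the reported 4.47 for content
spread = pstdev(ratings)      # population standard deviation, e.g. 0.37
print(round(average, 2), round(spread, 2))
```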


Punnathorn Khunhon1 and Peraga Puangtong1
Advisor: Vichien Donram2
Special Advisor: Thanthun Sangphoo3
1,2 Princess Chulabhorn Science High School Chonburi, Nong Chak, Ban Bueng, Chonburi, 20170, Thailand
3Chaloem Phra Kiat HRH Princess Maha Chakri Sirindhorn Hospital, Rayong, 21150, Thailand
The development of an AI-assisted web application for detecting Mycobacterium tuberculosis (M. tuberculosis) from sputum with the Acid-Fast Bacillus (AFB) method
Abstract
Pulmonary tuberculosis is an infectious disease that still causes outbreaks in Thailand (World Health Organization, 2021). It is caused by the bacterium Mycobacterium tuberculosis, often known as MTB. In Thailand, the AFB sputum smear test is commonly used to diagnose tuberculosis. Although this method is affordable and reliable, the physician inspection process is time-consuming. We therefore created an AI to inspect sputum smears automatically and support the physician's inspection procedure. We started by collecting sputum images from Kaggle and ZNSM-iDB and processed them through data cleaning and augmentation. We then developed the supporting program in Python, coupled with the Yolov5 module, to train, evaluate, and compare the performance of the different CNN models used in Yolov5's image processing. As a result, the system is able to learn from the image dataset: in preliminary examination, the most effective model, Yolov5s, detected MTBs with a sensitivity, specificity, F1-score, and mean detection time per image of 0.9802, 0.9647, 0.9727, and 11.1 milliseconds, respectively, and in the object detection task the most efficient model, Yolov5n6, detected objects and classified the types of MTB with a precision, recall, and mAP of 0.673, 0.761, and 0.727, respectively. Once exported, the trained CNN image processing model is ready to be used in further experiments or developed into various innovations.
To conclude, the model can count the MTBs that appear in the images, point out their locations, and inspect them automatically with high accuracy and fast detection time. However, the model's predictions for some images are still wrong, possibly because of the different staining characteristics in each picture, which can cause the system to misinterpret them.
Keywords: Artificial Intelligence, Convolutional Neural Network, Acid-Fast Bacillus, Mycobacterium Tuberculosis
Introduction
Pulmonary tuberculosis, caused by the bacterium Mycobacterium tuberculosis, remains a formidable global health challenge, claiming over 1.6 million lives in 2021, a death toll second only to COVID-19 among infectious diseases (World Health Organization, 2021). Various TB screening methods have therefore been developed for early diagnosis and effective treatment. In Thailand, preliminary diagnosis of tuberculosis mostly relies on the Acid-Fast Bacilli (AFB) sputum smear test because of its accuracy and affordable price for local hospitals. However, the AFB smear test has limitations: microscopy-based analysis requires skilled personnel, and some samples may be misinterpreted, leading to delayed diagnoses and treatment. We therefore propose developing an innovative web application capable of analyzing AFB smear test images captured through a standard microscope by means of convolutional neural network artificial


intelligence. The application aims to automate the TB detection process, delivering rapid and precise diagnoses through an efficient and user-friendly web interface for physicians, thereby supporting more effective treatment for patients and contributing to global efforts to combat this infectious disease.
Materials and Methods
Materials and Software
1. Computer
2. GPU: RTX A100
3. Google Colaboratory
4. Python Programming Language
5. Yolov5 Module
6. Pillow Module
7. FastAPI Module
8. Visual Studio
9. JavaScript
10. HTML Programming Language
11. CSS Programming Language
12. Docker
Methods
There are two essential steps to accomplish our objectives: AI development and web application development.
In AI development, we first gathered AFB sputum smear microscopy image datasets, comprising 1,257 images from the Kaggle website and 695 images from the ZNSM-iDB website, 1,953 images in total. All images were then cleaned to remove unnecessary images, such as black or duplicated images, leaving 1,923 images. Second, we applied image augmentations to the dataset, transforming the images by vertical and horizontal flipping, random brightness and contrast adjustment, Gaussian blur and noise adjustment, downscaling, hue/saturation/value adjustment, JPEG compression, and rotation, in order to enlarge the dataset and avoid overfitting. Next, the dataset was labeled with Roboflow under the supervision of Mr. Thanthun Sangphoo. We then split the MTBs into two groups: True MTB and Globi MTB. We trained the eight Yolov5 models, which are efficient pretrained object detection models, on the labeled dataset and evaluated their detection performance. In the end, we selected the most promising model for further use.
Figure 1. The M.
tuberculosis labeling process
In web application development, we first developed the API system with FastAPI and combined it with the best AI model to make the application easy to access. The application was then deployed onto a local server, so every device with an internet connection could use it. Finally, we made a web application to demonstrate its usage. The web application shows the original image and the predicted image; moreover, it indicates the position of every single MTB in an image and also reports the number of detected MTBs.
Results and Discussion
Results
1. CNN Model Learning Efficiency:
Graph 1. Validation box loss versus epoch for each CNN model
After training the models for multiple epochs, we evaluated their performance by measuring the box loss, which indicates the accuracy in


localizing objects. Notably, the larger models demonstrated superior learning capabilities, as depicted in Graph 1, where their more gradual curves outperformed the smaller models' steeper curves.
Graph 2. Validation object loss versus epoch for each CNN model
Furthermore, we observed that training beyond 100 epochs could lead to overfitting, as evidenced by the increasing loss values across all models beyond epoch 60 (Graph 2).
2. Preliminary Image Analysis and Object Detection Efficiency:
Table 1: Statistical values of the preliminary image analysis of each model
Model | Sensitivity | Specificity | F1-Score | Average Detection Time (ms)
Yolov5n | 0.9743 | 0.9690 | 0.9717 | 10.6
Yolov5s | 0.9802 | 0.9647 | 0.9727 | 11.1
Yolov5m | 0.9818 | 0.9572 | 0.9699 | 13.6
Yolov5l | 0.9845 | 0.9364 | 0.9614 | 17.0
Yolov5n6 | 0.9898 | 0.9353 | 0.9636 | 13.7
Yolov5s6 | 0.9947 | 0.9064 | 0.9526 | 13.9
Yolov5m6 | 0.9952 | 0.9037 | 0.9517 | 18.7
Yolov5l6 | 0.9952 | 0.8775 | 0.9399 | 23.8
Table 1 reveals that the Yolov5s model achieved the highest F1-Score of 0.9727, where the F1-Score is a vital metric for evaluating a model's preliminary image analysis accuracy. Its average detection time, the time taken by the system to detect a single object, stood at 11.1 milliseconds.
Table 2: Statistical values of the object detection of each model
Model | Precision | Recall | mAP
Yolov5n | 0.605 | 0.582 | 0.584
Yolov5s | 0.624 | 0.595 | 0.569
Yolov5m | 0.666 | 0.652 | 0.645
Yolov5l | 0.615 | 0.551 | 0.537
Yolov5n6 | 0.673 | 0.761 | 0.727
Yolov5s6 | 0.738 | 0.732 | 0.710
Yolov5m6 | 0.708 | 0.735 | 0.699
Yolov5l6 | 0.714 | 0.748 | 0.718
On the other hand, Table 2 demonstrates that Yolov5n6 attained the highest mAP (mean average precision) of 0.727. Models with higher mAP values demonstrate superior object detection accuracy.
3. Sample Images from the AI Detection System and Application Interface:
Figure 1. Standard sputum smear image with AI diagnosis
Figure 2.
Sputum smear with additional microscope filter image, with AI diagnosis
Figure 3. False-positive sputum smear image with AI diagnosis
High-quality images with distinct color contrast between the TB bacteria and the background


demonstrate the system's precision in detecting and differentiating TB bacteria from artifacts that appear in the sputum (Figure 1). However, some images with green-tinted backgrounds, as seen in Figure 2, pose challenges due to artificial coloration, affecting detection accuracy and leading to false detections, as seen in Figure 3.
Figure 4. Developed TBSD web application interface
Conclusions
The developed AI system is capable of identifying M. tuberculosis bacteria from AFB smear test images. The system excels at preliminary image analysis and object detection. Among the models evaluated, Yolov5s demonstrated the highest efficiency for preliminary image analysis, achieving a sensitivity, specificity, F1-score, and average processing time per image of 0.9802, 0.9647, 0.9727, and 11.1 milliseconds, respectively. In terms of object detection and differentiation, Yolov5n6 proved the most effective, achieving precision, recall, and mAP values of 0.673, 0.761, and 0.727, respectively.
Acknowledgments
We would like to express our deepest gratitude to our project advisor, Mr. Vichien Donram, for his advice on preparing the project, providing us with essential documents, and helping us recheck the project to ensure it is well organized. Additionally, this endeavor would not have been possible without the generous support of Mr. Thanthun Sangphoo, who advised us on the details of Mycobacterium tuberculosis. Lastly, we would like to mention our parents, our friends, and our teachers. Their belief in us has kept our spirits and motivation high throughout this process.
References
[6] Fu, H.-T., Tu, H.-Z., Lee, H.-S., Lin, Y. E., & Lin, C.-W. (2022). Evaluation of an AI-Based TB AFB Smear Screening System for Laboratory Diagnosis on Routine Practice.
Sensors, 22(21), 8497. https://doi.org/10.3390/s22218497
[7] Panicker, R. O., Kalmady, K. S., Rajan, J., & Sabu, M. K. (2018). Automatic detection of tuberculosis bacilli from microscopic sputum smear images using deep learning methods. Biocybernetics and Biomedical Engineering, 38(3), 691–699. https://doi.org/10.1016/j.bbe.2018.05.007
[8] Saif Uddin. (n.d.). Tuberculosis Image Dataset. Kaggle. Retrieved March 4, 2023, from https://www.kaggle.com/datasets/saife245/tuberculosis-image-datasets
[9] Shah, M. I., Mishra, S., Yadav, V. K., Chauhan, A., Sarkar, M., Sharma, S. K., & Rout, C. (n.d.). Tuberculosis Image. ZNSM-iDB. Retrieved March 4, 2023, from http://14.139.240.55/znsm/
[10] World Health Organization. (2021). Global tuberculosis report 2021. Retrieved March 4, 2023, from https://www.who.int/publications/i/item/9789240037021
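The classification metrics reported in Tables 1 and 2 above (sensitivity, specificity, F1-score) can all be computed from confusion-matrix counts. A minimal sketch follows; the counts in the example are made-up illustration values, not the paper's data.

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and F1-score from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall: detected positives / all positives
    specificity = tn / (tn + fp)   # detected negatives / all negatives
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# Illustration with made-up counts.
sens, spec, f1 = classification_metrics(tp=90, fp=10, tn=80, fn=5)
print(round(sens, 3), round(spec, 3), round(f1, 3))  # → 0.947 0.889 0.923
```

Note that the F1-score combines precision and sensitivity (recall), which is why Table 1 can report a high sensitivity alongside a lower F1-score when specificity drops and false positives rise.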


Sarat Jeera-on1, Jirayut Najan1, Phana Sarayam1*, Chaiyapol Klinchan2
1Princess Chulabhorn Science High School Lopburi, Huayphong, Khoksamrong, Lopburi, 15120, Thailand
2Computer Education, Information Technology, Thepsatri Rajabhat University, Lopburi, 15000, Thailand
*Email: [email protected]
An application to read drug labels for the elderly
Abstract
This project applies available technology and knowledge to address the visual problems of the elderly by building an application that helps them read drug labels. The user takes a picture of a drug label through the application, which processes it and displays the drug's information on the mobile screen in a text size that the elderly can read, informing them of drug information such as properties, usage instructions, and warnings. The application is designed to be easy for the elderly to use. With image classification techniques, similar images are divided into groups called classes in order to separate different objects in an image. We use a Google deep learning service called Google Teachable Machine, collecting images of different brands of drugs separately, training a model with Google Teachable Machine, and then using the model in the Android Studio program for drug classification.
Keywords: image classification, deep learning
Introduction
As Thailand enters an aging society, elderly people face various physical problems, as aging can affect daily living. For example, many elderly people have vision problems and are unable to read small drug labels, resulting in incorrect drug consumption. Manufacturers are aware of this issue of visual impairment combined with the small font size of labels. Therefore, we use our knowledge to create an application that reads a drug label and shows it in a large text size, helping the elderly read the information more easily, and that adds important information on various medications, enabling them to take medicines correctly.
Therefore, we created an application to read drug labels for the elderly.
Materials and Methods
Materials and software
1. Computer
2. Android Studio program
3. Google Teachable Machine website
4. Android smartphones
Methods
1. Collect medicine images.
Figure 1. Collecting medicine images.


2. Train the model in Google Teachable Machine.
Figure 2. Training the model.
3. Collect the information for the various medicines.
Figure 3. Collecting medicine information.
4. Create an Android application in Android Studio.
Figure 4. Developing in Android Studio.
5. Import the model into the Android application.
Figure 5. Developing in Android Studio.
Figure 6. Developing in Android Studio.
Results and Discussion
Results
1. Model accuracy.
Class | Accuracy | Samples
TYLENOL 500 | 1.00 | 14
SARA | 1.00 | 16
COUGHING MIXTURE | 1.00 | 15
COUNTERPAIN | 1.00 | 18
AIR-X | 0.86 | 14
FLYING RABBIT | 1.00 | 15
SOLMAX CAPSULE | 1.00 | 11
STREPSILS | 0.94 | 18
Graph 1. Model accuracy.
2. User satisfaction.
Graph 2. Application performance.


Graph 3. The visual appeal of the application.
3. Users' ages.
Graph 4. User ages (age groups: 30-40, 40-50, 50-60, 60-70, 70-80).
The trial of the drug label reading application for the elderly found that the application has a complete and comprehensive collection of drug information from various brands that are generic home remedies or commonly used drugs, in line with the purpose of the project. The results also showed that the application works, displays drug information clearly and completely, is easy for users to use, and performs well.
Acknowledgments
The drug label reading application project for the elderly was accomplished with the help of Mr. Phana Sarayam, who gave ideas and suggestions and corrected various deficiencies. The organizers would like to thank him very much.
References
1. TensorFlow. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. https://www.tensorflow.org/lite/examples/image_classification/overview
2. TensorFlow Lite. (2022). Image classification with TensorFlow Lite Model Maker. https://www.tensorflow.org/lite/models/modify/model_maker/image_classification
3. Pharm.D Ronnachai Ariyathumthavon. (2023). 2023 top ten medicines you should have at home. https://my-best.in.th/50949
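The classification step used in the app can be illustrated as picking the class with the highest model confidence. This is a minimal Python sketch (the Android app itself uses the exported Teachable Machine model in Java/Kotlin); it assumes the model outputs one probability per class, and the `MIN_CONFIDENCE` threshold is our own illustrative addition, not part of the project.

```python
MIN_CONFIDENCE = 0.7  # illustrative threshold, not from the project

CLASSES = ["TYLENOL 500", "SARA", "COUGHING MIXTURE", "COUNTERPAIN",
           "AIR-X", "FLYING RABBIT", "SOLMAX CAPSULE", "STREPSILS"]

def classify(probabilities, classes=CLASSES, threshold=MIN_CONFIDENCE):
    """Return the best-matching drug label, or None if the model is unsure."""
    best = max(range(len(classes)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return None  # too uncertain: the app could ask for another photo
    return classes[best]

# Example: the model is 86% confident the photo shows AIR-X.
print(classify([0.01, 0.02, 0.03, 0.02, 0.86, 0.02, 0.02, 0.02]))  # → AIR-X
```

Once the label is known, the app looks up the stored information for that drug and renders it in a large font.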


Wongpattanawut1 and Phurivet Methmaolee1
Advisor: Theerawut Chantapan1
Special Advisor: Bundit Boonyarit2
1,2Princess Chulabhorn Science High School Mukdahan, Mukdahan, 49000, Thailand
2School of Information Science and Technology, Vidyasirimedhi Institute of Science and Technology, Rayong, 21210, Thailand
Machine Learning Assisted Biomarker Discovery for Lung Cancer Diagnosis Based on Multiomics
Abstract
In 2020, lung cancer, a non-communicable disease caused by abnormal cell growth, killed over 1.8 million people worldwide. Currently, biological markers for the lung cancer cell types LUSC and LUAD are used for screening, diagnosis, detection, and prognosis. This process relies on omics data, including gene expression and mutation data, but it is resource-intensive and costly. Deep learning has become crucial in overcoming these limitations, using various models to predict biological markers from omics data. However, there is a shortage of such markers, and distinguishing between LUSC and LUAD remains a challenge. To address these issues, we developed U-OMICS, a semi-supervised deep learning model. It incorporates unsupervised pretraining for gene expression and mutation data and a graph convolutional network for protein interaction data to identify biological markers for both LUSC and LUAD. U-OMICS outperformed traditional machine learning models and showed promise in reducing the time and cost of laboratory testing. It is also adaptable for finding biomarkers in other cancer types.
Keywords: deep learning, biomarker, lung cancer
Introduction
Cancer, medically known as a malignant neoplasm, is a group of diseases involving abnormal cell growth, in which cells divide and grow uncontrollably.
These cells can form a malignant tumor and have the potential to invade adjacent body tissue. Cancer can spread to distant parts of the body through the lymphatic system or the bloodstream. Lung cancer accounts for 18% of all cancer deaths worldwide. The two most common types of lung cancer, categorized by cell type, are Lung Adenocarcinoma (LUAD) and Lung Squamous Cell Carcinoma (LUSC). Gene mutation data were collected from the TCGA database to separate genes that are important in cancer from those that are not, using data from OncoVar. Deep learning therefore plays an important role in this process, reducing errors as well as costs and time.
Materials and Methods
Materials
1) Jupyter Lab
2) PyTorch
3) PyTorch Geometric (PyG)
4) Scikit-learn
5) NumPy
6) Pandas
7) Matplotlib
8) Seaborn
9) Computer
Methods
Part 1: Data Preparation
Data on essential genes, non-essential genes (unlabeled data), driver genes (labeled data), non-driver genes (labeled data), and unknown driver genes of LUAD and LUSC lung


cancer were obtained from the TCGA, cBioPortal, OncoVar, and STRING databases, as follows:
Table 1. Number of genes in the pathogenesis of LUAD and LUSC cancer cells.
Part 2: Measures for evaluating the model
Relevant multi-omics data sets, namely gene expression and gene mutation, were gathered from databases that collect the characteristics of cell lines from different patients. In the gene mutation data set, a gene that is mutated in a cell line so that an amino acid change occurs is represented by the number 1; otherwise it is represented as not mutated. The gene expression data set contains log2(x+1) values, where x is the transcripts per kilobase million, together with gene-level copy number data. Carcinogenicity is prioritized into four levels in order of importance according to the TCGA. The data cover mutation, gene, pathway, and cancer type, together with carcinogenicity.
Table 2. Number of cell lines, important genes, and unimportant genes before and after preparation of the gene expression data set.
Table 3. Number of cell lines, important genes, and unimportant genes before and after preparation of the gene mutation data set.
Part 3: Deep learning model development
We chose semi-supervised learning techniques. The working principle is to encode the important elements of the structure: the model looks at the structure of gene expression values and gene mutation locations and uses deep neural network algorithms with fully connected networks for the multi-omics data to find the significant points between the data sets for the same cell line. The two data sets are then processed together to correlate them with the identification of biomarkers. The data set was divided into a training set and a test set in a ratio of 80:20, and the training portion was further divided into a 70% training set and a 10% validation set. This method increased the reliability of the model.
The loss curves also help indicate overfitting or underfitting. Hyperparameter tuning was used to find the set of parameters that makes model training most efficient. The packages used in this step are Scikit-learn, Imblearn, and PyTorch.
Measures for evaluating the model: the models are trained epoch by epoch, with each epoch evaluated for loss, which assesses the model's error and is used to adjust the hyperparameters, weights, and biases over time until the lowest loss is achieved. We chose mean square error (MSE), which measures the mean of the squared differences between the values predicted by the model and the actual values in the data. The equation used in the calculation is

MSE = (1/n) * Σ_{i=1..n} (y_i − ŷ_i)²

where n is the number of samples, y_i is the actual value, and ŷ_i is the predicted value.
Table 1. Number of genes in the pathogenesis of LUAD and LUSC cancer cells.
Data | LUAD | LUSC
Essential Genes | 6,195 | 6,195
Non-essential Genes (Unlabeled Data) | 13,902 | 13,979
Driver Genes (Labeled Data) | 71 | 54
Non-driver Genes (Labeled Data) | 594 | 595
Unknown Driver Genes | 783 | 796
Table 2. Gene expression data set before and after cleaning.
Data cleaning | Cell lines | Genes
Before | 994 | 40,271
After | 994 | 12,388
Table 3. Gene mutation data set before and after cleaning.
Data cleaning | Cell lines | Genes
Before | 1,026 | 38,024
After | 994 | 20,166
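The 80:20 test split with the training portion further divided 70/10, and the MSE loss, can be sketched as follows. This is a simplified stand-in for the project's Scikit-learn/PyTorch code, splitting sample indices rather than the actual omics tables.

```python
import random

def split_dataset(n_samples, seed=0):
    """Shuffle sample indices and split 70% train / 10% validation / 20% test."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducible splits
    n_train = int(n_samples * 0.7)
    n_val = int(n_samples * 0.1)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def mse(actual, predicted):
    """Mean square error: average of squared prediction errors."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

train, val, test = split_dataset(1000)
print(len(train), len(val), len(test))  # → 700 100 200
print(mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))  # (0 + 0.25 + 1) / 3
```

Evaluating MSE on the validation set each epoch, as the paper describes, is what lets the loss curves expose overfitting and trigger early stopping.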


Results and Discussion
Results
Part 1: A pie chart showing cell line data broken down by cell type, and data cleaning.
Picture 1. Pie chart showing important and unimportant genes divided by cell type.
Part 2: Principal component analysis (PCA) was used to examine the distribution of the multi-omics data by cancer type. PCA is a method for analyzing multivariate data to find the relationships among variables, reducing the size of a complex matrix. It is applied here to reduce the number of features, so that modeling takes less time and the correlation and distribution of the high-dimensional multi-omics data can be visualized.
Picture 2. Distribution of genes important and unimportant in pathogenesis, for gene expression and gene mutation data, visualized with PCA.
From Picture 2, it can be seen that the gene expression data are not grouped according to their priority in lung cancer incidence, even for genes of the same type. Therefore, gene expression and gene mutation data should both be learned by the model to enhance the identification of biomarkers.
Part 3: Model performance
Table 4. Comparison of model efficiency in identifying biomarkers.
Measurement values used in testing (LUSC):
Deep learning model | AUROC | Recall | BCE
LUSCNPNF | 0.745 ± 0.040 | 0.592 ± 0.081 | 0.596 ± 0.028
LUSCNNPNF | 0.709 ± 0.052 | 0.389 ± 0.127 | 0.596 ± 0.003
LUSCNOPNF | 0.753 ± 0.001 | 0.528 ± 0.035 | 0.467 ± 0.018
LUSCNONPNF | 0.753 ± 0.010 | 0.528 ± 0.035 | 0.467 ± 0.018
Measurement values used in testing (LUAD):
Deep learning model | AUROC | Recall | BCE
LUADNPNF | 0.745 ± 0.040 | 0.592 ± 0.081 | 0.596 ± 0.028
LUADNNPNF | 0.710 ± 0.040 | 0.389 ± 0.127 | 0.586 ± 0.030
LUADNOPNF | 0.712 ± 0.014 | 0.432 ± 0.042 | 0.510 ± 0.017
LUADNONPNF | 0.753 ± 0.011 | 0.528 ± 0.030 | 0.467 ± 0.018
* The results in the table are derived from 5 tests; each time a new seed number was assigned to determine the learning stability of the model at different starting points. No model has had its hyperparameters tuned.
Picture 3. Loss curves of the training and test data sets for the LUSCNOPNF and LUADNONPNF models (gene expression and gene mutation), respectively.


The mean square error (MSE) for each epoch, i.e. each learning pass over the entire dataset, can be graphed as a loss curve. From the graph, it can be seen that as the epochs increase, the MSE decreases continuously. This is because the deep learning model takes multiple cycles to learn the dataset. In addition, the loss curves (Picture 3) of the training and validation data sets can indicate overfitting or underfitting in the model of interest and trigger early stopping to halt the learning process before the model overfits the training dataset, which would make actual use highly inaccurate.
Conclusions
The U-OMICS deep learning model for biomarkers was developed using deep neural network techniques to identify key genes in lung cancer pathogenesis, using a fully connected network algorithm to help learn the multi-omics data. When the model was evaluated with the test set, the results showed that for LUSC lung cancer, using unsupervised pretraining gave the highest AUROC of 0.7982 ± 0.01745, while for LUAD lung cancer the highest AUROC, 0.7529 ± 0.0106, was achieved without unsupervised pretraining, not much different from using unsupervised pretraining.
Acknowledgments
This project was accomplished through the help of many parties, and I would like to say thank you on this occasion. I would like to thank Teacher Teerawut Chanthaphan of Princess Chulabhorn Science High School Mukdahan, the project consultant, for giving advice on how to analyze the results and present them. I would also like to thank the School of Information Science and Technology, Vidyasirimedhi Institute of Science and Technology (VISTEC), the project advisor, for providing information about programming, machine learning, and deep learning, as well as giving advice.
References
[1] Mahidol Wittayanusorn School, “CBIO069T - CANDraGAT: Deep Learning for Cancer Drug Response,” Regeneron ISEF, May 2022. [Online]. Available: https://projectboard.
world/ isef/ project/ cbio069t--- candragat- deep- learning- for- cancer- drug- response. [Accessed 20 July 2022]. [2] Nature Communication, “Network-based machine learning in colorectal and bladder organoid models predicts anti- cancer drug efficacy in patients,” 30 October 2020[ online] . Available: https://www.nature.com/articles/s41467-020-19313-8. [Accessed 25 July 2022]. [3] ResearchGate, “Transposable Elements in Human Cancer: Causes and Consequences of Deregulation,” May 2017. [ online] . Available: https: / / www. researchgate. net/ figure/ TE- insertionassociated- loss- or- gain- of- a- few- base- pairs- andmutagenesis- in- human_tbl2_316726055. [ Accessed July 30 2022]. [4] ScienceDirect, “Machine learning applications for therapeutic tasks with genomics data,” 10 June 2022. [ online] . Available: https://www.sciencedirect.com/science/article/pii/S266 6389921001768. [%1 Accessed1 August 2022].


Thailand – Japan Student Science Fair 2023 “Seeding Innovations through Fostering Thailand – Japan Youth Friendship”

The Environmental Controller of Photosynthesis Bacteria (PSB) Insemination
Aekkarat Suwannarat1 and Watcharit Khongraeung1 Advisor: Mr. Wichai Buaniaw2 1,2Princess Chulabhorn Science High School Satun, 138 Moo 12, Chalung, Mueang, Satun 91140

Abstract
This project aims to increase the production speed of Photosynthesis Bacteria (PSB) cultivated by merchants or farmers. It raises the growth rate of the micro-organisms by controlling the growth factors, consisting of light and, especially, water circulation, which prevents the microbes from sedimenting and lets them grow in a wider space, cutting down the time of the growing process and the owner's caretaking workload. The system detects the color of the liquid using AI, continuously compares it against reference colors to assess readiness, and sends notifications through "Line Notify" when the product is ready to use.

Keywords: Photosynthesis Bacteria (PSB), Insemination, Environmental controller, Color detection

Introduction
Due to widespread agricultural practices both domestically and internationally, farmers share a common goal of achieving high-quality, high-yield crop production. While some farmers rely on chemical substances to accelerate their crop production, they face the risk of negative impacts from residual chemicals. As a result, many farmers have turned to methods that involve minimal amounts of chemicals and are more environmentally friendly. One such method is utilizing photosynthetic bacteria to enhance nutrient availability. These bacteria are easy to cultivate, but they require proper care and control in an appropriate environment to survive and thrive. To address this, we have developed an environmental control system to facilitate the cultivation process and provide convenience to farmers and producers in maintaining optimal conditions for bacterial growth.
Our project integrates scientific knowledge, engineering, artificial intelligence, and Internet of Things (IoT) technology to cover various aspects of operation, including detection, data analysis, notifications, and physical interventions. Furthermore, the system is designed to operate continuously with a one-time setup, reducing the time and effort required for maintenance. Additionally, it helps accelerate bacterial growth, thereby reducing production time per cycle. Ultimately, our project targets farmers seeking to increase and nurture their crop yields by utilizing photosynthetic bacteria, which not only provide benefits but also contribute positively to the environment.

Materials and Methods
Regarding the implementation, our project involves two main components: hardware and software. In the hardware section, we construct a device that includes a shaking mechanism to accelerate the bacterial growth reaction and install LED lights to provide an additional light source for the photosynthetic bacteria to perform photosynthesis during both day and night. Moving on to the software part, we develop a GUI program for monitoring and displaying the progress of photosynthetic cultivation. The program incorporates AI, specifically computer vision, for color detection of the cultivated bacteria. It also includes


real-time tracking capabilities and automatic notifications via the Line Messaging API.

Figure 1. Controller program interface

Finally, in the experimental phase, we compare the timeframe required for conventional photosynthetic bacteria cultivation, which relies solely on natural environmental conditions and human intervention, with the timeframe using our environmental control system. This comparison demonstrates the project's capability to shorten the cultivation period and to achieve more pronounced results when applied to large-scale or long-term cultivation.

Results and Discussion
Based on the experimental results, the photosynthetic bacteria cultivated with the environmental control system exhibited faster growth than the group cultivated conventionally. The conventional cultivation period lasted 14 days, while the cultivation period using the control system was reduced to 8-10 days. This indicates that our project can shorten the cultivation timeframe while controlling growth factors. Additionally, the project includes color detection of the bacteria to assess their readiness and sends notification messages specifying the number of bottles ready for use. Furthermore, the control system demonstrated continuous operation by utilizing battery-powered electrical energy. From our experiments, we have observed limitations in our project. Specifically, photosynthetic bacteria cultivated with groundwater tend to exhibit a green color instead of the typical red color. This divergence restricts the detection and evaluation of the project, which relies on detecting the red color of the bacteria for notification purposes. Furthermore, our control system relies on external factors such as natural sunlight for photosynthesis, which we aim to address and improve in future iterations.

Conclusions
In conclusion, the control system successfully shortens the cultivation period by manipulating growth factors.
It incorporates color detection of the bacteria to assess their readiness and sends automated notifications. The system operates continuously, relying on battery power. However, limitations were identified, such as color divergence when using groundwater and reliance on external factors like sunlight for photosynthesis. These drawbacks will be addressed and improved in future developments.

Acknowledgments
We would like to express our gratitude to Mr. Wicha Buaniao, our project advisor, Dr. Shinpong Angsushochatmethee, our special advisor, the faculty administrators, teachers, staff, as well as our parents and guardians for their knowledge, guidance, valuable advice, and continuous support throughout the project development process.

References
1. Zenyr Garden. (2022, October 26). "How to Make PSB Photosynthetic Bacteria at Home". https://zenyrgarden.com/how-to-make-psb-photosynthetic-bacteria-at-home/
2. PACKT. "Object detection using color in HSV". https://subscription.packtpub.com/book/data/9781789537147/1/ch01lvl1sec09/object-detection-using-color-in-hsv
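The project's readiness check above classifies a culture by the color of the liquid (red means ready; green means not ready, or groundwater interference). A minimal sketch of a hue-based classifier, assuming illustrative hue and saturation thresholds rather than the project's actual calibration:

```python
import colorsys

def psb_ready(rgb):
    # Classify a culture's average color: mature PSB cultures are red,
    # immature (or groundwater-affected) cultures stay green.
    # The hue bands and the saturation cutoff below are assumptions.
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    hue_deg = h * 360.0
    if s < 0.2 or v < 0.2:              # too pale or too dark to judge
        return "unknown"
    if hue_deg < 20 or hue_deg > 340:   # red band
        return "ready"
    if 80 <= hue_deg <= 160:            # green band
        return "not ready"
    return "unknown"

print(psb_ready((200, 30, 30)))   # deep red culture  -> ready
print(psb_ready((40, 180, 60)))   # green culture     -> not ready
```

In practice the project's detector works on camera frames, where the same idea is usually applied per pixel (e.g. HSV thresholding, as in reference 2) before averaging.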


Obstacle Detection-Guided Robot Prototype for the Blind with GPS Positioning
Nuntawat Rodjanaphun1 and Hirun Phienpoom1 Advisor: Likit Thoppadids2 1,2Princess Chulabhorn Science High School Phetchaburi, Cha-am, 76120, Thailand

Abstract
Blindness is a physical disability that creates significant challenges for those affected in navigating various places. To address this, we have developed a robot capable of detecting obstacles to assist individuals with visual impairments in moving to different locations. We also aim to create a prototype of this navigation robot for further adaptation and development in future projects. We based the design of the robot's legs on Jansen's linkage, from the invention many people know as the 'Strandbeest', and used an ultrasonic sensor to detect obstacles. We also integrated a joystick module for control and a GPS module for tracking the user's coordinates. Testing was divided into two phases: a performance test, in which we evaluated the performance of four functions, and a real-use test, in which five volunteers tried the robot and completed assignments. The performance results showed errors in the motor control function, with two failed tests out of ten. The object detection function sounded a notification on all ten attempts. No errors were encountered in any of the tests of the help button function. Only a single error was identified in the positioning coordinate function. In the real-use test, the surveys collected from the volunteers showed that the difficulty of operating the robot was mostly rated normal, and its convenience of use was mostly rated convenient.
Safety levels for the robot were mostly rated high, and users could mostly avoid obstacles, encountering only one or two. In conclusion, our robot can be used for various types of movement, but there may still be some imperfections in its functions and certain limitations that need further development and improvement.

Keywords: the blind, robot, Strandbeest, Jansen's linkage, GPS module

Introduction
Currently, people who are blind face difficulties when trying to move to different positions by themselves. They need various tools to help them move, such as canes and guide dogs. However, blind individuals may not have much experience with using canes, which can lead to accidents while walking. Moreover, guide dogs need extensive training, and their cost can be high, which unfortunately means that some people cannot afford them. Therefore, we have chosen to develop a guide robot designed to assist the blind. Within our robot's scope, it is intended to be functional in smooth and slightly sloped areas where there is access to the internet.

Materials and Methods
Materials
1. Laser cutting machine
2. 3D printer
3. Arduino Uno board
4. ESP8266
5. GPS module
6. HC-SR04 ultrasonic sensor
7. Motor driver module
8. DC gear motor
9. PS2 XY joystick module
10. Buzzer
11. Button module


12. Protoboard
13. Jump wires
14. AA battery
15. Acrylic
16. PVC pipe

Methods
1) Structure design
For the legs, we obtained a reference design from the Strandbeests. We studied them until we found a pattern called Jansen's linkage, a planar leg mechanism designed by the kinetic sculptor Theo Jansen to generate smooth motion and move across irregular terrains, and we built the legs from this pattern.
Figure 1 Strandbeests
Figure 2 Jansen's linkage
We used the GeoGebra app to design the leg structure and then cut the 3 mm acrylic materials using the RDWorks app, and Tinkercad for designing the motor connectors. For the body, the robot's structure is made of 3 mm acrylic sheets. It has slots for fitting modules and can also be extended with additional layers to increase the available space.
Figure 3 The components of the body and legs in the RDWorks app
The handle was constructed from a PVC pipe with a diameter of 1.2 centimeters and a length of 50 centimeters.
Figure 4 Robot model
2) Code and module design
Various modules are used in the construction of the robot. The first main controller is the Arduino UNO, which is connected to four modules: the L298N motor driver module, which is paired with two DC motors; a buzzer module; an HC-SR04 ultrasonic sensor; and a PS2 XY joystick module. The other main controller is the ESP8266, which is connected to two modules: a button and a GPS module. A power bank serves as the power source for the boards, and a separate lithium polymer battery supplies power to the motor driver module.
Figure 5 Schematic diagram of a circuit
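The HC-SR04 ultrasonic sensor in the circuit above measures distance from the width of its echo pulse: the pulse lasts for the sound's round trip to the obstacle and back. A minimal helper showing the underlying arithmetic (a speed of sound of roughly 343 m/s and a 30 cm alert threshold are assumed values, not the project's):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def echo_to_distance_cm(pulse_width_us):
    # The echo pulse covers the round trip to the obstacle and back,
    # so halve the travel time before converting to distance.
    return pulse_width_us * SPEED_OF_SOUND_CM_PER_US / 2

def obstacle_alert(pulse_width_us, threshold_cm=30):
    # Mirror the robot's behaviour: sound the buzzer when an obstacle
    # is closer than the threshold.
    return echo_to_distance_cm(pulse_width_us) < threshold_cm

print(echo_to_distance_cm(1000))   # about 17.15 cm
print(obstacle_alert(1000))        # within 30 cm, so alert
```

On the robot itself the same calculation runs on the Arduino, with the pulse width obtained from the sensor's echo pin.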


3) Testing
1) Performance test
We tested each function ten times to assess its performance.
1.1) Motor control function: tests of controlling the robot by moving it in various directions.
1.2) Object detection function: tests of the sound alert when an obstacle is detected.
1.3) Help button function: tests of pressing the help button and checking whether a request-for-assistance message appears in the Line Notify chat.
1.4) Positioning coordinate function: tests in which a message is sent to the Line Notify chat and the board sends back the user's coordinates.
2) Real-use test
Five volunteers tested our robot and completed assignments via a Google Form covering four question topics: difficulty in operating the robot, convenience in operating the robot, safety when utilizing the robot, and the number of obstacles the user could evade. Subsequently, we gathered the data.

Results and Discussion
1) Performance test
1.1) Motor control function: ten tests were conducted to control the motors. Of these, two tests had control errors. Both errors occurred due to misconfigured code with incorrect coordinate settings, causing the robot to move in an uncontrolled direction.
Table 1: Evaluation of the motor control function
1.2) Object detection function: it was tested ten times, and a notification sound was produced on all ten occasions.
Table 2: Evaluation of the object detection function
1.3) Help button function: it was tested a total of ten times, and no errors were encountered in any of the tests.
Table 3: Evaluation of the help button function
1.4) Positioning coordinate function: after conducting the positioning coordinate tests, only a single error was identified.
It occurred due to the module's limited accuracy, which produced erroneous values, resulting in coordinates that did not match the actual user location.
Table 4: Evaluation of the positioning coordinate function
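The per-function results in Tables 1-4 can be summarized as pass percentages (successful tests divided by total tests, times 100); the computation is simply:

```python
def pass_rate(successes, trials):
    # percentage = (number of successful tests / total tests) x 100
    return successes / trials * 100

# Results reported above: motor control 8/10, object detection 10/10,
# help button 10/10, positioning coordinates 9/10.
results = {
    "motor control": (8, 10),
    "object detection": (10, 10),
    "help button": (10, 10),
    "positioning": (9, 10),
}
for name, (ok, total) in results.items():
    print(f"{name}: {pass_rate(ok, total):.0f}%")
```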


Based on the results of all the tests, we calculated the percentage for each function using the formula percentage = (number of successful tests / total number of tests) × 100, and obtained the values shown in Graph 1.
Graph 1 Results of the performance test
2) Real-use test
After collecting the results from the volunteer-conducted test surveys:
2.1) Difficulty in operating the robot: two respondents answered 'low,' and three respondents answered 'normal.'
2.2) Convenience in operating the robot: all five volunteers responded that it was 'convenient.'
2.3) Safety when utilizing the robot: one person responded 'normal' and four people responded 'high.'
2.4) Number of obstacles the user could evade: one volunteer could avoid one obstacle, two volunteers could avoid two each, and two volunteers could avoid three each. When calculated as percentages, the difficulty of operating the robot was mostly normal, and its convenience of use was mostly convenient. Safety levels were mostly high, and users could mostly avoid obstacles, encountering only one or two.
Graph 2 Results of the real-use test

Conclusions
After testing all four functions, our robot performed well on most of them. However, the positioning coordinate function still needs improvement; to enhance the performance of the module used, adjustments or a replacement might be necessary. In the real-use test, the majority of users expressed positive feedback, but our robot still needs further development in various aspects, especially in the control system for better obstacle avoidance.

References
[1] Chinnapong, P. (2553). Attitudes of the Blind towards Environmental Accessibility. Retrieved December 1, 2022, from https://so02.tci-thaijo.org/index.php/jars/article/download/168901/121527/474169
[2] Fx4u. (2560). Strandbeest - a Robotic Project. Retrieved January 14, 2023, from https://www.hackster.io/fx4u/strandbeest-a-robotic-project-7e1e23
[3] Safwit. (2565). Arduino Line Notify alert to Line. Retrieved January 15, 2023, from http://youtu.be/v8wAW2no
[4] Aqib. (2563). Controlling DC Motors with Arduino | Arduino L298N Tutorial. Retrieved January 15, 2023, from https://www.hackster.io/muhammadaqib/controlling-dc-motors-with-arduino-arduino-l298n-tutorial-e62697


Study and Development System of the Robot Arm
Nattakorn Limpanarom1 and Chayapon Kanyamasa1 Advisor: Desrit Pararukchuchot2 Special advisor: Phuwadon Boon-Um3 1Princess Chulabhorn Science High School Phitsanulok, Makam-soong, Phitsanulok 65000, Thailand 2,3Department of Electrical and Computer Engineering, Faculty of Engineering, Naresuan University, Tapoh, Phitsanulok 65000, Thailand

Abstract
As technology continues to rapidly advance, various industrial processes have begun utilizing technology in numerous ways. One such technology is the robotic arm, which has become a widely adopted substitute for human labor in the manufacturing industry. Robotic arms are capable of performing tasks that require continuous, repetitive work or that involve the dangerous handling of chemicals. Tasks that are too heavy or difficult for humans to perform can also be done efficiently using a mechanical arm. In addition to these benefits, robots increase work efficiency and accuracy and can work around the clock without requiring breaks, resulting in a higher number of workpieces produced. Industrial robotic arms have two primary modes of operation: working in an automated system, or being manually operated by humans to move and hold workpieces between different locations. The objective of this project is to manage an industrial robotic arm and simulate its operation in picking up objects. To achieve this objective, we created a circuit that employs an Arduino as a controller to operate the servo motors that manage the robotic arm. We also developed a program that enables users to communicate with the robotic arm via the LabVIEW program. Users can define, on the robotic arm controller, the desired locations for the arm to pick up and place an object.
The program processes the target position values and transfers the data to the Arduino board, which drives the servo motors mounted on each axis of the robotic arm to the target position specified by the user. Our tests indicate that the robotic arm can be successfully managed to relocate objects from one location to another.

Keywords: LabVIEW program, servo motors, robot arm

Introduction
Robots are mechanical devices that come in different shapes and with varying structural characteristics. They have various functions that can be directly or indirectly controlled by humans, and some systems are designed to allow robots and humans to work together. Robots are typically designed to perform difficult tasks that require precision, such as those involved in space exploration and manufacturing. The field of robotic technology is rapidly evolving, with robots increasingly being utilized in various domains, such as medicine and sports. There is even a trend towards developing robots that are more humanoid in appearance to facilitate their integration into daily life. Designing a project involving a robotic arm can contribute to the development of assistive robots and provide an opportunity to simulate their operation.

Materials and Methods
Materials
- 3D printer
- Servo motors
- Wire
- Arduino UNO R3
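The PC side of this project is written in LabVIEW, but the position-to-Arduino handoff it describes (pack the user's target angles into a frame, write the frame over the serial link, let the Arduino drive each servo) can be sketched in Python. The comma-separated frame format, joint names, and port name below are illustrative assumptions, not the project's actual protocol:

```python
def encode_command(base, shoulder, elbow, gripper):
    # Pack one target angle (0-180 degrees) per joint into a simple
    # newline-terminated ASCII frame, e.g. "90,45,120,10\n".
    # The frame format and joint set are assumptions for illustration.
    angles = (base, shoulder, elbow, gripper)
    for a in angles:
        if not 0 <= a <= 180:
            raise ValueError(f"servo angle out of range: {a}")
    return ",".join(str(a) for a in angles) + "\n"

frame = encode_command(90, 45, 120, 10)
print(repr(frame))

# On real hardware the frame would then be written to the board, e.g.:
#   import serial                               # pyserial
#   with serial.Serial("COM3", 9600) as port:   # port name is an assumption
#       port.write(frame.encode("ascii"))
```

Validating the angle range on the PC side keeps out-of-range commands from ever reaching the servos, which only accept 0-180 degrees.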

