
Computing and Information Technology (1)

Published by mintra.s, 2023-12-18 01:14:51


98 Measurement results can be aggregated quickly by sending them to headquarters via wireless communication. Since the results can be provided to the elderly quickly and easily, their motivation for exercise can be improved. Figure 4: Developed device (NFC reader, antenna, touch panel, push switches, power supply) Figure 5: Operation diagram Conclusions Since the measurement results can be provided to the elderly quickly and easily, their motivation for exercise can be improved. Acknowledgments Our sincere thanks to Dr. Kazuki Ashida, Department of Engineering Information Electronics, National Institute of Technology, Nagano College, for giving us the opportunity to conduct this research and for his guidance throughout its execution as our advisor. We would like to express our deepest gratitude to him.


Thailand – Japan Student Science Fair 2023 “Seeding Innovations through Fostering Thailand – Japan Youth Friendship” 99 Thanapat Wanakitsamphan1 and Yuna Terui1 Prof. Keishi Okamoto1 1Sendai National Institute of Technology Abstract The goal of this project was to study whether a convolutional variational autoencoder could be applied to evaluate the handwriting of foreigners. We collected handwriting images of 6 Japanese characters and divided them into a dataset of well-written letters by natives and a dataset of foreigners’ handwriting. All images were regularized to 64x64x1 tensors whose elements were in the range [0,1]. The variational autoencoder model consisted of two parts: an encoder and a decoder. The encoder used 2-dimensional convolutional layers and max-pooling layers. The decoder used 2-dimensional transposed convolutional layers and up-sampling layers. All hidden layers used the ReLU activation function and the output layer used the Sigmoid function. Binary Crossentropy was used to calculate the reconstruction loss. The first method's reconstructions were incorrect when characters were similar in shape. The reconstruction loss distributions of foreigners’ handwriting were wider and less right-skewed than those of natives’ handwriting. Keywords: Handwriting, Convolutional Neural Network, Variational Autoencoder Handwritten Character Recognition Model for Foreigners' Handwritten Japanese Introduction Sendai National Institute of Technology has a number of foreign students enrolled, and most of them seem to have struggled with Japanese orthography. This fact motivated us to study how artificial intelligence can be applied to language acquisition. We found that foreigners’ handwriting can be unfamiliar to Japanese people and difficult to read, so we set out to use deep learning to recognize characters handwritten by foreigners.
We also aimed to use deep learning to analyze characters written by native speakers, by foreigners who have mastered Japanese, and by foreigners who have just started to learn Japanese. Tools and Methods Part 1: Data Collection Figure 6 Website for data collection We developed backend software using NodeJS to serve the website to volunteers, collect the handwriting in PNG format and organize the data. The software was deployed on an AWS EC2 instance. Participants were asked to write Japanese characters displayed on the screen. The dataset was divided into two parts: a dataset of images well written by Japanese people and a dataset of images improperly or incorrectly written by foreigners. We collected 1798 handwriting images of the 6 characters, including 2 Hiragana, 2 Katakana and 2 Kanji, that we had chosen to study


100 in this project: あ, ぬ, チ, ケ, 世 and 万. The numbers of images are shown in Table 1.

Table 1 Dataset
Character | By natives | By foreigners
あ | 212 | 96
ぬ | 213 | 96
チ | 213 | 96
ケ | 197 | 96
世 | 194 | 96
万 | 197 | 92
Total | 1226 | 572

Part 2: Preprocessing In preprocessing, all images were resized to 64x64 with 1 channel. Their colors were inverted and the color values were regularized to the range [0,1]. Figure 2 Image Preprocessing The handwriting dataset of natives was split into a train set and a test set with ratio 7:3. Part 3: Convolutional Variational Autoencoder We designed a convolutional variational autoencoder which consisted of an encoder and a decoder, as shown in Figure 2. Figure 3 An autoencoder architecture When a 64x64x1 tensor is passed as input to the encoder, its features are extracted and compressed to a 4x4x16 tensor in latent space. This compressed value is then passed to the decoder in order to reconstruct the original image. The architectures of the encoder and decoder are shown in Figure 3. To encode, the encoder uses 2-dimensional convolutional layers and max-pooling layers. For reconstruction, 2-dimensional transposed convolution layers and upsampling layers are used. These layers use ReLU as the activation function, except for the last layer of the decoder, which uses the Sigmoid function:

ReLU(x) = max(0, x)
Sigmoid(x) = 1 / (1 + e^(−x))

Figure 4 The encoder (left), the decoder (right) We had 2 methods of training models: One model for all characters and One model for one character. These two methods used the same architecture but different training procedures. Both were trained using the ADAM optimizer with learning rate = 0.1 and validation split = 0.1, and used Binary Crossentropy as the loss function:

L(y, ŷ) = −(1/N) Σᵢ Σⱼ [ yᵢⱼ ln(ŷᵢⱼ) + (1 − yᵢⱼ) ln(1 − ŷᵢⱼ) ]

In the first method, One model for all characters, the model was trained with all images in the dataset for 750 epochs. In the second method, One model for one character, each model was responsible for only one character.
The models for Hiragana and Katakana were trained with epochs = 500 and the models for Kanji were trained with epochs = 800.
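As a concrete illustration, the encoder/decoder described above can be sketched in Keras as follows. The filter counts and kernel sizes here are our assumptions; the paper states only the layer types, the 64x64x1 input, the 4x4x16 latent tensor, and the training settings:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder():
    # Encoder: 64x64x1 input compressed to a 4x4x16 latent tensor by
    # four Conv2D + MaxPooling2D stages (filter counts are assumptions).
    encoder = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),  # -> 4x4x16
    ])
    # Decoder: mirror of the encoder using transposed convolutions and
    # up-sampling; the final layer uses Sigmoid so outputs lie in [0, 1].
    decoder = models.Sequential([
        layers.Input(shape=(4, 4, 16)),
        layers.Conv2DTranspose(16, 3, activation="relu", padding="same"),
        layers.UpSampling2D(2),
        layers.Conv2DTranspose(16, 3, activation="relu", padding="same"),
        layers.UpSampling2D(2),
        layers.Conv2DTranspose(32, 3, activation="relu", padding="same"),
        layers.UpSampling2D(2),
        layers.Conv2DTranspose(32, 3, activation="relu", padding="same"),
        layers.UpSampling2D(2),  # -> 64x64
        layers.Conv2D(1, 3, activation="sigmoid", padding="same"),
    ])
    model = models.Sequential([encoder, decoder])
    # Training settings as stated in the paper: ADAM, learning rate 0.1,
    # binary cross-entropy reconstruction loss.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
                  loss="binary_crossentropy")
    return model
```

Training with `model.fit(x, x, epochs=..., validation_split=0.1)` then follows the per-method epoch counts given above.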


101 The reconstruction losses relative to the original images, calculated by Binary Crossentropy, of the test set and of the dataset by foreigners were used to evaluate and analyze handwriting. Results and Discussion Figure 5 shows the loss and validation loss while training the model with the method One model for all characters, and Figure 6 shows the loss and validation loss while training the models with the method One model for one character. Figure 5 Loss and Validation Loss of the first method: One model for all characters Figure 6 Loss (upper) and Validation Loss (lower) of the second method: One model for one character We found that the autoencoder of the first method, which was trained on all the data, had problems when some characters have similar features. For instance, the characters 万 and ぬ appeared to be reconstructed as the character あ, as shown in Figure 7. This means that the model is unusable with a large number of characters which share similar features or shapes, for example め and ぬ, 土 and 士, et cetera. Figure 7 Incorrect reconstruction by the first method In this study, we therefore focused on the second method, One model for one character, which had no problems when characters were similar in shape. The histograms of the distributions of reconstruction losses are shown below (Figures 8 - 13). DS1 represents the test set of well-written letters by Japanese and DS2 represents the dataset of foreigners.
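The per-image evaluation metric can be reproduced with a few lines of NumPy. This is a sketch of the binary cross-entropy reconstruction loss, not the authors' exact code:

```python
import numpy as np

def reconstruction_loss(original, reconstructed, eps=1e-7):
    """Mean binary cross-entropy between an original image and its
    reconstruction; both arrays hold pixel values in [0, 1]."""
    y = np.asarray(original, dtype=np.float64)
    # Clip to avoid log(0) when the reconstruction saturates at 0 or 1.
    y_hat = np.clip(np.asarray(reconstructed, dtype=np.float64), eps, 1.0 - eps)
    return float(np.mean(-(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))))
```

An image reconstructed almost perfectly scores near 0, while images the model cannot reproduce score higher; comparing these per-image scores between the two datasets yields the histograms discussed here.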


102 Figures 8 - 13 Reconstruction loss distributions The reconstruction loss distributions of the two datasets have one common, noticeable difference: the reconstruction losses of DS2 have wider ranges of distribution. This can be interpreted as the images in DS1 being more well-written and having less variation than those in DS2. The reconstruction loss distributions have almost identical shapes when comparing DS1 and DS2 of the two Hiragana あ and ぬ. For the other letters, the reconstruction loss distribution of DS1 appears to be more right-skewed than that of DS2. By natives By foreigners (loss: 0.152118742) (loss: 0.19698973) (loss: 0.087327488) (loss: 0.147028223) (loss: 0.187802032) (loss: 0.324365318) Figure 14 Comparison of reconstruction between datasets Figure 14 shows the reconstruction of each character. It appears that well-written characters are more accurately reconstructed than characters by novices. Since the models were trained on a limited amount of data, they could not extract some main features of each character. Figure 15 Original and reconstructed image of 世 Figure 16 Original and reconstructed image of ぬ For example, as shown in Figure 15, the upper-left part of 世 is not fully reconstructed, nor is the lower-right circle of ぬ in Figure 16. Figure 14 also shows that some incorrect inputs are corrected after reconstruction. It is presumed that, if trained with enough data, the model could be applicable to handwriting correction as well. Conclusions The correct handwriting of natives is more accurately reconstructed than the incorrect handwriting of foreigners. Therefore, we recommend utilizing the average reconstruction loss of handwriting by natives as a standard threshold to evaluate handwriting. In addition, we see that this could be developed into a tool for handwriting correction. Acknowledgments We would like to express our sincere gratitude to Prof. Keishi Okamoto for his suggestions and academic encouragement. We


would also like to thank our friends for helping us finalize this project.


104 Yuki TSUGE1 and Haruto SHIBITA1 Advisors: Teppei MIURA, Makoto SATO, Daichi MATSUI, Kaito MOTOKAWA, and Keiko EGUCHI2 1,2 National Institute of Technology (KOSEN), Toyota College, 2-1 Eisei-cho, Toyota, Aichi, 471-8525, Japan Abstract We believe that the current facility tour lacks flexibility and can be dull for participants who must rely solely on explanations. When we surveyed students from National Institute of Technology Toyota College, 46% of those who find facility tours uninteresting mentioned their inability to freely explore the facilities as the primary reason. To address these problems, we developed a mobile application that transforms the facility tour experience into an engaging game-like adventure. The app utilizes Bluetooth, location information, and the gyro-sensors on smartphones to guide participants through various checkpoints within the venue, ultimately leading them to the goal. With this application, we aim to offer a new style of facility tour that addresses these problems and allows participants to have an enjoyable experience while exploring the venue. This application offers organizers of facility tours the advantage of attracting a larger number of visitors. Moreover, it provides the opportunity for participants to fully enjoy and comprehend the facility. TREASURE on FIND: Development of an Application to Support Freely Facility Tours Keywords: new style, mobile application, Bluetooth, location information, gyro-sensors Introduction We believe that the problem with current facility tours is that they have a rigid format and can sometimes feel boring. In fact, when visiting facilities, we sometimes could not hear the explanations in a large group, and sometimes felt that the explanations were too long and boring.
In a survey of National Institute of Technology Toyota College students, 46% of those who found the facility tours boring said it was because they were in large groups and could not tour the facilities freely. The application we developed guides participants to checkpoints, preventing them from gathering at a single site and making it possible for them to tour facilities in small groups. We believe that this will make it easier for participants to listen to the explanations given by the facility tour organizer. In addition, after arriving at a checkpoint and receiving an explanation about the location, a quiz about the explanation is given to the participants to make them more interested in the facility tour. In order to eliminate boredom during the facility tour, the application uses various smartphone functions as items to make the tour feel like a game, like a treasure hunt. This application was created for use in school and corporate tours. We wanted participants to install this application during tours of corporate and school facilities and turn the tours into something fun and easy to understand. The usefulness of the application was determined by conducting mock tours and examining the results of questionnaires given to the participants and the conditions of their arrival at the checkpoints. Through these surveys, we found that the application is very attractive to tour participants.


105 Figure 1. Image of the gyro-sensor item; the needle points in the direction of the checkpoint, and the red dot marks the user's location. Figure 2. Image of the location-information item (scheduled for future implementation); the green circle marks the checkpoint area. Figure 3. Image of the Bluetooth item; the stronger the signal strength, the greater the percentage of blue. Materials and Methods Use Case ・Tours of facilities that lack participant spontaneity. ・Crowded facility tours with many participants. Examples: school tours, factory tours Application Design Our application makes tours fun and game-like. ・Use smartphone sensors as items to locate checkpoints. ・Scan a QR code to confirm that you are at the correct checkpoint. ・Take a quiz about the checkpoint. ・Display a real-time ranking of points earned in quizzes. System Design Our application is composed of three main functions: gyro-sensor, location information, and Bluetooth. Item: Gyro-sensor (Figure 1) Calculate the angle φ toward the checkpoint from the coordinates (x1, y1) of the checkpoint and the coordinates (x2, y2) of the current location obtained from GPS, using the following formula:

φ = 90° − atan2(sin Δx, cos y1 tan y2 − sin y1 cos Δx), where Δx = x2 − x1

Item: Location information (Figure 2) Display a circle with a radius of 20 meters on the map and place the checkpoint within the circle. Item: Bluetooth (Figure 3) Place a Bluetooth beacon at the checkpoint and display its signal strength on the smartphone. Participant Use Flow 1. User registration. 2. Login. 3. Obtain rough location information for the checkpoint. 4. Use the items (the location information obtained in step 3, the gyro-sensor and Bluetooth) to find the location of the checkpoint. 5. Scan the QR code posted at the checkpoint. 6. Solve the quiz. 7. Repeat steps 3 to 6 for each checkpoint. Evaluation Method A mock tour was held to evaluate the following items. ・Participants' survey results.
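A minimal sketch of the gyro-sensor item's angle computation, assuming x is longitude and y is latitude in degrees (the paper does not state units). It uses the `atan2` form of the great-circle initial-bearing formula, which is equivalent to the tan-based variant above (multiply both arguments by cos y2):

```python
import math

def bearing_to_checkpoint(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees clockwise from north, from the current
    location (lat1, lon1) toward the checkpoint (lat2, lon2)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    # atan2(sin dl * cos p2, cos p1 * sin p2 - sin p1 * cos p2 * cos dl)
    theta = math.atan2(
        math.sin(dl) * math.cos(p2),
        math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl),
    )
    return (math.degrees(theta) + 360.0) % 360.0
```

The app would then rotate the on-screen needle by the difference between this bearing and the smartphone's heading from the gyro-sensor/compass.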


106 ・Real-time ranking of points earned in quizzes.

Table 1. Questionnaire results of the simulated experiment
Question | Yes | No
Did this application assist you with the tour? | 10 | 0
Compared to a typical group tour of a facility, did you enjoy this individual tour? | 9 | 1
Did the items make the tour more enjoyable? | 9 | 1
Did the real-time ranking function make the tour more enjoyable? | 10 | 0
Did the quiz help you better understand the places you visited? | 7 | 3
Was the tour smooth and uncrowded? | 10 | 0

Results Ten participants were invited to a simulated facility tour to test the functionality of this application. In the questionnaire used to evaluate the application, all participants indicated that the application assisted them in touring the facility and that they enjoyed the tour more than a typical tour. The real-time ranking function was also well received by all, and 9 out of 10 participants said that the items made the tour more enjoyable. In addition, one of the features of this application, individual-action tours, allowed the tours to be conducted smoothly and without crowding. The quiz had a correct response rate of approximately 85.6%. Seven out of ten respondents indicated that the quiz was helpful in promoting understanding. Discussion The 85.6% correct response rate indicates that the quiz was effective to some extent in assisting the facility tour. This is supported by the fact that 7 out of 10 participants indicated that the quiz enhanced their understanding. In addition, since all participants indicated that the application assisted their tours and allowed them to tour smoothly without congestion, we believe that the initial purpose of developing this application was achieved. Based on the results of this study, the following areas for improvement were identified, regarding the compass item. From the results of this experiment, we found that the compass item was not used very often.
We have two hypotheses about the cause. First, the GPS location circle was too small. To inform the user of the location of the checkpoint, this application presents a rough location of the checkpoint in the form of a circle. Because the circle was too small, users could reach the checkpoint without using the compass. Second, the person holding the QR code was visible from the outside and could be found without using the items; we would like to put the QR code on a wall to make it unattended and harder to find. Based on this experiment, we believe this application can transform the facility tour into an enjoyable experience and promote understanding. Conclusions "TREASURE on FIND" gives the organizers of facility tours the advantage of expecting more participants, and it gives participants an opportunity to fully enjoy and understand the facility. In the future, we intend to make this application even more enjoyable by adding functions such as Bluetooth distance measurement and safety features. This application will create a new future for facility tours.


107 Acknowledgments This project was supported by The Nitto Foundation.


108 Yui ASAKURA and Yuu INABA1 Advisors: Keita TSUZUKI and Teppei MIURA2 1,2 National Institute of Technology (KOSEN), Toyota College, 2-1 Eisei-cho, Toyota, Aichi, 471-8525, Japan Abstract Interest and enthusiasm for anime and manga have recently increased not just in Japan but all over the world. The "Oshi" (favorite persons or characters) and "Oshi-Katsu" (activities to support one's own Oshi) trends are becoming more popular. Talking about Oshi-Katsu with friends who like the same Oshi and attending events together are also an important part of Oshi-Katsu. However, it can be difficult to make Oshi-Katsu friends as easily as other friends, because some people find it hard to talk to their friends about their Oshi. To solve this problem, we developed "Oshi no Map", a web map application that allows people with the same Oshi to connect closely with each other. This web application provides the following two functions. 1. Display people who like the same Oshi as the user on a map around the user's current location. 2. Organize events and parties related to their Oshi and invite participants. We will present the project up to the point where the basic functions can be confirmed. In addition, we will discuss how to handle the location information provided and how to deal with the possible risks of meeting in person, assuming practical use. By using this application, you can make friends nearby who can share the fun and enrich your Oshi-Katsu. "Oshi no Map" ~ Web Application to Find New Friends with the Same Interests ~ Keywords: Oshi, Oshi-Katsu, friends, location, events Introduction In Japanese subculture, anime and manga hold significant representation. Although this culture is commonly perceived as popular among younger generations, the recent global surge in the popularity of anime and manga underscores its potential for the Japanese economy. "Oshi" is part of Japanese subculture.
An Oshi is a character or something that you like so much that you want to recommend it to others, and the activity of supporting an Oshi is called "Oshi-Katsu". The Oshi and Oshi-Katsu trends are becoming more popular. One reason for this may be that the coronavirus pandemic caused people to spend more time alone at home; as this influence shows, Oshi-Katsu works well as one-person entertainment. On the other hand, talking with friends who like the same Oshi and participating in events together are other ways to enjoy Oshi-Katsu. During previous coronavirus outbreaks, such enjoyment was not possible because of restrictions on multi-person and outdoor events, but now that the restrictions have been lifted, we are free to enjoy Oshi-Katsu. Nevertheless, there are many who want to engage in these activities but cannot, because they do not have friends who support the same Oshi or who attend the events of their Oshi together. Some of them find it hard to talk to their friends about their Oshi. Therefore, it is difficult to make Oshi friends as easily as other friends, and even if you want to fully enjoy time with Oshi friends, you may not have the opportunity to do so. "Oshi no Map" was developed to solve such problems. Through this project we also aim to create opportunities for cross-cultural exchange. We believe that by allowing more people to experience Japanese culture, more people will become interested in each other's countries, and this will help stimulate the economy.


109 Materials and Methods Project Goals and Features ・Utilize a web platform to connect and make friends with shared interests. ・Organize and join offline events centered around hobbies. Application Specifications The data communication mechanism for our application is shown in Figure 1. This application is designed to run as a map-based website and to be user-friendly. To achieve real-time map updates, information is sent to the server when user information is updated, and each user's page is refreshed. In addition to a user's location information, the details entered during login and event setup are stored in a cloud database. Figure 1. Data Communication Mechanism Implementation platforms and materials Figure 2 illustrates the relationships between the various web platforms used in the project. The arrows indicate the direction of data flow or interaction, suggesting a sequence or hierarchy in how these platforms interact with each other. ・Node.js: Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. In this project, Node.js is primarily used in the application layer, handling the logic and processes of the application. ・Firebase: Firebase is a platform developed by Google for creating mobile and web applications. Here, it facilitates real-time communication between the server and the database, allowing for instantaneous data updates and synchronization. ・Google Maps API: This is a set of APIs offered by Google to integrate maps into applications. In the project, the Google Maps API is used to fetch and display maps, providing users with geographic visualizations. Flow of Use 1. Access the web page. 2. Enter user information (ID, name, hobby) and log in. 3. Transition to the map page. 4-1. View your own and other users' information on the map. 4-2. Pins are displayed at your and other users' locations on the map.
5-1. Clicking anywhere except a user's pin sets up an offline event. 5-2. Enter the event details in the window that comes up. 5-3. Pins for events set by you and other users are displayed on the map. 5-4. Find an event on the map and go there to participate. Figure 2. Relation of the web platforms in the project Results and Discussion Figure 3 shows the actual user pins on the map. Users are indicated by red pins. Currently, only ID, name, and hobby can be entered, but we plan to make it possible to enter other information.


110 Improvements to this application include the handling of personal information such as location data, and the need for security measures because users meet others in person. Another issue is possible ambiguity in how hobbies and characters are referred to among users. In order to unify the way they are referred to, we are considering making the hobby input at login a multiple-choice field. Figure 3. User pins on a map of the web app Conclusions The "Oshi no Map" application provides a platform for individuals to connect based on their shared interests, specifically their "Oshi". By presenting users on a map interface, the application facilitates real-time connections and event participation centered around shared passions. Not only does this promote a richer and more enjoyable "Oshi-Katsu" experience, but it also fosters opportunities for cross-cultural interactions. As anime, manga, and other facets of Japanese culture continue to captivate global audiences, the application holds promise for cultural understanding, leveraging the power of collective interest. Going forward, addressing concerns related to personal information security and providing clearer hobby definitions will be essential to refine the user experience. Ultimately, "Oshi no Map" represents an innovative approach to community building in the modern digital age. Acknowledgments We would like to thank Keita Tsuzuki for useful discussions. We are grateful to Teppei Miura for assistance with the application development.


111 Akito Okamoto1 1Nara Women's University Secondary School, 1-60-1 Higashikidera, Nara City, Nara 630-8305 Abstract This project aims to speculate on how the five senses influence human emotions, to control human emotions, and to simulate the impact of situations on human emotions. In addition, I propose a new method for generalizing information from the five senses. The main issue with inferring emotion is accurately expressing complex emotions ("painful," "sad," etc.) that cannot be easily described and involve multiple intertwined emotions. This is because human sensitivity varies from person to person, and nuanced emotions depend on past experiences. The current major challenge is gathering information to generalize sensitivity. The prospects of this research include smoother communication, control of human emotions, and embedding emotions. Additionally, it is theoretically possible to infer sensations from situations, making it possible to provide pseudo-sensations to individuals with physical disabilities. Keywords: Machine Learning, Emotion, contrastive learning, complex plane, keyword 5 Infer and control emotions from information expressed by different sensory functions Introduction I often find that when I'm talking to someone, the content of my conversation doesn't get across or isn't understood. I think that differences in emotions (≈ sensations) are the reason for this. So, I wanted to infer how the five senses affect human emotions, and to create situations where the conversation is definitely understood by simulating the impact of situations on human emotions and controlling human emotions. Preliminary experiment I conducted a preliminary experiment to test my hypotheses. The sample size is 20 and the age range is 16-17. Hypothesis 1: Do the five senses influence emotions?
Figure 1: Percentage of emotions felt via each sense (bar chart, "Have you ever felt emotions through the senses?", Yes/No, for eyesight, hearing, taste, touch, and smell) Preliminary results show that all participants sensed emotions from sight and taste, and for each of the other senses more than 75% of participants sensed emotions as well. Since a significance test found this result to be significant, the hypothesis is considered correct. Hypothesis 2: Do the shape and color of objects affect emotions? Next, I conducted a questionnaire focusing on visual perception. I asked two questions to investigate the feelings people have toward shapes of different forms and colors.
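The significance claim for Hypothesis 1 can be checked with a one-sided exact binomial test. This is an illustrative sketch; the paper does not state which test or null proportion was used, so a null of random Yes/No answering (p = 0.5) is assumed:

```python
from math import comb

def binomial_p_value(successes, n, p=0.5):
    """One-sided p-value P(X >= successes) for X ~ Binomial(n, p),
    i.e. the chance of at least this many 'Yes' answers if each of the
    n respondents answered at random."""
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(successes, n + 1))
```

For example, 20 "Yes" answers out of 20 gives 0.5**20 ≈ 9.5e-7, and even 16 of 20 (the "more than 75%" senses) gives about 0.006, both well below the usual 0.05 threshold.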


112 Figure 2: Percentage of feelings toward each form (bar chart, "From which form did you feel which emotion?": Kind / scary / impressive, for Circle, Nonagon, Bow, NA) As a result, 90% of respondents found the circle to be kind, and none of the respondents found the circle "scary" or "impressive". However, disparate results were obtained for the other two types of figures. Figure 3: Percentage of feelings toward each color (bar chart, "From which gradation did you feel which emotion?": Wonder / Ecstasy / Sad, for Pale Blue, Orange, More than 2, NA) For all sentiments, more than 50% of the respondents responded to some of the figures, but it was difficult to differentiate by color. Materials and Methods Emotions extracted from faces I thought about how to determine a human's emotions. This time, I decided to extract emotions from faces, rather than from heart rate, body temperature, or anything else that can be quantified, because emotions are considered abstract, not concrete. Information from the five senses An attempt was made to infer emotion from the information from a human's five senses. The information obtained by the sensory organs was implemented in a pseudo-instrumental way. In addition, the information obtained from the five senses is assumed to be discrete in this case. Expand to the complex plane I placed the information of the five senses on the complex plane. This way, I thought I could express information while maintaining the relationship between each sense. This part is still a work in progress, as I just came up with it myself. I define ζ on the unit circle of the complex plane by ζ = exp(2πi/5), with values in the range 0 ≤ |z| ≤ 1. In this case, ζ² and ζ³ are located at the positions reached by rotating ζ by 2π/5 at a time, following Euler's formula. (Figure 4) Figure 4: Location of ζ on the complex plane Next, I apply the information of the five senses to vectors along ζ, ζ², ζ³, ζ⁴ and ζ⁵. By expressing them in this way, each sense can be


represented as a vector, and the addition of the senses can be easily expressed. (Figure 5) Figure 5: Placing a vector on ζ. Also, when all vectors are maximized (|aₖ| = 1), ζ and ζ⁴ are conjugate complex numbers, as are ζ² and ζ³, and ζ + ζ² + ζ³ + ζ⁴ + ζ⁵ = 0 holds. Next, I consider the point where all the vectors are added together. Two different weightings can both sum to the same point ξ⃗: a₁ζ + a₂ζ² + a₃ζ³ + a₄ζ⁴ + a₅ζ⁵ = b₁ζ + b₂ζ² + b₃ζ³ + b₄ζ⁴ + b₅ζ⁵ = ξ⃗. This leads me to believe that it is possible to produce a sensory value ξ⃗ representing a particular emotion by adjusting the lengths of the vectors. Also, even if one sense is deficient, if that point can still be reached, a physically challenged person may experience the same emotions as the average person. Contrastive learning Finally, I conducted contrastive learning with information from the five senses and emotions extracted from faces to derive associations. Results and Discussion Extract Emotion I tested this using the dataset in the references, but it was only able to represent primary emotions. This is because the emotions were not subdivided during learning, and secondary emotions are difficult to discriminate because they consist of multiple primary emotions. In the future, I would like to express complex emotions by treating them as several tags instead of deciding on one from a face. Process information Currently under investigation… The challenge is that I cannot yet conclude that emotion P lies in the vicinity of the vector sum P; I can only do this for emotions that are already known. Infer emotion The estimation up to primary emotions was good, but the expression of secondary emotions remains a challenge. Also, the most significant problem is the lack of numerical values for the results.
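The placement of the five senses on the complex plane can be sketched in a few lines; the weight values here are illustrative, since the mapping from raw sensory data to weights is the part still under investigation:

```python
import cmath
import math

# Five directions zeta^k = exp(2*pi*i*k/5), k = 1..5: the fifth roots of
# unity, one direction per sense (note zeta^5 = 1).
zeta = [cmath.exp(2j * math.pi * k / 5) for k in range(1, 6)]

def sense_point(weights):
    """Weighted sum of the five sense directions, weights in [0, 1];
    the resulting complex number xi stands for a mix of senses."""
    return sum(a * z for a, z in zip(weights, zeta))
```

With all weights equal the directions cancel (ζ + ζ² + ζ³ + ζ⁴ + ζ⁵ = 0), and ζ/ζ⁴ and ζ²/ζ³ form conjugate pairs, matching the relations used in the method.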
The sample size is small, and sufficient teacher data has not been gathered to determine accuracy.

Conclusions
Although numerical accuracy has not yet been obtained, it appears possible to produce emotions from the senses. However, due to the small sample size, the results are not yet conclusive, and I would like to focus on demonstration rather than theory from now on.

Acknowledgments
I would like to thank all those who responded to the survey.

References
[1] X. Sun, J. Zeng and S. Shan, "Emotion-aware Contrastive Learning for Facial Action Unit Detection," 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), Jodhpur, India, 2021, pp. 01-08, doi: 10.1109/FG52635.2021.9666945.
[2] D. Kim and B. C. Song, "Emotion-aware Multi-view Contrastive Learning for Facial Emotion Recognition," in Lecture Notes in Computer Science, pp. 178-195, Springer Nature Switzerland, Cham, 2022, doi: 10.1007/978-3-031-19778-9_11.
[3] A. Mollahosseini, B. Hasani, and M. H. Mahoor, "AffectNet: A New Database for Facial Expression, Valence, and Arousal Computation in the Wild," IEEE Transactions on Affective Computing, 2017.
[4] D. Kollias and S. Zafeiriou, "Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-task Learning and ArcFace," 30th British Machine Vision Conference 2019 (BMVC 2019), Cardiff, UK, September 9-12, 2019, https://bmvc2019.org/wp-content/uploads/papers/0399-paper.pdf


Mimo Onodera, Aika Sato, and Yu Nakata1
1Ibaraki College, National Institute of Technology (KOSEN), Nakane 866, Hitachinaka City, Ibaraki, Japan

Abstract
The project aims to invent a white cane equipped with sensors to assist visually impaired people in walking safely. White cardboard is rolled into a long, thin stick shape, and a reflecting material is attached to the cane. Two ultrasonic sensors and programmed micro:bits are prepared and attached to the left and right sides of the thin, round white cardboard. When the micro:bit A button is pressed, four different sounds are made every 100 centimeters, and the distance in centimeters is displayed on the micro:bit. The function can be stopped by pressing the B button. In addition, since the white cane is equipped with a reflecting material, the presence of the white cane user can be seen by people around them even when walking in a dark place. Consequently, the possibility of bumping into an obstacle can be significantly reduced. A white cane equipped with a sensor can assist visually impaired people to walk safely.

Keywords: white cane, ultrasonic sensors, micro:bit, reflecting material, visually impaired people

Introduction
The white cane was first introduced in the 20th century. Although its history is not long, white cane technology has grown in many ways. There are many kinds of canes on the market, such as a cane that detects objects in video and informs the user by sound, a cane that guides the user, a folding cane, and an AI-equipped cane. However, these are very expensive. In this project, we present a low-cost and user-friendly walking stick that helps the visually impaired.
We installed an ultrasonic sensor on a white cane to measure the distance and programmed it so that the user is alerted by sound.

Materials and Methods
Materials
1. Micro:bit
2. Ultrasonic sensor
3. Cardboard
4. Beads
5. Tape
6. Extension pole
7. Computer
8. Reflector
9. Battery
10. Cable

Methods
Part 1: Programs using micro:bits and computers
We proposed two micro:bit programs.
- The first program (top)


If the distance is less than 100 centimeters, you will hear "ba ding". If it is less than 200 centimeters, you will hear "dadadum". If it is less than 300 centimeters, you will hear "chase". If it is less than 400 centimeters, you will hear "nyan".
- The second program (under)
If the distance is less than 100 centimeters, you will hear "ode". If it is less than 200 centimeters, you will hear "birthday". If it is less than 300 centimeters, you will hear "power up". If it is less than 400 centimeters, you will hear "punchline".

Part 2: Making the white cane
- Tape the extension pole so it won't shrink.
- Fix a reflector to the extension pole.
- Fix a micro:bit to the extension pole.
- Fix a battery to the extension pole.
- Fix the ultrasonic sensor so that it is parallel to the ground when the cane is tilted and held by a person, because otherwise the white cane can't measure the exact distance between the person and the object.

Results and Discussion
Press the A button to start the micro:bit. While holding the cane and changing our standing position facing a wall, we measured the distance corresponding to the program and investigated whether the 8 types of sound would change.

Table 1: Checking the operation of the first program
Table 2: Checking the operation of the second program

Conclusions
The results showed that the white cane we made produced the sounds we had programmed. This allows people who are visually impaired to know how far away an obstacle is and to walk safely. However, since the micro:bit and cable are secured with duct tape, there is a possibility that they will break if
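The threshold logic shared by both programs can be summarized in a short sketch (the function name is ours; the actual programs were written for the micro:bit):

```python
# Distance thresholds (in centimeters) mapped to micro:bit sound names.
FIRST_PROGRAM = [(100, "ba ding"), (200, "dadadum"), (300, "chase"), (400, "nyan")]
SECOND_PROGRAM = [(100, "ode"), (200, "birthday"), (300, "power up"), (400, "punchline")]

def sound_for(distance_cm, program=FIRST_PROGRAM):
    """Return the sound name for a measured distance, or None if out of range."""
    for limit, sound in program:
        if distance_cm < limit:
            return sound
    return None  # farther than 400 cm: stay silent

print(sound_for(50))    # ba ding
print(sound_for(250))   # chase
print(sound_for(500))   # None
```

Pressing the A button corresponds to starting this loop over live sensor readings; the B button stops it.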


the cane is accidentally knocked over. This raises the problem of strength. It is necessary to think about how to securely fix the device to the extension pole, and what container should enclose the exposed electronics. By doing so, the cane will be easy to use and stable for visually impaired people.

Acknowledgments
Special thanks are due to Dr. Abbas Alshehabi and Prof. Koh Ikeda for their thoughtful advice and guidance throughout the project.


Development of Swizzle-type Traveling Mechanism for Legged Robot with Freewheel

Kensuke Maemura and Ryu Iwanaga
Advisors: Hirofumi Ohtsuka and Koshi Kikuchi
National Institute of Technology (KOSEN), Kumamoto College, 2659-2 Suya, Koshi, Kumamoto

Abstract
In this project, we develop a propulsion mechanism with four legs with freewheels that can be propelled by opening and closing the legs like roller skates and can move back and forth, left and right. The purpose of this research is to analyze the mechanism of the swizzling technique and to deepen our understanding and technical skills in physics and motor control using a microcomputer through the creation of the propulsion mechanism and experiments. The swizzle motion obtains propulsion by utilizing the anisotropy of the drag force: friction is weak in the rolling direction of the wheel and strong in the axial direction. For swizzle propulsion, the feet are placed in a "V" position with the toes pointing outward, and the knees and ankles are bent deeply as the feet are opened. When the feet are fully open, the toes are turned inward and the knees and ankles are extended. Repeating this process produces propulsion.

Keywords: propulsion mechanism, roller skates, swizzling technique, friction

Introduction
What kind of robot do you think of when you hear the word "propulsion robot"? Like a car, does it have wheels connected to a drive unit? We developed a robot that has freewheels, that is, unpowered wheels. How does this robot move with unpowered wheels? Please think back to your first skating experience: "Start by placing your blades in a 'V' formation with the heels touching. Using the inside edges of the blades, push both feet outwards, then inwards so that your toes are touching." That is called "swizzling", or "scissors", the basic skill of skating technique. We decided to use this principle to create a propulsion mechanism for a robot.
The biggest challenge in applying swizzle motion to a robot is the number of legs. Basically, swizzle motion is performed with two legs, but the robot developed in this project has four legs. How to reproduce with four legs an action performed with two legs is the key to this project.

Materials and Methods
Materials
The following is a list of the electronic devices used in the robot.

Table 1. Equipment in use
Servo motor: KONDO KRS-9004HV, KRS-2542HV
Microcomputer board: Arduino UNO R3
Battery: KONDO F3-1450
Transmitter: KONDO KRC-5FH
Receiver: KONDO KRR-5FH
Conversion board: KONDO ICS conversion board

Most of the parts of the robot were designed by us. The frame and leg parts are cut from aluminum sheet and aluminum square tube, or 3D printed.


Figure 1. Process of parts production

Methods
Swizzle motion
Figure 2 shows the robot being propelled from left to right as seen from above. In this diagram, the black lines are drawn like the blades of skate shoes. The red arrows indicate the direction of the legs' forces.

Figure 2. Swizzle motion

Robot design
Figure 3 shows the robot design as 3D CAD data. The robot has three joints per leg, and each part can be moved freely by servo motors. Three servo motors are mounted per leg, and casters are at the end of each leg.

Figure 3. Design of robot

Frame
The frame consists of aluminum square tubes, aluminum plates, and servo motors. Servo motors are installed on all four sides of the frame, and these motors form the first joints of the legs. The inside of the frame is hollow and can store the battery.

Figure 4. Frame

Legs
Two servo motors are incorporated into each leg. The direction of the wheels is also controlled by servo motors. 3D-printed parts are used to hold the servo motors and wheels in place.

Figure 5. Leg

The servo motors are powerful, and the swizzle motion is propelled by frictional force, so the parts that need strength are made of aluminum.
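The propulsion principle — weak friction along the wheel's rolling direction, strong friction along its axle — can be illustrated with a small calculation. The friction coefficients here are made-up values for illustration, not measured ones:

```python
import math

# A freewheel rolls easily along its heading but resists sideways sliding.
MU_ROLL = 0.05   # friction coefficient along the rolling direction (illustrative)
MU_AXIAL = 0.8   # friction coefficient along the wheel axle (illustrative)

def resisting_force(push_x, push_y, wheel_angle_deg, normal_force=1.0):
    """Decompose a pushing force into the wheel's rolling/axial directions
    and apply the corresponding friction coefficient to each component."""
    a = math.radians(wheel_angle_deg)
    roll_dir = (math.cos(a), math.sin(a))      # direction the wheel rolls
    axial_dir = (-math.sin(a), math.cos(a))    # direction along the axle
    roll = push_x * roll_dir[0] + push_y * roll_dir[1]
    axial = push_x * axial_dir[0] + push_y * axial_dir[1]
    # friction opposes each component with a different strength
    return (MU_ROLL * abs(roll) + MU_AXIAL * abs(axial)) * normal_force

# Pushing against an angled wheel meets far more resistance along the axle
# than along the rolling direction — this difference propels the robot.
print(resisting_force(1, 0, 0))   # push along rolling direction: small (0.05)
print(resisting_force(0, 1, 0))   # push along axle: large (0.8)
```

With the legs angled as in Figure 2, part of each leg's push falls on the high-friction axial direction, so the ground "pushes back" and the body is driven forward.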


Figure 6. 3D printed part

To hold the servo motors tight, we use parts designed and 3D printed for them.

Operation system
The robot is controlled wirelessly. The ICS conversion board relays the power supply from the battery to the microcomputer, the control signals from the microcomputer to the servo motors, and the receiver's signals to the microcomputer.

Figure 7. Operation system

Figure 8 shows the transmitter, a controller that can control the robot wirelessly.

Figure 8. Controller

The microcomputer board is connected to a conversion board and a radio receiver. They are mounted on top of the frame. Wires run from the ICS conversion board, connected to the microcomputer board, to each servo motor, the receiver, and the battery.

Figure 9. Microcomputer board, ICS conversion board and radio receiver

Results and Discussion
Results
As a result, we succeeded in propelling the robot with a swizzle motion.

Discussion
There are various ways to move the legs for swizzle motion. It was found that the robot's propulsion and speed differed greatly depending on whether the front and rear legs were moved at the same time or separately.

Conclusions
According to the experiments, we were able to produce a swizzle-type traveling mechanism for a legged robot with freewheels. It is possible to reproduce the swizzle motion performed by two legs with a four-legged robot.

Acknowledgments
First, we would like to express our appreciation to the TJ-SSF project for holding the event and providing an opportunity to present. We would also like to thank Mr. Hirofumi Ohtsuka for his constructive suggestions.




Manufacturing of an inverted pendulum-type opposite dual-wheel robot

Shimakawa Shunsuke1 and Nabeshima Yuha1*
Advisor: Kikuchi Koshi2
1,2National Institute of Technology (KOSEN), Kumamoto College, 2659-2 Suya, Koshi, Kumamoto, 861-1102, Japan
e-mail: [email protected]

Abstract
In this research, we employ the principles of an inverted pendulum to stabilize an unstable two-wheeled robot. Sensors detect the robot's tilt in real time, and this information is fed back to the motors to maintain the robot's balance. As a result, the robot can move in challenging environments and exhibits a high level of versatility in its locomotion. While existing inverted pendulum-type opposite dual-wheel robots have shown success in maintaining an upright position and moving on horizontal surfaces, there are relatively few examples of them effectively handling high-difficulty situations such as navigating unstable terrain or overcoming obstacles. Hence, our research aims to establish unique control methods and develop a highly versatile locomotion system. This research holds significant practicality, with potential applications ranging from assisting individuals with walking impairments to providing a means of transportation for able-bodied individuals, and it is expected to make a valuable contribution to society.

Keywords: inverted pendulum, PID control

Introduction
1. Definition of Terms
1-1. Inverted pendulum
An inverted pendulum is a pendulum whose center of gravity is higher than its fulcrum. Since an inverted pendulum is inherently unstable, it must always be actively controlled to maintain its inverted state. In this research, we apply this principle to a robot and create a two-wheeled robot. Sensors read the robot's inclination and provide feedback to the wheel motors to maintain the robot's inverted position.
1-2. PID control
PID control is one of the control algorithms widely used in control engineering.
This control consists of three components — Proportional, Integral, and Derivative — and by adjusting these values appropriately, it can be applied to a variety of systems. This research shows that this method is effective in shortening the time it takes for the robot to balance itself, and in making the robot less likely to fall over when it receives an impact.

Figure 8: PID control flow

2. Goals
(1) Manufacturing of an inverted pendulum-type opposite dual-wheel robot
(2) Maintaining an upright posture
(3) Movement on a horizontal plane
(4) Overcoming obstacles
(5) Establishment of the control process

Figure 7: Robot model
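A minimal discrete-time PID loop of the kind described in 1-2 can be sketched as follows. The gains and the toy plant are illustrative, not the values tuned for our robot:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: a tilt angle nudged by the controller output each tick.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 10.0  # initial tilt in degrees
for _ in range(5000):
    angle += pid.update(0.0, angle) * pid.dt
# after 50 simulated seconds the tilt has decayed essentially to zero
```

On the real robot this update runs inside a timer interrupt, with the gyro reading as `measured` and the motor command as the output.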


Materials and Methods
1. Materials
Most of the robot's parts were printed using a 3D printer. Therefore, the information on the hardware parts is limited to the motors. Two types of motors are used: a drive wheel motor and servo motors to move the joints.

Table 1: Wheel motor information (RS-385PH)
speed performance: 6,400 rpm (max)
torque: 81 g/cm
gear ratio: 1/19

Table 2: Servo motor information (DS3235MG)
speed performance: 0.11 s / 60°
torque: 35 kg/cm
degree: 0° ~ 270°

We chose the ESP32, a Bluetooth-enabled microcomputer, to connect to the DualShock3. The *MD is a full-bridge, n-channel MOSFET type. The battery supplies 7.4 V to the motors and 5 V to the microcontroller through a DCDC converter.

Table 3: Mother board materials
microcomputer: ESP32-WROVER-E
gyro sensor: BMX055
DCDC converter: MYMGK00506ERSR
controller: DualShock3

Table 4: *MD board materials
MOSFET: TK100E06N1
gate driver: IRS2008

Table 5: Battery (KT2200/30-2S)
type: Li-Po
voltage: 7.4 V
capacity: 2200 mAh

Table 6: Software
SolidWorks 2022: robot body design
KiCAD 7.0: circuit board design
Arduino IDE 2.2.1: writing the program

2. Methods
2-1. Control method
We equipped the robot with a gyro sensor to read its tilt. This data is sent to the microcontroller, which controls the motor movements based on the values.

Figure 9: Control flow

2-2. Manufacturing method
We divided the work into two roles: creating the robot body (green) and handling the circuitry and control (blue).

Figure 10: Flow of manufacturing the robot

*MD: an abbreviation of motor driver — a device that drives and controls a motor, controlling the amount, direction, and timing of the current flowing through it. Here we are


referring to the motor driver circuit board that we made ourselves.

Results and Discussion
1. Robot body
We designed and created this type of inverted pendulum-type opposite dual-wheel robot. Unlike conventional models, our robot has multiple joints in its locomotion system, giving it higher degrees of freedom. This allows for a broader range of control, enabling the robot to navigate complex terrain and obstacles effectively.

Figure 11: Result of robot body design

Inside the robot are the motors, circuit boards, battery, etc. Power from the servo motor is transmitted to the legs via gears. These link mechanisms move the robot's joints.

Figure 12: Leg details (wheel motor, servo motor, wheel)
Figure 13: Actual robot

Table 7: Robot information
weight: 2,600 g
height: 225 mm (max expansion: 315 mm)
width: 275 mm
depth: 205 mm

2. Circuit board
The mother board carries the wiring to the gyro sensor, microcomputer, DCDC converter, etc. The *MD is equipped with two full bridges. It controls the amount, direction, and timing of the current for two motors by turning these MOSFETs on and off.

Figure 14: Circuit board — (A) mother board, (B) *MD board

3. Control
ESP32 timer interrupts were used for PID control. The proportional, integral, and derivative calculations are performed on each timer interrupt and the result is output to the motors. The gyro sensor values were used without filtering, because filtering delays the time it takes for the results to be reflected.

Figure 15: Part of the source code
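As a side note on the filtering remark above: a complementary filter is a common way to smooth gyro readings, and a small sketch shows the lag it introduces. The coefficient and the data here are illustrative, not taken from our robot:

```python
# Complementary filter sketch: blend integrated gyro rate with accelerometer angle.
# alpha close to 1 trusts the gyro (fast but drifts); the (1 - alpha) share
# of the accelerometer angle corrects drift but adds lag.
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    angle = accel_angles[0]
    history = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        history.append(angle)
    return history

# A step change in the true tilt: the filtered estimate approaches it gradually.
rates = [0.0] * 300
accels = [0.0] * 10 + [5.0] * 290
out = complementary_filter(rates, accels)
print(out[10] < out[-1] <= 5.0)  # True: estimate lags, then converges toward 5.0
```

The lag visible here is exactly the delay that motivated using the raw gyro values in our control loop.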


Conclusions
Conventional inverted pendulum-type two-wheeled robots are not good at handling uneven roads or obstacles because the impact on the wheels is directly transmitted to the robot's body. On the other hand, the inverted pendulum-type two-wheeled robot created in this research has many joints, so it was expected to absorb shocks the way a human bends and stretches, allowing it to run stably even on rough roads. However, the problem was that the vertical movement of the joints was not linear, and the tires rotated within the joint motion. This placed a burden on control. Although the robot succeeded in maintaining an upright posture and moving horizontally, it was not good at climbing uphill or dealing with obstacles. There is room for improvement in the joint design. This robot's high flexibility makes it suitable not only for assisting human walking but also as a highly adaptable means of transportation. Its societal contributions are highly anticipated.

Figure 16: Rotation of wheels due to movement


Development and Supporting of an Educational Accessibility Switch Controller

Itsuki Furushima1 and Rin Hasebe2
Advisor: Dr. Kohshi Kikuchi
1Department of Information, Communication and Electronic Engineering, [email protected]
2Department of Human-oriented Information Systems Engineering, [email protected]
National Institute of Technology (KOSEN), Kumamoto College, Japan

Abstract
The purpose of this study is to support the introduction of KME (KOSEN Multifunctional Endpoint) to the community. This is accomplished through improvements to this compact assistive device and the creation of instruction manuals for its use. KME is a small device that allows various devices to be controlled by switches. It can connect switches as input and control Bluetooth-connected devices as output. It is also possible to change the function according to the device you want to control. For example, there is a function for operating the screen of an iPad or PC, and a function for controlling IoT devices to operate home appliances and toys freely. Furthermore, by connecting various switches, KME can create a system optimized for each user. To use KME, users need instruction manuals to know how to combine the switch inputs and outputs with the devices they want to control, and how to switch between functions. This study supports people with disabilities, and the supporters who use KME with them, to make it more effective. Specifically, we produce videos that show examples of KME use and how to set it up as instruction manuals. In addition, we further improve KME and the instruction manuals through the reactions of actual users in the field.

Keywords: KME (KOSEN Multifunctional Endpoint), assistive technology, switch controller, accessibility switch, instruction manuals

Introduction
Our study aimed to make an instruction manual for KME. KME was designed to help people with disabilities and those who support them in their work.
Kumamoto Prefectural Kuroishibaru School for Special Needs Education provides students who have various disabilities, such as severe intellectual disability, with educational equipment tailored to the type and severity of the disability. Cause-and-effect relationships are taught using digital devices such as tablets and PCs and original games, as well as familiar objects such as toys and appliances. However, there were problems in the preparation for the lessons. Specifically, when a student with a severe intellectual disability learned about cause-and-effect relationships using games on a tablet device, the keyboard and tablet had to be wired together each time. The wiring required technical knowledge in many situations and was complicated. In addition, as the type and severity of disability differed from student to student, it took time and effort to prepare the devices and change the setting methods accordingly, making it impossible to proceed efficiently. Therefore, last year National Institute of Technology (KOSEN), Kumamoto College produced an assistive device called KME (KOSEN Multifunctional Endpoint), which allows objects to be controlled by switches. KME simplifies wiring and settings, making it easier to operate and more efficient for people without specialist knowledge. What used to require wiring is now wireless and compact, which has helped to save time and effort.


In addition, work that used to require different devices and different setup methods for each student has been simplified and made quicker with the use of KME. We visited the support school to learn about the usage of KME, to get feedback for further improvement, and to find out what kind of activities should be carried out to promote KME in the local community. There, we received feedback that the functions of KME were not being used fully. Specifically, typical functions could be used, but functions that required specialist knowledge had to be explained. There was no instruction manual for KME, so it was difficult to introduce KME comprehensively, with its functions and examples of good use. We therefore started to make the manual for KME. While making the manual, we tried to make it easy to understand, as the majority of KME users are not technicians with specialist knowledge. In addition, it is difficult for users to visualize the system using only text. We solved this problem by making video manuals. By giving examples of KME applications in the videos, users can easily learn about the various ways to use KME. Furthermore, we improved KME and the manual through feedback from the support school.

Materials and Methods
Materials
KME (KOSEN Multifunctional Endpoint)
KME is a compact assistive device produced by National Institute of Technology (KOSEN), Kumamoto College. KME has a small IoT development module called M5Stack Basic, based on the ESP32, with a mono jack for switches. It was developed for people with disabilities who cannot move their limbs as they wish. KME is a device that allows things to be controlled by switches. It can connect switches as input and control Bluetooth-connected devices as output. It is also possible to change the function according to the device you want to control.
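The switch-as-input, selectable-mode-as-output idea behind a device like KME can be sketched in Python. Note this is a hypothetical illustration of the concept only — the mode names, actions, and structure are invented and are not KME's actual firmware:

```python
# Hypothetical sketch: one physical switch input, several selectable output modes.
class SwitchController:
    def __init__(self):
        self.modes = {}    # mode name -> action callback
        self.order = []    # mode cycling order
        self.current = None

    def register(self, name, action):
        self.modes[name] = action
        self.order.append(name)
        if self.current is None:
            self.current = name

    def next_mode(self):
        """Switching modes (e.g. by a long press) cycles to the next output."""
        i = self.order.index(self.current)
        self.current = self.order[(i + 1) % len(self.order)]

    def press(self):
        """A switch press triggers the action of the current mode."""
        return self.modes[self.current]()

kme = SwitchController()
kme.register("tablet_tap", lambda: "send Bluetooth HID tap")
kme.register("toy_power", lambda: "toggle IoT toy power")
print(kme.press())   # send Bluetooth HID tap
kme.next_mode()
print(kme.press())   # toggle IoT toy power
```

This separation — register outputs once, then switch between them — is what lets one device serve students with very different devices and needs without rewiring.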
The users can learn cause-and-effect relationships by using KME.

Methods
In our study, we followed a cycle of three processes.

1. Planning: First, we studied KME to get to know it, learning about its functions and usage. We also examined what we ourselves could make as the instruction manual from the user's point of view. We then considered what information people who use KME would like to know and in what situations they would use it. As a result of the discussion, we decided to make several short videos as the instruction manual. We also created a storyboard covering KME's functions and usage examples, the flow of the video, and the screen structure to be used, and planned the video production.

2. Development: We made videos introducing KME. We set ourselves the goal of making videos that were easy to understand and humorous for everyone. First, we created a storyboard. There were many difficulties in the storyboarding process. We struggled with how to combine pictures and explanations. Also, as we worked on the storyboards, we realized what was missing in the structure of the video we were trying to create. For example, we wanted the ending of the video to be more interesting to viewers, so we added a humorous scene at the end to finish on a more positive note. To be honest, the process was not easy and took longer than we had anticipated. However, by carefully carrying out this work, we were able to simplify the process and share a common vision. This allowed us to proceed smoothly with subsequent activities. Second, we started filming the videos. Although we had no knowledge of filming, we referred to video websites such as YouTube and filmed while paying attention to the camera's angle of view, the camerawork, the brightness of the shooting location, the placement of the KME and control equipment, and movements.
To make the videos more enjoyable for the viewer, we attached faces to the equipment used in the videos to make it look like characters. Third, we edited the videos we had shot. We tried to make each video as simple as possible, and we were most careful about the screen structure and the placement of the titles. We also made sure that the duration of the video was suitable for


viewing. We also included post-recording. In preparing the scripts for the post-recording, we tried to supplement the explanations for areas that could not be conveyed by the video footage alone. For example, in the scenes where the KME is operated, we explained the order of button presses and mode selection. As the majority of KME users are not technicians with specialist knowledge, we kept the recording as brief as possible and used as little specialist or difficult language as possible, so that anyone can understand. After completing the videos, we asked teachers at our school, senior students who participated in last year's TJ-SIF, and senior members of the broadcasting club with video-editing experience to check them, to find out what others thought and to receive both praise and suggestions for improvement. The feedback we received was that the videos were extremely easy to understand and enjoyable to watch. This was our goal in producing the videos, and we achieved it. However, we also found various areas for improvement. People with knowledge of KME told us about the order of the explanations in the videos and where they were lacking or difficult to understand. Senior students with editing experience pointed out details such as the volume of the recorded voice and the color and font of the text. They also taught us basic knowledge of video editing, filming equipment, and points to note when filming, and gave us feedback from a specialist's point of view. From this feedback, we realized that although we had achieved our goal of making videos that were easy to understand and humorous, we needed to make further changes.
3. Social Implementation and Review: We visited the teachers at Kuroishibaru Support School, which is located next to our school, and held meetings with them to get feedback from the KME users. We asked them to use KME for a certain period of time and to share their impressions, areas they would like to see improved, and any problems they had with KME. We held two meetings, one in July and the other in September. At the first meeting, we received many positive comments about KME. For example, some said that KME eliminated complicated wiring and reduced the distance the user had to move, making it easier and more convenient than using a keyboard and iPad. Moreover, the use of KME eliminated the need to re-configure the settings for each student, which led to faster preparation, saved time, and eliminated the need to keep the students at the support school waiting. We also learned that the teachers were more proficient in using the KME functions than we had expected, and that they were using KME in combination with the various devices and applications that they normally used as learning materials for the students. Here we were convinced that KME was useful. As a point of improvement, we received a request to use multiple KMEs at the same time when combining KME with various devices and applications, and we found it necessary to explain this function. In response, we decided to study the modes of KME in detail and make a video explaining them. We received many questions at the second meeting. For example, one teacher commented that it was difficult to understand which jacks on the KME corresponded to which functions. From this opinion, we decided that we needed an instructional video explaining how to use the KME, including the names of each KME component and how to connect them, so that people can use the KME more smoothly.
Other opinions that stood out included the need to know what peripherals are necessary to use each function of the KME and how to utilize them. We felt again that it was necessary to create an instructional video on how to use the KME itself, plus three main instructional videos using MaBeee and SwitchBot for detailed mode usage. In addition, there was a comment that problems such as the mode unintentionally switching by itself occurred when


using the KME. By introducing the KME to the field, we were able to learn about defects that were not found during the development phase, and we realized that the KME needs to be further reviewed and improved.

Results and Discussion
We made videos for KME users to show how to use KME, a small device that can control various devices with switches. To use KME, an instruction manual is necessary to know how to combine the switch's input/output with the device you want to control, how to switch functions, and so on. We made several videos as the instruction manual. We chose videos because they make it easier to visualize and understand the actual operation of the device, and they can be viewed at any time on a digital device. First, we made a video introducing KME itself. In this video, we introduced and explained the purpose for which KME was developed and the functions available in KME. In the other videos, we introduced combinations with two devices (SwitchBot and MaBeee) as examples of its use. The videos were designed to show the functions of KME and to see whether there is a need for them among users. In the videos, we used a battery-operated blackboard eraser cleaner to help users visualize how this functionality would help them learn. The first part of the video explained the SwitchBot and the three typical modes of KME, recorded with the SwitchBot attached to the blackboard eraser cleaner and controlled by KME. Next, the method of connecting KME and SwitchBot was explained, and a similar instructional video was made using MaBeee and a battery-operated soft toy. In the process of making the videos, we tried to keep each video short and concise so that it was easy for the viewer to understand and concentrate on watching.
We were conscious of the flow of the video content and trimmed the scenes of the KME's operation. We also tried to make the on-screen layout as easy to read as possible, adding subtitles and narration recorded afterward.

Discussion: To develop assistive devices, we need to provide devices that are optimized for each user. The people who use them have different disabilities and may not always use them in the same way. To create a system optimized for each user, it is important to repeat each step of the development process: planning, development, social implementation, and review. By repeating this cycle, we can deepen our understanding of users and their assistive devices and develop even better products. In particular, visiting users directly and receiving their feedback are important processes. In this way, we can understand user needs and review devices not only from the developer's point of view but also from the users' point of view. User needs include explanations of how to use the products, and we need to provide support for these needs as well.

Conclusions

The most important part of the development process is what comes after development; it does not end with making the product. Support and improvements are needed to make the product easier for users. Especially in the development of assistive devices, it is necessary to optimize the product for each individual user. We must gain a deep understanding of the needs of users and improve products accordingly. In addition, development requires the support of not only engineers but also various supporters, such as experts in the field and the people who use the product.

Acknowledgments

We thank Mr. Hiroshi Shimoda and Mr. Isamu Fukushima for developing the KME and providing advice on various improvements. We express our gratitude to Mr. Koichi Watanabe for the steps he has taken in our efforts to introduce welfare equipment.

References

[1] K. Iwasaki and T. Uemura, "Development of Assistive Devices for Persons with Disabilities Using Small Devices", Thailand - Japan Student ICT Fair 2022.


Functional Sleeping Bag with Temperature Control (DENEBU)

Shu Nemoto, Haruki Akutsu and Hiroyuki Komatsu1
Advisors: Ko Ikeda, Abbas Alshehabi, Aya Nita2
1,2National Institute of Technology, Ibaraki College, Nakane 866, Hitachinaka, Ibaraki, Japan

Abstract

DENEBU is a functional sleeping bag with temperature control. It uses a micro:bit to measure the sleeper's body temperature and adjusts the heater temperature accordingly. When the body temperature is low, the heater output is increased; when the body temperature is high, the heater is turned off. Thus, DENEBU helps regulate body temperature during sleep, ensuring a comfortable night's sleep. DENEBU is intended for use when sleeping outside, such as during disasters and while camping. The name "DENEBU" comes from the Japanese words for "electric sleeping bag."

Keywords: DENEBU, micro:bit, temperature

Introduction

Sleep is an inseparable part of our lives. If we continue to get poor-quality sleep, it causes a great deal of stress and has a negative impact on our physical and mental health. Therefore, we decided to create a product that improves the quality of sleep, named "DENEBU." This product focuses on helping people sleep outside or in cold environments, mainly in times of disaster. It adjusts the temperature inside the sleeping bag according to the sleeper's body temperature to support comfortable sleep.

Materials and Methods

The materials needed to make DENEBU are a sleeping bag, a thermometer, a heater, and a micro:bit. DENEBU observes changes in the user's body temperature through a thermometer connected to the micro:bit. If the sensor detects that the user's body temperature is low, such as before sleep, a heater attached to the sleeping bag warms the bag.
If the sensor confirms that the user's body temperature is maintained at an appropriate level, the heater is switched off. In this way, the sleeping bag can be used in a variety of situations: to ensure a comfortable night's sleep for the user, in times of disaster, or to help elderly people who suffer from sleep deprivation.

Results and Discussion

Although the project is still in progress, here are our conclusions and considerations at this point. DENEBU currently only senses changes in body temperature and switches the heater on and off, but we plan to address comfort and a wider range of needs through finer functional adjustment using even more detailed data, such as the user's pulse rate. Even this simple structure took a great deal of time and effort to create, and we would like to improve on that point as well.

Conclusions

In conclusion, we believe that developing and commercializing DENEBU can be a step toward solving one sleep problem. We also found issues during the development of this product. If we can solve these issues in the future, we will be able to help many more people.
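The heater logic described above (increase heat when body temperature is low, switch off when it is adequate) can be sketched as a simple threshold controller. This is an illustrative Python sketch, not the authors' actual code: the temperature thresholds and the `heater_command` function are our assumptions, and on real hardware this would run on the micro:bit in MicroPython with the thermometer and heater attached.

```python
# Hypothetical sketch of DENEBU's control loop.
# LOW_TEMP_C / OK_TEMP_C are illustrative values, not the authors' settings.

LOW_TEMP_C = 35.5   # below this, assume the sleeper is cold (assumed value)
OK_TEMP_C = 36.5    # at or above this, body temperature is adequate (assumed)

def heater_command(body_temp_c, heater_on):
    """Decide the heater state from the measured body temperature.

    Returns True to run the heater, False to switch it off. The gap
    between the two thresholds avoids rapid on/off toggling (hysteresis).
    """
    if body_temp_c < LOW_TEMP_C:
        return True          # sleeper is cold: warm the bag
    if body_temp_c >= OK_TEMP_C:
        return False         # temperature is adequate: heater off
    return heater_on         # in between: keep the previous state

# Simulated readings: falling asleep cold, then warming up
state = False
for t in [35.0, 35.2, 36.0, 36.6, 36.7]:
    state = heater_command(t, state)
    print(t, "heater on" if state else "heater off")
```

Keeping a small gap between the "cold" and "adequate" thresholds is a common design choice for on/off heaters, since it prevents the heater from flickering when the reading hovers near a single threshold.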


Acknowledgments

Thanks are due to M. Kawasaki, Department of Student Affairs, National Institute of Technology (NIT), Ibaraki College, for assistance in providing tools and for support throughout the project.

References

[1] "Electric sleeping bag": a sleeping bag slightly heated by electricity. https://www.makuake.com/project/nafrotokyo/
[2] Sleep and temperature research. www.terumotaion.jp/health/sleep/article01.html


Mental Concentration Improvement with Smell

Shuntaro Eguchi, Kante Takahashi, Neo Enzaki1
Advisors: Koh Ikeda, Abbas Alshehabi, Aya Nita2
1,2National Institute of Technology, Ibaraki College, Nakane 866, Hitachinaka, Ibaraki, Japan

Abstract

Our goal is to use aroma diffusers and air-quality sensors to improve concentration while studying and working. The wiring of the aroma diffuser was extended, and the air-quality sensor was made operable by a micro:bit. In addition, 15 subjects were surveyed to determine which of approximately 20 different aromas would provide healing and concentration effects. A system that recommends ventilating the room according to the carbon dioxide level in ppm was also installed.

Introduction

Memory is associated with smell. We focused on this and thought that sniffing aromas could improve memory and concentration while studying. First, we investigated the relationship between smell and memory and found that only the sense of smell can send signals almost directly to the hippocampus, the part of the brain responsible for memory. Since the hippocampus acts like a memory store, it is said that almost as soon as it detects a smell, it finds the corresponding file and recalls the memory. Next, when we examined the relationship between smell and concentration, we found that smell has a calming effect on people, just as aromas do, and thus can make them concentrate more deeply. Based on these findings, we wanted to create a device that emits smells to support studying.

Materials and Methods

1) Motion detection
2) Send a signal to the device
3) Emit a smell

Results and Discussion

The following line graph shows the percentage of squares filled and the number of people, for 30 people asked to complete 1,000-square multiplication drills under 5 smells within 40 minutes. From the graph, lavender and citrus are concentrated in the 70~90 percent range.
Wood and cooking smells are around 50~70 percent, and ammonia is concentrated around 60 percent. From this, smells whose pleasantness depends on how they are perceived do not enhance concentration very much, while good smells, such as those used in aromatherapy, do enhance concentration. (The graph and table appear on the next page.)
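The motion-triggered diffuser steps from Materials and Methods and the CO2-based ventilation reminder mentioned in the abstract can be sketched as two small rules. This is an illustrative Python sketch under our own assumptions: the 1,000 ppm threshold is a common indoor-air guideline rather than a value reported by the authors, and the function names and simulated I/O are hypothetical (the real system runs on a micro:bit with the sensor and diffuser wired in).

```python
# Hedged sketch of the aroma system's two rules (assumed threshold and names).

CO2_VENTILATE_PPM = 1000  # assumed threshold for recommending ventilation

def ventilation_advice(co2_ppm):
    """Return a recommendation based on the CO2 sensor reading in ppm."""
    if co2_ppm >= CO2_VENTILATE_PPM:
        return "Please ventilate the room"
    return "Air quality OK"

def on_motion(detected, diffuser_active):
    """Steps 1-3: motion detected -> send a signal -> diffuser emits a smell.

    Returns the new diffuser state; an already-running diffuser stays on.
    """
    if detected and not diffuser_active:
        return True   # send the 'emit smell' signal to the diffuser
    return diffuser_active

# Simulated use: a person enters the room, and CO2 builds up over time
print(on_motion(True, False))       # motion starts the diffuser
print(ventilation_advice(850))      # fresh air
print(ventilation_advice(1200))     # stale air: recommend opening a window
```

Separating the sensing rules from the hardware I/O like this also makes the thresholds easy to tune after experiments such as the 1,000-square drill described above.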


Conclusions

Experiments showed that aromas generally considered by the public to smell good have the effect of improving mental concentration. Participants exposed to them were also able to answer more of the questions asked after the surveys and test than the others, so it can be said that their memory also improved.

Acknowledgments

Thanks are due to M. Kawasaki of the Student Affairs Department at the National Institute of Technology (NIT), Ibaraki College, for her assistance with purchases and support during the project.

References

[1] ARTLAB: https://www.artlab.co.jp/blog/etc/fragrant_etc_01
[2] Hanno Takahiro, "The influence that a fragrance gives to an exercise performance and mind concentration", Research Records of the Health and Physical Education Course, Aichi University of Education, No. 33, 2008.


Contributors for TJ-SSF 2023

Ministry of Education
Embassy of Japan in Thailand
Office of The Basic Education Commission
Ministry of Education, Culture, Sports, Science and Technology
Khon Kaen University
Japan International Cooperation Agency
Loei Rajabhat University
Chulabhorn Royal Academy


Contact Persons of TJ-SSF 2023

Name                        Tel.           Language
Mr. Songkran Buttawong      095-194-1926   TH
Miss Wiriya Tasee           087-224-2291   TH/ENG
Mrs. Chanunphat Khotewong   091-706-7815   TH
Mr. Satawat Tudmala         095-865-2380   TH
Miss Chayaporn Montha       080-746-3378   TH/ENG
Miss Parita Chamontree      095-659-0484   JP



