
PENERAPAN YOLOV8 DALAM DETEKSI SAMPAH ORGANIK DAN ANORGANIK BERBASIS MOBILE

Anggraeni, Ega Nisa (2025) PENERAPAN YOLOV8 DALAM DETEKSI SAMPAH ORGANIK DAN ANORGANIK BERBASIS MOBILE. S1 thesis, Fakultas Teknik Universitas Sultan Ageng Tirtayasa.

Files (all restricted to Registered users only):
- Text (Fulltext): Ega Nisa Anggraeni_3337210043_Fulltext.pdf, Download (1MB)
- Text (Chapter 1): Ega Nisa Anggraeni_3337210043_01.pdf, Download (840kB)
- Text (Chapter 2): Ega Nisa Anggraeni_3337210043_02.pdf, Download (437kB)
- Text (Chapter 3): Ega Nisa Anggraeni_3337210043_03.pdf, Download (585kB)
- Text (Chapter 4): Ega Nisa Anggraeni_3337210043_04.pdf, Download (530kB)
- Text (Chapter 5): Ega Nisa Anggraeni_3337210043_05.pdf, Download (263kB)
- Text (References): Ega Nisa Anggraeni_3337210043_Ref.pdf, Download (241kB)
- Text (Appendices): Ega Nisa Anggraeni_3337210043_Lamp.pdf, Download (241kB)
- Text (Turnitin): Ega Nisa Anggraeni_3337210043_CP.pdf, Download (11MB)

Abstract

This study aims to develop a mobile-based automatic detection system capable of identifying organic and inorganic waste in real time using the YOLOv8 deep learning model. Three YOLOv8 variants (YOLOv8n, YOLOv8s, and YOLOv8m) were employed to compare their performance in waste detection. The methodology is a Prototype approach combined with the Artificial Intelligence Project Cycle to develop and evaluate the object detection model. The dataset consisted of 8,853 images in two main classes (organic and inorganic), supplemented with unlabeled images as non-waste data. Based on the evaluation results, YOLOv8s was selected as the best-performing model, with a precision of 0.931, recall of 0.847, mAP50 of 0.919, and mAP50-95 of 0.75, offering a balance between accuracy and efficiency for real-time implementation. The resulting system is expected to help the public sort waste more practically, raise environmental awareness, and support sustainable waste management.

Keywords: Deep Learning, Object Detection, Real-Time, YOLOv8.
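The variant comparison described in the abstract can be sketched as a simple selection over per-model validation metrics. Only the YOLOv8s figures below are taken from the abstract; the YOLOv8n and YOLOv8m numbers are illustrative placeholders, not results reported in the thesis.

```python
# Sketch of selecting the best YOLOv8 variant from validation metrics.
# Only the YOLOv8s row reproduces figures from the abstract; the
# YOLOv8n and YOLOv8m rows are hypothetical placeholders.

metrics = {
    "YOLOv8n": {"precision": 0.905, "recall": 0.820, "mAP50": 0.900, "mAP50_95": 0.700},
    "YOLOv8s": {"precision": 0.931, "recall": 0.847, "mAP50": 0.919, "mAP50_95": 0.750},
    "YOLOv8m": {"precision": 0.925, "recall": 0.840, "mAP50": 0.910, "mAP50_95": 0.745},
}

def best_variant(results: dict, key: str = "mAP50") -> str:
    """Return the variant name with the highest value of the chosen metric."""
    return max(results, key=lambda name: results[name][key])

print(best_variant(metrics))  # with these placeholder numbers: YOLOv8s
```

In the thesis itself the choice also weighed inference efficiency for real-time mobile use, not only the ranking of a single metric.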

Item Type: Thesis (S1)
Contributors (Contribution, Contributor, NIP/NIM):
- Thesis advisor: Wicaksana, Cakra Adipura (199006282019031010)
- UNSPECIFIED: Holilah, Holilah (202102012154)
Additional Information: This study aims to develop a mobile-based automatic detection system that can identify organic and inorganic waste in real time using the YOLOv8 deep learning model. Three YOLOv8 variants (YOLOv8n, YOLOv8s, and YOLOv8m) were compared for their waste-detection performance. The method used is a Prototype approach combined with the Artificial Intelligence Project Cycle to develop and evaluate the object detection model. The dataset consists of 8,853 images in two main classes (organic and inorganic), supplemented with unlabeled images as non-waste data. Based on the evaluation results, YOLOv8s was selected as the best model, with a precision of 0.931, recall of 0.847, mAP50 of 0.919, and mAP50-95 of 0.75, because it balances accuracy and efficiency for real-time implementation. The resulting system is expected to help the public sort waste more practically, raise environmental awareness, and support sustainable waste management. Keywords: Deep Learning, Object Detection, Real-Time, YOLOv8.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
T Technology > T Technology (General)
Divisions: 03-Fakultas Teknik
03-Fakultas Teknik > 55201-Jurusan Teknik Informatika
Depositing User: Ms. Ega Nisa Anggraeni
Date Deposited: 04 Aug 2025 03:40
Last Modified: 04 Aug 2025 03:40
URI: http://eprints.untirta.ac.id/id/eprint/53445
