TrashML: TartanHacks Grand Prize 
2018 // Machine Learning, Computer Vision, Design for Data, Learner Experience Design
Supawat Vitoorapakorn (Design + HCI): UX, UI, fabrication, and servo setup 
Akshat Prakash (Computer Science): Computer vision and servo/Raspberry Pi communication
Hale Konopka (Electrical and Computer Engineering): Fabrication, servo/Raspberry Pi communication
Killian Huang (Information Systems): Database, Machine Learning
Problem Space

A linear system is a "stupid" system.

Environmentally, despite efforts to recycle, 8 million metric tons of plastic trash enter the ocean every year. While many attempts have been made to alleviate the problem, from biodegradable plastics to automatic trash-sorting machines, most current solutions focus on eliminating the user's cognitive load and provide no educational feedback. Without feedback, people's behavior remains unchanged and the same mistakes are repeated.

Existing solutions do not address the user's actions and behavior.

While AI can automate the process, the user's behavior remains constant because the system is still linear. Technologically, machine learning is often viewed as a tool to replace human cognition. As algorithms become smarter, how do we design machine learning systems that enhance human learning and cognition rather than replace it?

A system becomes "smart" through feedback loops.

Why Machine Learning and Computer Vision?

Because recycling is confusing. Firstly, every locality handles recycling differently due to differences in available facilities. This leads to confusing, cognitively taxing explanations like these:
Confusing graphics that are cognitively taxing.
Secondly, many companies "greenwash" to appeal to environmentally conscious customers, providing products that look eco-friendly but in actuality are not recyclable by many local facilities. Paper cups, for example, are often lined with hard-to-recycle plastic to prevent hot liquids from penetrating them. 
Misleading products that look "environmentally friendly" but cannot be recycled properly due to plastic linings.
From a UX perspective, the modern recycling system is poorly designed. It's cognitively demanding to keep track of both what materials your trash is made of and whether your local facility recycles each specific material. To remove this pain point, we relied on Microsoft's Cognitive Services Computer Vision and Custom Vision APIs to analyze trash and give the user feedback on whether it can be recycled. In doing so, every interaction with the machine learning trash can becomes an educational opportunity.
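To give a sense of the flow, here is a minimal sketch of a classification call. The endpoint, project ID, iteration name, and key are hypothetical placeholders (real values come from a trained Custom Vision project), and the tag names `recyclable`/`landfill` are an assumption about how such a model might be labeled; only the decision helper is generic.

```python
import json
import urllib.request

# Hypothetical values -- the real endpoint, project ID, and prediction key
# come from the Custom Vision portal for a trained project.
ENDPOINT = "https://example.cognitiveservices.azure.com"
PROJECT_ID = "00000000-0000-0000-0000-000000000000"
ITERATION = "Iteration1"
PREDICTION_KEY = "<prediction-key>"

def classify_image(image_bytes):
    """Send a camera frame to the prediction endpoint, return its tag list."""
    url = (f"{ENDPOINT}/customvision/v3.0/Prediction/"
           f"{PROJECT_ID}/classify/iterations/{ITERATION}/image")
    req = urllib.request.Request(
        url,
        data=image_bytes,
        headers={"Prediction-Key": PREDICTION_KEY,
                 "Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["predictions"]

def is_recyclable(predictions, threshold=0.6):
    """Decide from tag probabilities whether the item is recyclable.

    Assumes the model was trained with 'recyclable' and 'landfill' tags;
    falls back to "not recyclable" when confidence is low.
    """
    best = max(predictions, key=lambda p: p["probability"])
    return best["tagName"] == "recyclable" and best["probability"] >= threshold
```

Keeping the decision logic in a separate helper means the confidence threshold can be tuned without touching the network code.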
Data: Leveraging Our By-Product

GUI mockup.

To create a social incentive for proper recycling, we created a software interface that lets users compare each other's recycling on an individual and communal basis. Looking to the future, we might incorporate a cash-back incentive for users, redeemed by simply tapping one's Andrew ID card on the trash can.

Accessibility for people with color blindness. 

Leading a Hackathon Ideation
Before the hackathon started, we brainstormed ideas both individually and collaboratively. We began by writing down what we know (our skills), the technologies we're interested in learning, and the problems we have. From these three columns we synthesized our hackathon ideas. Afterwards, we mapped these ideas onto an affinity diagram to tame complexity and make sense of them all.
After we decided on the idea of a machine learning trash can, we knew our project was complex, with multiple software and hardware components that needed to work synchronously. To maximize our chance of completing the project successfully, we diagrammed our base, most likely, and reach features to ensure a minimum viable product.
Hardware and software goals for TartanHacks 2018.
Process
After the opening speech by Microsoft, our team split into two groups: hardware and software. On the software side, Akshat Prakash and Killian Huang focused on learning Microsoft's Cognitive Services Vision API, while Supawat Vitoorapakorn and Hale Konopka focused on getting the Raspberry Pi to work with our mechanism.
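The Pi-side servo work can be sketched roughly as follows. This is a hedged illustration, not our exact code: it assumes the common hobby-servo convention (a 50 Hz PWM signal with 0.5–2.5 ms pulses mapping to 0–180°) and the widely used `RPi.GPIO` library; the pin number and angle are placeholders.

```python
def angle_to_duty(angle):
    """Map a servo angle in degrees (0-180) to a duty-cycle percentage.

    Assumes the hobby-servo convention of a 50 Hz signal with pulse widths
    from 0.5 ms (0 deg) to 2.5 ms (180 deg), i.e. 2.5% to 12.5% duty cycle.
    """
    if not 0 <= angle <= 180:
        raise ValueError("angle must be in [0, 180]")
    return 2.5 + (angle / 180.0) * 10.0

def move_flap(pin=18, angle=90):
    """Rotate the sorting flap; only runs on a Pi with RPi.GPIO installed."""
    import time
    import RPi.GPIO as GPIO  # hardware-only import, kept local on purpose

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    pwm = GPIO.PWM(pin, 50)  # 50 Hz servo signal
    pwm.start(angle_to_duty(angle))
    time.sleep(0.5)          # give the servo time to travel
    pwm.stop()
    GPIO.cleanup()
```

Isolating the angle-to-duty math in its own function keeps the hardware-dependent code thin and lets the conversion be tested off-device.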