This was a university group project in collaboration with Fraunhofer FOKUS, where we worked on real-time AI-based detection of bullying and harassment in chat rooms.
Project Description
Cyberbullying, harassment, and hate speech are problems that occur in many online spaces. Manual moderation of chats is often prohibitively expensive. Therefore, recent approaches to moderating online discourse detect such unwanted behavior automatically using natural language processing.
Our task as a team of three students was to investigate state-of-the-art machine learning models that can understand the semantics of a text and then reliably classify it. We then had to implement this AI-based classification system, train it on public datasets, and evaluate the quality of our solution.
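The project evaluated modern NLP models for this; as a minimal, runnable illustration of the train-then-classify workflow (not the actual models we used), here is a toy bag-of-words Naive Bayes classifier in pure Python:

```python
from collections import Counter
import math

def train(messages):
    """Train a naive Bayes-style word-count model on labeled messages.

    `messages` is a list of (text, label) pairs, label in {"ok", "toxic"}.
    """
    counts = {"ok": Counter(), "toxic": Counter()}
    totals = {"ok": 0, "toxic": 0}
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(model, text):
    """Return the label whose words best explain the message (add-one smoothing)."""
    counts, totals = model
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (totals[label] + 1))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy training data -- the real training used public cyberbullying datasets.
model = train([
    ("you are an idiot", "toxic"),
    ("shut up loser", "toxic"),
    ("have a nice day", "ok"),
    ("see you tomorrow", "ok"),
])
print(classify(model, "what an idiot"))  # -> toxic
```

A real system would replace the word counts with a learned semantic representation, but the interface (train on labeled text, classify new messages) is the same.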
Personal Contributions
My main task was to build and deploy a chat application and connect the trained models to its backend so that we could give a live demonstration of the recognition system. The demonstration was interactive: any audience member with a mobile device could join the chat room and test the quality of the classifier with their own input in real time.
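Connecting the models to the backend boiled down to running every incoming chat message through the classifier before broadcasting it. The sketch below shows that hook; the names (`Classifier`, `check_message`) and the threshold are illustrative stand-ins, not the project's actual API:

```python
FLAG_THRESHOLD = 0.5  # assumed cutoff; the real threshold was tuned during evaluation

class Classifier:
    """Stand-in for the trained model; returns a toxicity score in [0, 1]."""
    def score(self, text: str) -> float:
        # The real system invoked the trained NLP model here; this toy
        # version just measures the fraction of blocklisted words.
        blocklist = {"idiot", "loser"}
        words = text.lower().split()
        return sum(w in blocklist for w in words) / max(len(words), 1)

def check_message(classifier, text):
    """Classify a message and attach the moderation verdict that the
    frontend can use to highlight flagged messages in the chat view."""
    score = classifier.score(text)
    return {"text": text, "score": score, "flagged": score >= FLAG_THRESHOLD}

verdict = check_message(Classifier(), "total loser")
print(verdict["flagged"])  # -> True
```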
| Technology Used | Purpose |
|---|---|
| Django | MVC-based web framework to get the prototype up and running quickly |
| SQLite | Simple database, ideal for prototyping applications |
| Bootstrap | CSS framework with ready-to-use responsive elements |
| jQuery & AJAX | Dynamic updating of the application, e.g., when new messages come in |
| GIMP | Image editing program, used to design some of the visuals |
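The jQuery/AJAX updates were driven by a polling endpoint: the client periodically sends the id of the last message it has seen, and the server responds with any newer messages as JSON. A pure-Python sketch of that server-side logic (function and field names are illustrative, not the project's actual code):

```python
import json

# In the real app this data lived in SQLite behind a Django view;
# a plain list keeps the sketch self-contained.
MESSAGES = [
    {"id": 1, "user": "alice", "text": "hello"},
    {"id": 2, "user": "bob", "text": "hi there"},
    {"id": 3, "user": "alice", "text": "any news?"},
]

def new_messages(since_id: int) -> str:
    """Return messages with id > since_id, serialized as the JSON payload
    the frontend would append to the chat view."""
    fresh = [m for m in MESSAGES if m["id"] > since_id]
    return json.dumps({"messages": fresh})

payload = json.loads(new_messages(1))
print(len(payload["messages"]))  # -> 2
```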
More implementation details can be seen in the live demo (best viewed in full-screen mode) and in the corresponding GitHub repository.