Bias Devpost

Bonisiwe Shabane

During the Palestine conflict, I watched people read the same events but walk away with opposite realities. It wasn't fake news; it was framing. One headline said "clashes" when one side had F-16s. Palestinian victims "died" while Israeli victims were "killed." Sources were framed as "Hamas-run ministry" vs. "Israeli officials." I realized that real-time fact-checking is impossible, but revealing how stories are told is entirely achievable. Bias Mirror analyzes news articles to reveal manipulation techniques in real time. Tech stack: React, Tailwind CSS, Claude API (Anthropic).

Core algorithm: a two-pass system, fast regex/keyword matching (<100ms) followed by AI semantic analysis on flagged sections (2-3s); a rough sketch of this flow appears below. The fact-checking trap: I initially tried verifying claims, but realized real-time verification is impossible and that many claims are fundamentally unverifiable, so I pivoted completely to revealing framing instead.

The BIAS project stems from a belief in better-informed, more potent, and healthier debate and decision-making contexts for everybody: navigate the flow of information on issues you are interested in, with the help of AI. We used the following tools to build the BIAS project; they are described after the sketch below.
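As a rough illustration only (not the project's actual code), the two-pass flow might look like the sketch below. The keyword patterns, prompt wording, and model name are assumptions.

```python
# Minimal sketch of a two-pass bias scan: a fast keyword/regex pass flags
# suspicious sentences, then only the flagged sentences are sent to Claude
# for the slower semantic analysis. The lexicon and prompt are illustrative.
import re
import anthropic

# Pass 1: cheap lexical flags (loaded verbs, vague attributions, etc.)
LOADED_PATTERNS = [
    r"\bclash(es|ed)?\b",         # hides asymmetry between the two sides
    r"\bdied\b|\bwere killed\b",  # passive vs. active framing of victims
    r"\b\w+-run ministry\b",      # source-credibility framing
]

def fast_pass(text: str) -> list[str]:
    """Return sentences that match any loaded-language pattern (<100 ms)."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if any(re.search(p, s, re.IGNORECASE) for p in LOADED_PATTERNS)]

def semantic_pass(flagged: list[str]) -> str:
    """Ask Claude to explain the framing in the flagged sentences only (2-3 s)."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    prompt = ("For each sentence, explain the framing technique used "
              "(word choice, passive voice, source attribution):\n\n"
              + "\n".join(f"- {s}" for s in flagged))
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

if __name__ == "__main__":
    article = open("article.txt").read()  # hypothetical input file
    flagged = fast_pass(article)
    if flagged:
        print(semantic_pass(flagged))
```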

We signed up for this hackathon as a startup to put our infrastructure, RAIDEN AI, to use in a real-life application, and potentially generate interest in our platform from users (and investors). We used it to process, index and retrieve sources in a project-scoped environment, with all the required features out of the box (feature analysis, embedding generation, Pinecone indexing, ...), accessible with a single API call and no configuration, in addition to...

Welcome to the 2025 AI Bias Bounty Hackathon! Join us as we explore, detect, and report biases in AI models and datasets. This is an exciting opportunity to contribute to ethical AI while building your skills and network. The AI Bias Bounty Hackathon challenges participants to build machine learning models and generate technical reports that identify bias within provided datasets.

The goal is to encourage the development of fair and responsible AI systems. Devpost page: AI Bias Bounty Hackathon on Devpost. Build AI models to analyze and detect bias in provided datasets.

For the past few weeks, we have been learning how to navigate online digital information in our Literature class. We learned how biased sites and texts often correlate with misinformation. That's why we created a program to detect bias and hate speech. First, two language models are trained and tested using a dataset from a CSV file.

Then we take input text from our website and put it through the two models to get a bias score and a positivity score.

To build it, we first downloaded a CSV file full of text samples along with bias scores and sentiment scores. Next we preprocessed the data by stemming the words, removing useless words and phrases, and tokenizing everything. Afterwards, we trained a MultinomialNB model on the preprocessed data; a rough sketch of this pipeline follows below. For the UI, we used HTML, CSS, and JavaScript to create a nice-looking home page and about page. Our home page has two "progress bars" that help visualize the bias and positivity scores.
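A minimal sketch of this pipeline is shown below, assuming a CSV with hypothetical "text" and "bias_label" columns; only the bias model is sketched, and the sentiment model would follow the same pattern. The pickle caching discussed in the next paragraph is included at the end.

```python
# Sketch of the training pipeline described above: load a labeled CSV,
# stem and tokenize the text, train a MultinomialNB classifier, and cache
# the fitted artifacts with pickle so they only need to be trained once.
# Column names and file names are assumptions, not the team's actual ones.
import pickle
import pandas as pd
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

stemmer = PorterStemmer()

def preprocess(text: str) -> str:
    """Lowercase, split on whitespace, and stem each word."""
    return " ".join(stemmer.stem(tok) for tok in text.lower().split())

df = pd.read_csv("bias_dataset.csv")          # hypothetical file name
df["clean"] = df["text"].apply(preprocess)

# CountVectorizer tokenizes and drops common English stop words.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(df["clean"])
y = df["bias_label"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = MultinomialNB().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Cache the fitted model and vectorizer so the app can load them at startup
# instead of retraining on every run (see the next paragraph).
with open("bias_model.pkl", "wb") as f:
    pickle.dump({"model": model, "vectorizer": vectorizer}, f)
```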

We faced many challenges throughout this programming process. One of our main challenges was that we would have to retrain the model every time the program was run. To combat this, we used pickle to save the trained models (as at the end of the sketch above) so that we wouldn't have to train them more than once. We're proud that we managed to create our own language model and our own UI that work with each other and produce a valid output.

⚠️ IMPORTANT: You can participate as an individual or form a team of 2-4 members. Still assembling your team?

Start by registering here on Devpost. Devpost makes it easy to connect with others and form a team. Once your team is finalized, each member must complete the official registration form via the website to be fully registered for the hackathon. Please ensure your entire team is ready before submitting the official registration form on our website. If your team details change after registering, contact us at [email protected].

About the challenge

Welcome to the AI Bias Bounty Hackathon, a community-powered competition focused on uncovering, documenting, and mitigating harmful bias and risk in AI systems.

Whether you're a student, researcher, engineer, red teamer, or concerned techie, this event is your chance to contribute meaningfully to AI safety while building tools that will benefit the entire ecosystem. The AI Bias Bounty Hackathon invites you to step into the role of an AI risk detective: investigate datasets, detect hidden bias, document real harm, and build models and tools that make AI safer for everyone. This is your opportunity to explore the dark corners of machine behavior, contribute to the world’s first open AI Risk Intelligence Framework, and create tools that will be used far beyond this event.

What is AI Bias Bounty?

AI Bias Bounty is a 48-hour, hands-on, impact-focused hackathon designed for researchers, engineers, students, and red teamers who care about responsible AI. It’s inspired by security bug bounty programs, but instead of vulnerabilities in code, we’re mapping bias, hallucination, discrimination, data risk, and misuse in real-world AI systems.

You’ll work with a ready-to-use template for documenting risk. You’ll test, detect, report, and contribute, and your work will live on as part of a global GitHub archive that’s open to the public.

Why join? Make a real impact by helping shape how AI harm is identified and discussed. Gain hands-on experience testing models, analyzing datasets, and building auditing tools.

The information we consume sculpts our reality: it dictates what we care about, how we act, and how we view the world. Yet, this foundation is cracking. Modern news is increasingly polarised, and we are drowning in a sea of misinformation—a fire now fueled by generative AI. We know that neutrality is often a myth; every story has a frame. But while we read the news, we often miss the invisible architecture of persuasion beneath the text. Worse, the existing AI tools designed to detect this bias are "Black Boxes." They offer opaque scores without explanation, effectively just swapping the journalist's bias for the model's.

We built this tool to be a Glass Box. It is a tool that refuses to tell you what to think. Instead, it shows you exactly how the text is trying to shape what you think. BiasSphere is a transparent news-analysis agent that reveals how language shapes perception within an article. It evaluates the ratio of verifiable, evidence-based claims to subjective or speculative commentary, giving readers a clearer sense of how much of the text is grounded in fact versus opinion. The system highlights loaded or emotionally charged language, such as spin, dog whistles, and subtle evaluative phrasing, so users can immediately spot wording designed to influence rather than inform.

It also maps entity sentiment to show exactly how specific people, groups, or institutions are being framed throughout the article. Every insight is fully traceable: each claim, tone assignment, and detected phrase is linked directly to the exact sentence it came from, ensuring that readers can always see the original context and understand how...

We built BiasSphere as a low-bias, fully observable LLM pipeline designed to focus strictly on what is written in an article rather than what the model “thinks” about it. The system begins by taking the raw text and extracting entities, claims, and sentiment using Claude, but with strict constraints to prevent it from inferring political intent or adding outside context. It identifies loaded or emotionally charged phrases and pinpoints their exact positions, surrounding sentences, and tone. For every claim or flagged phrase, the system calls Valyu to retrieve neutral external sources that represent both positive and negative perspectives (a rough sketch of this step follows below).
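As a rough sketch (not the team's actual code) of this constrained extraction and retrieval step: the prompt wording, JSON field names, and the retrieve_neutral_sources helper standing in for the Valyu call are all illustrative assumptions.

```python
# Illustrative sketch of a constrained extraction pass: Claude is asked for
# structured JSON only (entities, claims, loaded phrases with character
# spans), and each claim is paired with externally retrieved sources.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

EXTRACTION_PROMPT = (
    "Analyze only the article text that follows. Do not infer political "
    "intent or add outside context. Respond with JSON only, using keys: "
    "entities (name, sentiment), claims (text, type: verifiable|opinion, "
    "sentence_index), loaded_phrases (phrase, start, end, tone).\n\n"
    "Article:\n"
)

def retrieve_neutral_sources(claim: str) -> list[str]:
    """Placeholder for the external-source lookup (Valyu in the write-up)."""
    return []

def extract_report(article: str) -> dict:
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=2048,
        messages=[{"role": "user", "content": EXTRACTION_PROMPT + article}],
    )
    # Assumes the reply body is bare JSON, per the prompt's instruction.
    report = json.loads(reply.content[0].text)
    for claim in report.get("claims", []):
        claim["sources"] = retrieve_neutral_sources(claim["text"])
    return report  # grounding, tracing, and the UI step follow, as described next
```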

These sources are then fed back into Claude to ground its analysis and reduce reliance on the model’s internal assumptions. Every step of this process is tracked through LangSmith, which records all prompts, outputs, evidence sentences, spans, and decision paths, allowing us to see exactly how each classification was made. The final structured output is then sent to our NiceGUI interface, where the article is reconstructed with highlighted phrases and hover-based explanations. This architecture ensures that the entire reasoning process is transparent, reproducible, and anchored directly to the text and verifiable sources instead of opaque model intuition.

We were inspired to create bias.ai when we realized the increasing need to tackle biases in AI models, given the growing popularity of AI technology. As students, we recognized the importance of building a platform that empowers developers and hobbyists to create fair and unbiased AI models.

bias.ai is a platform that helps you identify and address biases in your AI models. Using technologies like Deno, Preact, and Tailwind, we built a user-friendly interface where you are currently able to do two things. The backend is written with Deno, while the frontend uses Preact and Tailwind. These technologies made development a breeze and allowed us to craft a sleek and intuitive user interface. For the backend, we harnessed the power of GPT-3.5, a mind-blowing language model, to enhance the capabilities of our platform. We used TypeScript to ensure type safety and smooth development throughout the process.

Throughout the development journey, we encountered our fair share of challenges. Integrating different technologies like Deno, Preact, and GPT-3.5 had its complexities. We had to troubleshoot and find workarounds to ensure smooth interactions between the frontend and backend. Additionally, fine-tuning the GPT-3.5 prompt took some trial and error. As students, we are extremely proud of what we have accomplished with bias.ai. It's our first time working with GPT-3.5, and integrating it seamlessly into our platform was a major achievement.

We're also proud of the user experience we crafted using Preact, Tailwind, and TypeScript.

Credibility ensures that information is reliable and trustworthy. It is vital, especially in an age where misinformation spreads quickly. Without credibility, people may make decisions based on false or incomplete information, which can lead to harmful consequences. Biases can skew how we interpret information, often leading to unfair conclusions. Biases based on race, gender, or socioeconomic status can perpetuate stereotypes and reinforce discrimination, especially when people are not aware of these biases.
