Abstractive Text Summarization | Bias and Fairness

This project develops an abstractive text summarization system using state-of-the-art Transformer models and GenAI techniques. The approaches evaluated include BERT, T5, and a LangChain-based pipeline, with the primary goal of generating concise, coherent summaries from large text datasets while ensuring fairness and minimizing bias.

Table of Contents

  • Introduction
  • Models Used
  • Performance

Introduction

Text summarization is a crucial task in natural language processing (NLP) that involves creating a short and precise summary of a longer text document. This project leverages Transformers and GenAI to build and evaluate summarization models, ensuring they are both effective and fair.

Models Used

  • BERT: Bidirectional Encoder Representations from Transformers.
  • T5: Text-To-Text Transfer Transformer.
  • LangChain: a framework for composing large language model (LLM) calls, used here to build a GenAI-based summarization pipeline.
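
As an illustration of the Transformer-based approach, the sketch below shows how a T5 model could be applied to abstractive summarization with the Hugging Face transformers library. The checkpoint name, input text, and generation settings are placeholders for illustration, not the exact configuration used in this project.

```python
# Minimal sketch of abstractive summarization with T5 via Hugging Face transformers.
# The checkpoint ("t5-small") and generation settings are illustrative placeholders.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")

article = (
    "Abstractive summarization rewrites the source text in new words instead of "
    "copying sentences verbatim. Encoder-decoder Transformers such as T5 are "
    "trained to map long documents to short, fluent summaries."
)

# Generate a short summary; do_sample=False gives deterministic output.
result = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```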

Performance

The performance of the models was evaluated using ROUGE scores. The best-performing approach was the LangChain-based pipeline, which achieved the following score:

  • ROUGE-1 F1 Score: 0.67
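
For reference, ROUGE-1 F1 can be computed as in the minimal sketch below, which uses the rouge-score package (pip install rouge-score); the reference and generated summaries shown are placeholder strings, not outputs from this project.

```python
# Minimal sketch of ROUGE evaluation with the rouge-score package.
# The reference and generated summaries below are illustrative placeholders.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "the model produces a short and coherent summary of the document"
generated = "the model generates a concise coherent summary of the document"

scores = scorer.score(reference, generated)  # precision, recall, and F1 per metric
print(f"ROUGE-1 F1: {scores['rouge1'].fmeasure:.2f}")
```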
