CaseCracker is an AI-powered legal assistant designed to help users navigate the complexities of Indian law. It can answer general legal queries, provide detailed information about legal issues, rules, and petitions, and suggest relevant sections of the Indian Penal Code (IPC). CaseCracker leverages advanced natural language processing (NLP) techniques to provide accurate and contextually relevant legal advice.
- User Interface: The UI has two options - a general chatbot for basic legal queries and a professional chatbot for more in-depth legal advice, including IPC sections and laws.
- Interaction: Users choose the appropriate chatbot for their needs, input their questions, and receive answers based on the AI's training and knowledge of Indian law.
- Flask (v2.0.1): A micro web framework for Python used to create the backend server.
- Flask-CORS (v3.0.10): A Flask extension for handling Cross-Origin Resource Sharing (CORS), which allows the frontend and backend to communicate with each other.
- langchain (v0.0.1): A library for building applications powered by language models.
- PDFPlumber (v0.5.28): A library for extracting text, metadata, and other information from PDF files.
- react-icons (v4.2.0): A library for including popular icons in React projects.
LangChain is a framework for developing applications using language models. It provides tools to manage and interact with these models, enabling the creation of powerful NLP applications like CaseCracker.
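As a rough illustration (not the project's exact code), prompting a locally served model through LangChain can look like the sketch below; the import path and the `llama3` model name are assumptions that depend on the installed LangChain and Ollama versions.

```python
# Minimal sketch: querying a local Ollama model through LangChain.
# The import path and the "llama3" model name are assumptions; they vary
# with the installed langchain version and the models pulled into Ollama.
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")

# Ask a general legal question and print the raw model response.
response = llm.invoke("What does Section 302 of the IPC deal with?")
print(response)
```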
ChromaDB is a vector store used for efficient storage and retrieval of document embeddings. In CaseCracker, it helps in managing the embeddings of legal documents, enabling quick and relevant retrieval during queries.
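For instance, storing and querying a few passages with Chroma through LangChain might look like this hedged sketch; the example texts, the `db` persist directory, and the embedding model are illustrative only and require the `chromadb` and `fastembed` packages.

```python
# Sketch of storing and querying text in ChromaDB via LangChain.
# Example texts, persist directory, and embedding model are assumptions.
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings

embedding = FastEmbedEmbeddings()

# Persist a few example passages as embeddings on disk.
store = Chroma.from_texts(
    texts=["Section 420 IPC covers cheating.", "Section 302 IPC covers murder."],
    embedding=embedding,
    persist_directory="db",
)

# Retrieve the passage most similar to a query.
results = store.similarity_search("Which section deals with cheating?", k=1)
print(results[0].page_content)
```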
Flask is a lightweight WSGI web application framework in Python. It is designed with simplicity and flexibility in mind, allowing developers to quickly set up a web server for backend functionalities.
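A minimal Flask server is shown below purely as an illustration of the framework; the `/health` route is hypothetical and not part of CaseCracker, and the port simply mirrors the one used later in this README.

```python
# Minimal Flask sketch: one JSON endpoint served on port 8080.
# The /health route is a hypothetical example, not a CaseCracker endpoint.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=8080)
```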
Embeddings in NLP are vector representations of words or phrases. In CaseCracker, embeddings are used to convert legal documents into a format that can be efficiently searched and compared.
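As a small illustration, a piece of text can be turned into a numeric vector like this; the underlying model that `fastembed` downloads by default is an assumption of the sketch.

```python
# Sketch: converting text into an embedding vector with FastEmbedEmbeddings.
# The default embedding model pulled by fastembed is an assumption here.
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings

embedding = FastEmbedEmbeddings()
vector = embedding.embed_query("cheating and dishonestly inducing delivery of property")
print(len(vector))  # prints the embedding dimension
```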
- Clone the repository: `git clone https://github.com/prettycoolvariables/CaseCracker.git`
- Install the frontend dependencies: `cd CaseCracker` and `npm install`
- Install the backend dependencies: `cd backend` and `pip install -r requirements.txt`
- Start the React Development Server: run `npm start`. This will start the frontend on http://localhost:3000.
- Start the Flask Server: run `cd backend` and `python app.py`. This will start the backend server on http://localhost:8080.
- Open your Browser and navigate to http://localhost:3000.
- Click the Chat Button to open the chatbot interface.
- Type Your Question in the input field and press Enter or click the send button.
Flask is used to set up a simple web server, with CORS configured to allow communication between the frontend (React) and backend (Flask). The server exposes three endpoints (a skeleton of these routes is sketched after the list below):
- /ai: Handles general AI queries.
- /ask_pdf: Handles queries requiring detailed document retrieval.
- /pdf: Handles PDF uploads for document parsing and embedding.
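The skeleton below shows how these routes could be wired together; the handler bodies are stubbed, and JSON field names such as `query` and `answer` are assumptions rather than the repository's exact contract.

```python
# Sketch of the backend wiring: a Flask app with CORS and the three routes.
# Handler bodies are stubbed; request/response field names are assumptions.
from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # allow the React frontend (port 3000) to call this server

@app.route("/ai", methods=["POST"])
def ai():
    query = request.json.get("query", "")
    # ... invoke the language model here ...
    return jsonify({"answer": f"(model response to: {query})"})

@app.route("/ask_pdf", methods=["POST"])
def ask_pdf():
    query = request.json.get("query", "")
    # ... run retrieval over the ChromaDB vector store here ...
    return jsonify({"answer": "(retrieved answer)", "sources": []})

@app.route("/pdf", methods=["POST"])
def upload_pdf():
    file = request.files["file"]
    # ... extract text, split, embed, and store in ChromaDB here ...
    return jsonify({"status": "uploaded", "filename": file.filename})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```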
The necessary components for managing language models, embeddings, and document processing include Ollama for the language model, Chroma for the vector store, RecursiveCharacterTextSplitter for text splitting, and FastEmbedEmbeddings for creating embeddings.
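In code, pulling in those components looks roughly like the following; module paths differ between LangChain releases, and the `llama3` model name and chunking parameters are assumptions.

```python
# Approximate imports for the components named above; module paths vary
# by langchain version, so treat them as assumptions.
from langchain_community.llms import Ollama                                 # local LLM served by Ollama
from langchain_community.vectorstores import Chroma                         # ChromaDB vector store
from langchain.text_splitter import RecursiveCharacterTextSplitter          # chunking long documents
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings    # embedding model

llm = Ollama(model="llama3")  # model name is an assumption
embedding = FastEmbedEmbeddings()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=80)
```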
The /pdf endpoint handles PDF file uploads, extracts text using PDFPlumber, splits the text into manageable chunks, and stores them in ChromaDB with embeddings for efficient retrieval.
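A minimal sketch of that flow follows, assuming an upload directory of `pdf/`, a chunk size of 1024 characters with an 80-character overlap, and a `db/` persist directory; none of these values are confirmed by the repository.

```python
# Sketch of the /pdf flow: save the upload, extract text with pdfplumber,
# split it into chunks, embed the chunks, and persist them in ChromaDB.
# Paths, chunk sizes, and the persist directory are assumptions.
import os
import pdfplumber
from flask import Flask, request, jsonify
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter

app = Flask(__name__)

@app.route("/pdf", methods=["POST"])
def pdf():
    file = request.files["file"]
    os.makedirs("pdf", exist_ok=True)
    path = os.path.join("pdf", file.filename)
    file.save(path)

    # Extract the raw text of every page.
    with pdfplumber.open(path) as doc:
        text = "\n".join(page.extract_text() or "" for page in doc.pages)

    # Split into overlapping chunks so each embedding stays focused.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=80)
    chunks = splitter.split_text(text)

    # Embed the chunks and persist them for later retrieval.
    Chroma.from_texts(
        texts=chunks,
        embedding=FastEmbedEmbeddings(),
        persist_directory="db",
    )
    return jsonify({"status": "stored", "chunks": len(chunks)})
```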
The /ai endpoint handles general AI queries by invoking the language model directly and returning its response.
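A minimal sketch of such a handler, assuming the request and response use `query`/`answer` JSON fields and a local `llama3` model:

```python
# Sketch of the /ai route: forward the user's query straight to the LLM.
# The "query"/"answer" field names and the model name are assumptions.
from flask import Flask, request, jsonify
from langchain_community.llms import Ollama

app = Flask(__name__)
llm = Ollama(model="llama3")

@app.route("/ai", methods=["POST"])
def ai():
    query = request.json.get("query", "")
    answer = llm.invoke(query)
    return jsonify({"answer": answer})
```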
The /ask_pdf endpoint creates a chain of processes for retrieving relevant documents based on user queries and previous chat history. This involves loading the vector store, setting up a retriever, and managing the conversation context.
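One way to assemble such a chain is with LangChain's ConversationalRetrievalChain, shown below as an illustrative sketch; the project may use different chain constructors, prompt templates, or retrieval parameters, and the model name and persist directory are assumptions.

```python
# Sketch of a retrieval chain with chat history using the classic
# ConversationalRetrievalChain API; the real implementation may differ.
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

llm = Ollama(model="llama3")  # model name is an assumption

# Load the persisted vector store and expose it as a retriever.
store = Chroma(persist_directory="db", embedding_function=FastEmbedEmbeddings())
retriever = store.as_retriever(search_kwargs={"k": 3})

# Keep the running conversation so follow-up questions stay in context.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)
result = chain.invoke({"question": "Which IPC section applies to criminal breach of trust?"})
print(result["answer"])
```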
To start the application, run the Flask server, which handles all backend operations, and the React development server, which serves the frontend interface.