Add GitHub connector #37
Conversation
@Adamsmith6300 is attempting to deploy a commit to Rohan Verma's projects Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough
The changes introduce comprehensive GitHub connector support across the codebase. In the backend, new Alembic configurations and migration scripts update enum types and database schemas, while a new GitHubConnector class handles repository and file retrieval, and new indexing and search tasks wire it into the existing connector flow. On the frontend, new pages and UI mappings let users add and manage the GitHub connector.
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant Client
    participant ConnectorRoute
    participant IndexingTask
    participant GitHubConnector
    participant Database
    Client->>ConnectorRoute: Request GitHub indexing
    ConnectorRoute->>IndexingTask: run_github_indexing_with_new_session(connector_id, search_space_id)
    IndexingTask->>Database: Retrieve connector config (GitHub PAT)
    IndexingTask->>GitHubConnector: Initialize with token
    GitHubConnector-->>IndexingTask: Return repository and file data
    IndexingTask->>Database: Update/create document records
    IndexingTask-->>Client: Return indexing status
```
Actionable comments posted: 2
🧹 Nitpick comments (20)
surfsense_backend/alembic/README (1)
1-1: Enhance Documentation with Additional References
The one-line README succinctly states the async dbapi setup for a generic single-database configuration. For improved clarity and to better assist developers, consider adding references or links to the associated configuration file (alembic.ini), the migration scripts (env.py, script.py.mako), or even a brief outline of the migration process. This would provide a more complete picture of the Alembic setup in the project.
surfsense_backend/alembic.ini (1)
74-84: Consider enabling code formatting hooks for migration files.
The post-write hooks for black and ruff are currently commented out. Enabling these hooks would ensure consistent code formatting and linting for all migration scripts.
```diff
-# hooks = black
-# black.type = console_scripts
-# black.entrypoint = black
-# black.options = -l 79 REVISION_SCRIPT_FILENAME
+hooks = black
+black.type = console_scripts
+black.entrypoint = black
+black.options = -l 79 REVISION_SCRIPT_FILENAME
```
surfsense_backend/app/tasks/stream_connector_search_results.py (1)
247-272: Remove debugging print statement.
The implementation for the GitHub connector follows the same pattern as other connectors, which is good for maintainability. However, there is a print statement on line 251 that should be removed, as it appears to be left over from debugging.
```diff
-print("Starting to search for GitHub connector...")
```
surfsense_backend/app/utils/connector_service.py (2)
563-614: Consider using a dynamic ID generation approach.
The implementation follows the established pattern for connector search methods, which is good. However, on line 608 you're using a hard-coded ID of 7 with an assumption comment. This could lead to issues if connector IDs are rearranged or if multiple developers add new connectors simultaneously.
Consider using a more dynamic approach such as:
```diff
-    "id": 7,  # Assuming 7 is the next available ID
+    "id": len(self.get_all_result_ids()) + 1,  # Dynamically generate next ID
```
or maintaining a constant/enum for connector IDs in a central location.
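For illustration, such a central registry might look like the sketch below; the class name and numeric value are assumptions for illustration, not code from this PR.
```python
# Hypothetical central registry for connector source IDs (names and values are
# illustrative only; keep them in sync with the IDs already used in
# connector_service.py).
from enum import IntEnum

class ConnectorSourceId(IntEnum):
    """One place to assign the IDs used when building citation source objects."""
    GITHUB = 7

# Usage sketch inside search_github:
# result_object = {"id": int(ConnectorSourceId.GITHUB), ...}
```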
563-569: Enhance docstring with parameter documentation.
The method's docstring should include descriptions for all parameters, consistent with other methods in the class.
```diff
 async def search_github(self, user_query: str, user_id: int, search_space_id: int, top_k: int = 20) -> tuple:
     """
     Search for GitHub documents and return both the source information and langchain documents
+
+    Args:
+        user_query: The user's query
+        user_id: The user's ID
+        search_space_id: The search space ID to search in
+        top_k: Maximum number of results to return
+
     Returns:
         tuple: (sources_info, langchain_documents)
     """
```
surfsense_backend/alembic/versions/e55302644c51_add_github_connector_to_documenttype_.py (2)
11-11: Remove unused import.
The sqlalchemy import is not being used in this migration script.
```diff
 from alembic import op
-import sqlalchemy as sa
```
🧰 Tools
🪛 Ruff (0.8.2)
11-11: sqlalchemy imported but unused
Remove unused import: sqlalchemy
(F401)
29-69: Document data loss implications more prominently.
While there is a comment about data loss during downgrade, it is important to make this more prominent, as it is a significant side effect that operators need to be aware of.
```diff
-# Warning: This will delete all rows with the new value
 def downgrade() -> None:
-    """Downgrade schema - remove GITHUB_CONNECTOR from enum."""
+    """
+    Downgrade schema - remove GITHUB_CONNECTOR from enum.
+
+    WARNING: This will permanently delete all documents with document_type='GITHUB_CONNECTOR'.
+    Backup your data before running this downgrade if you need to preserve GitHub documents.
+    """
```
surfsense_backend/alembic/versions/1_add_github_connector_enum.py (1)
11-11: Remove unused import.
The sqlalchemy import on this line is not used in the script. Consider removing it to clean up the file.
```diff
-import sqlalchemy as sa
```
🧰 Tools
🪛 Ruff (0.8.2)
11-11: sqlalchemy imported but unused
Remove unused import: sqlalchemy
(F401)
surfsense_backend/app/routes/search_source_connectors_routes.py (3)
23-23: Remove unused import.
timezone is not used; consider removing it to align with clean-code practices.
```diff
-from datetime import datetime, timezone
+from datetime import datetime
```
🧰 Tools
🪛 Ruff (0.8.2)
23-23: datetime.timezone imported but unused
Remove unused import: datetime.timezone
(F401)
315-318: Consider raising from the original exception.
Using raise ... from e can help preserve the original stack trace, improving error debugging. For example: raise HTTPException(...) from e
🧰 Tools
🪛 Ruff (0.8.2)
315-318: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
430-463: Factor out repetitive indexing logic.
The GitHub indexing functions follow a similar pattern to Slack and Notion indexing. Consider refactoring common steps (fetching the connector, updating timestamps, handling errors, etc.) into shared helper functions to reduce duplication, as sketched below.
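A hedged sketch of what such shared helpers could look like; the function names and the last-indexed field are assumptions based on the pattern described here, not existing code.
```python
# Sketch only: signatures and field names are assumptions.
from datetime import datetime, timezone

async def load_connector(session, connector_model, connector_id: int, expected_type):
    """Fetch a connector row and verify it has the expected type."""
    connector = await session.get(connector_model, connector_id)
    if connector is None:
        return None, f"Connector {connector_id} not found"
    if connector.connector_type != expected_type:
        return None, f"Connector {connector_id} is not of type {expected_type}"
    return connector, None

async def mark_indexed(session, connector) -> None:
    """Stamp the connector after a successful run so every route shares one code path."""
    connector.last_indexed_at = datetime.now(timezone.utc)
    await session.commit()
```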
surfsense_web/app/dashboard/[search_space_id]/connectors/add/page.tsx (3)
26-27: Unused useForm import
useForm is imported but not used anywhere in this file. If no form logic is needed here, consider removing the import to keep dependencies clean.
```diff
-import { useForm } from "react-hook-form";
```
133-133: Expanded categories are well chosen
By default, only ["search-engines", "knowledge-bases"] are expanded. This is fine for now, but if you add more categories in the future, consider whether any others should also be expanded by default.
175-216: Connector card UI
- When a connector's status is "connected", consider rendering a "Manage" or "Edit" button instead of "Connect."
- You could extract this connector card into a separate reusable component if multiple pages need similar behavior.
Example of conditionally rendering a "Manage" button:
```tsx
{connector.status === "connected" && (
  <div className="px-6 py-4 border-t border-gray-200 dark:border-gray-700">
    <Link href={`/dashboard/${searchSpaceId}/connectors/${connector.id}`}>
      <Button variant="outline" className="w-full">
        Manage
      </Button>
    </Link>
  </div>
)}
```
surfsense_backend/app/connectors/github_connector.py (3)
3-3: Remove unused import
Tuple is imported but never used and can safely be removed.
```diff
-from typing import List, Optional, Dict, Any, Tuple
+from typing import List, Optional, Dict, Any
```
🧰 Tools
🪛 Ruff (0.8.2)
3-3: typing.Tuple imported but unused
Remove unused import: typing.Tuple
(F401)
70-72: Preserve original traceback when re-raising
When re-raising an exception, consider raising from the original exception to preserve the traceback context:
```diff
-raise ValueError("Invalid GitHub token or insufficient permissions.")
+raise ValueError("Invalid GitHub token or insufficient permissions.") from e
```
🧰 Tools
🪛 Ruff (0.8.2)
71-71: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
121-121: Rename unused loop variable
The variable item_name is never used within the loop body. To convey that it is intentionally ignored, rename it to _ or _item_name.
```diff
-for item_name, content_item in contents:
+for _item_name, content_item in contents:
```
🧰 Tools
🪛 Ruff (0.8.2)
121-121: Loop control variable item_name not used within loop body
Rename unused item_name to _item_name
(B007)
surfsense_backend/app/tasks/connectors_indexing_tasks.py (3)
661-693: Consider storing per-repo last indexed timestamps.
Although you do skip unchanged files by comparing SHAs, you might also store a timestamp per repository to skip entire repositories if no changes are present. This helps scalability for users with many repositories.
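One possible shape for this, assuming the connector's JSON config can hold a per-repository map; the field names here are hypothetical.
```python
# Sketch: "repo_last_indexed" is a hypothetical key in the connector's config.
from datetime import datetime, timezone

def should_index_repo(config: dict, repo_full_name: str, pushed_at: datetime) -> bool:
    """Skip a repository entirely if it has not been pushed to since the last run."""
    last_indexed = config.get("repo_last_indexed", {}).get(repo_full_name)
    if last_indexed is None:
        return True
    return pushed_at > datetime.fromisoformat(last_indexed)

def mark_repo_indexed(config: dict, repo_full_name: str) -> None:
    """Record the current time so the next run can skip unchanged repositories."""
    config.setdefault("repo_last_indexed", {})[repo_full_name] = (
        datetime.now(timezone.utc).isoformat()
    )
```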
694-715: Separate summary from content if needed.
Currently, the summary is a truncated slice of file content. If you plan to store a more comprehensive or specialized summary in the future, consider storing it in a separate field. This ensures you don’t overwrite content with partial text.
716-785: Single final commit is good; consider batched commits for large repos.
You chose to commit everything at the end, avoiding repeated overhead. However, if indexing thousands of files, memory usage could spike. You might consider intermittent commits to mitigate risk in extremely large repository scans.
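A minimal sketch of the intermittent-commit idea; the batch size and the per-file callback are assumptions, not part of the current implementation.
```python
# Sketch: commit every `batch_size` files instead of once at the very end.
async def index_files_in_batches(session, files, handle_file, batch_size: int = 200):
    for i, file_info in enumerate(files, start=1):
        await handle_file(file_info)   # create/update the Document row for this file
        if i % batch_size == 0:
            await session.commit()     # bound memory for very large repositories
    await session.commit()             # final commit for the remainder
```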
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
surfsense_backend/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (21)
- surfsense_backend/.gitignore (1 hunks)
- surfsense_backend/alembic.ini (1 hunks)
- surfsense_backend/alembic/README (1 hunks)
- surfsense_backend/alembic/env.py (1 hunks)
- surfsense_backend/alembic/script.py.mako (1 hunks)
- surfsense_backend/alembic/versions/1_add_github_connector_enum.py (1 hunks)
- surfsense_backend/alembic/versions/e55302644c51_add_github_connector_to_documenttype_.py (1 hunks)
- surfsense_backend/app/connectors/github_connector.py (1 hunks)
- surfsense_backend/app/db.py (1 hunks)
- surfsense_backend/app/routes/search_source_connectors_routes.py (5 hunks)
- surfsense_backend/app/schemas/search_source_connector.py (2 hunks)
- surfsense_backend/app/tasks/connectors_indexing_tasks.py (2 hunks)
- surfsense_backend/app/tasks/stream_connector_search_results.py (1 hunks)
- surfsense_backend/app/utils/connector_service.py (1 hunks)
- surfsense_backend/main.py (1 hunks)
- surfsense_backend/pyproject.toml (1 hunks)
- surfsense_web/app/dashboard/[search_space_id]/connectors/(manage)/page.tsx (2 hunks)
- surfsense_web/app/dashboard/[search_space_id]/connectors/[connector_id]/page.tsx (6 hunks)
- surfsense_web/app/dashboard/[search_space_id]/connectors/add/github-connector/page.tsx (1 hunks)
- surfsense_web/app/dashboard/[search_space_id]/connectors/add/page.tsx (5 hunks)
- surfsense_web/app/dashboard/[search_space_id]/documents/(manage)/page.tsx (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
surfsense_backend/app/routes/search_source_connectors_routes.py (4)
  surfsense_backend/app/db.py (4)
    get_async_session (176-178), User (147-152), SearchSourceConnector (130-140), SearchSourceConnectorType (45-50)
  surfsense_backend/app/schemas/search_source_connector.py (1)
    SearchSourceConnectorRead (79-83)
  surfsense_backend/app/utils/check_ownership.py (1)
    check_ownership (7-12)
  surfsense_backend/app/tasks/connectors_indexing_tasks.py (1)
    index_github_repos (594-784)
surfsense_backend/app/tasks/stream_connector_search_results.py (2)
  surfsense_backend/app/utils/streaming_service.py (2)
    add_terminal_message (22-39), update_sources (41-52)
  surfsense_backend/app/utils/connector_service.py (1)
    search_github (563-614)
surfsense_backend/app/tasks/connectors_indexing_tasks.py (2)
  surfsense_backend/app/db.py (5)
    Document (83-95), DocumentType (36-43), Chunk (97-104), SearchSourceConnector (130-140), SearchSourceConnectorType (45-50)
  surfsense_backend/app/connectors/github_connector.py (4)
    GitHubConnector (25-211), get_user_repositories (76-97), get_repository_files (99-158), get_file_content (160-211)
🪛 Ruff (0.8.2)
surfsense_backend/alembic/versions/1_add_github_connector_enum.py
11-11: sqlalchemy imported but unused
Remove unused import: sqlalchemy
(F401)
surfsense_backend/app/routes/search_source_connectors_routes.py
23-23: datetime.timezone imported but unused
Remove unused import: datetime.timezone
(F401)
315-318: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
surfsense_backend/alembic/versions/e55302644c51_add_github_connector_to_documenttype_.py
11-11: sqlalchemy imported but unused
Remove unused import: sqlalchemy
(F401)
surfsense_backend/app/connectors/github_connector.py
3-3: typing.Tuple imported but unused
Remove unused import: typing.Tuple
(F401)
71-71: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
121-121: Loop control variable item_name not used within loop body
Rename unused item_name to _item_name
(B007)
🔇 Additional comments (30)
surfsense_backend/main.py (1)
3-9: Good addition of structured logging.
Adding centralized logging configuration with a standardized format will significantly improve debugging and monitoring capabilities, which is especially useful with the new GitHub connector integration.
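For reference, a centralized setup of this kind typically looks like the sketch below; the exact format string used in main.py may differ.
```python
# Minimal sketch of a centralized logging configuration.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)

logger = logging.getLogger(__name__)
logger.info("GitHub indexing started for connector %s", 42)  # placeholder usage
```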
surfsense_web/app/dashboard/[search_space_id]/connectors/(manage)/page.tsx (1)
47-47: Addition of GitHub connector to the UI looks good.
The GitHub connector type has been properly integrated into the UI mapping, allowing it to display correctly in the connectors list.
surfsense_backend/.gitignore (1)
6-7: Appropriate additions to .gitignore.
Adding .flashrank_cache and surf_new_backend.egg-info/ to .gitignore is good practice to avoid committing temporary files and build artifacts.
surfsense_backend/app/db.py (1)
43-43: GitHub connector enum values have been properly added.
The addition of GITHUB_CONNECTOR to both the DocumentType and SearchSourceConnectorType enums is consistent with the goal of adding GitHub connector support.
Also applies to: 50-50
surfsense_backend/pyproject.toml (1)
8-8: Dependencies for GitHub connector and migrations added.
✅ Verification successful
The additions of alembic>=1.13.0 for database migrations and github3.py==4.0.1 for GitHub API integration align with the PR objectives. Let's verify the latest version of github3.py and any potential security advisories:
Also applies to: 15-15
🏁 Script executed:
```bash
#!/bin/bash
# Check PyPI for latest github3.py version and retrieve release date
curl -s https://pypi.org/pypi/github3.py/json | jq -r '.info.version + " (released: " + .releases[.info.version][0].upload_time_iso_8601[0:10] + ")"'

# Check for any security advisories
gh api graphql -f query='
{
  securityVulnerabilities(first: 5, ecosystem: PIP, package: "github3.py") {
    nodes {
      advisory {
        summary
        severity
        publishedAt
      }
      vulnerableVersionRange
      firstPatchedVersion {
        identifier
      }
    }
  }
}'
```
Length of output: 524
Dependency Updates Verified: Approved
- In surfsense_backend/pyproject.toml, the dependencies for database migrations (alembic>=1.13.0) and GitHub API integration (github3.py==4.0.1) have been added.
- Verification confirms that github3.py is at the latest version (4.0.1, released 2023-04-26) and no security advisories were reported.
- These updates align with the PR objectives.
surfsense_web/app/dashboard/[search_space_id]/documents/(manage)/page.tsx (2)
97-97: GitHub icon integration completed correctly.
The import of IconBrandGithub and its addition to the documentTypeIcons mapping ensures consistent visual representation of GitHub documents in the UI.
Also applies to: 145-145
1032-1032: Component export added.
The DocumentsTable component is now properly exported, allowing it to be imported and used in other components.
surfsense_backend/app/schemas/search_source_connector.py (1)
83-83: Pydantic config attribute updated.
The from_attributes = True attribute is the current recommended approach in Pydantic v2, replacing the deprecated orm_mode = True.
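A small illustration of the Pydantic v2 idiom; the field names here are hypothetical, only the config attribute mirrors the change.
```python
from pydantic import BaseModel, ConfigDict

class SearchSourceConnectorRead(BaseModel):
    # Equivalent to setting `from_attributes = True` on an inner Config class;
    # it replaces Pydantic v1's `orm_mode = True`.
    model_config = ConfigDict(from_attributes=True)

    id: int
    connector_type: str

# Validating directly from a SQLAlchemy ORM instance:
# SearchSourceConnectorRead.model_validate(connector_row)
```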
surfsense_backend/app/tasks/stream_connector_search_results.py (1)
248-272: LGTM! The GitHub connector implementation follows the established pattern.
The implementation correctly follows the same pattern as other connectors:
- Sends terminal messages about search progress
- Calls the connector service's search method with appropriate parameters
- Updates sources with the search results
- Adds documents to the collection
This consistency makes the code maintainable and easier to understand.
surfsense_backend/alembic/script.py.mako (1)
1-29: LGTM! Standard Alembic migration template.
This is a well-structured Alembic migration template that follows best practices:
- Properly defines revision identifiers
- Includes appropriate imports
- Provides clear upgrade and downgrade function structures
- Uses type annotations for better code quality
The template will enable consistent migration script generation throughout the project.
surfsense_backend/app/utils/connector_service.py (1)
570-614: LGTM! Well-implemented search method for GitHub connector.
The implementation properly handles:
- Searching for GitHub-related documents
- Assigning unique IDs for citation tracking
- Extracting and utilizing metadata
- Creating properly formatted source entries
- Ensuring unique sources in the result
- Following the established pattern of other connector search methods
This consistency makes the codebase more maintainable.
surfsense_backend/alembic/versions/e55302644c51_add_github_connector_to_documenttype_.py (3)
21-22: LGTM! Clear constant definitions.
Defining the enum name and new value as constants at the top of the file makes the code more maintainable and easier to understand.
24-27: LGTM! Clean upgrade implementation.
The upgrade function is simple and effective, using Postgres' ability to add new values to existing enum types without recreating them.
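For readers unfamiliar with the pattern, the upgrade is typically a single statement; the enum and value names below are assumptions about this migration, not copied from it.
```python
from alembic import op

ENUM_NAME = "documenttype"          # assumed enum name
NEW_VALUE = "GITHUB_CONNECTOR"

def upgrade() -> None:
    # PostgreSQL can append to an existing enum without recreating it.
    # On PostgreSQL < 12 this must run outside a transaction
    # (e.g. inside op.get_context().autocommit_block()).
    op.execute(f"ALTER TYPE {ENUM_NAME} ADD VALUE IF NOT EXISTS '{NEW_VALUE}'")
```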
33-70: LGTM! Thorough downgrade implementation.
The downgrade function properly handles the complex process of removing an enum value (a sketch of the general pattern follows the list below), which requires:
- Creating a new enum type
- Handling affected data
- Switching the column type
- Cleaning up the old type
This implementation follows PostgreSQL best practices for enum management.
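A hedged sketch of that downgrade pattern; the table name, enum name, and remaining enum values are assumed, not copied from the migration file.
```python
from alembic import op

def downgrade() -> None:
    # 1. Handle affected data: remove rows that still use the value being dropped.
    op.execute("DELETE FROM documents WHERE document_type = 'GITHUB_CONNECTOR'")
    # 2. Create a replacement enum without the value (remaining values assumed).
    op.execute(
        "CREATE TYPE documenttype_new AS ENUM "
        "('EXTENSION', 'SLACK_CONNECTOR', 'NOTION_CONNECTOR')"
    )
    # 3. Switch the column over to the replacement type.
    op.execute(
        "ALTER TABLE documents ALTER COLUMN document_type "
        "TYPE documenttype_new USING document_type::text::documenttype_new"
    )
    # 4. Clean up: drop the old type and rename the replacement into place.
    op.execute("DROP TYPE documenttype")
    op.execute("ALTER TYPE documenttype_new RENAME TO documenttype")
```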
surfsense_web/app/dashboard/[search_space_id]/connectors/add/github-connector/page.tsx (1)
80-105: Form submission logic looks good.
The onSubmit function is well-structured, with error handling and user feedback. No changes needed.
surfsense_web/app/dashboard/[search_space_id]/connectors/[connector_id]/page.tsx (4)
54-54: New GitHub connector type mapping.
Adding "GITHUB_CONNECTOR": "GitHub Connector" to the typeMap is consistent. No issues here.
90-91: New API config field for GitHub connector.
Defining "GITHUB_CONNECTOR": "GITHUB_PAT" aligns well with the naming convention for other connectors.
231-232: GitHub PAT label.
Labeling the key as "GitHub Personal Access Token (PAT)" makes it clear for users.
239-245: Clear placeholder text.
Providing distinct placeholder text for GitHub ensures users know they can leave this field blank if they don't wish to update the existing PAT.
surfsense_backend/alembic/env.py (1)
1-99: Looks Good - Well-Structured Alembic Setup
The file correctly follows Alembic's offline/online migration patterns, and the asynchronous approach using run_async_migrations is implemented cleanly. No immediate issues regarding concurrency, security, or performance are apparent. Good job!
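For context, the async approach generally follows Alembic's documented asyncio template, roughly like this condensed sketch (not a verbatim copy of the project's env.py):
```python
import asyncio
from alembic import context
from sqlalchemy.ext.asyncio import create_async_engine

def do_run_migrations(connection) -> None:
    context.configure(connection=connection, target_metadata=None)
    with context.begin_transaction():
        context.run_migrations()

async def run_async_migrations() -> None:
    engine = create_async_engine(context.config.get_main_option("sqlalchemy.url"))
    async with engine.connect() as connection:
        # Bridge the sync Alembic machinery onto the async connection.
        await connection.run_sync(do_run_migrations)
    await engine.dispose()

def run_migrations_online() -> None:
    asyncio.run(run_async_migrations())
```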
surfsense_web/app/dashboard/[search_space_id]/connectors/add/page.tsx (3)
28-35: Connector interface design
Defining a Connector interface with an explicit status union type is a neat way to manage different connector states, including the new "connected" option. This facilitates consistent handling or expansion of connector states in the future.
37-41: ConnectorCategory interface
Introducing a ConnectorCategory interface helps ensure type safety and clarity when grouping connectors. This is a solid step toward more structured and maintainable code.
165-173: Expandable/collapsible category layout
Using CollapsibleTrigger to let users manually show/hide connectors within a category is a clean approach. The toggling logic is straightforward and easy to follow.
surfsense_backend/app/tasks/connectors_indexing_tasks.py (7)
6-6: No issues with the new import statement.
This import adds timezone support, which is appropriate for creating timezone-aware timestamps.
12-12: New import for GitHubConnector is consistent.
Importing GitHubConnector is necessary for the newly introduced indexing function. This looks correct.
593-599: Ensure naming consistency across connectors.
The new function index_github_repos parallels other indexing methods (e.g., index_slack_messages, index_notion_pages). Its naming matches the established convention, and the docstring is clear. Good job.
600-628: Handle missing connector edge cases.
The logic to find the GitHub connector and check its type is correct. Returning an error if the connector is missing or of the wrong type prevents processing inconsistent data. Nicely done.
629-639: Validate the presence of the GitHub PAT.
It’s good that you abort and return an explicit error message if the token is missing. This avoids accidental indexing runs with incomplete credentials.
640-645: Reduce repetitive logs for empty repository sets.
When no repos are accessible, you’re returning early with an informative log. This is practical.
648-660: Efficient document lookup strategy.
Using a dictionary keyed by "full_path" to avoid re-indexing unchanged files is a clear and effective approach. Great job preventing duplication.
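Roughly, the lookup works like the sketch below; the metadata keys follow the review text, but the exact document schema is an assumption.
```python
def build_path_index(existing_documents) -> dict:
    """Map stored documents by repository file path for O(1) lookups."""
    return {
        (doc.document_metadata or {}).get("full_path"): doc
        for doc in existing_documents
        if (doc.document_metadata or {}).get("full_path")
    }

def is_unchanged(existing_doc, file_sha: str) -> bool:
    """A file can be skipped when its SHA matches the one stored on the last run."""
    return existing_doc is not None and (
        existing_doc.document_metadata or {}
    ).get("sha") == file_sha
```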
```ini
# The SQLAlchemy URL to connect to
# IMPORTANT: Replace this with your actual async database URL
sqlalchemy.url = postgresql+asyncpg://postgres:postgres@localhost:5432/surfsense
```
🛠️ Refactor suggestion
Consider using environment variables for database credentials.
The database connection string contains hardcoded credentials. For better security and configuration flexibility across environments, consider using environment variables.
```diff
-sqlalchemy.url = postgresql+asyncpg://postgres:postgres@localhost:5432/surfsense
+sqlalchemy.url = postgresql+asyncpg://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ini
# The SQLAlchemy URL to connect to
# IMPORTANT: Replace this with your actual async database URL
sqlalchemy.url = postgresql+asyncpg://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
```
```python
# Create result object
result_object = {
    "id": 7,  # Assuming 7 is the next available ID
```
Should be 8 now after this PR: afe7ed4
```python
self.source_id_counter += 1

# Use a unique identifier for tracking unique sources (URL preferred)
source_key = source.get("url") or source.get("title")
```
This is causing citation issues atm. No need to deduplicate chunks, just remove the deduplication code. Follow the pattern of the other search functions in surfsense_backend/app/utils/connector_service.py, now at afe7ed4.
MODSetter
left a comment
Thanks @Adamsmith6300. Nice job.
Add GitHub connector & Alembic migrations

Testing:
Note: when indexing, it's currently set to index all repos owned by the user (if you're like me and have a lot, this will take a long time). Update SKIPPED_DIRS if there are directories you want to skip. I'll add something similar for skipping certain files.
Summary by CodeRabbit
New Features
Documentation
Bug Fixes
Chores