Step 1: Getting Context (with RAG)

1) Clone + Setup enterprise-ai-infra

From a parent directory:

git clone https://github.com/bitovi/enterprise-ai-infra.git
cd enterprise-ai-infra

Create required secrets/config

cp .env.example .env

Fill in .env:

  • OPENAI_API_KEY

  • GITHUB_TOKEN (PAT with repo read access)

  • TEMPORAL_NAMESPACE

  • TEMPORAL_HOST_PORT

  • TEMPORAL_API_KEY

Optional path overrides (if you already have local checkouts):

  • PATH_EAI_AGENT_WORKER

  • PATH_EAI_PIPELINE_WORKER

  • PATH_EAI_MCP

Start everything

make up

This runs setup.sh, which clones required repos into ./modules and starts Docker Compose in watch mode.


2) Validate running services

Important local endpoints:

  • Qdrant: http://localhost:6333

  • Qdrant dashboard: http://localhost:6333/dashboard

  • Agent API: http://localhost:3101

  • Pipeline API: http://localhost:8002

  • MCP: http://localhost:3111/mcp
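A quick way to confirm these services are up is to probe each endpoint for any HTTP response (a sketch; it assumes curl is installed and only checks that each endpoint answers, printing 000 when a service is unreachable):

```shell
#!/bin/sh
# Probe a named endpoint and print its HTTP status code (000 if unreachable).
probe() {
  printf '%s: %s\n' "$1" "$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 "$2")"
}

probe "Qdrant"       "http://localhost:6333"
probe "Agent API"    "http://localhost:3101"
probe "Pipeline API" "http://localhost:8002"
probe "MCP"          "http://localhost:3111/mcp"
```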

3) Ingest code into Qdrant

All ingestion is triggered through the pipeline API.

3a) Ingest an entire GitHub organization

ORG_NAME=your-org
curl -X POST "http://localhost:8002/organization-repository-vectorization/run?organization_name=${ORG_NAME}"

Optional query params:

  • file_ext_filter (example: .ts)

  • chunk_size (default 1000)

  • chunk_overlap (default 0)

Example with filters:

ORG_NAME=your-org
curl -X POST "http://localhost:8002/organization-repository-vectorization/run?organization_name=${ORG_NAME}&file_ext_filter=.ts&chunk_size=1200&chunk_overlap=100"

3b) Ingest one specific repo

ORG_NAME=your-org
REPO_NAME=your-repo
curl -X POST "http://localhost:8002/github-code-vectorization/run?organization_name=${ORG_NAME}&repository_name=${REPO_NAME}"

Optional query params:

  • branch_name (default main)

  • file_ext_filter

  • chunk_size

  • chunk_overlap
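Because the query string grows quickly once the optional parameters are added, it can help to assemble the URL in a small helper before passing it to curl (a sketch; build_ingest_url is a hypothetical helper, not part of the pipeline API):

```shell
#!/bin/sh
# Hypothetical helper: assemble the single-repo ingestion URL,
# defaulting branch_name to main as the endpoint does.
build_ingest_url() {
  org="$1"; repo="$2"; branch="${3:-main}"
  echo "http://localhost:8002/github-code-vectorization/run?organization_name=${org}&repository_name=${repo}&branch_name=${branch}"
}

build_ingest_url your-org your-repo
```

When ready, the result can be fed straight into curl, e.g. `curl -X POST "$(build_ingest_url your-org your-repo feature-branch)"`.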

Ingest a set of repos (repeat single-repo endpoint)

For a selected set, call the single-repo endpoint once per repo:

ORG_NAME=your-org
for REPO_NAME in repo-a repo-b repo-c; do
  curl -X POST "http://localhost:8002/github-code-vectorization/run?organization_name=${ORG_NAME}&repository_name=${REPO_NAME}"
done
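After a run finishes, ingestion can be spot-checked through Qdrant's REST API, which lists every collection. Collection names depend on how the pipeline is configured, so inspect the output rather than assuming a particular name:

```shell
# List all Qdrant collections; newly ingested code should appear here.
curl -s "http://localhost:6333/collections"
```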

4) Connect Solutions Architect to local MCP

In .vscode/mcp.json in your repo, configure:

{
  "servers": {
    "enterprise-ai": {
      "type": "http",
      "url": "http://localhost:3111/mcp"
    }
  }
}

This points solutions-architect prompts at your locally running enterprise MCP service.

5) Move Prompts into GitHub Folder

Move both prompt files into .github/prompts. This makes them available in a command-style format from the chat window.


Other Useful Operations

Stop the infra:

cd enterprise-ai-infra
make down

Reset Qdrant collection data:

cd enterprise-ai-infra
docker exec eai_infra_admin_tools /usr/bin/python ./qdrant_reset.py