Step 1: Getting Context (with RAG)
1) Clone + Setup enterprise-ai-infra
From a parent directory:
git clone https://github.com/bitovi/enterprise-ai-infra.git
cd enterprise-ai-infra
Create required secrets/config:
cp .env.example .env
Fill in .env:
OPENAI_API_KEY
GITHUB_TOKEN (PAT with repo read access)
TEMPORAL_NAMESPACE
TEMPORAL_HOST_PORT
TEMPORAL_API_KEY
Optional path overrides (if you already have local checkouts):
PATH_EAI_AGENT_WORKER
PATH_EAI_PIPELINE_WORKER
PATH_EAI_MCP
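Filled in, the file might look like the sketch below. Every value is a placeholder, and the Temporal values in particular depend on your deployment (localhost:7233 is Temporal's conventional default, not something this repo guarantees):

```shell
# .env -- placeholder values only; substitute your real credentials
OPENAI_API_KEY=sk-...your-openai-key...
GITHUB_TOKEN=ghp_...pat-with-repo-read-access...
TEMPORAL_NAMESPACE=default
TEMPORAL_HOST_PORT=localhost:7233
TEMPORAL_API_KEY=...your-temporal-key...

# Optional: point at existing local checkouts instead of letting
# setup.sh clone into ./modules (paths here are hypothetical examples)
# PATH_EAI_AGENT_WORKER=../my-agent-worker-checkout
```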
Start everything:
make up
This runs setup.sh, which clones the required repos into ./modules and starts Docker Compose in watch mode.
2) Validate running services
Important local endpoints:
Qdrant: http://localhost:6333
Qdrant dashboard: http://localhost:6333/dashboard
Agent API: http://localhost:3101
Pipeline API: http://localhost:8002
MCP: http://localhost:3111/mcp
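A quick way to confirm these services are reachable is a small curl loop. This is just a convenience sketch using the endpoints listed above; note that the MCP endpoint may reject a plain GET even when it is healthy, since MCP over HTTP typically expects POST requests:

```shell
#!/bin/sh
# Local endpoints started by `make up` (from the list above)
ENDPOINTS="http://localhost:6333 http://localhost:3101 http://localhost:8002 http://localhost:3111/mcp"

check_endpoints() {
  for url in $ENDPOINTS; do
    # -sf: silent, fail on HTTP errors; --max-time keeps each probe short
    if curl -sf -o /dev/null --max-time 5 "$url"; then
      echo "OK   $url"
    else
      echo "DOWN $url"
    fi
  done
}
```

Run `check_endpoints` once `make up` has settled; a DOWN on the MCP URL alone may just be the GET-vs-POST caveat above.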
3) Ingest code into Qdrant
All ingestion is triggered through the pipeline API.
3a) Ingest an entire GitHub organization
ORG_NAME=your-org
curl -X POST "http://localhost:8002/organization-repository-vectorization/run?organization_name=${ORG_NAME}"
Optional query params:
file_ext_filter (example: .ts)
chunk_size (default 1000)
chunk_overlap (default 0)
Example with filters:
ORG_NAME=your-org
curl -X POST "http://localhost:8002/organization-repository-vectorization/run?organization_name=${ORG_NAME}&file_ext_filter=.ts&chunk_size=1200&chunk_overlap=100"
3b) Ingest one specific repo
ORG_NAME=your-org
REPO_NAME=your-repo
curl -X POST "http://localhost:8002/github-code-vectorization/run?organization_name=${ORG_NAME}&repository_name=${REPO_NAME}"
Optional query params:
branch_name (default main)
file_ext_filter
chunk_size
chunk_overlap
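If you script these calls, a tiny helper keeps the query string readable. The helper name and structure are our own invention, not part of the pipeline API; extra arguments are appended verbatim as query parameters:

```shell
# Build the single-repo ingestion URL; any extra args are appended as
# query params (e.g. "branch_name=develop" or "file_ext_filter=.ts").
# build_ingest_url is a hypothetical convenience, not a pipeline feature.
build_ingest_url() {
  org="$1"; repo="$2"; shift 2
  url="http://localhost:8002/github-code-vectorization/run?organization_name=${org}&repository_name=${repo}"
  for param in "$@"; do
    url="${url}&${param}"
  done
  echo "$url"
}
```

Usage: curl -X POST "$(build_ingest_url your-org your-repo branch_name=develop file_ext_filter=.ts)"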
3c) Ingest a set of repos (repeat the single-repo endpoint)
For a selected set, call the single-repo endpoint once per repo:
ORG_NAME=your-org
for REPO_NAME in repo-a repo-b repo-c; do
curl -X POST "http://localhost:8002/github-code-vectorization/run?organization_name=${ORG_NAME}&repository_name=${REPO_NAME}"
done
4) Connect Solutions Architect to local MCP
In .vscode/mcp.json in your repo, configure:
{
"servers": {
"enterprise-ai": {
"type": "http",
"url": "http://localhost:3111/mcp"
}
}
}
This points solutions-architect prompts at your locally running enterprise MCP service.
5) Move Prompts into GitHub Folder
Move both prompt files into .github/prompts. This makes them usable in a command-style format from the chat window.
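A small helper for that move is sketched below. The helper name is our own, and the prompt filenames you pass to it are whatever your two prompt files are actually called:

```shell
# Copy the given prompt files into .github/prompts, creating the
# directory if needed. install_prompts is a hypothetical convenience.
install_prompts() {
  mkdir -p .github/prompts
  for f in "$@"; do
    cp "$f" .github/prompts/
  done
}
```

Usage (placeholder names): install_prompts path/to/first.prompt.md path/to/second.prompt.md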
Other Useful Operations
Stop the infra:
cd enterprise-ai-infra
make down
Reset Qdrant collection data:
cd enterprise-ai-infra
docker exec eai_infra_admin_tools /usr/bin/python ./qdrant_reset.py
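After a reset you can confirm what remains through Qdrant's REST API (GET /collections is part of Qdrant's public HTTP API; wrapping it in a function here is just for convenience):

```shell
# List all Qdrant collections on the local instance; after a reset the
# vectorization collection should be gone or empty.
list_collections() {
  curl -s "http://localhost:6333/collections"
}
```

Usage: list_collections (or open http://localhost:6333/dashboard and inspect visually).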