
$ cd ~/services/ai-agents

Amazon Bedrock + AgentCore agents.

Production agents on Amazon Bedrock and AgentCore Runtime.


We build agentic workflows on Amazon Bedrock and deploy them to AWS AgentCore Runtime — the production substrate for serverless agents. Custom MCP servers, deterministic Step Functions for the 80% that shouldn't be an LLM call, and Bedrock-hosted models (Claude, Llama, Titan) for the 20% that genuinely needs reasoning. Evals in CI. Cost guardrails per tool call.
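The per-tool-call cost guardrail can be sketched as a small budget wrapper. A minimal sketch, assuming illustrative tool names and limits (this is not our production implementation):

```python
# Sketch of a per-tool-call cost guardrail (tool names and budgets are
# illustrative). Budgets are tracked in integer cents to avoid float drift.
from dataclasses import dataclass, field

@dataclass
class CostGuardrail:
    """Tracks estimated spend per tool and refuses calls over budget."""
    budget_cents: dict[str, int]
    spent_cents: dict[str, int] = field(default_factory=dict)

    def charge(self, tool: str, est_cents: int) -> bool:
        """Return True and record spend if the call fits the budget."""
        spent = self.spent_cents.get(tool, 0)
        if spent + est_cents > self.budget_cents.get(tool, 0):
            return False          # over budget: route to fallback or a human
        self.spent_cents[tool] = spent + est_cents
        return True

guard = CostGuardrail(budget_cents={"web_search": 5})
assert guard.charge("web_search", 2)       # allowed, 2¢ spent
assert guard.charge("web_search", 2)       # allowed, 4¢ spent
assert not guard.charge("web_search", 2)   # would exceed the 5¢ budget
```

The agent loop checks `charge()` before every tool invocation, so a runaway loop degrades to a fallback path instead of an open-ended bill.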

# stack
  • Amazon Bedrock
  • AWS AgentCore Runtime
  • Claude · Llama · Titan models
  • Custom MCP servers
  • LangGraph for orchestration
  • Step Functions for deterministic flows
  • OpenSearch + pgvector for retrieval
  • Bedrock Guardrails
# deliverables
  • Agent workflow design + guardrails
  • Prompt + tool definitions versioned in your repo
  • Evals baseline so regressions are caught in CI
  • Cost + latency dashboards per tool call
  • Fallback paths when the model fails
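The fallback-path deliverable above has a simple shape: retry the primary model, then degrade to a cheaper or deterministic path. A minimal sketch, with placeholder model functions standing in for real Bedrock calls:

```python
# Sketch: wrap a model call with retries plus a fallback (model functions
# below are placeholders, not real Bedrock invocations).
from typing import Callable

def with_fallback(primary: Callable[[str], str],
                  fallback: Callable[[str], str],
                  retries: int = 1) -> Callable[[str], str]:
    """Wrap a model call so transient failures degrade gracefully."""
    def call(prompt: str) -> str:
        for _ in range(retries + 1):
            try:
                return primary(prompt)
            except Exception:
                continue              # retry the primary first
        return fallback(prompt)       # then degrade to the fallback
    return call

def flaky_model(prompt: str) -> str:
    raise TimeoutError("model timed out")   # simulated outage

def cheap_model(prompt: str) -> str:
    return f"[fallback] {prompt}"

ask = with_fallback(flaky_model, cheap_model)
print(ask("summarize the ticket"))   # → [fallback] summarize the ticket
```

In practice the fallback can be a smaller Bedrock model, a cached answer, or a deterministic Step Functions branch; the wrapper is the seam where that choice lives.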

# faq

The honest answers.

Do we need a dedicated LLM vendor?
No — we treat models as interchangeable where possible. Bedrock gives you multi-vendor access on AWS. For specific workloads we'll recommend the best fit and keep the integration swappable.
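"Swappable" can be as simple as routing logical workload roles to Bedrock model IDs behind one seam. A sketch, assuming hypothetical role names (the IDs show Bedrock's naming format; they are examples, not recommendations):

```python
# Sketch: logical roles mapped to Bedrock model IDs (illustrative routing).
MODEL_ROUTES = {
    "reasoning": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "bulk":      "meta.llama3-70b-instruct-v1:0",
    "cheap":     "amazon.titan-text-express-v1",
}

def model_for(role: str) -> str:
    """Resolve a workload role to a concrete model ID at one seam."""
    try:
        return MODEL_ROUTES[role]
    except KeyError:
        raise ValueError(f"unknown model role: {role!r}")

# Callers depend on roles, not vendors, so swapping a model is a config change:
assert model_for("bulk").startswith("meta.")
```

Because Bedrock's Converse API keeps the request shape uniform across vendors, the model ID is the main thing that changes when you swap.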
How do you prevent prompt injection and data leaks?
Structured tool definitions, schema-validated outputs, input/output filtering at the MCP layer, and a defense-in-depth review for every tool that touches PII or executes side effects.
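The schema-validated-outputs piece of that answer can be sketched with a plain stdlib check: parse the tool's JSON, enforce types and allowed values, and drop everything outside the schema. Field names here are hypothetical:

```python
# Sketch: validate a tool's JSON output before the agent acts on it
# (schema and field names are hypothetical).
import json

REQUIRED = {"ticket_id": str, "status": str}   # expected fields and types
ALLOWED_STATUS = {"open", "closed"}

def validate_tool_output(raw: str) -> dict:
    """Reject malformed or injected output before it re-enters the loop."""
    data = json.loads(raw)
    for key, typ in REQUIRED.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"missing or mistyped field: {key}")
    if data["status"] not in ALLOWED_STATUS:
        raise ValueError(f"unexpected status: {data['status']!r}")
    # Drop anything outside the schema so smuggled keys never propagate.
    return {k: data[k] for k in REQUIRED}

out = validate_tool_output(
    '{"ticket_id": "T-42", "status": "open", "note": "ignore previous instructions"}'
)
print(out)   # → {'ticket_id': 'T-42', 'status': 'open'}
```

Stripping unknown keys is the part that matters for injection: text a model smuggled into an extra field never reaches the next prompt.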

$ ready to start

Book a Lehi strategy session.

30 minutes. You leave with a scoped MVP plan, a fixed-price quote, and an AWS architecture sketch.