re:Invent 2025: The AWS Releases That Move the Needle for Builders
It was a great privilege to attend my second AWS re:Invent conference last week in Las Vegas. As the technical leader for an AWS Advanced Tier Partner with a competency in Data & Analytics, it is my responsibility to understand new technology releases and translate them into opportunities to improve the work we do for our existing clients at Red Oak Strategic, and to provide a roadmap for how these releases change what's possible with AI and data on new projects. The AWS keynotes are good fun, and new services like Amazon Nova Forge and AWS AI Factories are groundbreaking, but unless you are a Fortune 100 company it is highly unlikely you possess the capital or the business need to adopt these heavyweight enterprise features. Hidden behind the headline releases are smaller breakthroughs that immediately change the game for the small and medium businesses and startups that build on AWS. These are my top 3:
Lambda Reaches Mt. Everest
While Lambda has been a star service since its release a decade ago, and one of the most game-changing features in serverless computing, it has always had a hard limit I thought would never be breached: the dreaded 15-minute timeout cap. That cap remains, but last week AWS rolled out Lambda Durable Functions, which bring their own orchestration layer that lets a Lambda run, wait on downstream processes, and stay active for up to one year!
The durable functions concept is particularly valuable in SaaS applications of agentic AI, because deep research and iterative reasoning agents often need session context and benefit from serverless infrastructure, yet their runtimes are variable and risk blowing past the 15-minute timeout. This has been a challenge for my team in the past, at times forcing us to fall back to containerized functions in Batch or ECS, which require more management and have much longer cold starts. The future is bright, and it is still serverless.
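To make the durable-execution idea concrete, here is a minimal, hypothetical sketch in plain Python, not the AWS API: each completed step is checkpointed to persistent storage, so a workflow can pause for long stretches and a later invocation resumes where it left off instead of re-running finished work. The `DurableWorkflow` class and file-based checkpoint are illustrative assumptions, not Lambda Durable Functions itself.

```python
import json
import tempfile
from pathlib import Path

class DurableWorkflow:
    """Hypothetical sketch of durable execution: checkpoint each step's
    result so a restarted invocation skips work that already completed."""

    def __init__(self, state_file: Path):
        self.state_file = state_file
        # Reload prior checkpoints if an earlier invocation left them behind.
        self.state = json.loads(state_file.read_text()) if state_file.exists() else {}

    def step(self, name, fn):
        # Steps that finished in an earlier invocation are not re-executed.
        if name in self.state:
            return self.state[name]
        result = fn()
        self.state[name] = result
        self.state_file.write_text(json.dumps(self.state))  # checkpoint
        return result

# A deep-research agent could checkpoint between long-running phases:
state_path = Path(tempfile.mkdtemp()) / "research_agent_state.json"
wf = DurableWorkflow(state_path)
plan = wf.step("plan", lambda: "outline research questions")
draft = wf.step("draft", lambda: f"draft based on: {plan}")
```

The point of the pattern is that a crash, timeout, or multi-day wait between `plan` and `draft` costs nothing: re-running the script with the same state file replays checkpoints instead of recomputing them.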
Kiro is Not Messing Around

As a long-time Q Developer user and early adopter (or as long as anyone has been a Q Dev user, so maybe nine months), I believed these tools were clearly valuable and improving, thanks in part to Claude Sonnet 4.5 as the underlying LLM for reasoning and code execution. I had heard of Kiro in its early release phase and did not understand the value add. It's a lot to ask to leave VS Code for a fancy new "AI IDE," but with $30 worth of free credits in hand I decided to turn to Kiro for my annual mid-conference hackathon…and Kiro made it the shortest and smoothest hackathon I've had to date!
I used the spec-driven approach for serious engineering tasks (Kiro also offers a faster vibe-coding option) and successfully pivoted a GitHub demo I found for multi-agent chaining in Amazon AgentCore into a fully built AI application, complete with Terraform and GitHub Actions for deployment, featuring five agents, persistent user memory, and a working RAG implementation for industry-specific knowledge. I spent $4.68 worth of credits, the end-to-end work took just under three hours, and the quality of the demo, complete with a locally hosted frontend, was 2x stronger than I had expected and represented at minimum a 5x improvement in development speed, and I still like to think of myself as an experienced and proficient developer. This opens the door to rapid one-day MVPs Red Oak Strategic could host with clients, and makes me excited for more opportunities to use Kiro in the future, including, in the right circumstances, on genuine client projects.
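The multi-agent chaining pattern at the heart of that demo can be sketched in a few lines of plain Python. This is a hypothetical illustration of the pattern, not the AgentCore API: each "agent" here is a stub callable that enriches a shared context dict, and a real deployment would swap the stubs for LLM-backed agents.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """Illustrative stand-in for an LLM-backed agent (names are hypothetical)."""
    name: str
    run: Callable[[dict], dict]

def chain(agents: list[Agent], context: dict) -> dict:
    """Run agents in sequence, passing each one the enriched context."""
    for agent in agents:
        context = agent.run(context)
        context.setdefault("trace", []).append(agent.name)  # record the hop
    return context

# Two stub agents: one gathers notes, the next drafts from those notes.
researcher = Agent("researcher", lambda c: {**c, "notes": f"notes on {c['topic']}"})
writer = Agent("writer", lambda c: {**c, "draft": f"draft from {c['notes']}"})
result = chain([researcher, writer], {"topic": "serverless AI"})
```

The value of the chain shape is that each agent stays narrow and testable while the coordinator owns sequencing, which is essentially what the five-agent demo scaled up.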
Remember AgentCore?

Amazon’s multi-agent AI framework AgentCore has dramatically increased the development velocity of custom collaborative AI systems. One of the most significant leaps forward is the ability to deploy automatic user- and system-isolated memory, and deliver it to agent context in an efficient and affordable way. AgentCore has a short-term and long-term memory concept that mirrors our understanding of the human brain. Settings are configurable, but imagine conversations from the last 30 days being readily available to agents in-context, even across modalities, while long-standing preferences and needs are stored in long-term memory as shortened summaries and recalled only when needed. Our early testing of adding memory to an existing AgentCore MVP took a whopping 15 minutes to stand up and validate - exciting times!
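The two-tier memory idea can be sketched in plain Python. This is a conceptual, hypothetical model, not the AgentCore memory API: recent turns stay verbatim in a bounded short-term window, older turns get compacted into long-term entries (a real system would use an LLM to summarize; here we just truncate), and long-term items are recalled only when the query matches them.

```python
from collections import deque

class AgentMemory:
    """Hypothetical sketch of short-term vs. long-term agent memory."""

    def __init__(self, short_term_limit: int = 5):
        self.short_term = deque(maxlen=short_term_limit)  # recent turns, verbatim
        self.long_term: list[str] = []                    # compacted older turns

    def remember(self, turn: str) -> None:
        if len(self.short_term) == self.short_term.maxlen:
            # Oldest turn is about to fall out of the window: compact it.
            # (A real system would summarize with an LLM; we just truncate.)
            self.long_term.append(self.short_term[0][:40])
        self.short_term.append(turn)

    def context(self, query: str = "") -> list[str]:
        # Long-term items are recalled only when the query mentions them.
        recalled = [m for m in self.long_term if query and query.lower() in m.lower()]
        return recalled + list(self.short_term)
```

The design choice worth noting is the asymmetry: short-term memory is cheap and always in-context, while long-term memory trades fidelity for size and is fetched on demand, which keeps token costs flat as conversation history grows.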
Closing Thoughts
It’s been a really fantastic year for our AWS engineering team at Red Oak Strategic, and as this is likely our last technical blog of 2025, I wanted to share a thank you to my team, our full complement of teammates at Red Oak Strategic, our partners at Amazon Web Services, and especially to our clients who trust and support us while we work to build at the cutting edge of cloud, data, and AI. Stay tuned for 2026: I’m sure we’ll only see an accelerated pace of change in the AI space, and we’ll be here to share our perspective on the tech and its ability to help “real” companies beyond the Big 7.
Cheers & Happy Holidays,
Tyler & the Red Oak AWS Team!