on AI, Developer productivity, Vibe coding, LLM, Software engineering
There’s a new phrase circulating in engineering teams: vibe coding. The term, coined by Andrej Karpathy in early 2025, describes a mode of programming where you describe what you want in natural language, let an AI generate the code, and intervene only when something breaks. You’re not writing code line by line. You’re steering.
on RAG, LLM, AI engineering, Vector database, Information retrieval
Retrieval-Augmented Generation (RAG) is the backbone of most production AI applications. Knowledge bases, document Q&A, code search, customer support — if your application needs to answer questions about specific content, RAG is usually how you do it.
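The pattern behind all of those applications is the same: retrieve the content relevant to a question, then hand it to the model as context. A minimal sketch, with plain term overlap standing in for a real vector database and names like `retrieve` and `build_prompt` that are illustrative, not any specific library's API:

```python
# Toy RAG pipeline: retrieve top-k documents, assemble a prompt.
# Term-overlap scoring is a stand-in for embedding similarity search.

def retrieve(query, documents, k=2):
    """Return the k documents sharing the most terms with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Join the retrieved chunks and the question into one prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
    "The API rate limit is 100 requests per minute.",
]
print(build_prompt("What is the API rate limit?", docs))
```

In production the retriever would embed chunks and query into vectors and search an index, but the shape of the pipeline is unchanged.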
on LLM, AI engineering, Evaluation, MLOps, Production AI
Shipping LLM-powered features is now table stakes. The hard part isn’t getting a prototype working — it’s knowing whether your model is performing well, catching regressions before users do, and maintaining quality as you iterate on prompts and models.
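The usual answer to "catching regressions before users do" is a fixed eval set you re-run on every prompt or model change. A minimal sketch, where `fake_model` is a stub standing in for a real LLM call and the substring check stands in for a real grader:

```python
# Toy eval harness: score a model against a fixed test set so
# prompt or model changes can be compared run to run.

CASES = [
    {"input": "capital of France", "must_contain": "Paris"},
    {"input": "2 + 2", "must_contain": "4"},
]

def fake_model(prompt):
    # Placeholder for an actual model invocation.
    answers = {"capital of France": "Paris", "2 + 2": "4"}
    return answers.get(prompt, "I don't know")

def run_eval(model, cases):
    """Return the fraction of cases whose output contains the expected text."""
    passed = sum(case["must_contain"] in model(case["input"]) for case in cases)
    return passed / len(cases)

score = run_eval(fake_model, CASES)
print(f"pass rate: {score:.0%}")
```

Gating CI on a threshold (say, `score >= 0.95`) turns "is the model still performing well?" from a feeling into a test that fails.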
on Kubernetes, Cloud native, DevOps, Platform engineering, Container orchestration
Kubernetes is more than a decade old now, and the ecosystem around it has accumulated enough churn to confuse even experienced practitioners. Patterns that were best practice in 2022 are anti-patterns in 2026. Tools you invested in have been deprecated, merged, or superseded.
on GitOps, DevOps, Argo CD, Flux, Kubernetes, Continuous delivery
GitOps has crossed the chasm. What started as a blog post from Weaveworks in 2017 is now the default CD model for Kubernetes-native organizations. The Git repository is the source of truth. The cluster reconciles itself to match. Humans don’t kubectl apply in production.
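"The cluster reconciles itself to match" is a diff between desired state and live state, applied continuously. A minimal sketch of that loop, where the `desired` and `live` dicts are stand-ins for manifests in a Git repo and objects reported by the cluster:

```python
# Toy GitOps reconciliation: compute the operations needed to make
# live cluster state match the desired state declared in Git.

def reconcile(desired, live):
    """Return (action, name) pairs that bring `live` in line with `desired`."""
    ops = []
    for name, spec in desired.items():
        if name not in live:
            ops.append(("create", name))   # declared in Git, missing from cluster
        elif live[name] != spec:
            ops.append(("update", name))   # present but drifted from Git
    for name in live:
        if name not in desired:
            ops.append(("delete", name))   # in cluster but no longer in Git
    return ops

desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}
live = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
print(reconcile(desired, live))
```

Tools like Argo CD and Flux run this loop against real manifests and a real API server, but the model is the same: Git declares, the controller converges, and manual `kubectl apply` drift gets detected and reverted.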