{"id":25,"date":"2026-03-05T12:23:58","date_gmt":"2026-03-05T12:23:58","guid":{"rendered":"https:\/\/blog.rebalai.com\/en\/2026\/03\/05\/advanced-prompt-engineering-techniques-chain-of-th\/"},"modified":"2026-03-18T22:00:10","modified_gmt":"2026-03-18T22:00:10","slug":"advanced-prompt-engineering-techniques-chain-of-th","status":"publish","type":"post","link":"https:\/\/blog.rebalai.com\/en\/2026\/03\/05\/advanced-prompt-engineering-techniques-chain-of-th\/","title":{"rendered":"Advanced Prompt Engineering Techniques: What Actually Works After Two Years of Testing"},"content":{"rendered":"<p><script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"BlogPosting\",\n  \"headline\": \"Advanced Prompt Engineering Techniques: What Actually Works After Two Years of Testing\",\n  \"description\": \"Six months into using LLMs in production, I had a classification pipeline that was wrong about 30% of the time.\",\n  \"url\": \"https:\/\/blog.rebalai.com\/en\/2026\/03\/05\/advanced-prompt-engineering-techniques-chain-of-th\/\",\n  \"datePublished\": \"2026-03-05T12:23:58\",\n  \"dateModified\": \"2026-03-05T17:39:33\",\n  \"inLanguage\": \"en-US\",\n  \"author\": {\n    \"@type\": \"Organization\",\n    \"name\": \"RebalAI\",\n    \"url\": \"https:\/\/blog.rebalai.com\/en\/\"\n  },\n  \"publisher\": {\n    \"@type\": \"Organization\",\n    \"name\": \"RebalAI\",\n    \"logo\": 
{\n      \"@type\": \"ImageObject\",\n      \"url\": \"https:\/\/blog.rebalai.com\/wp-content\/uploads\/logo.png\"\n    }\n  },\n  \"mainEntityOfPage\": {\n    \"@type\": \"WebPage\",\n    \"@id\": \"https:\/\/blog.rebalai.com\/en\/2026\/03\/05\/advanced-prompt-engineering-techniques-chain-of-th\/\"\n  }\n}\n<\/script><\/p>\n<hr \/>\n<p>Six months into using LLMs in production, I had a classification pipeline that was wrong about 30% of the time. I&#8217;d spent weeks tweaking temperature, swapping models, writing longer system prompts. Nothing stuck. Then I rewrote one prompt using chain-of-thought reasoning and the error rate dropped to around 8% overnight \u2014 same model, same temperature, same data.<\/p>\n<p>That experience broke something in my head about what prompt engineering actually is. It&#8217;s less about writing clearer instructions and more about shaping <em>how the model thinks through the problem<\/em>, not just <em>what<\/em> it&#8217;s supposed to do.<\/p>\n<p>Here&#8217;s what I&#8217;ve learned since then, including the techniques that genuinely moved the needle and a few that looked promising but wasted a lot of my time.<\/p>\n<hr \/>\n<h2>Chain-of-Thought Isn&#8217;t Just &#8220;Show Your Work&#8221;<\/h2>\n<p>The basic version of CoT is well-known at this point: add &#8220;think step by step&#8221; to your prompt and watch accuracy improve on reasoning tasks. 
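<\/p>\n<p>A minimal sketch of that bare version (the helper function and prompt text here are illustrative, not from any particular SDK):<\/p>

```python
def with_cot(user_prompt: str) -> str:
    # Bare-bones chain-of-thought: append the canonical nudge to any prompt.
    return user_prompt + "\n\nThink step by step before giving your final answer."

prompt = with_cot("Does this contract auto-renew? Answer yes, no, or unclear.")
```

<p>That one line is the whole &#8220;basic version.&#8221; 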
But most developers stop there and leave a lot on the table.<\/p>\n<p>What actually matters is <em>where<\/em> you surface the reasoning and <em>how structured<\/em> you make it. I spent about two weeks running variants on a document extraction task (pulling structured fields from messy legal contracts \u2014 not glamorous, but real). A bare <code>think step by step<\/code> helped modestly. What really helped was telling the model to reason through <em>each field<\/em> independently before committing to an answer, with explicit uncertainty markers.<\/p>\n<p>Here&#8217;s roughly what the prompt looked like after iteration:<\/p>\n<pre><code class=\"language-python\">SYSTEM_PROMPT = &quot;&quot;&quot;\nYou are extracting structured data from legal contract text.\n\nFor each field below, reason through it before writing your answer:\n1. Identify what evidence in the document supports this value\n2. Note any ambiguity or conflicting signals\n3. 
Then output your best answer with a confidence level (high\/medium\/low)\n\nIf you have low confidence, explain why rather than guessing silently.\n&quot;&quot;&quot;\n\nUSER_PROMPT = &quot;&quot;&quot;\nContract text:\n{contract_text}\n\nExtract the following fields:\n- effective_date\n- termination_clause_type\n- governing_law_jurisdiction\n- auto_renewal (yes\/no\/unclear)\n&quot;&quot;&quot;\n<\/code><\/pre>\n<p>The key change was requiring the model to flag its own uncertainty rather than projecting false confidence. Previously I&#8217;d get a clean JSON blob that looked great but had quietly hallucinated a governing jurisdiction. Now I get hedged output I can actually route differently \u2014 send high-confidence extractions straight through, flag medium\/low for human review.<\/p>\n<p>One thing I noticed: the reasoning trace itself becomes a debugging tool. When a field comes back wrong, I can read the model&#8217;s chain of thought and usually see <em>exactly<\/em> where it went sideways. That&#8217;s worth something even beyond the accuracy gain.<\/p>\n<p>Gotcha I hit hard: CoT inflates token usage significantly. On a high-volume pipeline \u2014 we were doing around 4,000 documents a day at one point \u2014 this is not a rounding error. I ended up stripping the reasoning section from the output using a simple post-processing step and only keeping the structured fields. You get the accuracy benefits without paying for the reasoning tokens in downstream processing. 
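<\/p>\n<p>A sketch of that post-processing step (assuming, hypothetically, that the model is told to end its response with a single JSON object of fields, so everything before it is discardable reasoning):<\/p>

```python
import json
import re

def keep_structured_fields(raw_output: str) -> dict:
    """Drop the chain-of-thought, keep the fields.

    Hypothetical output contract: the model ends its response with one
    JSON object; everything before it is reasoning we discard downstream.
    """
    match = re.search(r"\{.*\}", raw_output, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

raw = (
    "The preamble says 'effective as of March 1, 2026', so confidence is high.\n"
    "No governing-law clause is present; flagging low confidence there.\n"
    '{"effective_date": "2026-03-01", "governing_law_jurisdiction": null}'
)
fields = keep_structured_fields(raw)  # reasoning stripped, fields kept
```

<p>In a real pipeline you&#8217;d likely log the discarded reasoning for debugging rather than throw it away. 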
Your situation will vary depending on cost constraints, but don&#8217;t assume you have to ship the chain of thought to your users.<\/p>\n<hr \/>\n<h2>Few-Shot Examples: The Technique That Works Until It Doesn&#8217;t<\/h2>\n<p>Few-shot prompting is probably the most misunderstood technique in the toolbox. The common advice is &#8220;include 3-5 examples.&#8221; That&#8217;s fine as a starting point, but it misses the important variables: <em>diversity<\/em> of your examples, <em>proximity<\/em> to your edge cases, and whether you&#8217;re accidentally teaching the model the wrong generalization.<\/p>\n<p>That last one bit me on a sentiment classification task. I&#8217;d included five examples in my prompt, all of which happened to be single-sentence reviews. Production data had multi-paragraph reviews with mixed sentiment \u2014 positive overall but mentioning specific negatives in the body. My few-shot examples had inadvertently taught the model to anchor on the first sentence. Took me two days to figure out why accuracy cratered on longer inputs.<\/p>\n<p>What fixed it wasn&#8217;t adding more examples \u2014 it was adding <em>strategically selected<\/em> examples that represented the failure modes. 
I specifically included:<\/p>\n<ul>\n<li>One example where the opening sentence is negative but the overall sentiment is positive<\/li>\n<li>One that&#8217;s sarcastic<\/li>\n<li>One that&#8217;s genuinely mixed and should return &#8220;neutral&#8221; rather than forcing a classification<\/li>\n<\/ul>\n<p>This is less about volume and more about coverage. Three great examples beat eight mediocre ones.<\/p>\n<p>Here&#8217;s what a well-structured few-shot block looks like for something like intent classification:<\/p>\n<pre><code class=\"language-python\">FEW_SHOT_EXAMPLES = [\n    {\n        &quot;input&quot;: &quot;Can you help me reset my password? I've tried three times.&quot;,\n        &quot;reasoning&quot;: &quot;User is making a direct request for a specific account action. Frustration implied but the intent is clearly a password reset, not a complaint.&quot;,\n        &quot;intent&quot;: &quot;account_action&quot;,\n        &quot;confidence&quot;: &quot;high&quot;\n    },\n    {\n        &quot;input&quot;: &quot;I guess the product works okay but it's not really what I expected from the description.&quot;,\n        &quot;reasoning&quot;: &quot;This is passive dissatisfaction, not an explicit request. User isn't asking for anything specific \u2014 more likely venting or leaving feedback.&quot;,\n        &quot;intent&quot;: &quot;feedback&quot;,\n        &quot;confidence&quot;: &quot;medium&quot;  # mixed signals here\n    },\n    {\n        &quot;input&quot;: &quot;When will my order arrive? The website says it shipped but the tracking hasn't updated in 5 days.&quot;,\n        &quot;reasoning&quot;: &quot;Surface-level this looks like an order status query, but the 5-day stale tracking detail implies a potential lost shipment. 
Route to shipping support, not generic order status.&quot;,\n        &quot;intent&quot;: &quot;shipping_issue&quot;,\n        &quot;confidence&quot;: &quot;high&quot;\n    }\n]\n<\/code><\/pre>\n<p>Notice I&#8217;m including reasoning <em>in the examples themselves<\/em>, not just input\/output pairs. This is CoT applied to few-shot \u2014 you&#8217;re showing the model how to think about classification decisions, not just what the answer is. I started doing this about eight months ago and it&#8217;s now standard in everything I build.<\/p>\n<p>One practical note: if you&#8217;re using an API with a <code>messages<\/code> array (OpenAI, Anthropic, etc.), you can format few-shot examples as alternating user\/assistant turns rather than stuffing them all into the system prompt. In my experience this produces slightly cleaner behavior, probably because it&#8217;s closer to how the model was trained on conversation data. Not a huge difference, but worth knowing.<\/p>\n<hr \/>\n<h2>The Techniques I Wish I&#8217;d Found Earlier<\/h2>\n<p><strong>Self-consistency sampling<\/strong> is criminally underused. The idea: run the same prompt several times (typically 3-5), then take the most common answer. It&#8217;s embarrassingly simple and it works \u2014 especially for tasks with a single correct answer buried in ambiguous context.<\/p>\n<p>I used this on a legal clause extraction job where the model would occasionally hallucinate a clause that didn&#8217;t exist (very bad in a legal context). Running the extraction five times and only surfacing clauses that appeared in at least three responses cut hallucination incidents by roughly 60% in our testing. 
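<\/p>\n<p>The voting logic is only a few lines. A sketch, where <code>call_model<\/code> is a stand-in for whatever client function you actually use:<\/p>

```python
from collections import Counter

def self_consistent(call_model, prompt, n=5, min_votes=3):
    """Run the same prompt n times; keep an answer only if it recurs.

    `call_model` is a placeholder for your actual API call; it should
    return a hashable answer (e.g. a clause label).
    """
    answers = [call_model(prompt) for _ in range(n)]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes >= min_votes else None  # None -> route to human review

# Stubbed model that hallucinates on one of five runs:
runs = iter(["clause_7", "clause_7", "clause_12", "clause_7", "clause_7"])
result = self_consistent(lambda p: next(runs), "extract the termination clause")
```

<p>For multi-clause extraction, apply the same at-least-three-of-five threshold to each candidate clause across runs. 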
It&#8217;s not cheap \u2014 you&#8217;re literally paying for 5x the tokens \u2014 but for high-stakes, low-volume tasks it&#8217;s an easy call.<\/p>\n<p><strong>Persona + constraint stacking<\/strong> is another one. This is different from basic role prompting (&#8220;you are a senior developer&#8221;). The useful version layers constraints that bound the model&#8217;s behavior. Example:<\/p>\n<blockquote>\n<p>You are a senior backend engineer reviewing a junior&#8217;s PR. You have strong opinions about code quality but your goal is to be educational, not discouraging. You must raise every issue you see, but frame at least one piece of feedback per issue as a question rather than a directive. Do not approve the PR if there are any security concerns.<\/p>\n<\/blockquote>\n<p>Each constraint there does real work. Stacking them creates a behavioral space that&#8217;s hard to specify any other way. I&#8217;m not 100% sure this scales beyond a certain number of constraints \u2014 somewhere around 7-8 I&#8217;ve seen the model start dropping some of them \u2014 but for 3-5 constraints it&#8217;s reliable.<\/p>\n<p><strong>Output format as a constraint, not an afterthought.<\/strong> Most developers (including past me) put format instructions at the end of a prompt as a cleanup step: &#8220;&#8230;and return your answer as JSON.&#8221; That&#8217;s late. 
Specifying format early and being precise about <em>why<\/em> you need it in that format changes output quality meaningfully.<\/p>\n<p>Compare:<\/p>\n<ul>\n<li>\u274c &#8220;Return your answer as JSON.&#8221;<\/li>\n<li>\u2705 &#8220;Your output will be parsed programmatically by a Go struct with strict type requirements. Return only valid JSON with no markdown code fences, no trailing commas, and no comments. Use ISO 8601 dates. Null is acceptable for missing values; do not omit keys.&#8221;<\/li>\n<\/ul>\n<p>The specificity signals to the model what kind of precision is required. Vague format instructions get vague compliance.<\/p>\n<hr \/>\n<h2>Where Prompt Engineering Actually Has Limits<\/h2>\n<p>Here&#8217;s something I don&#8217;t see enough developers admit: there are tasks where prompt engineering isn&#8217;t the bottleneck and you&#8217;re wasting your time optimizing prompts.<\/p>\n<p>If your base model genuinely doesn&#8217;t have the domain knowledge you need, no amount of CoT or few-shot examples will save you. I spent three weeks trying to prompt-engineer a model into performing well on very domain-specific pharmaceutical regulatory text. I got incremental improvements but kept hitting a ceiling. The actual fix was RAG \u2014 pulling in the relevant regulatory documents as context \u2014 not a cleverer prompt.<\/p>\n<p>Similarly, if your task requires consistent multi-step behavior across a <em>long<\/em> conversation, you&#8217;re fighting against context degradation and instruction drift. 
Prompt engineering techniques that work great at the start of a conversation tend to degrade by turn 15 or 20. I don&#8217;t have a great solution for this one beyond shorter context windows and more explicit state management. Your mileage may vary.<\/p>\n<p>The other limit is evaluation. Prompt engineering without a proper eval setup is basically guessing. I see a lot of developers (again, including early me) iterating prompts based on vibes and a handful of anecdotal examples. You need a held-out test set and you need to measure before and after each change. Even a scrappy pytest file with 50 representative examples beats no eval at all.<\/p>\n<hr \/>\n<h2>What I&#8217;d Actually Tell You to Start With<\/h2>\n<p>If I&#8217;m being direct: start with chain-of-thought with explicit uncertainty markers. It&#8217;s the highest ROI technique I&#8217;ve found, it generalizes across tasks, and it produces debuggable outputs that make everything downstream easier.<\/p>\n<p>Once that&#8217;s working, add few-shot examples \u2014 but be surgical about it. Don&#8217;t grab five random examples. Specifically target your known failure modes and make your examples diverse enough to not accidentally teach wrong generalizations.<\/p>\n<p>Only reach for self-consistency when you have a high-stakes task and hallucination is genuinely costly. The token overhead makes it a bad default.<\/p>\n<p>Skip persona prompting unless you have a clear behavioral reason for it (like the PR review example above). Cargo-culted role prompting \u2014 &#8220;you are a helpful AI assistant who is an expert in&#8230;&#8221; \u2014 adds noise without adding value in my experience.<\/p>\n<p>And honestly? 
The most consistent gains I&#8217;ve seen come not from any single technique but from iteration with actual evals. Pick a technique, measure it against real test cases, keep what helps, drop what doesn&#8217;t. That&#8217;s less exciting than a list of tricks, but it&#8217;s what actually works in production.<\/p>\n<p>The prompt that fixed my classification pipeline, by the way, is still running. I haven&#8217;t touched it in four months. That&#8217;s probably the best endorsement I can give for getting the fundamentals right.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>{ &#8220;@context&#8221;: &#8220;https:\/\/schema.org&#8221;, &#8220;@type&#8221;: &#8220;BlogPosting&#8221;, &#8220;headline&#8221;: &#8220;Advanced Prompt Engineering Techniques: What Actually Works After Two Years of 
Tes<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[1],"tags":[],"class_list":["post-25","post","type-post","status-publish","format-standard","hentry","category-general"],"_links":{"self":[{"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/posts\/25","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/comments?post=25"}],"version-history":[{"count":18,"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/posts\/25\/revisions"}],"predecessor-version":[{"id":492,"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/posts\/25\/revisions\/492"}],"wp:attachment":[{"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/media?parent=25"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/categories?post=25"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.rebalai.com\/en\/wp-json\/wp\/v2\/tags?post=25"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}