<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Pratik Patel – Tech Insights]]></title><description><![CDATA[Building scalable SaaS systems with real-world engineering practices. I share deep dives on Node.js, PostgreSQL, Prisma, and system architecture—focused on performance, multi-tenant design, and production-grade backend development.]]></description><link>https://blog.pratikpatel.pro</link><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 00:00:53 GMT</lastBuildDate><atom:link href="https://blog.pratikpatel.pro/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Prisma N+1 in Production: Real Query Plans and Fixes
]]></title><description><![CDATA[The silent killer hiding in your ORM
It was a Tuesday morning when our on-call alert fired. P95 latency on /api/dashboard had crossed 4 seconds. Nothing had been deployed. No traffic spike. The databa]]></description><link>https://blog.pratikpatel.pro/prisma-query-optimization-guide</link><guid isPermaLink="true">https://blog.pratikpatel.pro/prisma-query-optimization-guide</guid><category><![CDATA[prisma]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Backend performance]]></category><category><![CDATA[Database Optimization]]></category><category><![CDATA[n-plus-one]]></category><category><![CDATA[saas architecture]]></category><category><![CDATA[orm-performance]]></category><category><![CDATA[query-optimization]]></category><category><![CDATA[sql-optimization]]></category><category><![CDATA[Backend Engineering]]></category><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Tue, 31 Mar 2026 12:28:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/66ecf4127470ec193becfb63/8f4f8b92-4a23-45a7-ae92-282c2d8a1dae.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The silent killer hiding in your ORM</h2>
<p>It was a Tuesday morning when our on-call alert fired. P95 latency on <code>/api/dashboard</code> had crossed 4 seconds. Nothing had been deployed. No traffic spike. The database CPU was sitting at 78% on a <code>db.r6g.2xlarge</code> we'd just scaled up to "fix" the same problem two weeks earlier.</p>
<p>We opened DataDog, found the trace, and stared at it for a moment.</p>
<p><strong>1,847 database queries. For a single request.</strong></p>
<p>The offending code had been in production for four months. It passed code review. It passed QA. It looked completely normal:</p>
<pre><code class="language-typescript">const tenants = await prisma.tenant.findMany({
  where: { status: 'active' }
});

const enriched = await Promise.all(
  tenants.map(t =&gt;
    prisma.subscription.findFirst({
      where: { tenantId: t.id }
    })
  )
);
</code></pre>
<p>At 12 tenants in staging, this ran in 40ms. At 1,847 active tenants in production, Prisma fired 1,848 queries — one to fetch the tenants, then one per tenant to fetch its subscription. The ORM hid every single one behind a clean <code>await</code>. No warnings. No errors. Just a silent, compounding tax that scaled linearly with your growth.</p>
<p>This is the N+1 problem. It doesn't crash your app. It just makes it slower every time you succeed.</p>
<p>The fix took 11 minutes. The diagnosis took three hours. This post is about closing that gap.</p>
<hr />
<h2>Section 01 — Detecting it</h2>
<h3>Step 1: enable Prisma query logging</h3>
<pre><code class="language-typescript">const prisma = new PrismaClient({
  log: [
    { emit: 'event', level: 'query' },
    { emit: 'stdout', level: 'error' },
  ],
});

prisma.$on('query', (e) =&gt; {
  console.log(`[QUERY] ${e.query} | ${e.duration}ms`);
});
</code></pre>
<p>A route that emits 200+ log lines for a single request is your N+1. But logging alone doesn't tell you why it's slow at the database level.</p>
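<p>You can also spot the pattern mechanically from the event log: normalize and count the statements emitted during one request. A minimal sketch (the helper name and threshold are mine, not part of Prisma):</p>

```typescript
// Count identical statements in one request's query log; a statement that
// repeats dozens of times is the N+1 signature.
function detectNPlusOne(queries: string[], threshold = 10): string[] {
  const counts = new Map<string, number>();
  for (const q of queries) {
    counts.set(q, (counts.get(q) ?? 0) + 1);
  }
  return Array.from(counts.entries())
    .filter(([, n]) => n >= threshold)
    .map(([q]) => q);
}
```

<p>Feed it the <code>e.query</code> strings collected in the <code>$on('query')</code> handler and flag any route that returns a non-empty result.</p>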
<h3>Step 2: read the query plan</h3>
<pre><code class="language-typescript">// Raw output from EXPLAIN ANALYZE on the child query:
// Seq Scan on "Subscription"
//   actual time=4.831..4.831 rows=1 loops=1847
//   Filter: tenantId = $1
//   Rows Removed by Filter: 49999
// Execution Time: 8,921.4 ms

// What this means:
// loops=1847   → this query plan ran 1,847 times
// Seq Scan     → full table read every loop (50k rows × 1,847 = 92M row reads)
// No index     → tenantId column is unindexed, every loop scans the whole table
</code></pre>
<h3>Step 3: find repeat queries in production</h3>
<pre><code class="language-typescript">// Requires the pg_stat_statements extension
// (add to shared_preload_libraries, then CREATE EXTENSION pg_stat_statements).
const hotQueries = await prisma.$queryRaw&lt;HotQuery[]&gt;`
  SELECT query, calls, mean_exec_time, total_exec_time
  FROM pg_stat_statements
  ORDER BY total_exec_time DESC
  LIMIT 10
`;
</code></pre>
<hr />
<h2>Section 02 — Fix #1: <code>include</code> and <code>select</code></h2>
<pre><code class="language-typescript">// Before: 1 + N queries (1,848 total at 1,847 tenants)
const tenants = await prisma.tenant.findMany({
  where: { status: 'active' }
});
const enriched = await Promise.all(
  tenants.map(t =&gt; prisma.subscription.findFirst({
    where: { tenantId: t.id }
  }))
);

// After: 2 queries flat — regardless of tenant count
const tenants = await prisma.tenant.findMany({
  where: { status: 'active' },
  include: {
    subscription: {
      select: { id: true, plan: true, status: true }
    }
  }
});

// Results:
// Queries  → 1,848  down to 2
// Latency  → 8.9s   down to 38ms
// DB CPU   → 78%    down to 4%
</code></pre>
<table>
<thead>
<tr>
<th>Scenario</th>
<th>Before (N+1)</th>
<th>After (fixed)</th>
</tr>
</thead>
<tbody><tr>
<td>Basic N+1</td>
<td>1,848 queries / 8.9s</td>
<td>2 queries / 38ms</td>
</tr>
<tr>
<td>Under load (100 RPS)</td>
<td>184,800 queries/s</td>
<td>200 queries/s</td>
</tr>
<tr>
<td>DB CPU (db.r6g.2xlarge)</td>
<td>78%</td>
<td>4%</td>
</tr>
</tbody></table>
<blockquote>
<p><strong>Gotcha:</strong> Deep <code>include</code> chains (3+ levels) can produce enormous JOINs that are slower than the N+1 they replace. Benchmark anything beyond 2 levels.</p>
</blockquote>
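<p>One escape hatch for a slow deep chain: fetch each level flat and stitch the rows together in memory, trading one giant JOIN for two cheap queries. A sketch (the shapes and function name are illustrative):</p>

```typescript
// Fetch tenants and subscriptions as two flat queries, then join in memory.
type Tenant = { id: string };
type Subscription = { tenantId: string; plan: string };

function attachSubscriptions(tenants: Tenant[], subs: Subscription[]) {
  // Group subscriptions by tenantId once: O(N + M) instead of O(N * M).
  const byTenant = new Map<string, Subscription[]>();
  for (const s of subs) {
    const list = byTenant.get(s.tenantId) ?? [];
    list.push(s);
    byTenant.set(s.tenantId, list);
  }
  return tenants.map(t => ({ ...t, subscriptions: byTenant.get(t.id) ?? [] }));
}
```

<p>Still two queries flat, but each one stays simple enough for the planner to use indexes.</p>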
<hr />
<h2>Section 03 — Fix #2: the dataloader pattern</h2>
<pre><code class="language-typescript">// lib/dataloader.ts — no external library needed
type BatchFn&lt;T&gt; = (ids: string[]) =&gt; Promise&lt;(T | null)[]&gt;;

function createLoader&lt;T&gt;(batchFn: BatchFn&lt;T&gt;) {
  const queue: { id: string; resolve: (v: T | null) =&gt; void }[] = [];
  let scheduled = false;

  return async function load(id: string): Promise&lt;T | null&gt; {
    return new Promise((resolve) =&gt; {
      queue.push({ id, resolve });
      if (!scheduled) {
        scheduled = true;
        process.nextTick(async () =&gt; {
          // Deduplicate before batching, and key results by the deduped ids
          // so duplicate loads in the same tick resolve correctly.
          const uniqueIds = [...new Set(queue.map(q =&gt; q.id))];
          const results = await batchFn(uniqueIds);
          const map = new Map(uniqueIds.map((id, i) =&gt; [id, results[i]]));
          queue.forEach(q =&gt; q.resolve(map.get(q.id) ?? null));
          queue.length = 0;
          scheduled = false;
        });
      }
    });
  };
}

const userLoader = createLoader&lt;User&gt;(async (ids) =&gt; {
  const users = await prisma.user.findMany({
    where: { id: { in: ids } },
    select: { id: true, name: true, email: true }
  });
  return ids.map(id =&gt; users.find(u =&gt; u.id === id) ?? null);
});

// createLoader returns the load function itself:
const user = await userLoader(post.authorId);
</code></pre>
<hr />
<h2>Section 04 — Fix #3: <code>$queryRaw</code> for complex cases</h2>
<pre><code class="language-typescript">const result = await prisma.$queryRaw&lt;Post[]&gt;`
  SELECT p.*
  FROM "User" u
  CROSS JOIN LATERAL (
    SELECT *
    FROM "Post"
    WHERE "authorId" = u.id
    ORDER BY "createdAt" DESC
    LIMIT 5
  ) p
  WHERE u."tenantId" = ${tenantId}
`;

// Benchmark: one query per author (the looped approach) vs one lateral join:
// 10 authors  →  11 queries / 42ms   vs  1 query / 6ms
// 100 authors → 101 queries / 390ms  vs  1 query / 11ms
// 1k authors  → 1,001 queries / 4.1s vs  1 query / 68ms
</code></pre>
<hr />
<h2>Section 05 — The part everyone skips: indexes</h2>
<pre><code class="language-typescript">// schema.prisma

model Subscription {
  id       String @id @default(cuid())
  tenantId String
  plan     String
  status   String

  @@index([tenantId])
  @@index([tenantId, status])
}

model Post {
  id       String @id @default(cuid())
  authorId String
  tenantId String

  @@index([authorId])
  @@index([tenantId, authorId])
}

// Rule: every FK column in a Prisma include, where, or orderBy needs an index.
// Confirm with EXPLAIN ANALYZE: "Index Scan" not "Seq Scan"
</code></pre>
<hr />
<h2>Section 06 — Preventing regression with CI query budgets</h2>
<pre><code class="language-typescript">// lib/queryCounter.ts
export function createQueryCounter(prisma: PrismaClient) {
  let count = 0;
  prisma.$on('query', () =&gt; { count++; });
  return {
    reset: () =&gt; { count = 0; },
    get: () =&gt; count,
    assertMax: (max: number, label?: string) =&gt; {
      if (count &gt; max) {
        throw new Error(
          `Query budget exceeded on ${label ?? 'unknown'}: expected ≤${max}, got ${count}`
        );
      }
    }
  };
}
</code></pre>
<pre><code class="language-typescript">// __tests__/dashboard.test.ts
describe('GET /api/dashboard', () =&gt; {
  it('stays within query budget', async () =&gt; {
    const counter = createQueryCounter(prisma);
    await request(app)
      .get('/api/dashboard')
      .set('Authorization', `Bearer ${token}`);
    counter.assertMax(3, 'GET /api/dashboard');
  });
});
</code></pre>
<pre><code class="language-typescript">// Express middleware — emit query count to APM
// Caveat: this registers a fresh $on listener on every request, and the
// counter sees queries from all concurrent requests. Scope it per request
// (e.g. via AsyncLocalStorage) before trusting per-route numbers under load.
app.use(async (req, res, next) =&gt; {
  const counter = createQueryCounter(prisma);
  res.on('finish', () =&gt; {
    datadog.gauge('prisma.query_count', counter.get(), {
      route: req.route?.path ?? req.path,
      status: res.statusCode,
    });
  });
  next();
});
</code></pre>
<hr />
<h2>Results</h2>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Before</th>
<th>After</th>
</tr>
</thead>
<tbody><tr>
<td>P95 latency <code>/dashboard</code></td>
<td>4.2s</td>
<td>210ms</td>
</tr>
<tr>
<td>DB queries per request</td>
<td>1,847</td>
<td>3</td>
</tr>
<tr>
<td>DB CPU (db.r6g.2xlarge)</td>
<td>78%</td>
<td>4%</td>
</tr>
<tr>
<td>Monthly RDS bill</td>
<td>$1,840</td>
<td>$420</td>
</tr>
</tbody></table>
<p>No infra changes. No cache layer. No schema redesign.</p>
]]></content:encoded></item><item><title><![CDATA[SQS vs BullMQ vs Cron: Which One Should You Use for Background Jobs?]]></title><description><![CDATA[When building backend systems, especially SaaS applications, handling background jobs correctly is critical. Whether it's sending emails, processing payments, or scheduling reminders, choosing the rig]]></description><link>https://blog.pratikpatel.pro/sqs-vs-bullmq-vs-cron</link><guid isPermaLink="true">https://blog.pratikpatel.pro/sqs-vs-bullmq-vs-cron</guid><category><![CDATA[SQS]]></category><category><![CDATA[cronjob]]></category><category><![CDATA[bullmq]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[tech ]]></category><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Mon, 30 Mar 2026 10:10:57 GMT</pubDate><content:encoded><![CDATA[<p>When building backend systems, especially SaaS applications, handling background jobs correctly is critical. Whether it's sending emails, processing payments, or scheduling reminders, choosing the right approach can significantly impact reliability and scalability.</p>
<p>The three most common options are:</p>
<ul>
<li><p>Cron jobs</p>
</li>
<li><p>BullMQ (Redis-based queue)</p>
</li>
<li><p>AWS SQS (cloud-based queue)</p>
</li>
</ul>
<p>Each serves a different purpose. Choosing the wrong one can lead to missed jobs, performance issues, or system failures.</p>
<hr />
<h2>Understanding the Core Difference</h2>
<ul>
<li><p>Cron is a time-based scheduler</p>
</li>
<li><p>BullMQ is an in-app job queue powered by Redis</p>
</li>
<li><p>SQS is a distributed cloud queue service</p>
</li>
</ul>
<hr />
<h2>Cron Jobs</h2>
<p>Cron is the simplest way to run tasks at fixed intervals.</p>
<h3>Example</h3>
<pre><code class="language-js">import cron from "node-cron";

cron.schedule("0 9 * * *", () =&gt; {
  console.log("Run every day at 9 AM");
});
</code></pre>
<h3>Advantages</h3>
<ul>
<li><p>Easy to set up</p>
</li>
<li><p>No external dependencies</p>
</li>
<li><p>Works well for simple recurring tasks</p>
</li>
</ul>
<h3>Limitations</h3>
<ul>
<li><p>Runs on a single server</p>
</li>
<li><p>No retry mechanism</p>
</li>
<li><p>Jobs are lost if the server crashes</p>
</li>
<li><p>Not suitable for scaling</p>
</li>
</ul>
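<p>Because cron gives you nothing on failure, even basic retries have to be hand-rolled. A minimal sketch of what you end up writing around every cron task (names and delays are illustrative):</p>

```typescript
// A naive retry wrapper for a cron task: re-attempts with a fixed delay.
// Queue systems like BullMQ and SQS give you this behavior for free.
async function withRetry<T>(
  task: () => Promise<T>,
  attempts = 3,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await task();
    } catch (err) {
      lastError = err;
      await new Promise(r => setTimeout(r, delayMs));
    }
  }
  throw lastError;
}
```

<p>And even this survives nothing: if the process dies mid-task, the job is simply gone.</p>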
<h3>Best Use Cases</h3>
<ul>
<li><p>Daily reports</p>
</li>
<li><p>Cleanup scripts</p>
</li>
<li><p>Small applications with low traffic</p>
</li>
</ul>
<hr />
<h2>BullMQ (Redis-based Queue)</h2>
<p>BullMQ is a powerful queue system built on Redis, widely used in Node.js applications.</p>
<h3>Example</h3>
<pre><code class="language-ts">import { Queue, Worker } from "bullmq";

const queue = new Queue("emails");

await queue.add("send-email", {
  to: "user@test.com",
});

const worker = new Worker("emails", async job =&gt; {
  console.log("Processing:", job.data);
});
</code></pre>
<h3>Advantages</h3>
<ul>
<li><p>Built-in retries</p>
</li>
<li><p>Supports delayed jobs</p>
</li>
<li><p>Good performance</p>
</li>
<li><p>Easy integration with Node.js</p>
</li>
</ul>
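<p>Retries, backoff, and cleanup are declared per job rather than coded by hand. A sketch of the options object, which is passed as the third argument to <code>queue.add()</code> (values are illustrative):</p>

```typescript
// BullMQ job options: declarative retry behavior instead of a retry loop.
const emailJobOptions = {
  attempts: 3,                                   // retry failed jobs up to 3 times
  backoff: { type: "exponential", delay: 1000 }, // 1s, 2s, 4s between attempts
  removeOnComplete: true,                        // keep Redis from filling up
};

// Usage (needs a running Redis):
// await queue.add("send-email", { to: "user@test.com" }, emailJobOptions);
```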
<h3>Limitations</h3>
<ul>
<li><p>Requires Redis</p>
</li>
<li><p>Limited horizontal scalability</p>
</li>
<li><p>Can become unstable at very high scale</p>
</li>
<li><p>Tightly coupled with your backend</p>
</li>
</ul>
<h3>Best Use Cases</h3>
<ul>
<li><p>Email processing</p>
</li>
<li><p>Notifications</p>
</li>
<li><p>Background jobs in SaaS applications</p>
</li>
<li><p>Medium-scale systems</p>
</li>
</ul>
<hr />
<h2>AWS SQS (Cloud Queue)</h2>
<p>Amazon SQS is a fully managed, distributed message queue service designed for high scalability.</p>
<h3>Example</h3>
<pre><code class="language-ts">import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const client = new SQSClient({ region: "ap-south-1" });

await client.send(new SendMessageCommand({
  QueueUrl: process.env.SQS_URL,
  MessageBody: JSON.stringify({ task: "send-email" }),
}));
</code></pre>
<h3>Advantages</h3>
<ul>
<li><p>Highly scalable</p>
</li>
<li><p>Fully managed (no infrastructure to maintain)</p>
</li>
<li><p>Reliable and fault-tolerant</p>
</li>
<li><p>Works well with microservices and serverless systems</p>
</li>
</ul>
<h3>Limitations</h3>
<ul>
<li><p>More setup required</p>
</li>
<li><p>Debugging can be harder</p>
</li>
<li><p>No built-in scheduling (requires EventBridge)</p>
</li>
<li><p>Requires AWS ecosystem</p>
</li>
</ul>
<h3>Best Use Cases</h3>
<ul>
<li><p>High-scale applications</p>
</li>
<li><p>Microservices architecture</p>
</li>
<li><p>Serverless systems</p>
</li>
<li><p>Critical job processing</p>
</li>
</ul>
<hr />
<h2>Comparison Table</h2>
<table>
<thead>
<tr>
<th>Feature</th>
<th>Cron</th>
<th>BullMQ</th>
<th>SQS</th>
</tr>
</thead>
<tbody><tr>
<td>Setup</td>
<td>Easy</td>
<td>Medium</td>
<td>Medium</td>
</tr>
<tr>
<td>Scalability</td>
<td>Low</td>
<td>Moderate</td>
<td>High</td>
</tr>
<tr>
<td>Reliability</td>
<td>Low</td>
<td>Good</td>
<td>Very High</td>
</tr>
<tr>
<td>Retry Support</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Scheduling</td>
<td>Yes</td>
<td>Yes</td>
<td>No (needs EventBridge)</td>
</tr>
<tr>
<td>Infrastructure</td>
<td>None</td>
<td>Redis</td>
<td>AWS</td>
</tr>
</tbody></table>
<hr />
<h2>What Should You Choose?</h2>
<h3>Use Cron if:</h3>
<ul>
<li><p>You have simple scheduled tasks</p>
</li>
<li><p>The application is small</p>
</li>
<li><p>Reliability is not critical</p>
</li>
</ul>
<h3>Use BullMQ if:</h3>
<ul>
<li><p>You are building a SaaS application</p>
</li>
<li><p>You need queues and scheduling</p>
</li>
<li><p>You want a simple in-app solution</p>
</li>
<li><p>Your scale is moderate</p>
</li>
</ul>
<h3>Use SQS if:</h3>
<ul>
<li><p>You need high scalability</p>
</li>
<li><p>You are building production-grade systems</p>
</li>
<li><p>You are using AWS or serverless architecture</p>
</li>
<li><p>Jobs are critical and must not fail</p>
</li>
</ul>
<hr />
<h2>Common Mistakes to Avoid</h2>
<ul>
<li><p>Using Cron for critical jobs</p>
</li>
<li><p>Not implementing retries</p>
</li>
<li><p>Over-engineering with SQS too early</p>
</li>
<li><p>Ignoring monitoring and failure handling</p>
</li>
</ul>
<hr />
<h2>Final Thoughts</h2>
<p>There is no single best solution for all cases. The right choice depends on your application's scale, complexity, and reliability requirements.</p>
<p>For most SaaS developers: Start simple, validate your product, and then scale your architecture as needed.</p>
]]></content:encoded></item><item><title><![CDATA[Implementing Zero-Downtime CI/CD for Node.js Apps on AWS EC2]]></title><description><![CDATA[Deploying code sounds easy — until you’re doing it multiple times a day while keeping your production server stable.
Our Node.js backend powers critical features, so even a broken build or 2–3 minutes of downtime wasn’t acceptable. As our team grew, ...]]></description><link>https://blog.pratikpatel.pro/implementing-zero-downtime-cicd-for-nodejs-apps-on-aws-ec2</link><guid isPermaLink="true">https://blog.pratikpatel.pro/implementing-zero-downtime-cicd-for-nodejs-apps-on-aws-ec2</guid><category><![CDATA[ec2]]></category><category><![CDATA[cicd]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[#ZeroDowntimeDeployment]]></category><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Thu, 20 Nov 2025 06:01:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763618102670/b2b035cf-ddfd-4539-9431-09ac16c681d2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Deploying code sounds easy — until you’re doing it multiple times a day while keeping your production server stable.</p>
<p>Our Node.js backend powers critical features, so even a broken build or 2–3 minutes of downtime wasn’t acceptable. As our team grew, deployment became risky:</p>
<ul>
<li><p>Manual SSH deployments were slow</p>
</li>
<li><p>Missed steps caused crashes</p>
</li>
<li><p>Different machines had different Node versions</p>
</li>
<li><p>Rollbacks were painful</p>
</li>
<li><p>Hotfixes slowed down because the pipeline wasn’t automated</p>
</li>
</ul>
<p>We needed a <strong>bulletproof CI/CD pipeline</strong>.</p>
<p>Below is the journey of how we moved from a patchy deployment process to a <strong>fully automated CI/CD setup</strong> using <strong>GitHub Actions + AWS EC2 + PM2</strong>.</p>
<hr />
<h1 id="heading-the-problem">The Problem</h1>
<h2 id="heading-1-manual-deployments-became-a-bottleneck">1 — Manual Deployments Became a Bottleneck</h2>
<p>Every update required logging into EC2, pulling code, installing dependencies, restarting PM2, and praying we didn’t break something.</p>
<p>During busy days, this became tedious — and errors slipped in.</p>
<h2 id="heading-2-no-consistency-across-deployments">2 — No Consistency Across Deployments</h2>
<p>One laptop used Node 16, another had Node 18.<br />Some deployments skipped tests.<br />Some forgot <code>npm install</code>.<br />Some overwrote <code>.env</code> accidentally.</p>
<p>We were relying on luck, not process.</p>
<h2 id="heading-3-zero-rollback-support">3 — Zero Rollback Support</h2>
<p>If something broke:</p>
<ul>
<li><p>We scrambled to revert to the previous commit</p>
</li>
<li><p>Re-deployed manually</p>
</li>
<li><p>Restarted PM2 again</p>
</li>
</ul>
<p>Not fast, not reliable.</p>
<p>We needed automation.</p>
<hr />
<h1 id="heading-step-1-setting-up-github-actions-for-ci">Step 1 — Setting Up GitHub Actions for CI</h1>
<p>We began by automating the basics:</p>
<ul>
<li><p>Install Node</p>
</li>
<li><p>Install dependencies</p>
</li>
<li><p>Run tests</p>
</li>
<li><p>Prepare build artifacts</p>
</li>
</ul>
<p>GitHub Actions became our “CI engine”.</p>
<p><strong>Why GitHub Actions?</strong></p>
<ul>
<li><p>Close to the repository</p>
</li>
<li><p>Easy secrets management</p>
</li>
<li><p>Parallel jobs</p>
</li>
<li><p>Free for most usage</p>
</li>
<li><p>Very simple YAML-based workflows</p>
</li>
</ul>
<p>This ensured every push was validated before deployment.</p>
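<p>A minimal version of that CI workflow could look like this (branch name, Node version, and scripts are illustrative, not our exact pipeline):</p>

```yaml
# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm test
```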
<hr />
<h1 id="heading-step-2-automating-deployment-to-aws-ec2">Step 2 — Automating Deployment to AWS EC2</h1>
<p>Next, we automated deployment.</p>
<p>Instead of SSH-ing manually to EC2, GitHub Actions now:</p>
<ol>
<li><p>Builds the backend</p>
</li>
<li><p>Uploads the updated code to EC2</p>
</li>
<li><p>Restarts PM2 automatically</p>
</li>
<li><p>Verifies that the server comes back online</p>
</li>
</ol>
<p>We securely stored:</p>
<ul>
<li><p>EC2 host</p>
</li>
<li><p>SSH user</p>
</li>
<li><p>Private key</p>
</li>
<li><p>Environment variables</p>
</li>
</ul>
<p>in <strong>GitHub Secrets</strong>, eliminating credential sharing.</p>
<p>This alone removed 90% of deployment friction.</p>
<hr />
<h1 id="heading-step-3-introducing-pm2-for-zero-downtime-restarts">Step 3 — Introducing PM2 for Zero-Downtime Restarts</h1>
<p>PM2 became our process manager.</p>
<p>It allowed:</p>
<ul>
<li><p>App restarts without downtime</p>
</li>
<li><p>Crash auto-recovery</p>
</li>
<li><p>CPU utilization monitoring</p>
</li>
<li><p>Log management</p>
</li>
<li><p>Easy rollbacks using saved snapshots</p>
</li>
</ul>
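<p>Most of those capabilities are configured once in PM2's ecosystem file; cluster mode in particular is what makes downtime-free restarts possible. A minimal sketch (entry point and memory limit are illustrative):</p>

```typescript
// ecosystem.config.js — PM2 reads this as a CommonJS module.
const config = {
  apps: [
    {
      name: "node-app",           // matches the pm2 restart/reload target
      script: "dist/server.js",   // illustrative entry point
      exec_mode: "cluster",       // multiple workers behind PM2's balancer
      instances: "max",           // one worker per CPU core
      max_memory_restart: "512M", // auto-restart a leaking worker
    },
  ],
};

module.exports = config;
```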
<p>Whenever a deployment happened:</p>
<pre><code class="lang-plaintext">pm2 reload node-app
</code></pre>
<p>With the app in cluster mode, <code>pm2 reload</code> cycles workers one at a time: in-flight requests finish on the old workers while fresh ones take over. (A plain <code>pm2 restart</code> kills the process first and does drop connections.)</p>
<p>Downtime: <strong>0 seconds</strong>.</p>
<hr />
<h1 id="heading-step-4-handling-secure-environment-management">Step 4 — Handling Secure Environment Management</h1>
<p>We moved all sensitive configuration from code into:</p>
<ul>
<li><p><code>.env</code> file on EC2</p>
</li>
<li><p>GitHub Secrets for CI</p>
</li>
<li><p>PM2 ecosystem config for runtime variables</p>
</li>
</ul>
<p>This ensured:</p>
<ul>
<li><p>No secrets in Git</p>
</li>
<li><p>Easy environment change without redeploy</p>
</li>
<li><p>Safer rollbacks</p>
</li>
</ul>
<hr />
<h1 id="heading-step-5-improving-deployment-reliability">Step 5 — Improving Deployment Reliability</h1>
<p>We refined the pipeline further:</p>
<h3 id="heading-added-health-checks-after-deploy">Added health checks after deploy</h3>
<p>To ensure the API booted correctly.</p>
<h3 id="heading-added-rollback-capability">Added rollback capability</h3>
<p>By keeping previous build folders.</p>
<h3 id="heading-cleaned-old-build-artifacts">Cleaned old build artifacts</h3>
<p>To reduce EC2 disk usage over time.</p>
<h3 id="heading-added-monitoring">Added monitoring</h3>
<p>Using PM2 logs + CloudWatch.</p>
<p>Small improvements → Big stability gains.</p>
<hr />
<h1 id="heading-final-architecture-diagram">Final Architecture Diagram</h1>
<pre><code class="lang-plaintext">                  ┌─────────────────────────┐
                 │     GitHub Actions      │
                 │ (CI: Build &amp; Test Code) │
                 └───────────┬─────────────┘
                             │
                             ▼
     ┌────────────────────────────────────────────────┐
     │        GitHub Actions Deployment Job           │
     │  1️⃣ SCP/SSH to EC2                             │
     │  2️⃣ Upload Build Files                         │
     │  3️⃣ Restart PM2                                │
     └───────────┬───────────────────────────┬────────┘
                 │                           │
     ┌───────────▼────────────┐   ┌──────────▼─────────────┐
     │      AWS EC2 Ubuntu    │   │   GitHub Secrets        │
     │  Node.js + PM2 Runtime │   │  (SSH, Host, Env Keys)  │
     └───────────┬────────────┘   └─────────────────────────┘
                 │
     ┌───────────▼────────────┐
     │   Node.js Application  │
     │ (Zero Downtime via PM2)│
     └────────────────────────┘
</code></pre>
<hr />
<h1 id="heading-conclusion">Conclusion</h1>
<p>Migrating to a CI/CD pipeline wasn’t just a tooling upgrade — it was an operational transformation.</p>
<p>With <strong>GitHub Actions + EC2 + PM2</strong>, we achieved:</p>
<ul>
<li><p>Fully automated deployments</p>
</li>
<li><p>Zero downtime during restarts</p>
</li>
<li><p>Safe and secure environment handling</p>
</li>
<li><p>Consistent, reliable build process</p>
</li>
<li><p>Faster development cycles</p>
</li>
<li><p>No more late-night "quick fixes" breaking production</p>
</li>
</ul>
<p>Today, pushing to <code>main</code> is all it takes for a clean, fast, safe deployment — every single time.</p>
]]></content:encoded></item><item><title><![CDATA[JavaScript: Pass by Value vs Pass by Reference]]></title><description><![CDATA[Introduction
When working with functions in JavaScript, you’ll often hear developers debating whether JavaScript is “pass by value” or “pass by reference.” The truth is slightly nuanced. Let’s break it down with examples so you’ll never get confused ...]]></description><link>https://blog.pratikpatel.pro/javascript-pass-by-value-vs-pass-by-reference</link><guid isPermaLink="true">https://blog.pratikpatel.pro/javascript-pass-by-value-vs-pass-by-reference</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Objects]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[js]]></category><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Sat, 06 Sep 2025 14:36:30 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>When working with functions in JavaScript, you’ll often hear developers debating whether JavaScript is “pass by value” or “pass by reference.” The truth is slightly nuanced. Let’s break it down with examples so you’ll never get confused again.</p>
<hr />
<h2 id="heading-pass-by-value">Pass by Value</h2>
<p>When you pass a <strong>primitive type</strong> (like number, string, boolean, null, undefined, symbol, bigint) into a function, JavaScript passes it <strong>by value</strong>. This means the function gets a copy, and changes inside the function don’t affect the original variable.</p>
<h3 id="heading-example">Example:</h3>
<pre><code><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">updateValue</span>(<span class="hljs-params">x</span>) </span>{
  x = x + <span class="hljs-number">10</span>;
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Inside function:"</span>, x);
}

<span class="hljs-keyword">let</span> num = <span class="hljs-number">5</span>;
updateValue(num);

<span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Outside function:"</span>, num); <span class="hljs-comment">// Still 5</span>
</code></pre><p>Here, the variable <code>num</code> is unaffected because only a copy was modified.</p>
<hr />
<h2 id="heading-pass-by-reference-actually-reference-copy">Pass by Reference (Actually Reference Copy)</h2>
<p>For <strong>objects and arrays</strong>, JavaScript passes a reference — but not the actual object itself. Instead, it passes a <strong>copy of the reference</strong>. That’s why changes to the object inside a function reflect outside.</p>
<h3 id="heading-example-1">Example:</h3>
<pre><code><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">updateSkills</span>(<span class="hljs-params">obj</span>) </span>{
  obj.skills.push(<span class="hljs-string">"Node.js"</span>);
}

<span class="hljs-keyword">let</span> user = { <span class="hljs-attr">name</span>: <span class="hljs-string">"Pratik"</span>, <span class="hljs-attr">skills</span>: [<span class="hljs-string">"JavaScript"</span>] };
updateSkills(user);

<span class="hljs-built_in">console</span>.log(user.skills); <span class="hljs-comment">// ["JavaScript", "Node.js"]</span>
</code></pre><p>Here, <code>user</code> was updated because the function received a reference to the same memory location.</p>
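<p>The subtlety worth testing yourself: the function receives a <em>copy</em> of the reference, so reassigning the parameter never rebinds the caller's variable. A quick check:</p>

```typescript
function replaceUser(obj: { name: string }) {
  // This rebinds only the local copy of the reference; the caller's
  // variable still points at the original object.
  obj = { name: "Someone Else" };
}

const user = { name: "Pratik" };
replaceUser(user);
console.log(user.name); // "Pratik": the caller's binding is untouched
```

<p>Mutating through the reference is visible outside; replacing the reference is not. That is exactly what "pass by reference copy" means.</p>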
<hr />
<h2 id="heading-key-takeaway">Key Takeaway</h2>
<ul>
<li><p>Primitives → passed by <strong>value</strong></p>
</li>
<li><p>Objects/arrays → passed by <strong>reference copy</strong> (so modifications affect the original)</p>
</li>
</ul>
<hr />
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>JavaScript is <strong>always pass by value</strong> — but when dealing with objects, the value being passed is actually a <strong>reference to the object</strong>. Understanding this subtlety helps avoid confusion in interviews and in real-world debugging.</p>
]]></content:encoded></item><item><title><![CDATA[Mastering Deep Copy vs Shallow Copy in JavaScript: A Must-Know for Developers]]></title><description><![CDATA[When we work with objects and arrays in JavaScript, we often need to copy data. But copying in JavaScript is not always as simple as it looks. Depending on the method you use, you may end up with shared references which can cause unexpected changes i...]]></description><link>https://blog.pratikpatel.pro/mastering-deep-copy-vs-shallow-copy</link><guid isPermaLink="true">https://blog.pratikpatel.pro/mastering-deep-copy-vs-shallow-copy</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[DeepCopy]]></category><category><![CDATA[Objects]]></category><category><![CDATA[TypeScript]]></category><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Fri, 05 Sep 2025 05:30:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757049277554/44221ab5-bb79-4f16-bb15-18908538669a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When we work with objects and arrays in JavaScript, we often need to copy data. But copying in JavaScript is not always as simple as it looks. Depending on the method you use, you may end up with <strong>shared references</strong> which can cause unexpected changes in your data. To handle this properly, it’s important to understand the concepts of <strong>shallow copy</strong> and <strong>deep copy</strong>.</p>
<hr />
<h2 id="heading-shallow-copy">Shallow Copy</h2>
<p>A <strong>shallow copy</strong> creates a new object but only copies the top-level properties.</p>
<ul>
<li><p>If the property is a primitive value like string, number, boolean, null, undefined, symbol, or bigint, it gets copied directly.</p>
</li>
<li><p>If the property is an object, array, or function (reference type), then only the reference is copied. This means both the original and the copied object will point to the same nested structure.</p>
</li>
</ul>
<h3 id="heading-example">Example:</h3>
<pre><code><span class="hljs-keyword">const</span> original = {
  <span class="hljs-attr">name</span>: <span class="hljs-string">"Pratik"</span>,
  <span class="hljs-attr">skills</span>: [<span class="hljs-string">"JavaScript"</span>, <span class="hljs-string">"React"</span>]
};

<span class="hljs-keyword">const</span> shallowCopy = { ...original };

shallowCopy.name = <span class="hljs-string">"Alex"</span>;
shallowCopy.skills.push(<span class="hljs-string">"Node.js"</span>);

<span class="hljs-built_in">console</span>.log(original.name);   <span class="hljs-comment">// "Pratik" → unchanged</span>
<span class="hljs-built_in">console</span>.log(original.skills); <span class="hljs-comment">// ["JavaScript", "React", "Node.js"] → changed</span>
</code></pre><p>Here, the primitive property <code>name</code> is copied safely, but the <code>skills</code> array is shared. Updating <code>shallowCopy.skills</code> also updates <code>original.skills</code>.</p>
<h3 id="heading-common-ways-to-create-shallow-copies">Common Ways to Create Shallow Copies:</h3>
<ul>
<li><p>Object spread: <code>{ ...obj }</code></p>
</li>
<li><p><code>Object.assign({}, obj)</code></p>
</li>
<li><p>Array methods like:</p>
<ul>
<li><p><code>arr.slice()</code></p>
</li>
<li><p><code>[...arr]</code></p>
</li>
</ul>
</li>
</ul>
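<p>A quick sketch of what “shallow” means in practice for the methods above (plain Node.js, no assumptions beyond the article’s own examples):</p>

```javascript
// Object.assign and slice both copy only the top level.
const user = { name: "Pratik", skills: ["JavaScript", "React"] };

const copyA = Object.assign({}, user); // same effect as { ...user }
copyA.skills.push("Node.js");          // mutates the shared nested array

console.log(user.skills.length); // 3 → the original was affected

const arr = [{ done: false }];
const copyB = arr.slice(); // new array, but same element references
copyB[0].done = true;

console.log(arr[0].done); // true → the nested object is shared
```

<p>Both copies are new containers, but every nested structure inside them is still shared with the original.</p>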
<hr />
<h2 id="heading-deep-copy">Deep Copy</h2>
<p>A <strong>deep copy</strong> goes one step further. It copies not only the top-level properties but also all nested objects and arrays. This makes the copied object completely independent of the original.</p>
<h3 id="heading-example-1">Example:</h3>
<pre><code><span class="hljs-keyword">const</span> original = {
  <span class="hljs-attr">name</span>: <span class="hljs-string">"Pratik"</span>,
  <span class="hljs-attr">skills</span>: [<span class="hljs-string">"JavaScript"</span>, <span class="hljs-string">"React"</span>]
};

<span class="hljs-comment">// Deep copy using structuredClone</span>
<span class="hljs-keyword">const</span> deepCopy = structuredClone(original);

deepCopy.name = <span class="hljs-string">"Alex"</span>;
deepCopy.skills.push(<span class="hljs-string">"Node.js"</span>);

<span class="hljs-built_in">console</span>.log(original.name);   <span class="hljs-comment">// "Pratik" → safe</span>
<span class="hljs-built_in">console</span>.log(original.skills); <span class="hljs-comment">// ["JavaScript", "React"] → safe</span>
</code></pre><p>Here, the <code>deepCopy</code> object is fully independent. Any change in <code>deepCopy</code> does not affect <code>original</code>.</p>
<h3 id="heading-common-ways-to-create-deep-copies">Common Ways to Create Deep Copies:</h3>
<ol>
<li><p><strong>structuredClone()</strong> (works in modern browsers and Node.js 17+)</p>
<pre><code> <span class="hljs-keyword">const</span> clone = structuredClone(obj);
</code></pre></li>
<li><p><strong>JSON method</strong> (works but has limitations)</p>
<pre><code> <span class="hljs-keyword">const</span> clone = <span class="hljs-built_in">JSON</span>.parse(<span class="hljs-built_in">JSON</span>.stringify(obj));
</code></pre><p> Note: This method removes functions, symbols, undefined values, and fails with circular references.</p>
</li>
<li><p><strong>Libraries</strong> like Lodash (<code>_.cloneDeep(obj)</code>) or Immer.</p>
</li>
</ol>
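<p>To see the JSON method’s limitations concretely, here is a small sketch comparing a clone with its source:</p>

```javascript
// JSON.parse(JSON.stringify(...)) silently changes several things.
const source = {
  createdAt: new Date("2025-01-01"),
  greet() { return "hi"; },
  note: undefined,
  tags: ["js"]
};

const clone = JSON.parse(JSON.stringify(source));

console.log(typeof clone.createdAt); // "string" → Date became an ISO string
console.log("greet" in clone);       // false → functions are dropped
console.log("note" in clone);        // false → undefined values are dropped
console.log(clone.tags);             // ["js"] → plain data survives
```

<p>For anything beyond plain data, prefer <code>structuredClone()</code> (which preserves Dates, Maps, and circular references, though it still cannot clone functions) or a library like Lodash.</p>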
<hr />
<h2 id="heading-when-should-you-use-each">When Should You Use Each?</h2>
<ul>
<li><p><strong>Shallow Copy</strong> is enough when:</p>
<ul>
<li><p>You only need to copy simple, top-level data.</p>
</li>
<li><p>You are sure that nested objects or arrays will not be modified.</p>
</li>
</ul>
</li>
<li><p><strong>Deep Copy</strong> is necessary when:</p>
<ul>
<li><p>You want complete independence between the copy and the original.</p>
</li>
<li><p>You are working with frameworks like React/Redux, where immutability is important.</p>
</li>
<li><p>You are dealing with nested or complex data structures.</p>
</li>
</ul>
</li>
</ul>
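<p>For intuition, here is a minimal recursive deep clone. This is a learning sketch only — it ignores circular references, <code>Date</code>, <code>Map</code>, and other special objects, which is exactly why <code>structuredClone()</code> or Lodash is preferred in real code:</p>

```javascript
// Minimal recursive deep clone (educational sketch, not production).
function deepClone(value) {
  if (value === null || typeof value !== "object") {
    return value; // primitives are copied by value
  }
  if (Array.isArray(value)) {
    return value.map(item => deepClone(item));
  }
  const out = {};
  for (const key of Object.keys(value)) {
    out[key] = deepClone(value[key]);
  }
  return out;
}

const original = { name: "Pratik", skills: ["JavaScript", "React"] };
const copy = deepClone(original);
copy.skills.push("Node.js");

console.log(original.skills); // ["JavaScript", "React"] → untouched
```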
<hr />
<h2 id="heading-quick-comparison">Quick Comparison</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Feature</th><th>Shallow Copy</th><th>Deep Copy</th></tr>
</thead>
<tbody>
<tr>
<td>Copies primitives</td><td>Yes</td><td>Yes</td></tr>
<tr>
<td>Copies references</td><td>Yes (shared)</td><td>No (independent)</td></tr>
<tr>
<td>Performance</td><td>Faster</td><td>Slower</td></tr>
<tr>
<td>Use case</td><td>Simple, flat objects/arrays</td><td>Nested or complex objects</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Understanding the difference between shallow copy and deep copy is very important for every JavaScript developer. It helps avoid unexpected bugs, especially when handling state management in frameworks like React.</p>
<ul>
<li><p>Use <strong>shallow copy</strong> when you only need a quick top-level copy.</p>
</li>
<li><p>Use <strong>deep copy</strong> when you want complete isolation between the original and the copied data.</p>
</li>
</ul>
<p>This clarity will save you from a lot of hidden issues in real-world applications.</p>
]]></content:encoded></item><item><title><![CDATA[How We Enabled Internet Access Inside a Private VPC Subnet Using NAT]]></title><description><![CDATA[Delivering modern apps securely in AWS often requires placing critical resources inside private subnets — for example, EC2 app servers, Lambda functions, or RDS databases.
But here’s the challenge:These resources can’t directly access the internet.An...]]></description><link>https://blog.pratikpatel.pro/how-we-enabled-internet-access-inside-a-private-vpc-subnet-using-nat</link><guid isPermaLink="true">https://blog.pratikpatel.pro/how-we-enabled-internet-access-inside-a-private-vpc-subnet-using-nat</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws nat gateway]]></category><category><![CDATA[#AWS #CloudComputing #NATGateway #VPC #NetworkSecurity #AWSArchitecture #DevOps #CloudSecurity #Infrastructure #TechTips #CloudUpdates #AWSVPC #InternetAccess #PrivateSubnet #AWSBestPractices]]></category><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Thu, 04 Sep 2025 09:00:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756810244395/5ae61bb8-4745-4cfc-a914-09bcc25a1569.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Delivering modern apps securely in AWS often requires placing critical resources inside <strong>private subnets</strong> — for example, EC2 app servers, Lambda functions, or RDS databases.</p>
<p>But here’s the challenge:<br />These resources <strong>can’t directly access the internet</strong>.<br />And yet, they often <strong>need outbound connectivity</strong> (e.g., downloading packages, calling third-party APIs).</p>
<p>This blog explains how we solved that problem at scale using <strong>NAT Gateway</strong>.</p>
<hr />
<h2 id="heading-the-problem">The Problem</h2>
<ol>
<li><p><strong>Private Subnet Isolation</strong></p>
<ul>
<li><p>By design, private subnets cannot route traffic to the Internet Gateway.</p>
</li>
<li><p>EC2 or Lambda inside them had <strong>no internet access</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Broken External Calls</strong></p>
<ul>
<li><p>Our workloads needed outbound internet:</p>
<ul>
<li><p>EC2 instances → install security patches.</p>
</li>
<li><p>Lambdas → call Firebase/FCM APIs.</p>
</li>
</ul>
</li>
<li><p>Without internet, everything failed.</p>
</li>
</ul>
</li>
<li><p><strong>Direct IGW Not an Option</strong></p>
<ul>
<li><p>Attaching Internet Gateway directly to private subnet would expose resources.</p>
</li>
<li><p>Security rules prevented that.</p>
</li>
</ul>
</li>
</ol>
<hr />
<h2 id="heading-step-1-introducing-nat-gateway">Step 1 — Introducing NAT Gateway</h2>
<p>We deployed a <strong>NAT Gateway</strong> in the <strong>public subnet</strong> of our VPC.</p>
<ul>
<li><p>NAT Gateway gets an <strong>Elastic IP</strong>.</p>
</li>
<li><p>It can reach the internet via the Internet Gateway.</p>
</li>
</ul>
<p>Private subnet instances send traffic → NAT Gateway → Internet.<br />Inbound traffic from the internet is blocked.</p>
<hr />
<h2 id="heading-step-2-updating-private-subnet-route-tables">Step 2 — Updating Private Subnet Route Tables</h2>
<p>For each private subnet, we updated the route table:</p>
<ul>
<li><p>Default route <code>0.0.0.0/0</code> → NAT Gateway.</p>
</li>
<li><p>Internal traffic (<code>10.0.0.0/16</code>) still routes locally within the VPC.</p>
</li>
</ul>
<p>This gave private resources outbound internet access while keeping them hidden.</p>
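<p>Assuming the AWS CLI, the change looks roughly like this (the <code>rtb-</code> and <code>nat-</code> IDs below are placeholders for your own resources):</p>

```shell
# Point the private route table's default route at the NAT Gateway.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0

# Verify: 0.0.0.0/0 should target the NAT Gateway, while the VPC CIDR
# (10.0.0.0/16) keeps its implicit "local" route.
aws ec2 describe-route-tables --route-table-ids rtb-0123456789abcdef0
```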
<hr />
<h2 id="heading-step-3-testing-with-ec2-in-private-subnet">Step 3 — Testing with EC2 in Private Subnet</h2>
<p>We launched an EC2 in the private subnet and verified connectivity:</p>
<pre><code class="lang-plaintext">ping google.com
sudo yum update -y
curl https://api.github.com
</code></pre>
<p>Outbound requests succeeded.<br />Inbound requests from the internet were blocked — exactly what we wanted.</p>
<hr />
<h2 id="heading-final-architecture-diagram">Final Architecture Diagram</h2>
<pre><code class="lang-plaintext">                    +-----------------------+
                   |       Internet        |
                   +-----------+-----------+
                               |
                        (Internet Gateway)
                               |
                   +-----------+-----------+
                   |       Public Subnet   |
                   |  NAT Gateway + EIP    |
                   +-----------+-----------+
                               |
                    Route to NAT Gateway
                               |
                   +-----------+-----------+
                   |      Private Subnet   |
                   |  EC2 / Lambda / RDS   |
                   +-----------------------+
</code></pre>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>By introducing a <strong>NAT Gateway</strong>, we achieved:</p>
<ul>
<li><p><strong>Secure isolation</strong>: Private subnets remain unreachable from the internet.</p>
</li>
<li><p><strong>Controlled outbound</strong>: Instances can fetch updates, call APIs, or access S3.</p>
</li>
<li><p><strong>Scalability</strong>: Fully managed by AWS (no manual patching or scaling).</p>
</li>
</ul>
<p>Today, our workloads run inside private subnets with:</p>
<ul>
<li><p><strong>Private, low-latency access</strong> to databases inside the VPC.</p>
</li>
<li><p><strong>Outbound internet connectivity</strong> for external APIs via NAT Gateway.</p>
</li>
</ul>
<p>This simple design pattern is a must-know for any AWS-based architecture where you want <strong>security + flexibility</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[React Virtual DOM: Why It Makes React Better and How It Really Works]]></title><description><![CDATA[If you’ve been in frontend development over the last decade, you’ve heard the same question again and again:

Why is React so popular, and what makes it better than traditional approaches?

The short answer is Virtual DOM. But if you’ve only read the...]]></description><link>https://blog.pratikpatel.pro/react-virtual-dom-why-it-makes-react-better-and-how-it-really-works</link><guid isPermaLink="true">https://blog.pratikpatel.pro/react-virtual-dom-why-it-makes-react-better-and-how-it-really-works</guid><category><![CDATA[React]]></category><category><![CDATA[virtual dom]]></category><category><![CDATA[MERN Stack]]></category><category><![CDATA[learning]]></category><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Wed, 03 Sep 2025 13:50:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756896266073/60d6d7c0-e452-4a79-beb2-a0f13e15a4ae.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you’ve been in frontend development over the last decade, you’ve heard the same question again and again:</p>
<blockquote>
<p>Why is React so popular, and what makes it better than traditional approaches?</p>
</blockquote>
<p>The short answer is <strong>Virtual DOM</strong>. But if you’ve only read the high-level explanations (“React keeps a copy of the DOM in memory”), you’re missing the real picture. The Virtual DOM isn’t just a performance hack — it’s an architectural innovation that reshaped how developers <strong>think about UIs, manage state, and scale applications</strong>.</p>
<p>In this article, we’ll take a <strong>deep dive</strong> into:</p>
<ul>
<li><p>Why direct DOM manipulation became a bottleneck</p>
</li>
<li><p>What Virtual DOM actually is (beyond buzzwords)</p>
</li>
<li><p>How React uses Virtual DOM (render, diff, commit phases)</p>
</li>
<li><p>The <strong>Fiber engine</strong> and how it enables concurrency</p>
</li>
<li><p>Real-world cases where Virtual DOM made React superior</p>
</li>
<li><p>Comparisons with Angular, Vue, Svelte, SolidJS</p>
</li>
<li><p>Best practices to maximize performance with Virtual DOM</p>
</li>
<li><p>The <strong>future of React</strong> in the age of Server Components and concurrent rendering</p>
</li>
</ul>
<p>By the end, you’ll have an <strong>interview-ready, production-grade understanding</strong> of Virtual DOM — not just the textbook definition.</p>
<hr />
<h2 id="heading-1-the-dom-problem-why-react-needed-a-new-model">1. The DOM Problem: Why React Needed a New Model</h2>
<p>Before React, developers mainly used:</p>
<ul>
<li><p><strong>jQuery</strong> → powerful DOM utilities but fully imperative.</p>
</li>
<li><p><strong>AngularJS</strong> → two-way binding with dirty-checking.</p>
</li>
<li><p><strong>Backbone/Knockout</strong> → some structure, but still DOM-heavy.</p>
</li>
</ul>
<h3 id="heading-11-why-the-dom-is-slow">1.1 Why the DOM is Slow</h3>
<p>The DOM is a tree of nodes. Manipulating it directly is costly because:</p>
<ol>
<li><p><strong>Reflows</strong> → recalculating element positions.</p>
</li>
<li><p><strong>Repaints</strong> → redrawing pixels on screen.</p>
</li>
<li><p><strong>Layout Thrashing</strong> → repeated reads/writes block the main thread.</p>
</li>
<li><p><strong>Event Listeners</strong> → attaching them to many nodes consumes memory.</p>
</li>
</ol>
<p>Updating just one property may force recalculations across thousands of nodes.</p>
<h3 id="heading-12-stateui-mismatch">1.2 State–UI Mismatch</h3>
<p>With jQuery, state lives in <strong>JavaScript variables</strong> and UI lives in the <strong>DOM tree</strong>.<br />Keeping them in sync is error-prone. Example:</p>
<pre><code class="lang-plaintext">let count = 0;
function increment() {
  count++;
  $('#counter').text(count);
}
</code></pre>
<p>If a developer forgets to update <code>#counter</code>, the UI no longer matches the state. This gets worse with nested components.</p>
<h3 id="heading-13-angularjs-dirty-checking">1.3 AngularJS Dirty Checking</h3>
<p>AngularJS improved things with <strong>two-way binding</strong>. But dirty-checking loops checked every binding repeatedly → O(n²) complexity on large UIs.</p>
<p>👉 The ecosystem needed a <strong>predictable, scalable model</strong>.</p>
<hr />
<h2 id="heading-2-enter-react-a-declarative-mindset">2. Enter React: A Declarative Mindset</h2>
<p>React introduced a new formula:</p>
<pre><code class="lang-plaintext">UI = f(state)
</code></pre>
<p>Instead of manually updating the DOM step-by-step, developers just describe the UI for the current state.</p>
<ul>
<li><p>If state changes → React re-renders → UI syncs automatically.</p>
</li>
<li><p>No more manual patching.</p>
</li>
<li><p>No more spaghetti jQuery logic.</p>
</li>
</ul>
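<p>The formula is easy to make concrete: a component is just a function from state to a plain description of the UI (the object below is a simplified stand-in for what JSX produces):</p>

```javascript
// UI = f(state): the same state always produces the same description.
function CounterView(state) {
  return {
    type: "button",
    props: { disabled: state.count >= 10 },
    children: ["Clicked " + state.count + " times"]
  };
}

console.log(CounterView({ count: 3 }).children[0]); // "Clicked 3 times"
```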
<p>But rendering the full DOM tree on each state change would be wasteful. That’s where the <strong>Virtual DOM</strong> comes in.</p>
<hr />
<h2 id="heading-3-what-is-the-virtual-dom">3. What is the Virtual DOM?</h2>
<p>The <strong>Virtual DOM (VDOM)</strong> is one of React’s core innovations. At a basic level, it is:</p>
<ul>
<li><p>A <strong>lightweight, in-memory representation</strong> of the actual DOM.</p>
</li>
<li><p>Made of simple <strong>JavaScript objects</strong> that describe UI elements and their structure.</p>
</li>
<li><p>A kind of <strong>staging area</strong> where React prepares and calculates changes before applying them to the browser’s DOM.</p>
</li>
</ul>
<p>Think of it as a <strong>blueprint of a building</strong>:</p>
<ul>
<li><p>You first modify the blueprint (cheap, quick, no real-world cost).</p>
</li>
<li><p>Only after the design is finalized do you renovate the actual building (time-consuming, costly).</p>
</li>
</ul>
<p>React follows the same idea. It updates the Virtual DOM first, figures out what actually changed, and then makes the <strong>minimal possible updates</strong> to the real DOM.</p>
<hr />
<h3 id="heading-31-why-do-we-need-a-virtual-dom">3.1 Why Do We Need a Virtual DOM?</h3>
<p>The <strong>real DOM</strong> is slow to manipulate because it is directly tied to the rendering engine of the browser. When you change one element:</p>
<ul>
<li><p>The browser often needs to <strong>recalculate CSS styles</strong>.</p>
</li>
<li><p>It may have to <strong>reflow layouts</strong>, repositioning elements on the page.</p>
</li>
<li><p>Finally, it must <strong>repaint</strong> pixels on the screen.</p>
</li>
</ul>
<p>Even small updates can cascade into <strong>expensive operations</strong>. In apps with hundreds or thousands of elements, this becomes a serious performance bottleneck.</p>
<p>By introducing a <strong>Virtual DOM layer</strong>, React ensures:</p>
<ul>
<li><p>Updates happen in <strong>memory first</strong> (fast, cheap).</p>
</li>
<li><p>Only the <strong>necessary changes</strong> are applied to the actual DOM (optimized).</p>
</li>
</ul>
<hr />
<h3 id="heading-32-how-the-virtual-dom-is-structured">3.2 How the Virtual DOM is Structured</h3>
<p>The Virtual DOM is essentially a <strong>tree of JavaScript objects</strong> that mirrors the real DOM tree.</p>
<p>Example real DOM:</p>
<pre><code class="lang-plaintext">&lt;div id="app"&gt;
  &lt;h1&gt;Hello World&lt;/h1&gt;
  &lt;button&gt;Click Me&lt;/button&gt;
&lt;/div&gt;
</code></pre>
<p>Virtual DOM equivalent:</p>
<pre><code class="lang-plaintext">{
  type: "div",
  props: { id: "app" },
  children: [
    { type: "h1", props: {}, children: ["Hello World"] },
    { type: "button", props: {}, children: ["Click Me"] }
  ]
}
</code></pre>
<p>Instead of manipulating real nodes directly, React manipulates this <strong>lightweight JavaScript object tree</strong>.</p>
<hr />
<h3 id="heading-33-analogy-draft-vs-final-version">3.3 Analogy: Draft vs. Final Version</h3>
<p>Think of writing a 50-page report:</p>
<ul>
<li><p>If you edit directly on the printed copy, every small change forces you to <strong>reprint all 50 pages</strong> — slow and wasteful.</p>
</li>
<li><p>Instead, you edit in a <strong>draft document (Word, Google Docs)</strong>. Once finalized, you only print the <strong>few pages that changed</strong>.</p>
</li>
</ul>
<p>The Virtual DOM is that <strong>draft workspace</strong> for React.</p>
<hr />
<h3 id="heading-34-why-this-makes-react-efficient">3.4 Why This Makes React Efficient</h3>
<p>By working with a Virtual DOM, React:</p>
<ol>
<li><p><strong>Reduces direct DOM operations</strong> — since most changes happen in memory.</p>
</li>
<li><p><strong>Batches updates</strong> — React groups multiple state changes and processes them efficiently.</p>
</li>
<li><p><strong>Performs diffing</strong> — React compares the new Virtual DOM with the previous one and updates only what’s necessary.</p>
</li>
</ol>
<p>This is the reason React can handle highly interactive UIs (dashboards, live feeds, chat apps) without becoming sluggish.</p>
<hr />
<h2 id="heading-4-how-virtual-dom-works">4. How Virtual DOM Works</h2>
<h3 id="heading-41-initial-render">4.1 Initial Render</h3>
<ol>
<li><p>React components produce elements (via JSX).</p>
</li>
<li><p>React builds a Virtual DOM tree.</p>
</li>
<li><p>React converts it into real DOM nodes and mounts them.</p>
</li>
</ol>
<h3 id="heading-42-on-state-update">4.2 On State Update</h3>
<ol>
<li><p>Component re-runs → produces a new Virtual DOM.</p>
</li>
<li><p>React compares old VDOM vs new VDOM (<strong>diffing</strong>).</p>
</li>
<li><p>React computes the minimal DOM mutations.</p>
</li>
<li><p>React updates the DOM in a batch.</p>
</li>
</ol>
<h3 id="heading-43-diffing-rules">4.3 Diffing Rules</h3>
<ul>
<li><p><strong>Different types</strong> → replace node.</p>
</li>
<li><p><strong>Same type, different props</strong> → update props only.</p>
</li>
<li><p><strong>Lists</strong> → use <strong>keys</strong> to track items across renders.</p>
</li>
</ul>
<p>Example:</p>
<pre><code class="lang-plaintext">&lt;ul&gt;
  {items.map(item =&gt; &lt;li key={item.id}&gt;{item.text}&lt;/li&gt;)}
&lt;/ul&gt;
</code></pre>
<p>If you change only one <code>item.text</code>, React updates just that <code>&lt;li&gt;</code>, not the entire <code>&lt;ul&gt;</code>.</p>
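<p>A toy version of these rules can be written in a few lines. This is only an illustration over the object shape from section 3.2 — real React reconciliation also handles keys, insertions, deletions, and scheduling:</p>

```javascript
// Toy diff: walk old and new virtual trees, collecting minimal patches.
function diff(oldNode, newNode, patches = []) {
  if (typeof oldNode === "string" || typeof newNode === "string") {
    if (oldNode !== newNode) patches.push({ op: "setText", value: newNode });
    return patches;
  }
  if (oldNode.type !== newNode.type) {
    patches.push({ op: "replace", with: newNode.type }); // different types → replace
    return patches;
  }
  if (JSON.stringify(oldNode.props) !== JSON.stringify(newNode.props)) {
    patches.push({ op: "updateProps", props: newNode.props }); // same type → patch props
  }
  for (let i = 0; i < newNode.children.length; i++) {
    diff(oldNode.children[i], newNode.children[i], patches);
  }
  return patches;
}

const prev = { type: "ul", props: {}, children: [
  { type: "li", props: { key: 1 }, children: ["Buy milk"] },
  { type: "li", props: { key: 2 }, children: ["Ship release"] }
] };
const next = { type: "ul", props: {}, children: [
  { type: "li", props: { key: 1 }, children: ["Buy milk"] },
  { type: "li", props: { key: 2 }, children: ["Ship release v2"] }
] };

console.log(diff(prev, next)); // one setText patch; everything else untouched
```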
<hr />
<h2 id="heading-5-the-render-and-commit-phases">5. The Render and Commit Phases</h2>
<p>React splits updates into:</p>
<ol>
<li><p><strong>Render Phase (diffing)</strong></p>
<ul>
<li><p>Create new VDOM.</p>
</li>
<li><p>Compare with previous VDOM.</p>
</li>
<li><p>Collect changes.</p>
</li>
</ul>
</li>
<li><p><strong>Commit Phase</strong></p>
<ul>
<li><p>Apply minimal mutations to real DOM.</p>
</li>
<li><p>Run effects, refs, lifecycle hooks.</p>
</li>
</ul>
</li>
</ol>
<p>This batching avoids intermediate, wasteful DOM reflows.</p>
<hr />
<h2 id="heading-6-react-fiber-beyond-virtual-dom">6. React Fiber: Beyond Virtual DOM</h2>
<p>The Virtual DOM solved correctness and performance. But React 15 and earlier had a problem: <strong>updates were synchronous</strong>.</p>
<p>A large render could freeze the UI until finished.</p>
<h3 id="heading-61-fiber-architecture">6.1 Fiber Architecture</h3>
<p>React 16 introduced <strong>Fiber</strong>, a new reconciliation engine:</p>
<ul>
<li><p>Breaks rendering into <strong>units of work</strong>.</p>
</li>
<li><p>Allows React to pause, resume, abort, or prioritize work.</p>
</li>
<li><p>Each VDOM element is linked to a <strong>Fiber node</strong> (a data structure with pointers to parent/child/sibling).</p>
</li>
</ul>
<h3 id="heading-62-work-loop-amp-priorities">6.2 Work Loop &amp; Priorities</h3>
<ul>
<li><p>Updates are scheduled with <strong>lanes</strong> (priorities).</p>
</li>
<li><p>Urgent updates (typing, animations) interrupt less important ones (background data fetching).</p>
</li>
<li><p>The browser remains responsive.</p>
</li>
</ul>
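<p>The scheduling idea can be sketched in plain JavaScript. This toy loop is only loosely inspired by Fiber — the real scheduler uses lanes, expiration times, and <code>MessageChannel</code>-based yielding:</p>

```javascript
// Toy cooperative work loop: run units in priority order within a
// small time budget, leaving the rest for the next idle slice.
const queue = [];

function schedule(priority, unit) {
  queue.push({ priority, unit });
  queue.sort((a, b) => a.priority - b.priority); // lower = more urgent
}

function workLoop(budgetMs) {
  const deadline = Date.now() + budgetMs;
  while (queue.length > 0 && Date.now() < deadline) {
    queue.shift().unit(); // one unit of work, then re-check the deadline
  }
  return queue.length; // units left over for later
}

const log = [];
schedule(2, () => log.push("render background list"));
schedule(0, () => log.push("apply keystroke")); // urgent → runs first

workLoop(50);
console.log(log); // ["apply keystroke", "render background list"]
```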
<h3 id="heading-63-concurrent-rendering-react-18">6.3 Concurrent Rendering (React 18)</h3>
<ul>
<li><p>With <code>startTransition</code>, developers can mark expensive updates as low priority.</p>
</li>
<li><p>Example: Typing in a search bar stays smooth while React filters a huge list in the background.</p>
</li>
</ul>
<hr />
<h2 id="heading-7-why-virtual-dom-makes-react-better">7. Why Virtual DOM Makes React Better</h2>
<h3 id="heading-71-declarative-programming">7.1 Declarative Programming</h3>
<ul>
<li><p>No need to write manual DOM manipulation code.</p>
</li>
<li><p>React ensures UI always reflects state.</p>
</li>
</ul>
<h3 id="heading-72-predictable-updates">7.2 Predictable Updates</h3>
<ul>
<li><p>Virtual DOM diffing ensures consistency.</p>
</li>
<li><p>Fiber scheduling ensures responsiveness.</p>
</li>
</ul>
<h3 id="heading-73-cross-platform">7.3 Cross-Platform</h3>
<ul>
<li><p>React DOM → web.</p>
</li>
<li><p>React Native → maps VDOM to native views.</p>
</li>
<li><p>React Three Fiber → maps VDOM to WebGL.</p>
</li>
</ul>
<p>All possible because UI description is abstracted in the Virtual DOM.</p>
<h3 id="heading-74-maintainability-at-scale">7.4 Maintainability at Scale</h3>
<ul>
<li><p>Components are isolated and predictable.</p>
</li>
<li><p>Even massive teams (Facebook, Airbnb) can scale apps with fewer bugs.</p>
</li>
</ul>
<hr />
<h2 id="heading-8-real-world-case-studies">8. Real-World Case Studies</h2>
<h3 id="heading-81-facebook">8.1 Facebook</h3>
<ul>
<li><p>News Feed updates frequently (likes, comments, live updates).</p>
</li>
<li><p>Virtual DOM ensures minimal re-renders → smooth scrolling.</p>
</li>
</ul>
<h3 id="heading-82-instagram">8.2 Instagram</h3>
<ul>
<li><p>Complex media grids, stories, infinite scrolling.</p>
</li>
<li><p>React’s VDOM + Fiber enable background rendering for feed updates without freezing.</p>
</li>
</ul>
<h3 id="heading-83-netflix">8.3 Netflix</h3>
<ul>
<li><p>Migrated UI to React in 2015.</p>
</li>
<li><p>Result: 50% faster startup, smoother TV UIs.</p>
</li>
</ul>
<h3 id="heading-84-airbnb">8.4 Airbnb</h3>
<ul>
<li><p>Thousands of reusable components.</p>
</li>
<li><p>Virtual DOM abstraction allowed consistent rendering across web and mobile.</p>
</li>
</ul>
<hr />
<h2 id="heading-9-comparisons-with-other-frameworks">9. Comparisons with Other Frameworks</h2>
<p>While React’s <strong>Virtual DOM</strong> approach is innovative, it’s important to understand how it compares to other major frontend frameworks. Each framework takes a slightly different route to solve the same fundamental problem: <strong>how to efficiently update the UI when state changes</strong>.</p>
<hr />
<h3 id="heading-91-angular-legacy-and-modern">9.1 Angular (Legacy and Modern)</h3>
<p>AngularJS (the original Angular 1.x) used a <strong>dirty-checking mechanism</strong>.</p>
<ul>
<li><p>Every time a change happened, Angular ran a <strong>digest cycle</strong> that checked all the variables in scope and compared them with previous values.</p>
</li>
<li><p>This worked fine for small apps, but as the app size and number of bindings grew, the digest cycles became expensive.</p>
</li>
<li><p>For large, dynamic UIs, performance dropped significantly because Angular had to keep checking every variable, even if most didn’t change.</p>
</li>
</ul>
<p>React’s Virtual DOM solved this by:</p>
<ul>
<li><p>Representing the UI as a <strong>tree</strong>.</p>
</li>
<li><p>Running <strong>diffing</strong> only where state/props actually changed.</p>
</li>
<li><p>Avoiding a global check on all variables.</p>
</li>
</ul>
<p>With <strong>Angular 2+ (modern Angular)</strong>, the framework moved closer to React’s model:</p>
<ul>
<li><p>It introduced <strong>zone.js</strong> for change detection.</p>
</li>
<li><p>It compiles templates into optimized JavaScript instructions.</p>
</li>
<li><p>Performance improved drastically, but React’s Virtual DOM still offered more <strong>predictability</strong> and <strong>fine-grained rendering control</strong> (especially with concurrent rendering and Fiber).</p>
</li>
</ul>
<hr />
<h3 id="heading-92-vuejs">9.2 Vue.js</h3>
<p>Vue also uses a <strong>Virtual DOM</strong>, but pairs it with a <strong>reactivity system</strong>.</p>
<ul>
<li><p>Vue tracks dependencies at the component level.</p>
</li>
<li><p>When a reactive property changes, Vue knows exactly which components depend on it and updates only those.</p>
</li>
<li><p>This can be more efficient in many cases because Vue avoids unnecessary diffing work.</p>
</li>
</ul>
<p><strong>Example:</strong><br />If you have a large form with 50 fields, but only one changes, Vue updates just that field’s component. React will still diff the entire subtree but optimize rendering through reconciliation.</p>
<ul>
<li><p>Vue’s <strong>computed properties</strong> and <strong>watchers</strong> make it easy to optimize performance.</p>
</li>
<li><p>React focuses more on <strong>state as a function of props</strong> (unidirectional data flow) and scheduling updates with Fiber.</p>
</li>
</ul>
<p>The difference:</p>
<ul>
<li><p>Vue = <strong>reactivity-driven updates + VDOM</strong>.</p>
</li>
<li><p>React = <strong>scheduler-driven updates + VDOM</strong>.</p>
</li>
</ul>
<p>Both work well, but React’s scheduling system scales better for <strong>concurrent rendering</strong> and complex UIs.</p>
<hr />
<h3 id="heading-96-why-reacts-virtual-dom-still-wins">9.6 Why React’s Virtual DOM Still Wins</h3>
<p>When comparing across frameworks:</p>
<ul>
<li><p>Angular (legacy) struggled with dirty-checking.</p>
</li>
<li><p>Vue combines reactivity + VDOM but doesn’t have React’s scheduling.</p>
</li>
<li><p>Svelte and SolidJS avoid VDOM but sacrifice flexibility for performance.</p>
</li>
<li><p>Preact is fast but limited.</p>
</li>
</ul>
<p>React’s Virtual DOM + Fiber strikes a balance:</p>
<ul>
<li><p><strong>Performance</strong> (with concurrent rendering).</p>
</li>
<li><p><strong>Flexibility</strong> (custom renderers, React Native).</p>
</li>
<li><p><strong>Ecosystem</strong> (tools, libraries, hiring pool).</p>
</li>
<li><p><strong>Predictable mental model</strong> (UI = f(state)).</p>
</li>
</ul>
<p>That’s why, despite new challengers, React’s Virtual DOM continues to be the <strong>dominant abstraction in frontend development</strong>.</p>
<hr />
<h2 id="heading-10-best-practices-for-developers">10. Best Practices for Developers</h2>
<ol>
<li><p><strong>Always use keys for list items</strong></p>
<pre><code class="lang-plaintext"> {todos.map(todo =&gt; &lt;li key={todo.id}&gt;{todo.text}&lt;/li&gt;)}
</code></pre>
</li>
<li><p><strong>Minimize state</strong></p>
<ul>
<li><p>Store only what you must.</p>
</li>
<li><p>Derive values when possible.</p>
</li>
</ul>
</li>
<li><p><strong>Use memoization</strong></p>
<ul>
<li><p><code>React.memo</code>, <code>useMemo</code>, <code>useCallback</code>.</p>
</li>
<li><p>Prevents re-renders of unchanged components.</p>
</li>
</ul>
</li>
<li><p><strong>Split components</strong></p>
<ul>
<li>Smaller components = less diffing work.</li>
</ul>
</li>
<li><p><strong>Batch updates</strong></p>
<ul>
<li><p>React already batches in event handlers.</p>
</li>
<li><p>Don’t force synchronous DOM reads/writes.</p>
</li>
</ul>
</li>
<li><p><strong>Concurrent rendering features</strong></p>
<ul>
<li>Use <code>startTransition</code> for heavy updates.</li>
</ul>
</li>
</ol>
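<p>The idea behind <code>React.memo</code> — skip work when props are shallowly equal — can be shown without React at all (a plain-JavaScript sketch, not React’s actual implementation):</p>

```javascript
// Memoize a render function on shallow props equality.
function shallowEqual(a, b) {
  const keysA = Object.keys(a);
  if (keysA.length !== Object.keys(b).length) return false;
  return keysA.every(k => a[k] === b[k]);
}

function memo(render) {
  let lastProps = null;
  let lastResult = null;
  return function (props) {
    if (lastProps && shallowEqual(lastProps, props)) {
      return lastResult; // props unchanged → skip the render entirely
    }
    lastProps = props;
    lastResult = render(props);
    return lastResult;
  };
}

let renders = 0;
const Badge = memo(props => { renders += 1; return "Badge: " + props.label; });

Badge({ label: "new" });
Badge({ label: "new" });  // shallow-equal props → cached result
Badge({ label: "sale" }); // changed props → re-render

console.log(renders); // 2
```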
<hr />
<h2 id="heading-11-the-future-of-virtual-dom">11. The Future of Virtual DOM</h2>
<ul>
<li><p><strong>Server Components</strong> → render parts of UI on server.</p>
</li>
<li><p><strong>Streaming SSR</strong> → faster hydration.</p>
</li>
<li><p><strong>Selective Hydration</strong> → hydrate only parts visible to user.</p>
</li>
<li><p>Virtual DOM remains the <strong>foundation</strong> for these innovations.</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>The Virtual DOM is more than a clever optimization. It’s the <strong>architectural backbone</strong> that:</p>
<ul>
<li><p>Enables declarative programming.</p>
</li>
<li><p>Ensures scalable, maintainable UIs.</p>
</li>
<li><p>Powers concurrency and cross-platform rendering.</p>
</li>
</ul>
<p>React is “better” not because Virtual DOM is the fastest approach in micro-benchmarks — but because it offers the <strong>best trade-offs for real-world apps</strong>: performance, predictability, scalability, and developer experience.</p>
<p>As frontend continues to evolve, React’s Virtual DOM will keep adapting — not as a buzzword, but as a proven foundation for building UIs at scale.</p>
]]></content:encoded></item><item><title><![CDATA[Mastering AWS for Developers: Key Services Every MERN Stack Engineer Should Know]]></title><description><![CDATA[When it comes to deploying modern applications, Amazon Web Services (AWS) remains the go-to cloud platform. But with 200+ services, it’s easy to feel overwhelmed. As a MERN (MongoDB, Express, React, Node.js) developer preparing for real-world project...]]></description><link>https://blog.pratikpatel.pro/mastering-aws-for-developers-key-services-every-mern-stack-engineer-should-know</link><guid isPermaLink="true">https://blog.pratikpatel.pro/mastering-aws-for-developers-key-services-every-mern-stack-engineer-should-know</guid><category><![CDATA[AWS]]></category><category><![CDATA[MERN Stack]]></category><category><![CDATA[S3]]></category><category><![CDATA[ec2]]></category><category><![CDATA[lambda]]></category><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Tue, 02 Sep 2025 05:12:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756789897173/4e4e07bf-c613-41bb-8181-4c85993c03e0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When it comes to deploying modern applications, Amazon Web Services (AWS) remains the <strong>go-to cloud platform</strong>. But with <strong>200+ services</strong>, it’s easy to feel overwhelmed. As a MERN (MongoDB, Express, React, Node.js) developer preparing for <strong>real-world projects or interviews</strong>, you don’t need to know <em>everything</em> — just the <strong>core AWS services</strong> that directly impact application development and deployment.</p>
<p>In this blog, we’ll explore <strong>5 AWS services every developer should master</strong>, with examples and use cases tailored for SaaS and MERN stack projects.</p>
<hr />
<h2 id="heading-1-amazon-ec2-elastic-compute-cloud-your-virtual-server">1. <strong>Amazon EC2 (Elastic Compute Cloud)</strong> – Your Virtual Server</h2>
<p><strong>What it is</strong>: EC2 provides resizable virtual machines in the cloud.</p>
<p><strong>When to use</strong>:</p>
<ul>
<li><p>Running Node.js backend APIs.</p>
</li>
<li><p>Hosting services that require full server control.</p>
</li>
<li><p>Running cron jobs or background workers.</p>
</li>
</ul>
<p><strong>Example</strong>: Deploying an Express.js backend on an EC2 instance with <strong>NGINX + PM2</strong> for process management.</p>
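<p>As a minimal sketch, a PM2 <code>ecosystem.config.js</code> for such a deployment might look like this (the app name and entry file are placeholders):</p>
<pre><code class="lang-javascript">// ecosystem.config.js - PM2 process configuration (hypothetical names)
module.exports = {
  apps: [
    {
      name: 'express-api',     // process name shown in `pm2 ls`
      script: './server.js',   // entry point of the Express app
      instances: 'max',        // one worker per CPU core
      exec_mode: 'cluster',    // cluster mode enables zero-downtime reloads
      env: { NODE_ENV: 'production', PORT: 3000 },
    },
  ],
};
</code></pre>
<p>Start it with <code>pm2 start ecosystem.config.js</code>, then configure NGINX as a reverse proxy in front of port 3000.</p>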
<hr />
<h2 id="heading-2-amazon-s3-simple-storage-service-scalable-object-storage">2. <strong>Amazon S3 (Simple Storage Service)</strong> – Scalable Object Storage</h2>
<p><strong>What it is</strong>: Secure and highly scalable object storage service.</p>
<p><strong>When to use</strong>:</p>
<ul>
<li><p>Store user-uploaded files (profile pictures, reports, invoices).</p>
</li>
<li><p>Host static websites (React SPA).</p>
</li>
<li><p>Store backups and logs.</p>
</li>
</ul>
<p><strong>Code Example (Node.js file upload)</strong>:</p>
<pre><code class="lang-javascript">import { S3Client } from '@aws-sdk/client-s3';
import multerS3 from 'multer-s3';
import { configData } from '../config/config';
import path from 'path';

export const s3Client = new S3Client({
    region: 'ap-south-1',
    credentials: {
        accessKeyId: configData.s3accessKeyId,
        secretAccessKey: configData.s3secretAccessKey,
    },
});

export const s3Storage = multerS3({
    s3: s3Client,
    bucket: `${process.env.S3_BUCKETNAME}`,
    metadata: (req, file, cb) =&gt; {
        cb(null, { fieldname: file.fieldname });
    },
    key: (req, file, cb) =&gt; {

        if (!file || !file.originalname) {
            return cb(new Error('No file provided.'));
        }

        const allowedExtensions = ['.jpg', '.jpeg', '.png'];
        const ext = path.extname(file.originalname).toLowerCase();

        if (!allowedExtensions.includes(ext)) {
            return cb(new Error('Only .jpg, .jpeg, and .png formats are allowed.'));
        }

        const fileName = `Profile-Images/${Date.now()}_${file.fieldname}_${file.originalname}`;
        cb(null, fileName);
    }

});
</code></pre>
<hr />
<h2 id="heading-3-aws-lambda-serverless-functions">3. <strong>AWS Lambda</strong> – Serverless Functions</h2>
<p><strong>What it is</strong>: Run code without managing servers.</p>
<p><strong>When to use</strong>:</p>
<ul>
<li><p>Event-driven tasks (file upload triggers, email sending).</p>
</li>
<li><p>Background processing (image compression, PDF generation).</p>
</li>
<li><p>Low-traffic APIs to save cost.</p>
</li>
</ul>
<p><strong>Example</strong>: Triggering a Lambda when a new file is uploaded to S3 → process → save result in MongoDB.</p>
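<p>A minimal handler for that trigger could look like the sketch below (the processing and MongoDB write are left as comments; event shape follows the standard S3 notification format):</p>
<pre><code class="lang-javascript">// Hypothetical Lambda handler for an S3 "ObjectCreated" trigger.
exports.handler = async function (event) {
  const processed = [];
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // S3 keys arrive URL-encoded, with spaces encoded as '+'
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    // ...process the object here (e.g. compress image, generate PDF),
    // then save the result to MongoDB
    processed.push({ bucket, key });
  }
  return { processedCount: processed.length, processed };
};
</code></pre>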
<hr />
<h2 id="heading-4-amazon-rds-relational-database-service-amp-dynamodb">4. <strong>Amazon RDS (Relational Database Service) &amp; DynamoDB</strong></h2>
<ul>
<li><p><strong>RDS</strong>: Managed SQL databases (MySQL, PostgreSQL).</p>
</li>
<li><p><strong>DynamoDB</strong>: NoSQL key-value store, fully serverless.</p>
</li>
</ul>
<p><strong>When to use</strong>:</p>
<ul>
<li><p>Use <strong>RDS</strong> when you need transactions &amp; complex queries.</p>
</li>
<li><p>Use <strong>DynamoDB</strong> for fast lookups, real-time apps, or event-driven workloads.</p>
</li>
</ul>
<p><strong>Rule of thumb</strong>: if your access patterns are known up front and key-based, DynamoDB scales effortlessly; if you need joins, aggregations, or ACID transactions across entities, start with RDS.</p>
<hr />
<h2 id="heading-5-amazon-cloudwatch-amp-iam-monitoring-amp-security">5. <strong>Amazon CloudWatch &amp; IAM</strong> – Monitoring &amp; Security</h2>
<ul>
<li><p><strong>CloudWatch</strong>: Logs, metrics, alarms for apps.</p>
</li>
<li><p><strong>IAM (Identity &amp; Access Management)</strong>: Secure access with least-privilege policies.</p>
</li>
</ul>
<p><strong>Why it matters</strong>:</p>
<ul>
<li><p>CloudWatch helps debug production issues (API latency, memory leaks).</p>
</li>
<li><p>IAM ensures your app isn’t overexposed (e.g., public S3 buckets = security nightmare).</p>
</li>
</ul>
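<p>As a concrete illustration of least privilege, an IAM policy that lets an app read and write objects in a single bucket, and nothing else, might look like this (the bucket name is a placeholder):</p>
<pre><code class="lang-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-app-uploads/*"
    }
  ]
}
</code></pre>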
<hr />
<h2 id="heading-quick-deployment-example-mern-app-on-aws">Quick Deployment Example: MERN App on AWS</h2>
<ol>
<li><p>React frontend → Deploy on <strong>S3 + CloudFront</strong>.</p>
</li>
<li><p>Node.js backend → Run on <strong>EC2</strong> or <strong>Elastic Beanstalk</strong>.</p>
</li>
<li><p>MongoDB → Use <strong>MongoDB Atlas on AWS</strong> (or DynamoDB if allowed).</p>
</li>
<li><p>File uploads → Store on <strong>S3</strong>.</p>
</li>
<li><p>Monitoring → Use <strong>CloudWatch</strong> for logs + alerts.</p>
</li>
</ol>
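<p>The frontend step above, for instance, typically boils down to two CLI calls (bucket name and distribution ID here are placeholders):</p>
<pre><code class="lang-bash"># Upload the React production build to S3
aws s3 sync build/ s3://my-frontend-bucket --delete
# Invalidate CloudFront so users get the new version immediately
aws cloudfront create-invalidation --distribution-id ABCD1234EXAMPLE --paths "/*"
</code></pre>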
]]></content:encoded></item><item><title><![CDATA[Working with AWS Lambda in Dev and Prod Environments using Serverless Framework]]></title><description><![CDATA[When building cloud-native applications, separating development and production environments is crucial. AWS Lambda makes it easy to deploy serverless functions, but managing different environments without chaos requires a structured approach.
In this...]]></description><link>https://blog.pratikpatel.pro/aws-lambda-dev-prod-serverless-framework</link><guid isPermaLink="true">https://blog.pratikpatel.pro/aws-lambda-dev-prod-serverless-framework</guid><category><![CDATA[Cloud Development ]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[serverless framework]]></category><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Tue, 19 Aug 2025 12:02:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755604632274/0b111f4a-3553-4b56-aacf-b571e4ec5e44.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When building cloud-native applications, separating <strong>development</strong> and <strong>production</strong> environments is crucial. AWS Lambda makes it easy to deploy serverless functions, but managing different environments without chaos requires a structured approach.</p>
<p>In this blog, we’ll explore how to manage <strong>Dev</strong> and <strong>Prod</strong> Lambda deployments using the <strong>Serverless Framework</strong>. We’ll set up environment-specific configurations, IAM permissions, and function names to keep both environments isolated and manageable.</p>
<hr />
<h2 id="heading-why-separate-dev-and-prod">Why Separate Dev and Prod?</h2>
<ul>
<li><p><strong>Avoid breaking production:</strong> Experiment safely in Dev before pushing to Prod.</p>
</li>
<li><p><strong>Environment-specific configs:</strong> API keys, DB URIs, and secrets differ between Dev and Prod.</p>
</li>
<li><p><strong>Cost control:</strong> Run lightweight configs in Dev, scale Prod as needed.</p>
</li>
<li><p><strong>Audit &amp; Monitoring:</strong> Logs and alerts should be separated per environment.</p>
</li>
</ul>
<hr />
<h2 id="heading-serverless-framework-setup">Serverless Framework Setup</h2>
<p>The <strong>Serverless Framework</strong> simplifies AWS Lambda deployments by allowing us to declare everything (functions, IAM, env variables, resources) in a single YAML file.</p>
<h3 id="heading-install-serverless">Install Serverless</h3>
<pre><code class="lang-bash">npm install -g serverless
</code></pre>
<p>Initialize a new project:</p>
<pre><code class="lang-bash">serverless create --template aws-nodejs --path my-service
cd my-service
npm init -y
</code></pre>
<hr />
<h2 id="heading-example-serverlessyml-with-dev-amp-prod-environments">Example <code>serverless.yml</code> with Dev &amp; Prod Environments</h2>
<p>Here’s a clean example:</p>
<pre><code class="lang-yaml">service: user-api-service
frameworkVersion: '3'

custom:
  currentStage: ${opt:stage, 'dev'}
  # Serverless variables don't support ternary expressions,
  # so per-stage values are expressed as maps keyed by stage
  functionNameMap:
    dev: userApiFunction
    prod: userApiFunction-prod
  memorySizeMap:
    dev: 128
    prod: 512
  timeoutMap:
    dev: 10
    prod: 30
  dbUriMap:
    dev: 'mongodb+srv://dev-cluster/test'
    prod: 'mongodb+srv://prod-cluster/live'

provider:
  name: aws
  runtime: nodejs18.x
  region: ap-south-1
  stage: ${self:custom.currentStage}
  memorySize: ${self:custom.memorySizeMap.${self:custom.currentStage}}
  timeout: ${self:custom.timeoutMap.${self:custom.currentStage}}

  environment:
    NODE_ENV: ${self:custom.currentStage}
    DB_URI: ${self:custom.dbUriMap.${self:custom.currentStage}}

  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
          Resource: "*"

functions:
  userApi:
    handler: handler.main
    name: ${self:custom.functionNameMap.${self:custom.currentStage}}
    description: "Lambda for ${self:custom.currentStage} environment"

plugins:
  - serverless-dotenv-plugin
  - serverless-plugin-typescript
</code></pre>
<hr />
<h2 id="heading-breaking-it-down">Breaking It Down</h2>
<ul>
<li><p><code>custom.currentStage</code>: Picks up stage from CLI (<code>sls deploy --stage prod</code>). Defaults to <code>dev</code>.</p>
</li>
<li><p><code>functionNameMap</code>: Ensures Lambda functions have unique names across environments.</p>
</li>
<li><p><code>environment</code>: Stage-specific environment variables (e.g., DB URIs).</p>
</li>
<li><p><code>memorySize</code> &amp; <code>timeout</code>: Different performance settings for Dev vs Prod.</p>
</li>
</ul>
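<p>Before deploying, you can sanity-check how these variables resolve for a given stage with <code>serverless print</code>:</p>
<pre><code class="lang-bash"># Print the fully resolved configuration for the prod stage
sls print --stage prod
</code></pre>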
<hr />
<h2 id="heading-deploying-to-dev-vs-prod">Deploying to Dev vs Prod</h2>
<p>To deploy in <strong>Dev</strong> (default):</p>
<pre><code class="lang-bash">sls deploy
</code></pre>
<p>To deploy in <strong>Prod</strong>:</p>
<pre><code class="lang-bash">sls deploy --stage prod
</code></pre>
<p>Each environment gets its own <strong>Lambda function, log group, and resources</strong>, ensuring clean isolation.</p>
<hr />
<h2 id="heading-logging-amp-monitoring-per-environment">Logging &amp; Monitoring per Environment</h2>
<p>Every deployment automatically creates CloudWatch log groups:</p>
<ul>
<li><p><code>/aws/lambda/userApiFunction</code> → Dev logs</p>
</li>
<li><p><code>/aws/lambda/userApiFunction-prod</code> → Prod logs</p>
</li>
</ul>
<p>This separation makes debugging easier and prevents noisy logs from mixing.</p>
<hr />
<h2 id="heading-best-practices">Best Practices</h2>
<ol>
<li><p><strong>Use Secrets Manager/SSM Parameter Store</strong> for sensitive credentials instead of hardcoding in <code>serverless.yml</code>.</p>
</li>
<li><p><strong>Enable log retention</strong> to avoid bloated CloudWatch bills.</p>
</li>
<li><p><strong>Automate deployments</strong> with CI/CD pipelines (GitHub Actions, CodePipeline).</p>
</li>
<li><p><strong>Test in Dev before promoting to Prod.</strong></p>
</li>
</ol>
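<p>For the first point, the Serverless Framework can resolve secrets at deploy time directly from SSM Parameter Store. A sketch (the parameter path is hypothetical):</p>
<pre><code class="lang-yaml">provider:
  environment:
    # Resolved at deploy time; the decrypted value never lives in git
    DB_URI: ${ssm:/user-api/${opt:stage, 'dev'}/dbUri}
</code></pre>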
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Managing AWS Lambda for <strong>multiple environments</strong> can get messy if not planned well. Using <strong>Serverless Framework</strong> with <strong>stage-aware configs</strong> ensures clean isolation between Dev and Prod. You can tune resources, environment variables, and IAM permissions per environment, while still keeping everything in a single configuration file.</p>
]]></content:encoded></item><item><title><![CDATA[How We Solved the On-Time Notification Delivery Problem at Scale]]></title><description><![CDATA[Delivering notifications exactly on time sounds easy — until you have to do it for thousands of users at the same second.
Our medicine reminder app depends heavily on precise dose reminders. Even a 1–2 minute delay can cause users to miss their doses...]]></description><link>https://blog.pratikpatel.pro/aws-step-functions-rds-proxy-lambda-scaling</link><guid isPermaLink="true">https://blog.pratikpatel.pro/aws-step-functions-rds-proxy-lambda-scaling</guid><dc:creator><![CDATA[PRATIK PATEL]]></dc:creator><pubDate>Thu, 14 Aug 2025 08:55:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755166079927/c953cc8d-9d7d-4ca3-a6d8-217caf8f9a64.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Delivering notifications <em>exactly on time</em> sounds easy — until you have to do it for <strong>thousands of users at the same second</strong>.</p>
<p>Our medicine reminder app depends heavily on <strong>precise dose reminders</strong>. Even a 1–2 minute delay can cause users to miss their doses, so reliability was critical.</p>
<p>Initially, we used <strong>BullMQ + Node.js workers</strong> for scheduling and sending notifications. It worked fine for a small number of users, but at scale, the system started to break.</p>
<hr />
<h2 id="heading-the-problem"><strong>The Problem</strong></h2>
<h3 id="heading-1-too-many-notifications-at-the-same-time"><strong>1. Too Many Notifications at the Same Time</strong></h3>
<ul>
<li><p>Thousands of notifications were scheduled for the same second.</p>
</li>
<li><p>Workers pulled huge batches from Redis, causing <strong>CPU &amp; memory spikes</strong>.</p>
</li>
<li><p>Redis queues became congested and some jobs got delayed.</p>
</li>
</ul>
<h3 id="heading-2-worker-overload"><strong>2. Worker Overload</strong></h3>
<ul>
<li><p>Even with multiple worker instances, the Node.js event loop struggled during peaks.</p>
</li>
<li><p>Delays became more frequent as user count increased.</p>
</li>
</ul>
<hr />
<h2 id="heading-step-1-moving-to-aws-step-functions-lambda"><strong>Step 1 — Moving to AWS Step Functions + Lambda</strong></h2>
<p>We redesigned the scheduling process to <strong>spread the load</strong> more efficiently.</p>
<p><strong>New Flow:</strong></p>
<ol>
<li><p><strong>Step Function</strong> schedules notification batches at exact times.</p>
</li>
<li><p>Each execution <strong>triggers a Lambda</strong> dedicated to a subset of notifications.</p>
</li>
<li><p>We keep <strong>2–3 hot Lambdas</strong> ready during peak times to avoid cold starts.</p>
</li>
</ol>
<p><strong>Why Step Functions?</strong></p>
<ul>
<li><p>Native CRON/Rate-based scheduling.</p>
</li>
<li><p>Orchestration of multiple parallel Lambda executions.</p>
</li>
<li><p>Fully managed — no manual queue management.</p>
</li>
</ul>
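<p>The fan-out itself can be expressed directly in the state machine definition. A sketch in Amazon States Language using a <code>Map</code> state (the Lambda ARN and input shape are hypothetical):</p>
<pre><code class="lang-json">{
  "StartAt": "FanOutBatches",
  "States": {
    "FanOutBatches": {
      "Type": "Map",
      "ItemsPath": "$.batches",
      "MaxConcurrency": 10,
      "Iterator": {
        "StartAt": "SendBatch",
        "States": {
          "SendBatch": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:ap-south-1:123456789012:function:sendNotificationBatch",
            "End": true
          }
        }
      },
      "End": true
    }
  }
}
</code></pre>
<p>Each element of <code>$.batches</code> becomes one Lambda invocation, with <code>MaxConcurrency</code> capping the parallel fan-out.</p>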
<hr />
<h2 id="heading-step-2-horizontal-scaling-in-lambda"><strong>Step 2 — Horizontal Scaling in Lambda</strong></h2>
<p>AWS Lambda scales automatically, so during high load, we got <strong>dozens of Lambdas in parallel</strong>.</p>
<p>This solved the compute bottleneck — but introduced a <strong>new problem</strong>…</p>
<hr />
<h2 id="heading-step-3-the-database-connection-storm"><strong>Step 3 — The Database Connection Storm</strong></h2>
<p>Each Lambda invocation created a new PostgreSQL connection.</p>
<p>At scale:</p>
<ul>
<li><p>RDS hit the <strong>max_connections</strong> limit.</p>
</li>
<li><p>Some Lambdas failed instantly due to connection errors.</p>
</li>
<li><p>This caused missed or late notifications.</p>
</li>
</ul>
<hr />
<h2 id="heading-step-4-introducing-amazon-rds-proxy"><strong>Step 4 — Introducing Amazon RDS Proxy</strong></h2>
<p><strong>RDS Proxy</strong> pools and shares DB connections across Lambdas.</p>
<p><strong>Benefits:</strong></p>
<ul>
<li><p>Reuses existing DB connections.</p>
</li>
<li><p>Reduces connection churn and overhead.</p>
</li>
<li><p>Eliminates <code>too many connections</code> errors.</p>
</li>
<li><p>Lowers latency because connections are pre-warmed.</p>
</li>
</ul>
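<p>On the Lambda side, the pattern that makes RDS Proxy pay off is creating the database client once in module scope so warm invocations reuse it. A self-contained sketch (real code would create a <code>pg</code> Pool pointing at the proxy endpoint; a counter stands in for it here):</p>
<pre><code class="lang-javascript">// Module scope runs once per Lambda container, not once per invocation.
let pool = null;
let initCount = 0;

function getPool() {
  if (pool === null) {
    initCount = initCount + 1;
    // Real code: pool = new Pool({ host: RDS_PROXY_ENDPOINT, ... })
    pool = { query: async function (sql) { return { sql }; } };
  }
  return pool;
}

exports.handler = async function (event) {
  const db = getPool();
  await db.query('SELECT 1');
  return { initCount };
};
</code></pre>
<p>Two warm invocations of the same container report an <code>initCount</code> of 1: the connection setup cost is paid once per container, and RDS Proxy multiplexes the actual PostgreSQL connections behind it.</p>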
<hr />
<h2 id="heading-step-5-putting-lambdas-in-a-vpc"><strong>Step 5 — Putting Lambdas in a VPC</strong></h2>
<p>Since <strong>RDS Proxy</strong> lives inside a VPC:</p>
<ul>
<li><p>All Lambdas were moved into <strong>private subnets</strong> within the same VPC.</p>
</li>
<li><p>This allowed private, low-latency connections to RDS Proxy.</p>
</li>
</ul>
<hr />
<h2 id="heading-step-6-adding-internet-access-via-nat-gateway"><strong>Step 6 — Adding Internet Access via NAT Gateway</strong></h2>
<p>Once Lambdas were in the VPC, they <strong>lost internet access</strong> — which broke calls to FCM.</p>
<p><strong>Fix:</strong></p>
<ul>
<li><p>Created a <strong>NAT Gateway</strong> in the VPC.</p>
</li>
<li><p>Updated route tables so Lambdas could:</p>
<ul>
<li><p>Connect to RDS Proxy privately.</p>
</li>
<li><p>Still reach the internet for external APIs.</p>
</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-final-architecture-diagram"><strong>Final Architecture Diagram</strong></h2>
<pre><code class="lang-plaintext">          ┌───────────────────┐
          │  Step Function    │
          │  (Scheduled CRON) │
          └─────────┬─────────┘
                    │
                    ▼
           ┌────────────────────────────────────┐
           │            AWS Lambda(s)           │
           │ (Horizontal Scaling, VPC-Enabled)  │
           │  1️⃣ Query DB via RDS Proxy         │
           │  2️⃣ Send Notifications via APIs    │
           └─────────┬────────────────┬─────────┘
                     │                │
        ┌────────────▼─────────┐   ┌──▼───────────────┐
        │   Amazon RDS Proxy   │   │ NAT Gateway      │
        │ (Connection Pooling) │   │ (Internet Access)│
        └───────────┬──────────┘   └──────────┬──────┘
                    │                        │
          ┌─────────▼──────────┐     ┌───────▼─────────────────┐
          │ PostgreSQL (RDS)   │     │ External APIs (FCM, SNS,│
          └────────────────────┘     │ Push Notification, etc.)│
                                     └─────────────────────────┘
</code></pre>
<hr />
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Building an on-time notification system at scale required more than just adding more workers — it demanded a complete architectural rethink. By moving from BullMQ workers to <strong>AWS Step Functions</strong> for scheduling, <strong>Lambda</strong> for scalable compute, and <strong>RDS Proxy</strong> for efficient database connectivity, we achieved a fully serverless, reliable, and low-maintenance solution.</p>
<p>Integrating <strong>VPC networking</strong> with a <strong>NAT Gateway</strong> ensured secure database access while still allowing internet connectivity for push and Firebase APIs. Today, our system delivers notifications <strong>precisely on time</strong>, even during heavy load, while remaining cost-efficient and easy to operate.</p>
]]></content:encoded></item></channel></rss>