<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Hi-Stakes Systems]]></title><description><![CDATA[Strategic insights on scaling million-user mobile systems. Engineering for 99.9% stability in high-stakes environments. Deep dives into architecture, system design, and technical leadership.]]></description><link>https://mamtagelanee.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1767698636530/c917b932-f817-4aa5-b77d-f95eca12cff3.png</url><title>Hi-Stakes Systems</title><link>https://mamtagelanee.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 20:59:28 GMT</lastBuildDate><atom:link href="https://mamtagelanee.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Why Dynamic array sizes are doubled in capacity?]]></title><description><![CDATA[In systems programming, the dynamic array (e.g., std::vector, ArrayList, or Python's list) is a fundamental tool. 
But have you ever paused to ask why these containers double their capacity when they r]]></description><link>https://mamtagelanee.dev/dynamic-array-math</link><guid isPermaLink="true">https://mamtagelanee.dev/dynamic-array-math</guid><category><![CDATA[#DynamicArray ]]></category><category><![CDATA[array]]></category><category><![CDATA[O(n)]]></category><category><![CDATA[ArrayList]]></category><category><![CDATA[list]]></category><category><![CDATA[O(1)]]></category><category><![CDATA[O(n2)]]></category><dc:creator><![CDATA[Mamta Gelanee]]></dc:creator><pubDate>Thu, 19 Feb 2026 18:30:12 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/624447b9fca8a5eb4a9e677e/4db03584-8454-44e4-a6cb-c97d80565535.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In systems programming, the dynamic array (e.g., <code>std::vector</code>, <code>ArrayList</code>, or Python's <code>list</code>) is a fundamental tool. But have you ever paused to ask why these containers <strong>double</strong> their capacity when they run out of space?</p>
<p>It might feel like "<strong>wasting</strong>" memory, but the math shows that doubling is precisely what keeps our most common operation, <code>append()</code>, running at full speed.</p>
<h2>The Performance Killer: Linear Growth (\(+K\))</h2>
<p>Suppose we decide to be "memory efficient" and grow our array by a fixed amount—say, <strong>100 slots</strong>—only when we absolutely need to.</p>
<p>If we want to insert <strong>N</strong> elements, how many times do we have to copy data to a new memory location?</p>
<ul>
<li><p><strong>Initial State:</strong> You allocate a block of <strong>100</strong>. You fill it up (Elements 1 to 100). <strong>Copies = 0.</strong></p>
</li>
<li><p><strong>The 101st Element:</strong> You have no room. You must allocate a new block of <strong>200</strong>. You copy the <strong>100</strong> existing elements over.</p>
</li>
<li><p><strong>The 201st Element:</strong> You are full again. You allocate a block of <strong>300</strong>. You copy the <strong>200</strong> existing elements over.</p>
</li>
<li><p><strong>The 301st Element:</strong> You allocate <strong>400</strong>. You copy the <strong>300</strong> existing elements over.</p>
</li>
</ul>
<p>The total work (W) is an arithmetic progression:</p>
<p>$$W = 100 + 200 + 300 + ... + (N - 100)$$</p>
<p>Factoring out the 100,</p>
<p>$$W = 100 \times (1 + 2 + 3 + \dots + \frac{N}{100} - 1)$$</p>
<p>Applying the Arithmetic Series Formula,</p>
<p>The sum of the integers from 1 to M is \(\frac{M(M+1)}{2}\). Here, \(M \approx \frac{N}{100}\), so:</p>
<p>$$W \approx 100 \times \frac{(\frac{N}{100})^2}{2}$$</p>
<p>Expanding the squared term and simplifying:</p>
<p>$$W \approx 100 \times \frac{N^2}{100^2 \times 2} = 100 \times \frac{N^2}{10000 \times 2} = \frac{N^2}{100 \times 2} = \mathbf{\frac{N^2}{200}}$$</p>
<p>In Big-O terms, this is \(O(N^2)\) total work. Divide that by <strong>N</strong> insertions and each individual <code>append()</code> costs \(O(N)\) on average. As your dataset grows, your application slows down quadratically.</p>
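<p>The arithmetic above can be checked empirically. Below is a minimal Python sketch (the helper <code>copies_linear</code> is our own illustrative name, not a library function) that counts how many element copies a fixed +100 growth policy performs:</p>

```python
def copies_linear(n, k=100):
    """Count element copies when growing a dynamic array by a fixed k slots."""
    capacity, size, copies = k, 0, 0
    for _ in range(n):
        if size == capacity:      # full: allocate capacity + k, copy everything over
            copies += size
            capacity += k
        size += 1
    return copies

# Measured copies track the predicted N^2 / 200 closely:
for n in (1_000, 10_000, 100_000):
    print(n, copies_linear(n), n * n // 200)
# e.g. for n = 10_000: 495_000 measured vs. 500_000 predicted
```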
<h2>The Efficiency King: Geometric Growth (\(\times 2\))</h2>
<p>Now let's look at the same \(N\) where we start with 100 slots, using the <strong>Doubling</strong> method:</p>
<ol>
<li><p><strong>Initial State:</strong> Allocate <strong>100</strong>. Fill them. <strong>Copies = 0.</strong></p>
</li>
<li><p><strong>The 101st Element:</strong> Double to <strong>200</strong>. Copy <strong>100</strong>.</p>
</li>
<li><p><strong>The 201st Element:</strong> Double to <strong>400</strong>. Copy <strong>200</strong>.</p>
</li>
<li><p><strong>The 401st Element:</strong> Double to <strong>800</strong>. Copy <strong>400</strong>.</p>
</li>
<li><p><strong>The 801st Element:</strong> Double to <strong>1600</strong>. Copy <strong>800</strong>.</p>
</li>
</ol>
<p>For <strong>N</strong> elements (where \(N = 100 \times 2^k\) for some integer \(k\)), our copy operations look like this:</p>
<p>$$W = 100 + 200 + 400 + \dots + \frac{N}{2} = 100 \times (2^0 + 2^1 + 2^2 + \dots + 2^{k-1})$$</p>
<p>This is a <strong>geometric series</strong> with common ratio <strong>2</strong>. A key property of this series is that the sum of all previous terms is always less than the next term. Specifically:</p>
<p>$$W = 100 \times \sum_{i=0}^{k-1} 2^i = 100 \times (2^k - 1)$$</p>
<p>$$W = 100 \times 2^k - 100 = N - 100 \approx N$$</p>
<h3>The Big Reveal: Amortized (\(O(1)\))</h3>
<p>To find the average cost of a single <code>append</code>, we take the total work and divide it by the number of elements:</p>
<p>$$\frac{Total\ Work}{Total\ Elements} = \frac{N - 100}{N} \approx 1$$</p>
<p>Even though some appends are "expensive" (the ones that trigger a resize), the vast majority are "cheap." On average, every append costs a constant amount of time, or, as we call it in the software world, \(O(1)\).</p>
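<p>The same experiment with doubling confirms that the total copy count stays linear. Here is a minimal Python sketch (again, <code>copies_doubling</code> is an illustrative name of ours, not a library function):</p>

```python
def copies_doubling(n, start=100):
    """Count element copies when growing a dynamic array by doubling."""
    capacity, size, copies = start, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size       # copy everything into the doubled block
            capacity *= 2
        size += 1
    return copies

n = 102_400                      # 100 * 2**10, so the last doubling lands exactly
total = copies_doubling(n)
print(total, total / n)          # total copies = N - 100; amortized cost ~ 1 per append
```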
<h2>Memory vs. Speed: The Growth Factor (\(\alpha\))</h2>
<p>While doubling \(\alpha = 2\) is the textbook standard, the specific multiplier can vary:</p>
<ul>
<li><p><strong>Factor of 2.0:</strong> Extremely fast, but because \(1 + 2 + 4 &lt; 8\), the memory allocator can never "reuse" the old chunks of memory you just left behind. They are always too small for the next jump.</p>
</li>
<li><p><strong>Factor of 1.5:</strong> Used by many modern libraries (like <code>libc++</code> or <code>FBVector</code>). Because \(1.5\) is smaller, it allows the allocator to eventually "recycle" the memory from previous steps, improving <strong>cache locality</strong> and reducing fragmentation.</p>
</li>
</ul>
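<p>The "recycling" argument can be sketched numerically. The Python snippet below is our own simplified model (it assumes each old block is freed immediately after the copy and that freed blocks can be coalesced); it finds the first resize step at which the accumulated holes could hold the next allocation:</p>

```python
def first_reusable_step(factor, start=100, steps=30):
    """Return the first resize step where freed space can hold the next block."""
    size, freed = start, 0
    for step in range(steps):
        new_size = int(size * factor)
        if freed >= new_size:   # old holes are big enough to be reused
            return step
        freed += size           # the old block is released after the copy
        size = new_size
    return None                 # never reusable within the simulated steps

print("factor 2.0:", first_reusable_step(2.0))   # never: holes always too small
print("factor 1.5:", first_reusable_step(1.5))   # reuse becomes possible early on
```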
<table style="min-width:100px"><colgroup><col style="min-width:25px"></col><col style="min-width:25px"></col><col style="min-width:25px"></col><col style="min-width:25px"></col></colgroup><tbody><tr><th><p><strong>Strategy</strong></p></th><th><p><strong>Total Copies</strong></p></th><th><p><strong>Avg. Append Cost</strong></p></th><th><p><strong>Memory Overhead</strong></p></th></tr><tr><td><p><strong>Linear</strong> (\(+100\))</p></td><td><p>\(\approx \frac{N^2}{200}\)</p></td><td><p>\(O(N)\) (slow)</p></td><td><p>Low</p></td></tr><tr><td><p><strong>Geometric</strong> (\(\times 2\))</p></td><td><p>\(\approx N\)</p></td><td><p>\(O(1)\) (fast)</p></td><td><p>Up to 50%</p></td></tr></tbody></table>

<hr />
<h3>Final Thought</h3>
<p>We don't double the array because we are greedy for RAM; we double it because it turns a quadratic performance disaster into a linear, predictable success. It is one of the most successful "space-time tradeoffs" in computer science.</p>
]]></content:encoded></item><item><title><![CDATA[From Code to Life: The Full Lifecycle of an Android Process]]></title><description><![CDATA[As Android developers, we spend most of our time in the IDE. But to build truly high-performance systems, we have to look past the UI and understand the Linux Kernel layers that govern how an app transitions from a static APK to a living process.
Thi...]]></description><link>https://mamtagelanee.dev/from-code-to-life-the-full-lifecycle-of-an-android-process</link><guid isPermaLink="true">https://mamtagelanee.dev/from-code-to-life-the-full-lifecycle-of-an-android-process</guid><category><![CDATA[zygote]]></category><category><![CDATA[baseline profile]]></category><category><![CDATA[app startup]]></category><category><![CDATA[Android]]></category><category><![CDATA[Art]]></category><category><![CDATA[Baseline Visualization]]></category><category><![CDATA[Performance Optimization]]></category><category><![CDATA[app development]]></category><dc:creator><![CDATA[Mamta Gelanee]]></dc:creator><pubDate>Tue, 13 Jan 2026 21:22:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768339690598/a243e287-2ecf-4150-9a9b-d34cc6a016fb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As Android developers, we spend most of our time in the IDE. But to build truly high-performance systems, we have to look past the UI and understand the Linux Kernel layers that govern how an app transitions from a static APK to a living process.</p>
<p>This journey is a delicate balance of security, resource efficiency, and aggressive hardware optimizations done by Android OS.</p>
<h2 id="heading-installation-constructing-the-sandbox">Installation: Constructing the Sandbox</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768340024604/__lv_HJWu.png?auto=format" alt="Installation: Constructing the Sandbox" /></p>
<p>At installation, Android assigns each app a unique Linux User ID (UID), establishing a secure sandbox through <a target="_blank" href="https://en.wikipedia.org/wiki/Discretionary_access_control"><strong>Discretionary Access Control (DAC)</strong></a>. This is part of a broader <a target="_blank" href="https://en.wikipedia.org/wiki/Defense_in_depth_(computing)"><strong>'Defense in Depth'</strong></a> strategy that strictly isolates the privileged Kernel (Ring 0) from the User Space (Ring 3). To prevent privilege escalation, <a target="_blank" href="https://source.android.com/docs/security/features/selinux/concepts"><strong>Mandatory Access Control (MAC) via SELinux</strong></a> enforces a global security policy that blocks unauthorized requests at the kernel level—even if an app appears to have the correct UID permissions.</p>
<h2 id="heading-the-infrastructure-zygote-the-template-master">The Infrastructure: Zygote, the Template Master</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768339859467/7diw78Dy1.png?auto=format" alt="The Infrastructure: Zygote, the Template Master" /></p>
<p>Starting a mobile app from scratch is resource-intensive. To optimize performance, Android utilizes the <strong>Zygote</strong>, a 'template' process initialized during system boot. Instead of cold-starting every app, Zygote performs the heavy lifting once. For example, it:</p>
<ul>
<li>launches the <strong>Android Runtime (ART)</strong>, </li>
<li>sets up the JNI, and </li>
<li>pre-loads thousands of core framework classes and native libraries into memory. </li>
</ul>
<h2 id="heading-the-launch-trigger-what-happens-when-user-tap-on-app-icon">The Launch Trigger: What Happens When a User Taps the App Icon?</h2>
<ol>
<li><strong>Intent Dispatch:</strong> The moment a user taps an icon, the Launcher sends an Intent to the <strong>ActivityTaskManagerService (ATMS)</strong> to request a new activity transition.</li>
<li><strong>Process Audit:</strong> The <strong>ActivityManagerService (AMS)</strong> evaluates the request. If the app isn't already running in the background (a "Cold Start"), the system prepares to host a new process.</li>
<li><strong>The Zygote Handshake:</strong> AMS sends a creation command through a Unix Domain Socket to the Zygote—the system's pre-warmed "Master Process".</li>
<li><strong>The Birth (Forking):</strong> Zygote executes a <strong>fork()</strong> system call. In milliseconds, a child process is "born" as a perfect clone of Zygote, inheriting the pre-loaded ART (Android Runtime) and core framework libraries, allowing the app to launch almost instantly.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768340199183/PzB8OiPZn.png?auto=format" alt="The Infrastructure: Zygote, the Template Master" /></p>
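<p>The Zygote handshake itself is internal to Android, but the <code>fork()</code> primitive it rests on can be sketched in Python on any POSIX system (the variable names below are illustrative stand-ins, not Android APIs):</p>

```python
import os

# Sketch of fork() semantics (POSIX only). The child starts as a copy-on-write
# clone of the parent and sees the parent's already-loaded state "for free" --
# conceptually what a forked app process inherits from Zygote's preloaded runtime.
preloaded = ["core-framework-class"]   # hypothetical stand-in for preloaded classes

pid = os.fork()
if pid == 0:
    # Child: inherits `preloaded` without re-reading anything from disk.
    assert preloaded == ["core-framework-class"]
    os._exit(0)                        # exit the child cleanly
else:
    # Parent (the "Zygote" role): reap the child and confirm a clean exit.
    _, status = os.waitpid(pid, 0)
    exit_code = os.waitstatus_to_exitcode(status)
    print("child exit code:", exit_code)
```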
<p>While <strong>fork()</strong> is efficient, it introduces significant Kernel-level overhead:</p>
<ul>
<li><strong>Context Switching:</strong> fork() is a synchronous operation, so the Kernel must context switch <strong>out of the Zygote</strong> to perform allocation logic and then context switch <strong>into the new child process</strong>. These transitions between <a target="_blank" href="https://en.wikipedia.org/wiki/Protection_ring">Ring 3 &amp; Ring 0</a> consume critical milliseconds.</li>
<li><strong>TLB Pollution:</strong> After a fork(), the CPU’s Translation Lookaside Buffer (TLB)—a fast cache for memory mappings—is often invalidated. This leads to a high TLB Miss rate, forcing the CPU to consult slower main memory tables.</li>
<li><strong>Copy-on-Write (CoW):</strong> To save memory, the child process initially shares the parent Zygote’s memory pages as "Read-Only".</li>
<li><strong>Page Fault Stutters:</strong> The real cost emerges at runtime. When the app's Main Thread modifies a shared page (e.g., by creating an object), a Page Fault occurs: the Kernel must halt the app to physically copy that memory page into the app's own process, causing unpredictable UI stuttering (Jank) during the first few seconds of launch.</li>
</ul>
<h2 id="heading-modern-architectural-optimisations">Modern Architectural Optimisations</h2>
<p>Android has introduced two major shifts to mitigate these hardware-level delays:</p>
<ul>
<li><strong>USAP (Unspecialized App Process):</strong> Introduced in Android 10, the USAP pool shifts the cost of fork() to system idle times. Zygote pre-forks several generic processes and keeps them in a pool. When an app is launched, the system simply "specialises" a USAP by giving it an identity, removing the ~25ms fork() latency from the user’s view.</li>
<li><strong>16KB Memory Pages:</strong> Traditionally, the Kernel managed RAM in 4KB units. Modern Android versions support 16KB pages. Moving from 4KB to 16KB reduces the number of page entries the Kernel needs to track, decreasing metadata overhead by 75%. This improves the TLB Hit Rate, leading to a ~3.16% reduction in app launch times under memory pressure.</li>
</ul>
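<p>The 75% metadata figure follows directly from the arithmetic. A quick Python check, assuming a hypothetical 64 MB mapped region:</p>

```python
# Quadrupling the page size divides the number of page entries by 4,
# i.e. 75% less bookkeeping metadata for the kernel to track.
region = 64 * 1024 * 1024            # hypothetical 64 MB mapped region
entries_4k = region // (4 * 1024)    # entries with 4KB pages
entries_16k = region // (16 * 1024)  # entries with 16KB pages
reduction = 1 - entries_16k / entries_4k
print(entries_4k, entries_16k, f"{reduction:.0%}")   # 16384 4096 75%
```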
<h2 id="heading-the-engineers-action-plan-bridging-kernel-to-code">The Engineer’s Action Plan: Bridging Kernel to Code</h2>
<p>Understanding that every app is a "Copy-on-Write" clone of the Zygote changes how we should approach optimization. Here is how to apply these kernel-level insights to your codebase:</p>
<h4 id="heading-1-defer-initialization-to-avoid-page-fault-storms">1. Defer Initialization to Avoid "Page Fault Storms"</h4>
<p>Since the first few seconds of an app's life are plagued by Page Faults as the kernel physically copies memory pages, avoid heavy work in <code>Application.onCreate()</code> or static initializers.</p>
<ul>
<li>Action: Use Kotlin's <a target="_blank" href="https://kotlinlang.org/api/core/kotlin-stdlib/kotlin/lazy.html">by lazy</a> for heavy objects (e.g., database initialisation, analytics SDKs).</li>
<li>Result: You move the "Physical Copy" cost away from the critical startup path, reducing UI jank.</li>
</ul>
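<p>Kotlin's <code>by lazy</code> has a close Python analogue in <code>functools.cached_property</code>, which is enough to illustrate the deferral pattern (the class names below are hypothetical):</p>

```python
from functools import cached_property

class Database:
    def __init__(self):
        # Pretend this touches many memory pages (schema load, connection pool...).
        self.ready = True

class App:
    @cached_property
    def database(self):
        # Built on first access, not at startup -- the analogue of Kotlin's `by lazy`.
        return Database()

app = App()                                        # "startup": nothing heavy yet
created_at_startup = "database" in app.__dict__    # cached_property stores here
print("created at startup:", created_at_startup)   # False
print("ready on first use:", app.database.ready)   # True: built on demand
```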
<h4 id="heading-2-leverage-baseline-profiles">2. Leverage Baseline Profiles</h4>
<p>The Android Runtime (ART) can pre-compile your code into machine code, but it needs to know which paths are critical.</p>
<ul>
<li>Action: Ship <a target="_blank" href="https://developer.android.com/topic/performance/baselineprofiles/overview">Baseline Profiles</a>.</li>
<li>Result: This reduces the work the CPU has to do during the "specialization" phase, lessening the impact of TLB misses and instruction cache pressure.</li>
</ul>
<h4 id="heading-3-optimize-for-16kb-page-sizes">3. Optimize for 16KB Page Sizes</h4>
<p>With the move toward 16KB pages in modern Android, the way we handle native libraries is changing.</p>
<ul>
<li>Action: Ensure your or third party native C/C++ libraries are aligned to <a target="_blank" href="https://source.android.com/docs/core/architecture/16kb-page-size/optimize"><strong>16KB</strong> boundaries</a>.</li>
<li>Result: This significantly reduces the metadata the kernel must track. In memory-constrained scenarios, this optimization can measurably reduce app launch times.</li>
</ul>
<h3 id="heading-the-golden-rule">The Golden Rule</h3>
<blockquote>
<p>Every <strong>byte</strong> you initialize at startup is a page you've forced the kernel to copy. Keep your startup footprint light, lazy, and profile-driven.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[From Silicon to Screen]]></title><description><![CDATA[As Android engineers, we spend most of our time in the application layer. However, the smoothness of our UI is governed by hardware limitations and how the Linux kernel manages resources. To build high-performance apps, we must understand the journey...]]></description><link>https://mamtagelanee.dev/from-silicon-to-screen</link><guid isPermaLink="true">https://mamtagelanee.dev/from-silicon-to-screen</guid><category><![CDATA[cores]]></category><category><![CDATA[cpu]]></category><category><![CDATA[Threads]]></category><category><![CDATA[coroutines]]></category><category><![CDATA[Android]]></category><category><![CDATA[anr]]></category><category><![CDATA[jank]]></category><dc:creator><![CDATA[Mamta Gelanee]]></dc:creator><pubDate>Mon, 05 Jan 2026 21:09:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767644338098/f2b99944-bd45-4805-a53d-4f5a8fd665bd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As Android engineers, we spend most of our time in the application layer. However, the smoothness of our UI is governed by hardware limitations and how the Linux kernel manages resources. To build high-performance apps, we must understand the journey from a CPU instruction to a pixel being drawn.</p>
<h2 id="heading-the-hardware-foundation-cpus-cores-and-threads">The Hardware Foundation: CPUs, Cores, and Threads</h2>
<p>At the lowest level, your Android device is powered by a <strong>Central Processing Unit (CPU)</strong>. The <strong>CPU</strong> is the entire chip package (the "socket"). In the early days, a CPU had only one core. Today, a single CPU is essentially a <strong>cluster of cores</strong> sharing a few high-level resources (cache, internal bus, etc.).</p>
<ul>
<li><p><strong>Cores:</strong> A core is an independent processing unit within the CPU. Modern mobile SoCs (System on Chips) usually follow a "Heterogeneous" architecture (like ARM’s big.LITTLE), mixing high-performance cores with power-efficient cores. So when a phone today advertises 8 cores, its CPU has 8 separate processing units that can run instructions physically in parallel at the same time.</p>
</li>
<li><p><strong>Threads (Software):</strong> While a <strong>Core</strong> is hardware (silicon), a <strong>Thread</strong> (short for "thread of execution") is a software/OS abstraction: the smallest unit of programmed instructions that can be executed independently by the Operating System scheduler. One or more threads can run on a single core at a time, thanks to multithreading (handled by the OS with techniques like time slicing) and Hyper-Threading (achieved in hardware).</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767647098477/5422c04b-90eb-4071-9558-abac44f778d7.png" alt class="image--center mx-auto" /></p>
<p>Now let’s get back to Android,</p>
<h2 id="heading-the-main-thread-the-ui-thread">The "Main Thread" (The UI Thread)</h2>
<p>When your app process starts, the system creates the <strong>Main Thread</strong>. This is the most "expensive" thread in your app because it has a monopoly on the <strong>UI Toolkit</strong>.</p>
<p><strong>The Main Thread's Responsibilities:</strong></p>
<ol>
<li><p><strong>Input Dispatching:</strong> Capturing touch events and key presses.</p>
</li>
<li><p><strong>UI Rendering:</strong> Executing the <code>Measure</code>, <code>Layout</code>, and <code>Draw</code> passes of your View hierarchy.</p>
</li>
<li><p><strong>Lifecycle Callbacks:</strong> Running <code>onCreate</code>, <code>onStart</code>, <code>onResume</code>, etc.</p>
</li>
</ol>
<h2 id="heading-the-physics-of-smoothness-60-fps-and-the-16ms-rule">The Physics of Smoothness: 60 FPS and the 16ms Rule</h2>
<p>Most mobile displays refresh at a rate of 60Hz (60 times per second). To provide a fluid experience, the Android system must generate a new frame every <strong>16.66 milliseconds</strong>.</p>
<p>1000 milliseconds / 60 frames per second = 16.66 ms per frame</p>
<p>The <strong>Choreographer</strong> is the system component that coordinates this timing. It waits for a <strong>VSYNC</strong> signal from the display hardware and then tells the Main Thread: <em>"You have 16ms to tell me what the next frame looks like."</em></p>
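<p>The budget arithmetic generalizes to higher refresh rates, which shrink the window even further. A quick Python check:</p>

```python
# Frame budget is simply 1000 ms divided by the refresh rate; 90Hz and 120Hz
# panels leave the Main Thread far less time per frame than the classic 60Hz.
budgets = {hz: 1000 / hz for hz in (60, 90, 120)}
for hz, ms in budgets.items():
    print(f"{hz} Hz -> {ms:.2f} ms per frame")
# 60 Hz -> 16.67 ms, 90 Hz -> 11.11 ms, 120 Hz -> 8.33 ms
```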
<h2 id="heading-when-things-go-wrong-jank-and-anr">When Things Go Wrong: Jank and ANR</h2>
<p>If you give the Main Thread a task that takes too long, you hit two levels of failure:</p>
<h3 id="heading-level-1-jank-dropped-frames">Level 1: Jank (Dropped Frames)</h3>
<p>If a task (like a heavy loop or complex layout calculation) takes <strong>20ms</strong>, the Main Thread misses the 16.66ms deadline. The Choreographer cannot draw the frame on time, so the previous frame stays on the screen. The user perceives this as a "stutter" or <strong>Jank</strong>.</p>
<h3 id="heading-level-2-anr-application-not-responding">Level 2: ANR (Application Not Responding)</h3>
<p>If the Main Thread is blocked for a significant period—typically <strong>5 seconds</strong> for an input event—the system triggers an <strong>ANR</strong>. The OS assumes the app has hung and gives the user a dialog to force-quit.</p>
<h2 id="heading-the-solution-background-threads">The Solution: Background Threads</h2>
<p>To protect the Main Thread, we offload "Heavy" or "Blocking" tasks to <strong>Background Threads</strong> (Worker Threads).</p>
<ul>
<li><p><strong>Blocking Operations:</strong> Networking (Retrofit), Database queries (Room), or heavy Image Processing.</p>
</li>
<li><p><strong>Modern Implementation:</strong> In modern Android, we use <strong>Kotlin Coroutines</strong> with <code>Dispatchers.IO</code> (for I/O tasks) or <code>Dispatchers.Default</code> (for CPU-intensive calculations).</p>
</li>
</ul>
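<p>As a language-neutral illustration of the same offloading pattern, here is a Python sketch using a thread pool (an analogue of dispatching work to <code>Dispatchers.IO</code>, not actual Android code):</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_io():
    time.sleep(0.05)             # stand-in for a network or disk call
    return "payload"

# Offload blocking work to a worker pool so the calling thread stays responsive,
# the way Android offloads I/O from the Main Thread to background dispatchers.
with ThreadPoolExecutor(max_workers=4) as pool:
    future = pool.submit(blocking_io)
    # The "main thread" is free here while the worker blocks on I/O.
    result = future.result()     # rendezvous: collect the value when ready
print("result:", result)
```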
<h2 id="heading-summary-checklist-for-android-engineers">Summary Checklist for Android Engineers</h2>
<ul>
<li><p><strong>Respect the 16ms budget:</strong> Keep <code>onDraw</code> and <code>onLayout</code> lean.</p>
</li>
<li><p><strong>Offload to Background:</strong> If it involves a disk, a network, or a complex loop, it doesn't belong on the Main Thread.</p>
</li>
<li><p><strong>Profile your app:</strong> Use the <a target="_blank" href="https://developer.android.com/studio/profile"><strong>Android Studio Profiler</strong></a> to visualize thread activity and identify "Janky" frames.</p>
</li>
</ul>
<p>By mastering hardware foundations and the Main Thread's role in UI rendering, developers can optimize app performance. Adhering to the 16ms frame budget and using background threads for heavy tasks prevent jank and ANR issues. Following these principles and using tools like the <a target="_blank" href="https://developer.android.com/studio/profile">Android Studio Profiler</a> ensures a seamless user experience.</p>
<p>For a deeper dive, feel free to follow these holy grails!</p>
<ul>
<li><p><a target="_blank" href="https://developer.android.com/topic/performance/vitals/render">https://developer.android.com/topic/performance/vitals/render</a></p>
</li>
<li><p><a target="_blank" href="https://developer.android.com/topic/performance/vitals/anr">https://developer.android.com/topic/performance/vitals/anr</a></p>
</li>
<li><p><a target="_blank" href="https://developer.android.com/agi/sys-trace/threads-scheduling">https://developer.android.com/agi/sys-trace/threads-scheduling</a></p>
</li>
<li><p><a target="_blank" href="https://medium.com/@sabag.ronen/the-dude-that-can-help-you-to-verify-that-you-hit-60-fps-in-android-bd9c1310553d">https://medium.com/@sabag.ronen/the-dude-that-can-help-you-to-verify-that-you-hit-60-fps-in-android-bd9c1310553d</a></p>
</li>
<li><p><a target="_blank" href="https://www.liquidweb.com/blog/difference-cpu-cores-thread/">https://www.liquidweb.com/blog/difference-cpu-cores-thread/</a></p>
</li>
</ul>
]]></content:encoded></item></channel></rss>