<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Chandradeo Arya's Tech Blog]]></title><description><![CDATA[DevOps &amp; Cloud Instructor, Curriculum Author, Solutions &amp; AI Architect]]></description><link>https://blog.instructorchandra.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1668150623291/t_Fvup2Xl.png</url><title>Chandradeo Arya&apos;s Tech Blog</title><link>https://blog.instructorchandra.com</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 11:16:13 GMT</lastBuildDate><atom:link href="https://blog.instructorchandra.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Power of Career Compounding: Lessons from 25 Years in Finance by Saurabh Jhalaria]]></title><description><![CDATA[About the speaker
Saurabh Jhalaria is a prominent finance leader and an Aditya Birla Scholar from IIM Bangalore (Class of 2000). He is a founding member of the InCred Group, where he currently serves as the Chief Investment Officer (CIO) for Alternat...]]></description><link>https://blog.instructorchandra.com/the-power-of-career-compounding-lessons-from-25-years-in-finance-by-saurabh-jhalaria</link><guid isPermaLink="true">https://blog.instructorchandra.com/the-power-of-career-compounding-lessons-from-25-years-in-finance-by-saurabh-jhalaria</guid><category><![CDATA[Career]]></category><category><![CDATA[leadership]]></category><category><![CDATA[mentorship]]></category><category><![CDATA[Entrepreneurship]]></category><category><![CDATA[Financial Services]]></category><category><![CDATA[career advice]]></category><category><![CDATA[Career Growth]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Fri, 07 Nov 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767039763166/c82699ce-f016-4f11-93a0-d5917a1a183d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-about-the-speaker">About the speaker</h3>
<p><strong>Saurabh Jhalaria</strong> is a prominent finance leader and an <strong>Aditya Birla Scholar</strong> from <strong>IIM Bangalore</strong> (Class of 2000). He is a founding member of the <strong>InCred Group</strong>, where he currently serves as the <strong>Chief Investment Officer (CIO)</strong> for Alternative Credit and heads the SME lending business. A <strong>CFA charterholder</strong>, he is recognized for his expertise in credit strategies and his active role as an angel investor in the Indian startup ecosystem.</p>
<p>Prior to InCred, Jhalaria spent over 13 years at <strong>Deutsche Bank</strong>, reaching the position of <strong>Managing Director</strong>. During his tenure in Singapore and Hong Kong, he led private financing and performing credit for India and Southeast Asia. He began his career at <strong>ICICI Securities</strong> after graduating from <strong>St. Xavier’s College, Kolkata</strong>, and has since become a key figure in evolving India’s non-banking financial landscape.</p>
<hr />
<h1 id="heading-summary-of-the-talk">Summary of the talk</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">A talk delivered at the Aditya Birla Scholars Reunion, November 2025</div>
</div>

<p>In finance, we talk a lot about terminal value and the compounding of capital. However, we often overlook how these same principles apply to our professional lives. If you want to build a career that doesn't just grow linearly but scales exponentially, you must understand how to let your skills, relationships, and experiences compound.</p>
<h3 id="heading-1-the-role-of-mentorship-in-navigating-transitions">1. The Role of Mentorship in Navigating Transitions</h3>
<p>One of the most vital ingredients for compounding is having a mentor. A mentor isn't just someone who gives you technical advice; they provide a perspective that you cannot see from where you are standing.</p>
<p>In my own career, having a mentor for the last 25 years has been transformative. Every three or four years, when I felt the itch to change my role or felt I had hit a plateau, my mentor helped me see the road from "A to B". They help you answer the critical question: <em>"What comes after this?"</em>. When you are in the middle of a role, you are focused on the task; a mentor is someone who has already watched the full "movie" of a career and can tell you how the current scene fits into the long-term plot.</p>
<h3 id="heading-2-own-the-outcome-not-just-the-task">2. Own the Outcome, Not Just the Task</h3>
<p>To truly compound, you must move away from a "clerical" mindset—the idea of "I did my work and I went home". High-growth careers are built by people who own the outcome.</p>
<p>Early in my career, I constantly looked at my boss and asked: <em>"What is my boss doing that I am currently unable to do?"</em>. Whether it was a technical gap or a soft skill, I realized that if I couldn't serve the needs of the person above me, I couldn't eventually take their role. Compounding only works if you are internally motivated to move out of your "box" and understand the broader organizational goals. If you restrict yourself to your job description, you stop learning, and the moment you stop learning, you stop compounding.</p>
<h3 id="heading-3-learn-on-the-job-and-beyond">3. Learn on the Job and Beyond</h3>
<p>I remember my time in Hong Kong, which was then the biggest market in Asia. Every evening, I would sit down and chart out option pricing or structural launches, trying to understand the "why" behind the market moves.</p>
<p>The idea is not just to have technical events or certificates but to ensure that every day on the job adds to your knowledge base. If you stay in the same job or company for a long time, the only way to keep the compounding effect alive is to go deeper and broader into the business than what is required of you.</p>
<h3 id="heading-4-the-leap-into-entrepreneurship">4. The Leap into Entrepreneurship</h3>
<p>Nine years ago, I decided to leave a stable banking career to help build InCred. The goal was to build something larger and more autonomous.</p>
<p>Entrepreneurship is a different way of compounding. It forces you to pick up additional skills rapidly and provides a level of autonomy that a traditional job might not. Looking back at the last several years—navigating the India growth story, the 2017-2018 cycles, and the COVID-19 crisis—I’ve realized that careers aren't a straight line. There will be years where you feel you are not moving, but those are often the years where the "base" for future compounding is being built.</p>
<h3 id="heading-5-integrity-the-non-negotiable">5. Integrity: The Non-Negotiable</h3>
<p>Finally, we must talk about the "non-compromisables." In finance, and especially in leadership, your reputation is your terminal value.</p>
<p>Mistakes will happen. When a client or a firm looks at a mistake, they ask: <em>"Is this irreversible? Is this a mistake of character or a mistake of judgment?"</em>. You can recover from a volumetric or a technical error, but an "integrity gap" is impossible to fix. It is very difficult to be a person of high integrity professionally if you are not an honest person personally. In the long run, the market identifies leaders not just by their intelligence, but by their fundamental character.</p>
<h3 id="heading-closing-thoughts">Closing Thoughts</h3>
<p>As you look at your own career, don't just think about the next two years. Think about the skills you are compounding for the next twenty. Find a mentor, own your outcomes, stay curious, and never compromise on your integrity. That is how you turn a job into a legacy.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[The Four Dimensions of Great Leadership: Beyond Academic Excellence by Shilpa Rangaswamy]]></title><description><![CDATA[About the speaker
Shilpa Rangaswamy is an Aditya Birla Scholar from the IIM Bangalore class of 2008 and a prominent consultant at Egon Zehnder. Based in Mumbai, she specializes in executive search, board advisory, and leadership development, with a p...]]></description><link>https://blog.instructorchandra.com/the-four-dimensions-of-great-leadership-beyond-academic-excellence-by-shilpa-rangaswamy</link><guid isPermaLink="true">https://blog.instructorchandra.com/the-four-dimensions-of-great-leadership-beyond-academic-excellence-by-shilpa-rangaswamy</guid><category><![CDATA[leadership]]></category><category><![CDATA[#executivesearch]]></category><category><![CDATA[Soft Skills]]></category><category><![CDATA[Career]]></category><category><![CDATA[career advice]]></category><category><![CDATA[Career Growth]]></category><category><![CDATA[Career development ]]></category><category><![CDATA[curiosity]]></category><category><![CDATA[learning]]></category><category><![CDATA[determination]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Fri, 07 Nov 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767040370301/c8d0fb07-cfbd-4fdd-8ccc-3c0674cbacd8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-about-the-speaker">About the speaker</h3>
<p><strong>Shilpa Rangaswamy</strong> is an <strong>Aditya Birla Scholar</strong> from the <strong>IIM Bangalore</strong> class of 2008 and a prominent consultant at <strong>Egon Zehnder</strong>. Based in Mumbai, she specializes in executive search, board advisory, and leadership development, with a particular focus on the Financial Services and Private Capital practices. She is also a core member of the firm’s Family Business Advisory, where she helps organizations navigate succession and leadership transitions.</p>
<p>Prior to her career in leadership advisory, Shilpa was a consultant at <strong>McKinsey &amp; Company</strong>, serving major banks and conglomerates across India and Southeast Asia. She began her professional journey as a software engineer at <strong>Tata Consultancy Services</strong> after earning her engineering degree from <strong>Mumbai University</strong>. Today, she is recognized for her commitment to driving diversity in leadership and helping firms build resilient executive teams.</p>
<hr />
<h1 id="heading-summary-of-the-talk">Summary of the talk</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">A talk delivered at the Aditya Birla Scholars Reunion, November 2025</div>
</div>

<p>In my work at Egon Zehnder, I spend a significant amount of time with senior leaders—from rising executives to chairmen as old as 78. Whether I am coaching them through career choices or advising boards on CEO successions, one question remains central: <em>What makes a great leader great?</em>.</p>
<p>While academic excellence and professional "hunger" are baseline requirements for a group like the Aditya Birla Scholars, they eventually become "hygiene factors". Once you reach a certain level, everyone is smart and everyone is ambitious. To differentiate yourself, you must look at four specific traits that determine how much further you can go.</p>
<h3 id="heading-1-curiosity-the-child-like-wonder">1. Curiosity: The "Child-Like" Wonder</h3>
<p>The first trait is curiosity—a child-like willingness to never stop asking, "I wonder why?" or "What if?". Curiosity is not just about the world around you; it is about self-evolution.</p>
<p>I know very successful leaders who make it a point to speak to three people outside their industry every month just to gain a fresh perspective. They aren't afraid to ask basic questions like, "Why is it done this way?". This trait allows a leader to pivot—for example, moving from a lifelong career in banking to a leadership role in tech. If you stop being curious, you stop growing.</p>
<h3 id="heading-2-insight-connecting-the-dots">2. Insight: Connecting the Dots</h3>
<p>Insight is fundamentally different from intelligence. Intelligence is numerical and memory-based; it’s the ability to read and remember information. Insight, however, is the ability to look at disparate data points and see a pattern that others miss.</p>
<p>Great leaders can sense a shift in the market quarters before it actually manifests in the numbers. When you ask them how they knew, they can explain the "why" by connecting experiences across different sectors. They don't just see the data; they see the story the data is trying to tell.</p>
<h3 id="heading-3-determination-the-response-to-failure">3. Determination: The Response to Failure</h3>
<p>When boards interview CEO candidates, one of the most standard questions is: <em>"Tell us about your most miserable failure"</em>. This is a test of determination.</p>
<p>Determination is not just about working hard; it is your ability to face a challenge and not just survive it, but be excited by it. When you are asked to run a business that is in "terrible shape," is your reaction to find an exit, or is it to say, "This is going to be interesting, and I will be much better for it on the other side"?. Your ability to "hold the line" during a crisis is what marks you as a leader.</p>
<h3 id="heading-4-engagement-the-most-underrated-trait">4. Engagement: The Most Underrated Trait</h3>
<p>Engagement is perhaps the most underrated quality in leadership. In our early careers, we often look down on the "soft skills" of people management. However, when we do CEO searches, nine times out of ten, the person who gets the job is the one people <em>want</em> to work for.</p>
<p>Leadership is ultimately about how you make people feel. Ten years from now, people won't remember your spreadsheets; they will remember how you treated the person sitting next to you. How you treat the person serving you coffee or a junior team member says more about your leadership potential than your technical output.</p>
<h3 id="heading-navigating-the-what-next">Navigating the "What Next?"</h3>
<p>As you navigate your career, you will face moments of doubt—periods where the "energy factor" dips or you feel boxed in. During these times, comparison is often the biggest driver of anxiety. You see a peer become a partner at a top firm and wonder if you are falling behind.</p>
<p>My advice is to return to the core: Are you still curious? Are you still gaining insights? If you can't answer those questions with conviction, it may be time for a change. But remember, frequent job-hopping without delivery is a red flag. Boards look for leaders who have stayed long enough to see the results of their decisions.</p>
<h3 id="heading-closing-thoughts">Closing Thoughts</h3>
<p>Great leadership isn't about being "Sampann" (perfectly complete). It’s about the constant practice of these four dimensions. Listen to your heart, stay curious, and focus on how you engage with the world around you. That is what determines how far the "roller coaster" of a professional career will take you.</p>
]]></content:encoded></item><item><title><![CDATA[Exploration & Discovery: Career paths when the future is uncertain by Arijit Sarkar]]></title><description><![CDATA[About the speaker
Arijit Sarkar is an Aditya Birla Scholar (class of 2010) and an accomplished investment professional currently serving as a Director of Credit at Trifecta Capital. With over 13 years of experience, he has held diverse roles as an in...]]></description><link>https://blog.instructorchandra.com/exploration-and-discovery-career-paths-when-the-future-is-uncertain-by-arijit-sarkar</link><guid isPermaLink="true">https://blog.instructorchandra.com/exploration-and-discovery-career-paths-when-the-future-is-uncertain-by-arijit-sarkar</guid><category><![CDATA[Career]]></category><category><![CDATA[career advice]]></category><category><![CDATA[Career Growth]]></category><category><![CDATA[Uncertainty]]></category><category><![CDATA[startup]]></category><category><![CDATA[ Startup Lessons]]></category><category><![CDATA[strategic planning]]></category><category><![CDATA[Strategic Thinking]]></category><category><![CDATA[risk management]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Fri, 07 Nov 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767041118657/a3e7b5f4-fe55-4c02-8f98-ff766ad1072b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-about-the-speaker">About the speaker</h3>
<p><strong>Arijit Sarkar</strong> is an <strong>Aditya Birla Scholar</strong> (class of 2010) and an accomplished investment professional currently serving as a <strong>Director of Credit</strong> at <strong>Trifecta Capital</strong>. With over 13 years of experience, he has held diverse roles as an investor, entrepreneur, and senior business leader across sectors such as fintech, healthcare, and retail. His investment portfolio includes prominent companies like BharatPe, Vedantu, and Infra.Market, where he focuses on leveraging technology to create non-linear value.</p>
<p>Sarkar’s career history includes serving as a Strategy Consultant at <strong>McKinsey &amp; Company</strong>, where he advised large financial institutions on growth and capital management. He also founded the fintech startup <strong>Tavaga</strong> and served as the CEO of <strong>Sugha Vazhvu</strong>, a sustainable primary healthcare business. Academically, he holds a degree from <strong>IIT Bombay</strong> (2006) and an MBA from <strong>IIM Bangalore</strong> (2012).</p>
<hr />
<h1 id="heading-summary-of-the-talk">Summary of the talk</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">A talk delivered at the Aditya Birla Scholars Reunion, November 2025</div>
</div>

<p>Life and careers rarely move in a straight line. While we often hear about the power of compounding within a single organization, my journey has been almost the opposite—one defined by exploration, pivoting between sectors, and navigating the friction of an uncertain future.</p>
<h3 id="heading-1-beyond-the-academic-pedigree">1. Beyond the Academic Pedigree</h3>
<p>Coming from a background like the Aditya Birla Scholars, we often rely on our academic credentials—IITs, IIMs, and prestigious scholarships. Early in my journey, I followed the traditional technical path, influenced by a family of PhDs and scientists. I thought my future was in mathematical finance because I understood the "math" of it.</p>
<p>However, as I progressed, I realized that technical skills are just the baseline. The real world of business—the non-technical side—was just starting to open up during my undergraduate days with the arrival of consulting firms and investment banks. I had to learn that understanding a financial problem is not the same as understanding a business.</p>
<h3 id="heading-2-the-power-of-the-privilege-trap-and-networks">2. The Power of the "Privilege Trap" and Networks</h3>
<p>We often talk about the "privilege trap"—having access to elite institutions gives you the flexibility to take risks that others cannot. I am a beneficiary of the network this scholarship provides.</p>
<p>When I was 22, I took a leap of faith and moved to Chennai—a city where I had no friends—to work with a small incubator focused on social justice and microfinance. I didn't fully understand that world at the time, but the network of high-quality mentors I met there allowed me to see how larger businesses are built from the ground up. Your network is not just for finding jobs; it is the safety net that gives you the confidence to explore when the market is in flux.</p>
<h3 id="heading-3-transitioning-from-mathematical-to-human">3. Transitioning from "Mathematical" to "Human"</h3>
<p>My career moved from the mathematical rigor of finance to the "human" complexity of consulting and entrepreneurship. In consulting at McKinsey, I began to see how institutions actually breathe—how banks operate, how collections work, and how strategy is executed on the ground.</p>
<p>Later, starting my own venture taught me a level of accountability and satisfaction that you simply cannot get by analyzing cases from the outside. Even if a product doesn't scale exactly as planned, the learning you gain from building something of your own is a form of compounding that stays with you forever.</p>
<h3 id="heading-4-evaluating-the-startup-path">4. Evaluating the Startup Path</h3>
<p>Now, as an investor, I look at the startup ecosystem differently. When the future is uncertain, many wonder if they should join a startup or start one. My advice is to rationalize the decision, but don't let the "math" make it for you.</p>
<p>When we invest, we look for founders who have a high "comfort with ambiguity". We look for people who can see what is possible without being bogged down by the limitations of the present. If you are in your 20s, the risk of failure is at its lowest. This is the best time to leverage your network and try something that conflicts with the "traditional" path.</p>
<h3 id="heading-closing-thoughts">Closing Thoughts</h3>
<p>Don't be afraid if your career doesn't look like a straight line. Whether you are compounding within a firm or exploring through startups, what matters is whether you are learning and staying flexible. The advantage of being part of this elite community is that you have the privilege to earn, learn, and lead in any direction you choose.</p>
]]></content:encoded></item><item><title><![CDATA[Inside the AWS Community with Jason: A Candid Conversation in Bengaluru]]></title><description><![CDATA[Inside the AWS Community with Jason: A Candid Conversation in Bengaluru
Recently, at AWS Community Day Bengaluru, I had the pleasure of sitting down with Jason, the Community Head for the AWS Community Builders program. Jason’s journey from a self-ta...]]></description><link>https://blog.instructorchandra.com/inside-the-aws-community-with-jason-a-candid-conversation-in-bengaluru</link><guid isPermaLink="true">https://blog.instructorchandra.com/inside-the-aws-community-with-jason-a-candid-conversation-in-bengaluru</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS Community Builder]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[aws learning ]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Sat, 24 May 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-inside-the-aws-community-with-jason-a-candid-conversation-in-bengaluru"><strong>Inside the AWS Community with Jason: A Candid Conversation in Bengaluru</strong></h2>
<p>Recently, at <strong>AWS Community Day Bengaluru</strong>, I had the pleasure of sitting down with <strong>Jason</strong>, the Community Head for the <strong>AWS Community Builders</strong> program. Jason’s journey from a self-taught tech enthusiast to a leader at AWS is as inspiring as the vibrant developer community he oversees.</p>
<p>We talked about everything from his unconventional career path to the incredible "India scale" of the AWS community. Here is the full podcast.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=lbcgmwTVxY4">https://www.youtube.com/watch?v=lbcgmwTVxY4</a></div>
<p> </p>
<hr />
<p><strong>Chandra:</strong> Jason, most welcome to the podcast! Let’s start with your career at AWS. How did it all begin for you?</p>
<p><strong>Jason:</strong> Thank you, Chandra! I just celebrated my fifth anniversary with AWS. I was actually hired right at the start of the pandemic, in March 2020. I remember interviewing for the role while I was on vacation in Mexico with my family—which was pretty funny looking back. Starting a new job when the whole world was shutting down was definitely a strange experience.</p>
<p><strong>Chandra:</strong> Did you apply specifically for the community program, or did your journey start elsewhere?</p>
<p><strong>Jason:</strong> I actually wasn't part of the AWS community before this. I’m not a technical builder and I had never used AWS. My background is in marketing and communications. When I saw a job listing for a community program within the marketing department, it felt like the perfect fit for my skill set.</p>
<p><strong>Chandra:</strong> That’s interesting! A lot of people are curious about your background. How did you get into this space?</p>
<p><strong>Jason:</strong> I studied applied communications in school—things like public relations, writing, and organizational dynamics. But I never had a "traditional" communications job. For years, I was self-employed; I ran technology blog sites, was a YouTuber long before it was mainstream, and was a Microsoft MVP for 15 years for my contributions to their technical community.</p>
<p>Eventually, my freelance work for Microsoft transitioned into a role at HTC, where I worked on their Android phone community. From there, I moved to AT&amp;T Business to manage a community of small business owners, and finally, I landed here at AWS.</p>
<p><strong>Chandra:</strong> AWS is known for its incredible community. From your lens, what makes the program so special?</p>
<p><strong>Jason:</strong> When I joined, I was amazed by the passion. I didn't know much about user groups back then, but my job was to tap into that energy and build a new type of community from scratch. We launched the <strong>AWS Community Builders</strong> program in beta in June 2020—just 60 days after I started. At Amazon, we have a leadership principle called "Bias for Action," so we just figured out the best way to get it out the door.</p>
<p><strong>Chandra:</strong> You’re quite a celebrity here! People are lining up for selfies. Did you expect this kind of reception in India?</p>
<p><strong>Jason:</strong> I had a hunch! At events like re:Invent, I take a lot of pictures, but I knew coming to India would be on another level.</p>
<p><strong>Chandra:</strong> What is the most unique thing you’ve noticed about the developer community here?</p>
<p><strong>Jason:</strong> It’s what I call <strong>"India scale"</strong>. Everything is just bigger here. For example, when we opened Community Builder applications this year, we had about 4,700 total—and 2,000 of those were from India alone. This Community Day in Bengaluru feels like a professional AWS summit; the energy and the population of young developers are just incredible.</p>
<p><strong>Chandra:</strong> You actually chose to attend this Community Day over the AWS Summit that happened two weeks ago. Why was that?</p>
<p><strong>Jason:</strong> Two reasons. At a Summit, my job is mostly standing in a booth and shaking hands. But at a Community Day, I get to speak, present, and really get to the heart of the community. I had heard how amazing the Indian community events were, and I wanted to experience it firsthand.</p>
<p><strong>Chandra:</strong> We’ve seen great growth in India, but what about regions where the community is smaller, like Saudi Arabia? Do you have plans for those areas?</p>
<p><strong>Jason:</strong> It’s not always easy growing in new regions, but our strategy is to leverage existing members. We encourage our builders in places like Saudi Arabia to share the application and grow the base. User groups are the most important part of this; if we can get a user group started in a city, the community grows from there, eventually producing Community Builders and AWS Heroes.</p>
<p><strong>Chandra:</strong> You also mentioned "Cloud Clubs" in your presentation. Can you tell us more about that?</p>
<p><strong>Jason:</strong> <strong>Cloud Clubs</strong> is a newer initiative, launched around late 2023 or early 2024. It’s essentially a user group specifically for students at colleges and universities. It’s led by students, for students, to get them interested in AWS early on. We’re already seeing Cloud Club captains transition into becoming Community Builders.</p>
<p><strong>Chandra:</strong> How many countries have you visited to build these communities?</p>
<p><strong>Jason:</strong> Honestly, not that many yet! Last year was my first big international push—I went to Thailand and Malaysia. Malaysia was actually the first Community Day I ever attended. This year, India is my first stop, followed by Singapore, Indonesia, and Argentina. I’m trying to hit as many places as possible.</p>
<p><strong>Chandra:</strong> For someone in a small city wanting to start a local user group, what is your advice?</p>
<p><strong>Jason:</strong> We encourage people to partner with existing groups first. However, if you’re in a city where it’s not realistic for people to travel to the nearest group, we absolutely encourage you to start your own. Even in a "small" city of a million people—which I know is small by "India scale"—if there are people who want to learn, we want to help them connect.</p>
<p><strong>Chandra:</strong> Finally, how has your actual stay in India been so far?</p>
<p><strong>Jason:</strong> It had a rough start! I arrived at 4:00 AM after a long flight, and then the airline lost my bag. But now I’m well-rested, I have new clothes, and I’m looking forward to doing some "photo walking" around the city this weekend.</p>
<p>And the food has been great! Last night I had corn and cheese samosas from a food cart—I asked for the least spicy thing they had, and it was still pretty spicy for me, but delicious. I also had a sweet Lassi and some Gulab Jamun, which I’ve had before from my neighbors back home. I love Indian food!</p>
<p><strong>Chandra:</strong> Jason, it’s been a pleasure. I hope your stay is memorable, and we look forward to seeing you back in India soon.</p>
<p><strong>Jason:</strong> Thank you for having me! It’s been a great event.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[MySQL to PostgreSQL: A Hands-On AWS DMS Migration Guide with AWS RDS Aurora]]></title><description><![CDATA[Database migration
Databases are important part of any applications. But due to various reasons like cost, compliance, application compatibility etc. database migrations are required. In this blog we will discuss importance of doing database migratio...]]></description><link>https://blog.instructorchandra.com/mysql-to-postgresql-a-hands-on-aws-dms-migration-guide-with-aws-rds-aurora</link><guid isPermaLink="true">https://blog.instructorchandra.com/mysql-to-postgresql-a-hands-on-aws-dms-migration-guide-with-aws-rds-aurora</guid><category><![CDATA[AWS]]></category><category><![CDATA[Databases]]></category><category><![CDATA[migration]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[rds]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Wed, 22 Jan 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-database-migration">Database migration</h2>
<p>Databases are a critical part of any application, but for reasons such as cost, compliance, or application compatibility, migrating a database is sometimes required. In this blog we will discuss why database migration must be done the right way, and use AWS DMS for the purpose with a hands-on example.</p>
<h2 id="heading-why-a-live-database-migration-is-challenge">Why a live database migration is a challenge</h2>
<p>A live migration requires keeping the database operational during the transition, which risks data inconsistency, extended downtime, and increased manual effort. Database migration is challenging for several reasons:</p>
<ul>
<li><p><strong>Data Type Differences:</strong> MySQL and PostgreSQL differ in their data types, schema conventions, and naming rules, and these discrepancies must be handled column by column.</p>
</li>
<li><p><strong>Schema Conversion:</strong> Adapting schema definitions and constraints to match PostgreSQL requirements.</p>
</li>
<li><p><strong>Downtime:</strong> Potential service disruption if the database is live and critical.</p>
</li>
<li><p><strong>Manual Effort:</strong> Increased risk of human error and additional time in manual configuration.</p>
</li>
<li><p><strong>Data Integrity:</strong> Ensuring consistency and accuracy during data transformation and transfer.</p>
</li>
</ul>
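<p>To make the first point concrete, here is an illustrative (not exhaustive) sketch of how a few common MySQL types usually map to PostgreSQL; the exact choices depend on your schema, and tools like DMS and the AWS Schema Conversion Tool apply similar conversions automatically:</p>
<pre><code class="lang-python"># Illustrative MySQL-to-PostgreSQL type mapping (not exhaustive).
MYSQL_TO_POSTGRES = {
    "TINYINT(1)": "BOOLEAN",          # MySQL has no native boolean type
    "DATETIME": "TIMESTAMP",
    "DOUBLE": "DOUBLE PRECISION",
    "LONGTEXT": "TEXT",
    "BLOB": "BYTEA",
    "INT AUTO_INCREMENT": "SERIAL",   # or an identity column
}

def convert_type(mysql_type: str) -> str:
    """Return the PostgreSQL equivalent, defaulting to the input type."""
    return MYSQL_TO_POSTGRES.get(mysql_type.upper(), mysql_type)
</code></pre>
<p>For example, <code>convert_type("datetime")</code> yields <code>TIMESTAMP</code>, while a type with no special handling, like <code>VARCHAR(50)</code>, passes through unchanged.</p>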
<p><img src="https://blog.ankitsanghvi.in/content/images/2022/09/migr.jpg" alt /></p>
<h3 id="heading-different-ways-to-perform-database-migration">Different ways to perform database migration</h3>
<p>There are different ways to perform database migration, and each approach has its own pros and cons.</p>
<p>One of the most popular approaches is using <strong>SQL dumps</strong>: use <code>mysqldump</code> to export the data and import it into PostgreSQL, but this is manual and error prone. Similarly, ETL tools like Talend or Apache NiFi can be used, and there are specialized utilities like <code>pgloader</code> for automated schema and data conversion as well.</p>
<p>But all of these approaches carry significant risks of data loss, downtime, and human error. This is where AWS DMS comes into the picture.</p>
<h2 id="heading-what-is-aws-dms"><strong>What is AWS DMS?</strong></h2>
<p>AWS DMS is a managed service from AWS that facilitates database migration to AWS. It supports both homogeneous (same database engine) and heterogeneous (different database engines) migrations.</p>
<h3 id="heading-when-to-use-aws-dms"><strong>When to Use AWS DMS</strong></h3>
<ul>
<li><p>Migrating on-premises databases to the cloud.</p>
</li>
<li><p>Modernizing or consolidating databases across different environments.</p>
</li>
<li><p>Enabling continuous data replication during live migrations with minimal downtime.</p>
</li>
</ul>
<h3 id="heading-benefits-of-aws-dms"><strong>Benefits of AWS DMS</strong></h3>
<ul>
<li><p><strong>Cost-Efficient:</strong> Reduces overhead with a managed solution.</p>
</li>
<li><p><strong>Minimal Downtime:</strong> Supports continuous data replication, ensuring business continuity.</p>
</li>
<li><p><strong>Simplified Management:</strong> Automates replication and offers robust monitoring tools.</p>
</li>
<li><p><strong>Flexibility:</strong> Adapts to various migration scenarios, whether homogeneous or heterogeneous.</p>
</li>
</ul>
<h2 id="heading-how-to-setup-dms">How to setup DMS</h2>
<p>Migrating databases between different engines or environments can be challenging. As discussed earlier, among the various approaches AWS DMS provides the most robust way to perform this task. Let’s set up DMS using Terraform.</p>
<h2 id="heading-automating-aws-dms-cluster-deployment-with-terraform"><strong>Automating AWS DMS Cluster Deployment with Terraform</strong></h2>
<hr />
<p>Setting up AWS Database Migration Service (DMS) with Terraform makes the process highly automated and reproducible. In this lab, we’ll create a full Terraform configuration that provisions a DMS replication instance, configures source and target endpoints, defines a replication task for migrating an entire schema, and sets up the necessary IAM roles for seamless operation.</p>
<hr />
<p><strong>1. AWS Provider Setup</strong></p>
<p>Terraform starts by configuring the AWS provider. This tells Terraform where to deploy your resources. Adjust the region as needed:</p>
<pre><code class="lang-hcl">provider "aws" {
  region = "us-east-1"  # Adjust region as needed
}
</code></pre>
<p><strong>2. Creating the DMS Replication Instance</strong></p>
<p>The replication instance is the core compute resource for DMS. It handles the data migration process. Here’s the complete resource configuration:</p>
<pre><code class="lang-hcl">resource "aws_dms_replication_instance" "dms_instance" {
  replication_instance_id    = "dms-instance"
  replication_instance_class = "dms.t3.medium"  # A small instance class to keep costs down
  allocated_storage          = 50               # Modest storage allocation
  publicly_accessible        = true             # Keep this false in a prod env
}
</code></pre>
<p>This code creates a replication instance named <code>dms-instance</code> using the <code>dms.t3.medium</code> class and allocates 50 GB of storage.</p>
<hr />
<p><strong>3. Configuring DMS Endpoints</strong></p>
<p>Endpoints specify where data is coming from (source) and where it is going (target). This example uses an Aurora MySQL database as the source and an Aurora PostgreSQL database as the target.</p>
<p><strong>Source Endpoint – Aurora MySQL</strong></p>
<pre><code class="lang-hcl">resource "aws_dms_endpoint" "source_mysql" {
  endpoint_id   = "source-mysql-endpoint"
  endpoint_type = "source"
  engine_name   = "mysql"

  username      = "admin"
  password      = "SUPERSECRET"
  server_name   = "rds-mysql-url.us-east-1.rds.amazonaws.com"
  port          = 3306
  database_name = "sample_db"
}
</code></pre>
<p><strong>Target Endpoint – Aurora PostgreSQL</strong></p>
<pre><code class="lang-hcl">resource "aws_dms_endpoint" "target_postgres" {
  endpoint_id   = "target-postgres-endpoint"
  endpoint_type = "target"
  engine_name   = "postgres"

  username      = "master"
  password      = "SUPERSECRET"
  server_name   = "rds-postgres-url.us-east-1.rds.amazonaws.com"
  port          = 5432
  database_name = "sample_db"
}
</code></pre>
<p>Here we define the source and target endpoints with their connection details so that DMS knows where to read from and where to load the data. Note that the credentials are hardcoded only for simplicity; in production, inject them via Terraform variables or a secrets store instead.</p>
<p><strong>4. Defining the DMS Replication Task</strong></p>
<p>The replication task orchestrates the data migration. This task defines what to migrate, the migration type, and includes detailed settings to control the behavior during the migration.</p>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_dms_replication_task"</span> <span class="hljs-string">"dms_task"</span> {
  replication_task_id      = <span class="hljs-attr">"mysql-to-postgres-migration"</span>
  replication_instance_arn = aws_dms_replication_instance.dms_instance.replication_instance_arn
  source_endpoint_arn      = aws_dms_endpoint.source_mysql.endpoint_arn
  target_endpoint_arn      = aws_dms_endpoint.target_postgres.endpoint_arn

  migration_type = <span class="hljs-attr">"full-load"</span>  # Use <span class="hljs-attr">"full-load-and-cdc"</span> for ongoing changes

  table_mappings = &lt;&lt;EOF
{
  <span class="hljs-attr">"rules"</span>: [
    {
      <span class="hljs-attr">"rule-type"</span>: <span class="hljs-string">"selection"</span>,
      <span class="hljs-attr">"rule-id"</span>: <span class="hljs-string">"1"</span>,
      <span class="hljs-attr">"rule-name"</span>: <span class="hljs-string">"IncludeAllTables"</span>,
      <span class="hljs-attr">"object-locator"</span>: {
        <span class="hljs-attr">"schema-name"</span>: <span class="hljs-string">"sample_db"</span>,
        <span class="hljs-attr">"table-name"</span>: <span class="hljs-string">"%"</span>
      },
      <span class="hljs-attr">"rule-action"</span>: <span class="hljs-string">"include"</span>
    }
  ]
}
EOF

  replication_task_settings = &lt;&lt;EOF
{
  <span class="hljs-attr">"TargetMetadata"</span>: {
      <span class="hljs-attr">"TargetSchema"</span>: <span class="hljs-string">""</span>,
      <span class="hljs-attr">"SupportLobs"</span>: <span class="hljs-literal">true</span>,
      <span class="hljs-attr">"FullLobMode"</span>: <span class="hljs-literal">false</span>,
      <span class="hljs-attr">"LobChunkSize"</span>: <span class="hljs-number">64</span>,
      <span class="hljs-attr">"LimitedSizeLobMode"</span>: <span class="hljs-literal">true</span>,
      <span class="hljs-attr">"LobMaxSize"</span>: <span class="hljs-number">32</span>,
      <span class="hljs-attr">"InlineLobMaxSize"</span>: <span class="hljs-number">0</span>,
      <span class="hljs-attr">"LoadMaxFileSize"</span>: <span class="hljs-number">0</span>,
      <span class="hljs-attr">"ParallelLoadThreads"</span>: <span class="hljs-number">0</span>,
      <span class="hljs-attr">"ParallelLoadBufferSize"</span>: <span class="hljs-number">0</span>,
      <span class="hljs-attr">"BatchApplyEnabled"</span>: <span class="hljs-literal">false</span>,
      <span class="hljs-attr">"TaskRecoveryTableEnabled"</span>: <span class="hljs-literal">false</span>
  },
  <span class="hljs-attr">"FullLoadSettings"</span>: {
      <span class="hljs-attr">"TargetTablePrepMode"</span>: <span class="hljs-string">"DO_NOTHING"</span>,
      <span class="hljs-attr">"CreatePkAfterFullLoad"</span>: <span class="hljs-literal">false</span>,
      <span class="hljs-attr">"StopTaskCachedChangesApplied"</span>: <span class="hljs-literal">false</span>,
      <span class="hljs-attr">"StopTaskCachedChangesNotApplied"</span>: <span class="hljs-literal">false</span>,
      <span class="hljs-attr">"BatchApplyEnabled"</span>: <span class="hljs-literal">false</span>,
      <span class="hljs-attr">"BatchApplyPreserveTransaction"</span>: <span class="hljs-literal">false</span>,
      <span class="hljs-attr">"BatchApplyTimeoutMin"</span>: <span class="hljs-number">0</span>,
      <span class="hljs-attr">"BatchApplySkipErrorTables"</span>: <span class="hljs-literal">false</span>
  },
  <span class="hljs-attr">"Logging"</span>: {
      <span class="hljs-attr">"EnableLogging"</span>: <span class="hljs-literal">true</span>
  }
}
EOF
}
</code></pre>
<p><strong>Important details:</strong></p>
<ul>
<li><p><strong>Task Identification:</strong> The replication_task_id uniquely labels the task.</p>
</li>
<li><p><strong>Migration Type:</strong> It uses full-load to migrate the entire schema and data at once.</p>
</li>
<li><p><strong>Table Mappings:</strong> The JSON mapping rule ensures that all tables in the <code>sample_db</code> schema are included.</p>
</li>
<li><p><strong>Task Settings:</strong> These JSON blocks provide detailed configurations like LOB handling and logging options, ensuring that the task behaves as expected.</p>
</li>
</ul>
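<p>As a sanity check, the table-mapping JSON from the task above can be validated programmatically before applying the configuration. A minimal sketch:</p>
<pre><code class="lang-python">import json

# The selection rule from the replication task above.
TABLE_MAPPINGS = """
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "IncludeAllTables",
      "object-locator": {"schema-name": "sample_db", "table-name": "%"},
      "rule-action": "include"
    }
  ]
}
"""

def validate_mappings(raw: str) -> list:
    """Parse the mapping JSON and return the selection rules it contains."""
    rules = json.loads(raw)["rules"]
    selections = [r for r in rules if r["rule-type"] == "selection"]
    if not selections:
        raise ValueError("at least one selection rule is required")
    return selections

rules = validate_mappings(TABLE_MAPPINGS)
</code></pre>
<p>Catching a malformed mapping here is much cheaper than watching the replication task fail after the instance has been provisioned.</p>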
<hr />
<p><strong>5. Configuring IAM Roles for DMS</strong></p>
<p>AWS DMS requires specific IAM roles to interact with your VPC and CloudWatch logs. Two roles are created—one for VPC management and another for CloudWatch logging.</p>
<p><strong>IAM Role for VPC Management</strong></p>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"dms_vpc_role"</span> {
  name = <span class="hljs-attr">"dms-vpc-role"</span>
  assume_role_policy = jsonencode({
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
      {
        <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
        <span class="hljs-attr">"Principal"</span>: {
          <span class="hljs-attr">"Service"</span>: <span class="hljs-string">"dms.amazonaws.com"</span>
        },
        <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"sts:AssumeRole"</span>
      }
    ]
  })
}

resource <span class="hljs-string">"aws_iam_role_policy_attachment"</span> <span class="hljs-string">"dms_vpc_role_attach"</span> {
  role       = aws_iam_role.dms_vpc_role.name
  policy_arn = <span class="hljs-attr">"arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole"</span>
}
</code></pre>
<p>Here, we are granting the DMS service permission to assume the role.</p>
<p><strong>IAM Role for CloudWatch Logs</strong></p>
<p>Here we handle the logging permissions. The policy attachment ensures DMS can write logs to CloudWatch for monitoring and troubleshooting.</p>
<pre><code class="lang-hcl">resource "aws_iam_role" "dms_cloudwatch_logs_role" {
  name = "dms-cloudwatch-logs-role"
  assume_role_policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Service": "dms.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "dms_cloudwatch_logs_role_attach" {
  role       = aws_iam_role.dms_cloudwatch_logs_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonDMSCloudWatchLogsRole"
}
</code></pre>
<h2 id="heading-creating-the-dms-cluster">Creating the DMS cluster</h2>
<p>Let’s apply the Terraform configuration and observe the changes.</p>
<pre><code class="lang-bash">terraform apply
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020164189/ddd8c115-aae1-4958-a223-ca09d47d63a1.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020295151/bd6d3656-b07f-4c86-810f-ccc2e2d0e28e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-observe-the-created-dms-service">Observe the created DMS service</h3>
<p>If everything is configured correctly, Terraform should start creating the DMS replication instance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020366988/7d8dd19b-3ccc-4d27-9a98-0cacc174c9d0.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020403735/6cac9292-5a1f-49a3-b728-c845341b8864.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-pre-migration">Pre-migration</h2>
<p>A pre-migration assessment is an important step that analyzes the source database to identify compatibility issues, estimate migration effort, and plan the migration strategy.</p>
<p><strong>Why is it important?</strong></p>
<ul>
<li><p>Detects schema and data type mismatches.</p>
</li>
<li><p>Estimates required changes and risks.</p>
</li>
<li><p>Helps choose the right tools and migration path.</p>
</li>
<li><p>Reduces surprises during migration.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020525437/578e6ed7-e5f2-4de9-b5ee-8028b272a02f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020548998/9056adb3-cdd1-4416-af18-f30ffe6b3e27.png" alt class="image--center mx-auto" /></p>
<p>Once you hit create migration assessment, it starts the process and gives you the migration assessment result.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020578737/2d5ba1fd-7994-411d-a45b-de8ce5e9362e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020597305/faa60f52-e94f-4e78-90bf-59dd6b9abe8c.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-reviewing-migration-results">Reviewing migration results</h3>
<p>After migration, AWS DMS can store detailed logs and migration reports in an S3 bucket. When reviewing them, focus on these points:</p>
<ul>
<li><p>Verify which tables and rows were migrated.</p>
</li>
<li><p>Identify any skipped or failed records.</p>
</li>
<li><p>Troubleshoot issues using detailed error logs.</p>
</li>
<li><p>Ensure data consistency between source and target.</p>
</li>
</ul>
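<p>One simple consistency check is comparing per-table row counts between source and target. A minimal sketch, assuming the counts have already been gathered by querying each database:</p>
<pre><code class="lang-python">def compare_row_counts(source: dict, target: dict) -> dict:
    """Return tables whose row counts differ (or are missing) between
    source and target; an empty dict means the counts match."""
    mismatches = {}
    for table, count in source.items():
        if target.get(table) != count:
            mismatches[table] = {"source": count, "target": target.get(table)}
    return mismatches

# Hypothetical example: 'orders' was only partially migrated in this run.
src = {"users": 1200, "orders": 4500}
tgt = {"users": 1200, "orders": 4499}
diff = compare_row_counts(src, tgt)
</code></pre>
<p>Row counts alone don’t prove the data matches, but a mismatch is an immediate signal to dig into the task logs for skipped or failed records.</p>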
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020647248/1360aeb5-2a52-4372-81b7-3ff85b6b797e.png" alt class="image--center mx-auto" /></p>
<p>This is what a sample report looks like:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020687703/62e53dcd-84eb-4f87-b282-c10cd93a8c42.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-finally-lets-start-migration">Finally, let’s start migration</h2>
<p>Now comes the exciting part—starting the migration!</p>
<p>With everything configured and validated, kicking off the DMS task sets your data in motion. Watch as your tables begin flowing from MySQL to PostgreSQL, live and in real time. It’s the moment where planning turns into action, all with minimal downtime and full control via DMS.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020854294/f9e54aad-1b48-418e-9118-5cc2af833eae.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743020870672/0f0b95ab-a628-44e6-8a34-2e1551b76885.png" alt class="image--center mx-auto" /></p>
<p>Once this task is over, your database has been successfully migrated from MySQL to PostgreSQL. You can celebrate now! Yeyyy!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1743021103796/4669910e-1fa0-4779-b6f1-fa2734723ce4.gif" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Beginners guide to learn and build Generative AI applications on AWS Bedrock]]></title><description><![CDATA[Intro and hype of GenAI
Generative AI (GenAI) a new field of artificial intelligence has taken the world on revolution with OpenAI Chatgpt being launch almost an year back. Using this we can create entirely new things, from text and images to code an...]]></description><link>https://blog.instructorchandra.com/beginners-guide-to-learn-and-build-generative-ai-applications-on-aws-bedrock</link><guid isPermaLink="true">https://blog.instructorchandra.com/beginners-guide-to-learn-and-build-generative-ai-applications-on-aws-bedrock</guid><category><![CDATA[generative ai]]></category><category><![CDATA[Amazon Bedrock]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[learning]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Tue, 16 Jan 2024 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-intro-and-hype-of-genai">Intro and hype of GenAI</h2>
<p>Generative AI (GenAI), a new field of artificial intelligence, has taken the world by storm since OpenAI’s ChatGPT launched almost a year ago. With it we can create entirely new things, from text and images to code and music. It has the potential to transform many industries and our daily lives.</p>
<p><img src="https://media.chandradeoarya.com/file/CT/amazonBedrockFundamentalsCourse/Amazon-Bedrock.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-what-future-holds">What future holds:</h2>
<p><strong>The Future of GenAI</strong> is very promising. Every prediction says that GenAI models are going to become more sophisticated, affordable, and accessible. We can expect even more creative applications, automation of complex tasks, and a deeper understanding of the world around us.</p>
<p>GenAI is going to <strong>enhance creativity, increase efficiency, personalize experiences</strong> and accelerate <strong>new discoveries.</strong></p>
<h2 id="heading-amazon-bedrock">Amazon Bedrock</h2>
<p>Amazon Bedrock is a powerful platform that makes GenAI more accessible and user-friendly. AWS is bringing its decade-long cloud leadership and experience to making GenAI available for developers and users.</p>
<h3 id="heading-why-amazon-bedrock-matters">Why Amazon Bedrock matters:</h3>
<p>Amazon Bedrock is going to significantly affect the GenAI ecosystem for the following reasons:</p>
<ul>
<li><p><strong>Lower Barrier to Entry:</strong> Bedrock provides access to powerful generative models from various AI providers through a single platform. This removes the need for extensive technical knowledge, a steep learning curve, and heavy upfront investment, and the pay-per-use model makes it even more accessible.</p>
</li>
<li><p><strong>Simplified Development:</strong> Bedrock offers pre-built tools and playgrounds for experimentation and fine-tuning models. Being part of the AWS cloud, it has deep integrations with core AWS services like OpenSearch, S3, IAM, and KMS, which make development faster and simpler.</p>
</li>
<li><p><strong>Focus on Innovation:</strong> Bedrock abstracts away infrastructure and technical complexities so that application developers can focus on innovation rather than research, which is a different area of expertise.</p>
</li>
</ul>
<h1 id="heading-amazon-bedrock-for-beginners-a-hands-on-guide-for-developers">Amazon Bedrock for Beginners: A Hands-on Guide for developers</h1>
<p>This guide gets you started with Amazon Bedrock, a powerful tool for building generative AI applications. I'll break down the key concepts and lay out a path to start your learning journey. The whole journey has been broken down into multiple modules.</p>
<h2 id="heading-prerequisites">Prerequisites:</h2>
<p>This guide is focused on developers. I assume you have a basic understanding of the GenAI ecosystem, prompting, and the AWS cloud; if not, the optional modules below are also important to learn.</p>
<p><strong>Getting Started (Optional):</strong></p>
<ul>
<li><p><strong>Module 1: Introduction</strong> covers the core ideas of generative AI and Amazon Bedrock, in case you are totally new to GenAI and its impact.</p>
</li>
<li><p><strong>Module 2: Intro to AWS</strong> (optional) guides you through setting up an AWS account if you don't already have one. You should also learn basic AWS cloud services before using Amazon Bedrock.</p>
</li>
<li><p><strong>Module 3: Prompt Engineering</strong> (optional) teaches you how to craft effective prompts, which are the starting points for AI generation. It is optional but important for getting quality responses from the models.</p>
</li>
</ul>
<h2 id="heading-hands-on-with-amazon-bedrock"><strong>Hands-on with Amazon Bedrock:</strong></h2>
<p>Once you have covered the above modules, you are ready to dive into Amazon Bedrock with the following lessons.</p>
<ul>
<li><strong>Module 4:</strong> In this module you should dive into learning Amazon Bedrock's playgrounds and demos. Here, you'll experiment with pre-built models for text, chat, and image generation. You'll also learn to fine-tune these models for specific tasks.</li>
</ul>
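<p>As a preview of where the playground experiments lead, here is a minimal sketch of invoking a text model on Bedrock with boto3. The model choice (<code>anthropic.claude-v2</code>), its request format, and the region are assumptions for illustration; actually running <code>invoke_claude</code> requires AWS credentials with Bedrock model access enabled:</p>
<pre><code class="lang-python"># Minimal sketch of calling a Bedrock text model via boto3 (illustrative).
import json

def build_claude_body(prompt: str, max_tokens: int = 300) -> str:
    """Build the JSON request body in the Human/Assistant prompt format
    used by the anthropic.claude-v2 model on Bedrock."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })

def invoke_claude(prompt: str) -> str:
    """Invoke the model and return the generated text."""
    import boto3  # imported lazily so the sketch loads without AWS set up
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=build_claude_body(prompt),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())["completion"]

body = json.loads(build_claude_body("Explain Amazon Bedrock in one sentence."))
</code></pre>
<p>Each model family on Bedrock has its own request and response schema, so check the model’s documentation before adapting this sketch.</p>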
<p><strong>Deep Dive into Amazon Bedrock:</strong></p>
<ul>
<li><strong>Module 5: Building Your App</strong> goes under the hood of Amazon Bedrock. You'll learn best practices for designing and building cost-effective and secure generative AI applications.</li>
</ul>
<p><strong>Building a Knowledge Base:</strong></p>
<ul>
<li><strong>Module 6: Knowledge Base</strong> teaches you how to build a knowledge base, which is a collection of information your AI can access. You'll also learn how to secure this data.</li>
</ul>
<p><strong>Building Generative AI Agents:</strong></p>
<ul>
<li><strong>Module 7: Building Agents</strong> walks you through creating secure and scalable generative AI applications. You'll learn to design and integrate the key components of these applications, called agents.</li>
</ul>
<h2 id="heading-building-genai-apps-with-amazon-bedrock"><strong>Building GenAI apps with Amazon Bedrock:</strong></h2>
<p><strong>Building Generative AI Applications with Python:</strong></p>
<ul>
<li><strong>Module 8: Building Applications</strong> shows you how to leverage foundational models for various applications. You'll build simple generative AI chatbots, image generators, and more using Python.</li>
</ul>
<p><strong>Wrapping Up:</strong></p>
<ul>
<li><strong>Module 9: Putting it All Together</strong> discusses the ethical considerations of generative AI and how to stay updated on the latest developments in Amazon Bedrock. You'll also learn how to troubleshoot common issues.</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Using AWS Organizations and AWS Nuke to create disposable low-cost cloud experience]]></title><description><![CDATA[Every beginner in AWS

Most AWS users in beginning miss to keep track of services they create. It’s very easy to forget the services creating via console in different regions. AWS is definitely a sea of services.
I myself have lost over 5000 in three...]]></description><link>https://blog.instructorchandra.com/using-aws-organizations-and-aws-nuke-to-create-disposable-low-cost-cloud-experience</link><guid isPermaLink="true">https://blog.instructorchandra.com/using-aws-organizations-and-aws-nuke-to-create-disposable-low-cost-cloud-experience</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[billing]]></category><category><![CDATA[Alarms]]></category><category><![CDATA[nuke]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Mon, 13 Nov 2023 18:30:00 GMT</pubDate><content:encoded><![CDATA[<h3 id="heading-every-beginner-in-aws">Every beginner in AWS</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711874858449/b8244deb-a43c-41d0-8447-0aeb49bc79f8.webp" alt class="image--center mx-auto" /></p>
<p>Most AWS users, in the beginning, fail to keep track of the services they create. It’s very easy to forget services created via the console in different regions. AWS is definitely a sea of services.</p>
<p>I myself have lost over $5,000 on three occasions in total, with one alone being $4,000 because I left a <code>P5d.large</code> Nvidia H100 GPU instance running that I had created for an experiment.</p>
<p><img src="https://media.chandradeoarya.com/file/CT/aws-billing-surpirse-bill-my-own.png" alt class="image--center mx-auto" /></p>
<p>AWS billing alerts and alarms are some of the easiest ways to keep track of AWS expenses. I also encourage using AWS Organizations for centralized billing, so that AWS accounts can be used as disposable resources for all experiments and testing purposes.</p>
<p>I’ve written other blogs on using AWS billing and alarms and on using AWS Organizations effectively. In this article we will explore AWS Nuke, a powerful tool for destroying AWS resources.</p>
<p>I’m writing this blog as a step-by-step lab instruction so that you can easily follow it just by running the given commands in sequence.</p>
<h2 id="heading-objective-of-the-lab">Objective of the lab:</h2>
<ul>
<li><p>Understand the purpose and functionalities of AWS Nuke.</p>
</li>
<li><p>Set up AWS Nuke for safe experimentation.</p>
</li>
<li><p>Write configurations to target specific resources for deletion.</p>
</li>
<li><p>Execute a dry run to simulate resource deletion.</p>
</li>
</ul>
<h3 id="heading-prerequisites"><strong>Prerequisites</strong></h3>
<p>To finish this lab you should have:</p>
<ul>
<li><p>An AWS account with access credentials. Preferably with some resources created.</p>
</li>
<li><p>Basic understanding of AWS resources.</p>
</li>
<li><p>Familiarity with the command line interface (CLI).</p>
</li>
</ul>
<h3 id="heading-use-cases">Use cases:</h3>
<ol>
<li><p>Destroying the disposable AWS accounts created for testing and experiments.</p>
</li>
<li><p>Destroying interdependent AWS resources which are otherwise hard to delete.</p>
</li>
<li><p>Finding all resources, known or unknown, across all regions and deleting them.</p>
</li>
</ol>
<h2 id="heading-steps">Steps:</h2>
<h3 id="heading-step1-setting-up-the-lab-environment-optional">Step1: Setting Up the Lab Environment (Optional)</h3>
<p>If you have an AWS account with some resources you don’t need, you can try this on the same account; otherwise, create a new AWS account for testing AWS Nuke.</p>
<p><strong>Important Note:</strong> AWS Nuke is a destructive tool, so use it with caution even in a non-production environment.</p>
<h3 id="heading-step-2-install-aws-cli">Step 2: Install AWS CLI</h3>
<p>If you are running these commands from your local machine or a non-Amazon Linux EC2 server, you must install the AWS CLI first.</p>
<ul>
<li>Running these commands installs the AWS CLI on Linux (Debian/Ubuntu).</li>
</ul>
<pre><code class="lang-bash">curl <span class="hljs-string">"&lt;https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip&gt;"</span> -o <span class="hljs-string">"awscliv2.zip"</span>

sudo apt install unzip

unzip awscliv2.zip

sudo ./aws/install
</code></pre>
<p>For further guidance in setting up CLI and configuring it with credentials follow this link. <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html</a></p>
<p>Ensure that you are using the credentials of an administrator-level AWS user; root credentials can be used as well, but that is not preferred.</p>
<h3 id="heading-step-3-installing-aws-nuke">Step 3: Installing AWS Nuke</h3>
<ul>
<li><strong>For macOS</strong></li>
</ul>
<pre><code class="lang-bash">brew install aws-nuke
</code></pre>
<ul>
<li>For Linux machines</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-comment"># Download and extract</span>
wget -c https://github.com/rebuy-de/aws-nuke/releases/download/v2.25.0/aws-nuke-v2.25.0-linux-amd64.tar.gz -O - | tar -xz -C <span class="hljs-variable">$HOME</span>/bin

<span class="hljs-comment"># Run</span>
<span class="hljs-variable">$HOME</span>/bin/aws-nuke-v2.25.0-linux-amd64
</code></pre>
<p>You can find the latest version on the <a target="_blank" href="https://github.com/rebuy-de/aws-nuke/releases">releases</a> page.</p>
<h3 id="heading-step-4-configuring-aws-nuke-basic-configuration">Step 4: <strong>Configuring AWS Nuke basic configuration</strong></h3>
<p>AWS Nuke offers various configuration options. It reads a YAML configuration file that you pass on the command line (for example, <code>aws-nuke -c config.yml</code>); by default it performs a dry run and only lists what would be deleted, while <code>--no-dry-run</code> performs the actual deletion. Customize this file to control its behavior.</p>
<ul>
<li>This is a minimal configuration</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-attr">regions:</span> <span class="hljs-comment"># List of regions to execute</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">eu-west-1</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">us-east-1</span>

<span class="hljs-attr">account-blocklist:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">"999999999999"</span> <span class="hljs-comment"># production</span>

<span class="hljs-attr">accounts:</span>
  <span class="hljs-attr">"000000000000":</span> {} <span class="hljs-comment"># aws-nuke-example</span>
</code></pre>
<h3 id="heading-step-5-adding-target-and-excludes">Step 5: Adding target and excludes</h3>
<p><strong>aws-nuke</strong> provides options to target or exclude particular resources from deletion. There are multiple ways to configure this.</p>
<ul>
<li>Using target to delete certain resources</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">regions:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">"eu-west-1"</span>
<span class="hljs-attr">account-blocklist:</span>
<span class="hljs-bullet">-</span> <span class="hljs-number">987654321</span>

<span class="hljs-attr">resource-types:</span>
  <span class="hljs-comment"># only nuke these three resources</span>
  <span class="hljs-attr">targets:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">S3Object</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">S3Bucket</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">IAMRole</span>

<span class="hljs-attr">accounts:</span>
  <span class="hljs-attr">98769876:</span> {}
</code></pre>
<ul>
<li>Using excludes to delete all but specified resources.</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">regions:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">"eu-west-1"</span>
<span class="hljs-attr">account-blocklist:</span>
<span class="hljs-bullet">-</span> <span class="hljs-number">987654321</span>

<span class="hljs-attr">resource-types:</span>
  <span class="hljs-comment"># don't nuke IAM users</span>
  <span class="hljs-attr">excludes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">IAMUser</span>

<span class="hljs-attr">accounts:</span>
  <span class="hljs-attr">98769876:</span> {}
</code></pre>
<h3 id="heading-step-6-using-resources-filtering">Step 6: Using resource filtering</h3>
<p>aws-nuke also lets you filter individual resources out of deletion, for example the current IAM user, or S3 buckets whose names live in a globally shared namespace and might be hard to reclaim. Filtering is currently based on the resource identifier.</p>
<ul>
<li>For example, we can delete all resources except a specific <code>IAMUser</code> and its <code>IAMUserPolicyAttachment</code>:</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">regions:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">"eu-west-1"</span>

<span class="hljs-attr">account-blocklist:</span>
<span class="hljs-bullet">-</span> <span class="hljs-number">1234567890</span>

<span class="hljs-attr">accounts:</span>
  <span class="hljs-attr">0987654321:</span>
    <span class="hljs-attr">filters:</span>
      <span class="hljs-attr">IAMUser:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"admin"</span>
      <span class="hljs-attr">IAMUserPolicyAttachment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"admin -&gt; AdministratorAccess"</span>
      <span class="hljs-attr">IAMUserAccessKey:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"admin -&gt; AKSDAFRETERSDF"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"admin -&gt; AFGDSGRTEWSFEY"</span>
</code></pre>
<p><code>aws-nuke</code> supports multiple filter types, such as exact match, contains, regex, and date-based filters.</p>
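<p>As an illustration, an account's filter block might combine several of these types. The snippet below is a sketch only: the exact type names and syntax depend on the aws-nuke version you installed, so verify it against the documentation of your release before relying on it.</p>

```yaml
accounts:
  "000000000000":
    filters:
      IAMUser:
        - "admin"              # plain string = exact match on the identifier
        - type: contains       # keep any user whose name contains "temp"
          value: "temp"
        - type: regex          # keep users matching a regular expression
          value: "^ci-"
      EC2Instance:
        - property: LaunchTime # date-based filter on a resource property
          type: dateOlderThan
          value: "24h"
```

Filters mark matching resources as excluded from deletion for that account.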
<p><strong>Example configuration file.</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">0.34</span><span class="hljs-number">.0</span>  <span class="hljs-comment"># Replace with the installed version</span>

<span class="hljs-attr">resources:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">EC2</span>
    <span class="hljs-attr">filters:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">Name:</span> <span class="hljs-string">tag:Name=chandra-ec2</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">S3Bucket</span>
    <span class="hljs-attr">filters:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">Name:</span> <span class="hljs-string">name</span>  <span class="hljs-comment"># Match on the bucket name listed below</span>
        <span class="hljs-attr">values:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">application-data-store-12345364</span>
</code></pre>
<p><strong>Explanation of the configuration file</strong></p>
<ul>
<li><p><code>version</code>: Specifies the AWS Nuke version used.</p>
</li>
<li><p><code>resources</code>: This section defines the resources to target.</p>
<ul>
<li><p>The first entry targets EC2 instances with the tag <code>Name: chandra-ec2</code>.</p>
</li>
<li><p>The second entry targets the S3 bucket named <code>application-data-store-12345364</code>.</p>
</li>
</ul>
</li>
</ul>
<p><strong>Filters:</strong></p>
<ul>
<li><p>Filters allow for more granular targeting within a resource type.</p>
</li>
<li><p>In the first entry, we use a tag filter to target a specific EC2 instance.</p>
</li>
<li><p>The second entry uses a name filter with a value to target the bucket named <code>application-data-store-12345364</code>.</p>
</li>
</ul>
<h3 id="heading-step-7-executing-a-dry-run">Step 7: Executing a dry run</h3>
<p>AWS Nuke is highly destructive, so it provides a dry-run option to preview which resources would be deleted.</p>
<ul>
<li>Run the following command, pointing <code>-c</code> at your configuration file. aws-nuke runs in dry-run mode by default, so nothing is deleted yet:</li>
</ul>
<pre><code class="lang-bash">aws-nuke -c nuke-config.yml
</code></pre>
<p>This command simulates the deletion process based on your configuration. It will display a list of resources slated for deletion without actually deleting them.</p>
<h3 id="heading-step-8-review-and-deletion-actual-amp-distructive"><strong>Step 8: Review and Deletion (ACTUAL &amp; DESTRUCTIVE)</strong></h3>
<p>This step performs the actual resource deletion: run aws-nuke with the <code>--no-dry-run</code> flag, and only when you are confident about the targeted resources.</p>
<p><strong>Important:</strong> This will permanently delete the targeted resources. Use this step with extreme caution!</p>
<h1 id="heading-result"><strong>Result</strong></h1>
<p>aws-nuke cleanly removed the unwanted AWS resources, giving me peace of mind and no more surprise bills.</p>
<p><img src="https://media.chandradeoarya.com/file/CT/aws-nuke-delete-success.png" alt class="image--center mx-auto" /></p>
<p><img src="https://media.chandradeoarya.com/file/CT/aws-nuke-delete-success-report.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Founders first moves to start building on AWS: situation and solution]]></title><description><![CDATA[Move #1 : Deciding Budgets and over-utilisation alarms
Situation:

Budget is always a constraint for startups

With founders wearing multiple hats, it’s easy to lose the track of cost. Common resources are unattached EBS volumes, Elastic IPs, Bastion...]]></description><link>https://blog.instructorchandra.com/founders-first-moves-to-start-building-on-aws-situation-and-solution</link><guid isPermaLink="true">https://blog.instructorchandra.com/founders-first-moves-to-start-building-on-aws-situation-and-solution</guid><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Sun, 29 Jan 2023 11:23:36 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680087004283/6d78b373-bb12-4c39-a43f-de78361e4f60.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-move-1-deciding-budgets-and-over-utilisation-alarms">Move #1 : Deciding Budgets and over-utilisation alarms</h2>
<h3 id="heading-situation">Situation:</h3>
<ul>
<li><p>Budget is always a constraint for startups</p>
</li>
<li><p>With founders wearing multiple hats, it’s easy to lose track of costs. Commonly forgotten resources include unattached EBS volumes, Elastic IPs, bastion hosts, instances left over from ML model training, etc.</p>
</li>
<li><p>You wake up to surprise bills</p>
</li>
<li><p>AWS provides billing predictions, which help you plan your runway</p>
</li>
</ul>
<h3 id="heading-solution">Solution</h3>
<ul>
<li><p>AWS Billing provides budget planning and tracking features</p>
</li>
<li><p>You can create billing alarms to track unused resources and outlier costs</p>
</li>
<li><p>Alerts can be created on forecasted costs or actual costs</p>
</li>
</ul>
<p>AWS recommends the following two approaches:</p>
<p><strong>Proactive approach:</strong> <a target="_blank" href="https://aws.amazon.com/solutions/implementations/instance-scheduler/">AWS Instance Scheduler</a> - This approach involves tagging resources such as EC2/RDS instances and then creating schedules to start or stop them.</p>
<p><strong>Reactive approach:</strong> <a target="_blank" href="https://aws.amazon.com/blogs/startups/how-to-set-aws-budget-when-paying-with-aws-credits/">AWS Budgets</a> - This is the most basic, must-do approach: create email alerts that fire when monthly AWS spend exceeds the budget threshold.</p>
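<p>To make the reactive approach concrete, here is a minimal Python sketch around the AWS Budgets <code>create_budget</code> API via boto3. The account ID, budget name, and e-mail address are placeholders, and the boto3 call is left commented out because it needs real credentials; the helper only builds the request payload.</p>

```python
# Sketch: a monthly AWS cost budget with a forecast-based e-mail alert.
# Account ID, budget name, and address below are placeholders.

def make_budget_request(account_id, limit_usd, email, threshold_pct=80.0):
    """Build the request payload for the AWS Budgets create_budget call."""
    return {
        "AccountId": account_id,
        "Budget": {
            "BudgetName": "monthly-cost-budget",
            "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "FORECASTED",  # warn on predicted spend
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": threshold_pct,        # % of the budget limit
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": email},
                ],
            }
        ],
    }

request = make_budget_request("111111111111", 100, "founder@example.com")
# import boto3
# boto3.client("budgets").create_budget(**request)  # requires AWS credentials
print(request["Budget"]["BudgetLimit"])  # → {'Amount': '100', 'Unit': 'USD'}
```

A <code>FORECASTED</code> notification warns before the money is actually spent; switching the type to <code>ACTUAL</code> alerts on real charges instead.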
<h2 id="heading-move-2-deciding-infrastructure-as-code-iac">Move #2 : Deciding Infrastructure as Code (IAC)</h2>
<p>Infrastructure as Code enables programmatic provisioning and management of cloud resources. It helps improve the efficiency, reliability, and scalability of your infrastructure, while also reducing costs and increasing flexibility.</p>
<h3 id="heading-situation-1">Situation</h3>
<ul>
<li><p>As a startup grows, services multiply and scale increases. It becomes difficult to track resources and configurations.</p>
</li>
<li><p>With increasing user demand, the need to deploy additional servers in different Availability Zones or Regions grows, and this is difficult to replicate without IaC.</p>
</li>
<li><p>Once the “too small to be noticed” phase ends for a company, attackers can create security nightmares. Ensuring security policies and best practices are consistently applied across the infrastructure is difficult without IaC.</p>
</li>
<li><p>For a startup, budget is always a constraint. IaC replaces time-consuming, error-prone manual tasks with automation, and hence saves cost.</p>
</li>
</ul>
<h3 id="heading-solutions">Solutions:</h3>
<ul>
<li><p><strong>AWS CloudFormation:</strong> CloudFormation is a good choice for organizations already on AWS, but it supports AWS only. It is free, has a low learning curve, and is especially useful for startups that want to experiment faster.</p>
</li>
<li><p><strong>Terraform:</strong> Terraform is a good choice for organizations looking to support multiple cloud providers, but it has a steeper learning curve than CloudFormation and higher complexity.</p>
</li>
</ul>
<p>Generally speaking, CloudFormation is the easier option for a startup to begin with because of its simplicity and low learning curve.</p>
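<p>For a feel of how small a CloudFormation starting point can be, here is a minimal illustrative template that launches a single EC2 instance. The logical name, tag value, and AMI ID are placeholders to substitute with your own values:</p>

```yaml
# Minimal illustrative CloudFormation template (names and AMI ID are placeholders).
AWSTemplateFormatVersion: "2010-09-09"
Description: Single EC2 instance for a startup experiment
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      Tags:
        - Key: Name
          Value: startup-app-server
Outputs:
  PublicIp:
    Value: !GetAtt AppServer.PublicIp
```

Deploying the stack, updating it, and tearing it down are each a single CLI call, which is exactly the repeatability argument made above.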
<h2 id="heading-move-3-choosing-managed-aws-services">Move #3: Choosing managed AWS services</h2>
<p>AWS managed services can help organizations achieve greater scalability, reliability, security, and cost-effectiveness. Managed services are always updated to the latest cloud technologies and best practices in the industry.</p>
<h3 id="heading-situation-2">Situation</h3>
<ul>
<li><p>Self-managed services cost more and require more maintenance effort. To reduce cost and human effort, always use the appropriate AWS managed services.</p>
</li>
<li><p>As a startup you should always focus on your niche and innovate around it. Managing cloud infrastructure yourself is a distraction best avoided.</p>
</li>
<li><p>As a startup you can never match the domain expertise of AWS and its engineers, so the best idea is simply to use their innovation at scale.</p>
</li>
<li><p>Managed services have built-in features such as automated backups, disaster recovery, and security controls. You just need to plug and play.</p>
</li>
</ul>
<h3 id="heading-solution-1">Solution</h3>
<p>Just use the right managed services.</p>
<p>Here is a table comparing self-managed solutions with AWS managed services.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Use case</td><td>Don’t</td><td>Do</td></tr>
</thead>
<tbody>
<tr>
<td>User authentication and management</td><td>Building your own solution using libraries</td><td>Just use Amazon Cognito</td></tr>
<tr>
<td>Orchestration of containers</td><td>Using Docker Swarm, kubeadm, etc.</td><td>Just use ECS or EKS. For most startups they are good.</td></tr>
<tr>
<td>Self-hosting MySQL or MongoDB</td><td>Launching MySQL or MongoDB on EC2 or on-premises</td><td>Just use Amazon RDS or Amazon DocumentDB.</td></tr>
<tr>
<td>CI/CD</td><td>Launching Jenkins on EC2</td><td>Just use AWS CI/CD tools like CodeBuild, CodePipeline, etc. They have native support for Jenkins as well.</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[Top 6 AWS certification exam passing technique]]></title><description><![CDATA[Cracking AWS certification exams requires knowledge and experience. But at the same time being smart and hacky with choosing the right approach to solve the question to equally important. This becomes more relevant for professional exams which are to...]]></description><link>https://blog.instructorchandra.com/top-6-aws-certification-exam-passing-technique</link><guid isPermaLink="true">https://blog.instructorchandra.com/top-6-aws-certification-exam-passing-technique</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS certification]]></category><category><![CDATA[AWS Certified Solutions Architect Associate]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Thu, 10 Mar 2022 19:40:19 GMT</pubDate><content:encoded><![CDATA[<p>Cracking AWS certification exams requires knowledge and experience. But at the same time, being smart and hacky in choosing the right approach to solve a question is equally important. This becomes more relevant for professional exams, which are tough and have lengthy questions.</p>
<p>In this blog we are going to look at six such techniques which give you an edge in the exam. Each technique is discussed with examples, alerts, and warnings.</p>
<h2 id="heading-question-technique-1">Question technique 1</h2>
<p>An application uses an Application Load Balancer, an Auto Scaling Group and currently, 10 EC2 instances. To ensure cost efficacy you have been asked to ensure that when average CPU utilisation is below 20% instances are terminated and added when average CPU is above 70%.</p>
<p><strong>Alert1 -</strong> As you can see, this is a pretty average question in terms of length and difficulty. All questions carry some <strong>unnecessary information</strong>, so we should start by <strong>identifying the keywords</strong> that <strong>actually matter</strong>. The identified keywords are highlighted below.</p>
<ul>
<li><p><strong>Question highlights</strong></p>
<p>  An application uses an <strong>Application Load Balancer</strong>, an <strong>Auto Scaling Group</strong> and currently, <strong>10 EC2 instances</strong>. To ensure <strong>cost efficacy</strong> you have been asked to ensure that when <strong>average CPU utilisation</strong> is <strong>below 20% instances are terminated</strong> and <strong>added when average CPU is above 70%.</strong></p>
</li>
</ul>
<p><strong>Which option should you suggest?</strong></p>
<ol>
<li>Implement a Scheduled Scaling Policy to add instances during periods of heavy CPU usage and remove them when CPU usage is below 20%</li>
<li>Run a script on each EC2 instance to report the CPU load back to the auto scaling service which can make decisions based on target rules</li>
<li>Implement Target Tracking Scaling Policies at 20% and 70%, use an IAM Role to provide the policies with permissions to add and remove EC2 instances</li>
<li>Use CloudWatch to monitor average CPU levels and create simple scaling policies within the Auto Scaling Group</li>
</ol>
<p><strong>Answer highlights</strong></p>
<ol>
<li>Implement a <strong>Scheduled Scaling Policy</strong> to <strong>add instances during periods of heavy CPU usage</strong> and <strong>remove</strong> them when <strong>CPU usage is below 20%</strong><blockquote>
<p>not the case of scheduled scaling policy. Eliminated.</p>
</blockquote>
</li>
<li>Run a <strong>script</strong> on each EC2 instance to report the CPU load back to the <strong>auto scaling service</strong> which can make decisions based on <strong>target rules</strong><blockquote>
<p>This is not the case of scheduled scaling policy and secondly we hardly use script based solution in AWS. </p>
</blockquote>
</li>
<li>Implement <strong>Target Tracking Scaling Policies</strong> at <strong>20% and 70%</strong>, use an <strong>IAM Role to provide the policies with permissions to add and remove EC2 instances</strong>        <blockquote>
<p>This talks about target tracking while questions seems to be needing simple scaling and it talks about IAM Role which we don’t use for autoscaling.</p>
</blockquote>
</li>
<li>Use <strong>CloudWatch</strong> to <strong>monitor average CPU levels</strong> and create <strong>simple scaling policies</strong> within the Auto Scaling Group<blockquote>
<p>Only option which fits well after elimination.</p>
</blockquote>
</li>
</ol>
<p><strong>Trick1 - Eliminate absurd answers.</strong></p>
<p>Start by identifying the keywords in the question and the answers, and quickly eliminate one or two options.</p>
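<p>To make the winning option concrete, here is a toy sketch (illustrative Python, not actual AWS configuration) of the simple scaling rule it describes: CloudWatch reports the average CPU, and the policies act at the 70% and 20% thresholds.</p>

```python
def scaling_action(avg_cpu, high=70.0, low=20.0):
    """Toy version of the simple scaling decision: what the Auto Scaling
    Group should do for a given average CPU utilisation from CloudWatch."""
    if avg_cpu > high:
        return "add-instance"      # scale out under heavy load
    if avg_cpu < low:
        return "remove-instance"   # scale in to save cost
    return "no-change"

print(scaling_action(85))  # → add-instance
print(scaling_action(10))  # → remove-instance
print(scaling_action(50))  # → no-change
```

In the real service, CloudWatch alarms at these thresholds trigger the Auto Scaling Group's simple scaling policies to add or remove capacity.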
<h2 id="heading-question-technique-2">Question technique 2</h2>
<p>A professional baseball league has chosen to use AWS DynamoDB for its backend data storage. Many of the data requirements involve high-speed processing of images captured using a flying drone. All this data, including positions and images, is stored in DynamoDB. Users of the analytics applications using this database complain of slow load times for the positioning data, including images. Currently the data and related information are stored within DynamoDB. </p>
<p>Which option represents the best fix for this type of problem?</p>
<p><strong>Alert2-</strong> Think about why it mentions DynamoDB as the backend data storage. It also mentions storing images in the database. Why would someone use DynamoDB to store images?</p>
<p><strong>Question highlights</strong></p>
<p>A professional baseball league has chosen to use AWS <strong>DynamoDB</strong> for its <strong>backend data storage</strong>. Many of the data requirements involve high-speed processing of images captured using a flying drone. All this data, including positions and images, is stored in DynamoDB. Users of the analytics applications using this database complain of <strong>slow load times</strong> for the positioning data <strong>including images</strong>. Currently the data and related information are <strong>stored within DynamoDB</strong>.</p>
<p>Which option represents the <strong>best fix</strong> for this type of problem?</p>
<p><strong>Which options do you suggest</strong></p>
<p>Which option represents the best fix for this type of problem? (choose one)</p>
<ol>
<li>Change from DynamoDB to Aurora running in a VPC and use multiple replicas to scale read capability for the analytics application.</li>
<li>Copy the drone images to S3, replace the database stored images with a link to the S3 location.</li>
<li>Adjust the RCU and WCU on the DynamoDB tables to 10,000 each to cope with the load on the database.</li>
<li>Modify the DynamoDB table to use on-demand pricing to cope with the incoming demand, use an SQS queue to buffer writes to cope with peak load.</li>
</ol>
<p><strong>Answer highlights</strong></p>
<ol>
<li><p>Change from DynamoDB to Aurora running in a VPC and use multiple replicas to scale read capability for the analytics application.        </p>
<blockquote>
<p>Analytics application is based on nosql database. We can’t change the whole database and application.</p>
</blockquote>
</li>
<li><p>Copy the drone images to S3, replace the database stored images with a link to the S3 location.       </p>
<blockquote>
<p>As can be seen, the main issue is storing the captured image data in DynamoDB itself. DynamoDB has a per-item size limit, and loading large items takes time. Pointing to the image location on S3 is a good option: with minimal application changes, the images can be served from the stored link.</p>
</blockquote>
</li>
<li>Adjust the RCU and WCU on the DynamoDB tables to 10,000 each to cope with the load on the database.<blockquote>
<p>Raising the RCU and WCU does not fix the root cause. What if ever-larger images keep getting stored in the database?</p>
</blockquote>
</li>
<li>Modify the DynamoDB table to use on-demand pricing to cope with the incoming demand, use an SQS queue to buffer writes to cope with peak load.        <blockquote>
<p>Similar to the last option, this does not address the root cause, and demand on the application has no upper bound. The main issue is storing the images.</p>
</blockquote>
</li>
</ol>
<p><strong>Trick2- Find anti-patterns.</strong></p>
<p>Some questions contain application-level information that is not stated explicitly. This question mentions storing images in DynamoDB itself, and DynamoDB is not good for storing large binary data. So the trick is to find anti-patterns the application might be using.</p>
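<p>The migration behind the correct answer can be sketched in a few lines of Python. The item fields, key scheme, and bucket name here are hypothetical; in a real migration you would first upload the blob to S3 (for example with <code>put_object</code>) and then update the DynamoDB item.</p>

```python
def offload_image(item, bucket="drone-images"):
    """Replace an inline image blob in a DynamoDB-style item with an S3 URL.
    The bucket name and field names are illustrative, not from the exam."""
    slim = dict(item)
    blob = slim.pop("image_blob")                 # drop the heavy binary field
    key = f"{slim['play_id']}.jpg"                # hypothetical key scheme
    slim["image_url"] = f"https://{bucket}.s3.amazonaws.com/{key}"
    return slim, blob                             # blob goes to S3, slim item to DynamoDB

slim, blob = offload_image({"play_id": "pitch-007", "image_blob": b"...bytes..."})
print(slim["image_url"])  # → https://drone-images.s3.amazonaws.com/pitch-007.jpg
```

The DynamoDB item now stays tiny regardless of image size, which is exactly why option 2 wins.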
<h2 id="heading-question-technique-3">Question technique 3</h2>
<p>A non-profit organization's elastic website runs on EC2 instances provisioned and terminated by an Auto Scaling Group. Authors connect to the system to publish posts with attached images which can receive millions of views a day. Since the introduction of the auto scaling group to allow the site to scale, a regular bug is that posts have broken links instead of images.</p>
<p>Which of the following options is a potential fix? (choose one)</p>
<p><strong>Motivation -</strong> This question is interesting because it doesn't give you a heads-up about the root cause of the problem. It is very open-ended and gives little information about the implementation, which means we have to use the answers to get additional information about the question. It also mentions that older EC2 instances are terminated by the Auto Scaling Group, so the problem is likely related to this change, which could be causing the broken image links. The good thing is that the answers are very short, so they are easier and quicker to evaluate.</p>
<p><strong>Question highlights</strong></p>
<p>A non-profit organization's <strong>elastic</strong> website runs on <strong>EC2 instances provisioned</strong> and <strong>terminated</strong> by an <strong>Auto Scaling Group</strong>. Authors connect to the system to publish posts with <strong>attached images</strong> which can receive millions of views a day. Since the introduction of the <strong>auto scaling group</strong> to allow the site to scale, a regular bug is that posts have <strong>broken links</strong> instead of images.</p>
<p><strong>Which options do you suggest</strong></p>
<ol>
<li>Implement CloudFront to cache images to avoid the broken links</li>
<li>Change the EC2 volumes on all instances in the ASG from ST1 to GP2, adjust the ASG to use GP2 for any newly provisioned instances</li>
<li>Implement EFS and configure all Instances to mount it via a Mount Target</li>
<li>Use EBS Snapshots to restore any missing images on a case by case basis</li>
</ol>
<p><strong>Answer highlight</strong></p>
<ol>
<li>Implement <strong>CloudFront</strong> to <strong>cache</strong> images to avoid the broken links</li>
</ol>
<blockquote>
<p>Can’t decide to use CloudFront without actually knowing the backend source of images.</p>
</blockquote>
<ol start="2">
<li>Change the EC2 volumes on all instances in the ASG from ST1 to GP2, adjust the ASG to use GP2 for any newly provisioned instances</li>
</ol>
<blockquote>
<p>ST1 to GP2 only changes the performance. Not the actual problem.</p>
</blockquote>
<ol start="3">
<li>Implement EFS and configure all Instances to mount it via a Mount Target</li>
</ol>
<blockquote>
<p>This is the only good option after eliminating the other three. EFS is a shared file system mounted by all instances, with the images stored in it permanently. So this will work like a charm.</p>
</blockquote>
<ol start="4">
<li>Use EBS Snapshots to restore any missing images on a case by case basis</li>
</ol>
<blockquote>
<p>Elastic solutions need to be automated, not handled case by case.</p>
</blockquote>
<p><strong>Trick3 -</strong> Get clues from the answers for open-ended questions</p>
<p>This question requires you to read between the lines and understand why the images could be broken: instance storage vanishes as instances get terminated by the ASG. Questions requiring this kind of review and analysis are very common at the professional level, and even associate-level certifications have some questions like this.</p>
<h2 id="heading-question-technique-4">Question technique 4</h2>
<p>You are auditing a serverless application for a live auction system. The application uses API Gateway, S3 and Lambda to provide the frontend serverless compute and DynamoDB for backend data storage. During yearly auction registration periods the system is expected to have 10000x the load vs other times of the year. The DynamoDB tables use provisioned capacity of 50 RCU/WCU.</p>
<p>Which architecture changes could you suggest to reduce the impact of the extra load? (choose two)</p>
<p><strong>Motivation-</strong> This is a classic burst problem: for a short period a large number of requests arrive, then vanish. Looking closely, the frontend of the application is serverless (API Gateway, S3 and Lambda are serverless services), so it scales automatically and is not the concern. DynamoDB with fixed provisioned capacity, however, is not. The mention of 50 RCU/WCU is another clue that this detail matters. So the answer mostly lies in DynamoDB.</p>
<p><strong>Question highlights</strong></p>
<p>You are auditing a <strong>serverless</strong> application for a live auction system. The application uses <strong>API Gateway, S3 and Lambda</strong> to provide the frontend serverless compute and <strong>DynamoDB</strong> for backend data storage. During yearly auction registration periods the system is expected to have <strong>10000x</strong> the load vs <strong>other times of the year</strong>. The DynamoDB tables use provisioned capacity of <strong>50 RCU/WCU</strong>.</p>
<p>Which architecture changes could you suggest to reduce the impact of the extra load? <strong>(choose two)</strong></p>
<p><strong>Which options do you suggest</strong></p>
<ol>
<li>Launch 100 DynamoDB databases during the peak period to spread the 10000x load</li>
<li>Backup the data from DynamoDB and restore the snapshot into an Aurora Serverless cluster, configure for public access and modify the application code</li>
<li>Change from provisioned to on-demand capacity</li>
<li>Add an SQS queue, modify the application so it writes to the queue and use a backend Lambda to parse auction registration records from the queue and add to the database over time</li>
<li>Increase the RCU and WCU on the table from 50 to 500,000 for the brief peak periods and return afterwards</li>
</ol>
<p><strong>Answer highlights</strong></p>
<ol>
<li>Launch 100 DynamoDB databases during the peak period to spread the 10000x load<blockquote>
<p>Straight no.</p>
</blockquote>
</li>
<li>Backup the data from DynamoDB and restore the snapshot into an Aurora Serverless cluster, configure for public access and modify the application code            <blockquote>
<p>Straight no. Can’t replatform.</p>
</blockquote>
</li>
<li>Change from provisioned to on-demand capacity<blockquote>
<p>Yes. Let AWS handle the scaling. It may be very costly, but it will work, so it is a potential answer. Since we have to select two, this can be one of them. If we had to pick the cheapest option we could avoid this, but that's not the case here.</p>
</blockquote>
</li>
<li>Add an SQS queue, modify the application so it writes to the queue and use a backend Lambda to parse auction registration records from the queue and add to the database over time.<blockquote>
<p>Decoupled solutions are great. This solution works at low cost as well, absorbing the peak volume in the SQS queue. So it is a great solution.</p>
</blockquote>
</li>
<li>Increase the RCU and WCU on the table from 50 to 500,000 for the brief peak periods and return afterwards<blockquote>
<p>Nope. We need an automated solution, and there is no upper bound on load growth. What if load increases even higher?</p>
</blockquote>
</li>
</ol>
<p><strong>Trick4 - Focus on non-serverless services for performance. Prefer decoupled solutions.</strong></p>
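<p>The decoupled write path from the correct answers can be sketched as a Lambda handler for the standard SQS event shape. The table name and record fields are hypothetical, and the DynamoDB write is left as a comment so the snippet stays self-contained:</p>

```python
import json

def records_to_items(event):
    """Parse auction registrations out of the standard SQS -> Lambda event shape."""
    return [json.loads(record["body"]) for record in event.get("Records", [])]

def handler(event, context):
    """Drain queued registrations and write them to DynamoDB over time."""
    items = records_to_items(event)
    # import boto3
    # table = boto3.resource("dynamodb").Table("AuctionRegistrations")  # hypothetical table
    # for item in items:
    #     table.put_item(Item=item)  # written at a pace the table can absorb
    return {"written": len(items)}

# Simulated SQS event with one queued registration (fields are hypothetical):
event = {"Records": [{"body": json.dumps({"user": "u1", "auction": "a9"})}]}
print(handler(event, None))  # → {'written': 1}
```

The queue absorbs the 10000x burst, while the Lambda drains it at a rate the provisioned table can handle.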
<h2 id="heading-question-technique-5">Question technique 5</h2>
<p>You are auditing the AWS environment for an enterprise application. It runs from EC2 instances provisioned via an ASG connected to an application load balancer. A SysAdmin team manages the AWS and EC2 environment, and development teams connect to EC2 when they perform application maintenance. You're adding SSL capability and need to ensure the Development team, who have Root access to the EC2 instances, can't access the SSL Certificate for the application.</p>
<p>Which solution should you suggest? (choose one)</p>
<p><strong>Motivation -</strong> The key requirement is that developers with root access on the EC2 instances must never be able to read the certificate, which points towards terminating SSL before traffic ever reaches the instances.</p>
<p><strong>Question highlights</strong></p>
<p>You are auditing the AWS environment for an enterprise application. It runs from <strong>EC2 instances provisioned via an ASG</strong> connected to an <strong>application load balancer</strong>. A SysAdmin team manages the <strong>AWS and EC2 environment</strong>, and <strong>development teams connect to EC2</strong> when they perform application maintenance. You're adding SSL capability and need to ensure the Development team, who have <strong>Root access to the EC2 instances can't access the SSL Certificate</strong> for the application.</p>
<p>Which solution should you suggest? <strong>(choose one)</strong></p>
<p><strong>Which options do you suggest</strong></p>
<ol>
<li>Store the SSL Certificate on S3, copy onto the EC2 instances at boot, load and remove afterwards</li>
<li>Store the SSL certificate on the EC2 instances and set the permissions to allow access onto from the SysAdmins IAM group</li>
<li>Generate a certificate within ACM, configure it on the ALB and set the EC2 instances to use the HTTPS protocol for ALB -&gt; Instance connections</li>
<li>Import the certificate into ACM, configure it on the ALB and set the EC2 instances to use the HTTP protocol for ALB-&gt; Instance Connections</li>
</ol>
<p><strong>Answer highlights</strong></p>
<ol>
<li>Store the SSL Certificate on <strong>S3, copy onto the EC2 instances at boot</strong>, load and remove afterwards</li>
</ol>
<blockquote>
<p>Once the certificate is loaded on the instance, the root user will still have access to it.</p>
</blockquote>
<ol start="2">
<li><p>Store the SSL certificate on the EC2 instances and <strong>set the permissions to allow access onto from the SysAdmins IAM group</strong>        </p>
<blockquote>
<p>Any user with root permissions could still get around this.</p>
</blockquote>
</li>
<li><p><strong>Generate a certificate within ACM</strong>, configure it on the ALB and set the EC2 instances to use the <strong>HTTPS protocol for ALB -&gt; Instance connections</strong></p>
<blockquote>
<p>Setting the certificate on the ALB alone does not make the ALB → instance connection HTTPS; that would require installing another certificate on the EC2 instances, which root users could access.</p>
</blockquote>
</li>
<li><strong>Import the certificate into ACM</strong>, configure it on the ALB and set the EC2 instances to use the <strong>HTTP protocol for ALB-&gt; Instance Connections</strong><blockquote>
<p>Works well. Setting the certificate on the ALB makes end-user access to the application HTTPS, and we can leave the ALB → instance connection as HTTP, so no certificate ever resides on the instances.</p>
</blockquote>
</li>
</ol>
<p><strong>Trick5- Focus on wording difference in similar answers.</strong></p>
<p>In this case the last two options are very similar, which is good: they have a higher chance of containing the valid answer, and it is easy to spot the keyword difference between them. Focusing on just that difference answers the question.</p>
<p>Here the first part of the two options is equivalent, because ACM supports both generating and importing certificates. The difference lies in the second part of each answer.</p>
<h2 id="heading-question-technique-6">Question technique 6</h2>
<p>A software gaming company has produced an online racing game which has become an overnight craze. Due to the overwhelming success of the gaming application you need to implement security controls within the environment. The application uses EC2 instances, provisioned by an ASG, connected to an application load balancer. You need to implement a system using AWS tools and services which can conduct an analysis of EC2 instances checking for vulnerabilities. And a tool which can check the AWS account, products and services for compliance against best practice standards over time.</p>
<p>Which AWS products should you suggest? (choose two)</p>
<p><strong>Motivation -</strong> Essentially this question revolves around security: it talks about vulnerability scanning and compliance. Most likely everything about the ALB and ASG is just filler. It also asks for AWS product recommendations, so we just need to find suitable services that meet the requirements.</p>
<p><strong>Question highlight</strong></p>
<p>A software gaming company has produced an online racing game which has become an overnight craze. Due to the overwhelming success of the gaming application you need to implement security controls within the environment. The application uses EC2 instances, provisioned by an ASG, connected to an application load balancer. You need to implement a system using AWS tools and services which can <strong>conduct an analysis</strong> of EC2 instances checking for vulnerabilities. And a tool which can <strong>check the AWS account, products and services for compliance against best practice standards over time.</strong></p>
<p>Which AWS products should you suggest? <strong>(choose two)</strong></p>
<p><strong>Which options do you suggest</strong></p>
<ol>
<li>CloudTrail</li>
<li>AWS Config</li>
<li>Inspector</li>
<li>WAF &amp; Shield</li>
</ol>
<p><strong>Answer highlights</strong></p>
<ol>
<li><strong>CloudTrail</strong></li>
</ol>
<blockquote>
<p>Used for API activity logging. Nothing to do with vulnerability scanning.</p>
</blockquote>
<ol start="2">
<li><strong>AWS Config</strong></li>
</ol>
<blockquote>
<p>Well suited to checking for drift from compliance standards over time.</p>
</blockquote>
<ol start="3">
<li><strong>Inspector</strong></li>
</ol>
<blockquote>
<p>Well suited to running vulnerability scans on EC2 instances.</p>
</blockquote>
<ol start="4">
<li><strong>WAF &amp; Shield</strong></li>
</ol>
<blockquote>
<p>Used for web application protection. It does not assess the AWS account or EC2 instances for compliance.</p>
</blockquote>
<p><strong>Trick 6 - Ignore the garbage</strong></p>
<p>Some questions contain a large amount of unnecessary information, and spotting it comes with experience. In this example only the last two requirements actually matter; the rest of the paragraph is noise.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Kinesis optimal shards and cost estimation]]></title><description><![CDATA[Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data at any scale at the most optimal costs. It supports real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for variou...]]></description><link>https://blog.instructorchandra.com/aws-kinesis-optimal-shards-and-cost-estimation</link><guid isPermaLink="true">https://blog.instructorchandra.com/aws-kinesis-optimal-shards-and-cost-estimation</guid><category><![CDATA[AWS]]></category><category><![CDATA[distributed system]]></category><category><![CDATA[kafka]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Sun, 23 Jan 2022 08:23:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1643010852515/3QBZrh7wY.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data at any scale at the most optimal costs. It supports real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for various applications. </p>
<p>Calculating the optimal number of shards is important for improving the efficiency and lowering the cost of the data stream.</p>
<p>We are going to use a simple producer and consumer process, using the code in this repo: <a target="_blank" href="https://github.com/chandradeoarya/kinesis-shard-estimation">Kinesis shard estimation</a>.</p>
<h4 id="heading-producer">Producer</h4>
<p>Generates random characters and puts them into the stream as records.</p>
<h4 id="heading-consumer">Consumer</h4>
<p>Gets batches of records, scans them for the search pattern, and prints the matches to the terminal.</p>
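<p>To make the producer/consumer behaviour concrete, here is a minimal local sketch (no AWS required) of the same generate-and-search logic. The pattern name <code>egg</code>, the planted offsets, and the helper names are assumptions for illustration, not the repo's actual code.</p>

```python
import random
import string

EGG = "egg"  # search pattern (name assumed for illustration)

def make_record(length=2000, egg_positions=(797, 1893)):
    """Build a record of random characters with the pattern planted at
    known offsets, imitating what the producer streams into Kinesis."""
    chars = [random.choice(string.ascii_lowercase) for _ in range(length)]
    record = "".join(chars)
    for pos in egg_positions:
        # splice the pattern in without changing the record length
        record = record[:pos] + EGG + record[pos + len(EGG):]
    return record

def find_eggs(record, pattern=EGG):
    """Scan one record for every occurrence of the pattern, as the
    consumer does for each batch it reads from a shard."""
    locations, start = [], record.find(pattern)
    while start != -1:
        locations.append(start)
        start = record.find(pattern, start + 1)
    return locations

record = make_record()
print("egg location:", find_eggs(record))
```

<p>The real worker does the same scan per batch of records fetched from each shard, which is why the sample output below reports "egg location" lists per shard worker.</p>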
<p>Now install the boto Python package for interacting with AWS (<code>pip install boto</code>) and start the long-running tasks.</p>
<h4 id="heading-long-running-tasks">Long running tasks</h4>
<p><code>nohup python producer.py test --shard_count 1 --poster_count 50 --poster_time 34560 --quiet &amp;</code></p>
<p><code>nohup python worker.py test --sleep_interval 0.1 --worker_time 34560 &gt; 01consumer.out 2&gt; 01worker.err &lt; /dev/null &amp;</code></p>
<p>With this setup done, the consumer will start reporting the patterns it finds in the data.</p>
<pre><code><span class="hljs-operator">+</span><span class="hljs-operator">-</span><span class="hljs-operator">&gt;</span> shard_worker:<span class="hljs-number">0</span> Got <span class="hljs-number">25</span> Worker Records
<span class="hljs-operator">+</span><span class="hljs-operator">-</span><span class="hljs-operator">-</span><span class="hljs-operator">&gt;</span> egg location: [<span class="hljs-number">797</span>, <span class="hljs-number">1893</span>] <span class="hljs-operator">&lt;</span><span class="hljs-operator">-</span><span class="hljs-operator">-</span><span class="hljs-operator">+</span>
<span class="hljs-operator">+</span><span class="hljs-operator">-</span><span class="hljs-operator">-</span><span class="hljs-operator">&gt;</span> egg location: [<span class="hljs-number">1113</span>] <span class="hljs-operator">&lt;</span><span class="hljs-operator">-</span><span class="hljs-operator">-</span><span class="hljs-operator">+</span>
</code></pre><p>With this basic understanding and hands-on setup, we will look at a real-world example and perform shard and cost estimation.</p>
<h2 id="heading-shard-estimation">Shard estimation</h2>
<h4 id="heading-question">Question</h4>
<p>20 stock exchange servers are each generating 10 records of 250 KiB per second. 3 trading servers are each consuming 50,000 KiB of such data per second. Estimate the number of shards required for this workload in AWS Kinesis.</p>
<h4 id="heading-solution">Solution</h4>
<p>AWS has defined the below formula to calculate the number of shards</p>
<p><strong>Number_of_shards = max(incoming_write_bandwidth_in_KiB/1024, outgoing_read_bandwidth_in_KiB/2048)</strong></p>
<p>In our case,</p>
<p><strong>incoming_write_bandwidth_in_KiB</strong>  =</p>
<pre><code>avg.data size in kb <span class="hljs-operator">*</span> records per second
                                <span class="hljs-operator">=</span> <span class="hljs-number">250</span> <span class="hljs-operator">*</span> <span class="hljs-number">20</span><span class="hljs-operator">*</span> <span class="hljs-number">10</span> <span class="hljs-operator">=</span> <span class="hljs-number">50000</span>
</code></pre><p><strong>outgoing_read_bandwidth_in_KiB</strong>  =</p>
<pre><code>incoming_write_bandwidth_in_KiB <span class="hljs-operator">*</span> consumers
                                <span class="hljs-operator">=</span>  <span class="hljs-number">50000</span> <span class="hljs-operator">*</span> <span class="hljs-number">3</span> <span class="hljs-operator">=</span> <span class="hljs-number">150000</span>
</code></pre><p>So, No.of.Shards</p>
<pre><code><span class="hljs-operator">=</span> max (<span class="hljs-number">50000</span><span class="hljs-operator">/</span><span class="hljs-number">1024</span>,<span class="hljs-number">150000</span><span class="hljs-operator">/</span><span class="hljs-number">2048</span>)
                 <span class="hljs-operator">=</span> max (<span class="hljs-number">48.8</span> , <span class="hljs-number">73.2</span>)
                 <span class="hljs-operator">=</span> <span class="hljs-number">73.2</span>
</code></pre><p><strong><em>Rounding up, we need 74 shards.</em></strong></p>
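<p>The same formula can be wrapped in a small helper. This is a sketch of the arithmetic above; the function and parameter names are my own, not an AWS API.</p>

```python
import math

def estimate_shards(record_kib, records_per_sec_per_server, servers, consumers):
    """AWS shard formula: max(write_KiB/1024, read_KiB/2048), rounded up.
    Each consumer reads the full incoming write bandwidth."""
    write_kib = record_kib * records_per_sec_per_server * servers
    read_kib = write_kib * consumers
    return math.ceil(max(write_kib / 1024, read_kib / 2048))

# The worked example: 20 servers x 10 records/s x 250 KiB, read by 3 consumers.
print(estimate_shards(250, 10, 20, 3))  # 74
```

<p>With a single consumer the write side would dominate instead (50000/1024 ≈ 48.8, so 49 shards), which shows why the fan-out factor matters so much to the estimate.</p>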
<h2 id="heading-cost-estimation">Cost estimation</h2>
<p>Total number of shards = 74
Hours in a month = 730</p>
<p>74 shards x 730 hours in a month = 54,020.00 Shard hours per month</p>
<p>54,020.00 Shard hours per month x 0.015 USD = 810.30 USD</p>
<p>Shard hours per month cost: 810.30 USD</p>
<blockquote>
<p>There can be additional costs for extended data retention, enhanced fan-out, etc., if used.</p>
</blockquote>
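<p>The cost arithmetic above can likewise be scripted. A sketch assuming the shard-hour price used in this article (0.015 USD) and the base charge only:</p>

```python
def monthly_shard_cost(shards, price_per_shard_hour=0.015, hours_per_month=730):
    """Base shard-hour cost only; extended retention, enhanced fan-out
    and PUT payload units are billed on top when used."""
    return shards * hours_per_month * price_per_shard_hour

# 74 shards x 730 hours x 0.015 USD
print(round(monthly_shard_cost(74), 2))  # 810.3
```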
]]></content:encoded></item><item><title><![CDATA[Enable Cluster Autoscaler in EKS with easy steps]]></title><description><![CDATA[Kubernetes offers human intervention free function to scale up or down the resources to meet the changing demands. Cloud Autoscaler is an EKS supported feature to enable on any existing cluster.
Prerequisites

An existing Amazon EKS cluster
Access to...]]></description><link>https://blog.instructorchandra.com/enable-cluster-autoscaler-in-eks-with-easy-steps</link><guid isPermaLink="true">https://blog.instructorchandra.com/enable-cluster-autoscaler-in-eks-with-easy-steps</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Beginner Developers]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Wed, 17 Nov 2021 20:03:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1642966496577/BVQKzb4k8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Kubernetes can scale resources up or down to meet changing demand without human intervention. The Cluster Autoscaler is an EKS-supported feature that can be enabled on any existing cluster.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ol>
<li>An existing Amazon EKS cluster</li>
<li>Access to Kubeconfig file</li>
<li>Kubectl and AWS Cli</li>
</ol>
<p>Right now the IAM roles applied to your cluster and nodes look like this. We will add a new IAM policy to the IAM role on the nodes.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1642967572579/M0vn-r1io.png" alt="Screenshot from 2022-01-24 01-22-26.png" /></p>
<h2 id="heading-configuring-cluster-autoscaler">Configuring Cluster Autoscaler</h2>
<ol>
<li>Create a policy with the following content. You can name it ClusterAutoscalerPolicy.</li>
</ol>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"autoscaling:DescribeAutoScalingGroups"</span>,
                <span class="hljs-string">"autoscaling:DescribeAutoScalingInstances"</span>,
                <span class="hljs-string">"autoscaling:DescribeLaunchConfigurations"</span>,
                <span class="hljs-string">"autoscaling:DescribeTags"</span>,
                <span class="hljs-string">"autoscaling:SetDesiredCapacity"</span>,
                <span class="hljs-string">"autoscaling:TerminateInstanceInAutoScalingGroup"</span>,
                <span class="hljs-string">"ec2:DescribeLaunchTemplateVersions"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>,
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>
        }
    ]
}
</code></pre>
<ol start="2">
<li>Attach this policy to the IAM Worker Node Role which is already in use.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1642967721521/kKO0rJJQ7.jpeg" alt="eks (1).jpg" /></p>
<ol start="3">
<li>Deploy the <code>Cluster Autoscaler</code> with the following command.</li>
</ol>
<pre><code class="lang-bash">kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
</code></pre>
<ol start="4">
<li><p>Add an annotation to the deployment with the following command.</p>
<pre><code class="lang-bash">kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict=<span class="hljs-string">"false"</span>
</code></pre>
</li>
<li><p>Edit the Cluster Autoscaler deployment with the following command.</p>
<pre><code class="lang-bash">kubectl -n kube-system edit deployment.apps/cluster-autoscaler
</code></pre>
<p>This command opens the YAML file for editing. Replace the placeholder value with your own cluster name, and add the option <code>--skip-nodes-with-system-pods=false</code> to the command section under <code>containers</code> under <code>spec</code>. Save and exit the file by typing <code>:wq</code>. The changes will be applied.</p>
</li>
<li><p>Find an appropriate Cluster Autoscaler version in the <a target="_blank" href="https://github.com/kubernetes/autoscaler/releases">releases list</a>. The version number should start with your cluster's Kubernetes version. For example, if your cluster runs Kubernetes 1.17, look for a release like <code>1.17.x</code>.</p>
</li>
<li><p>Then, in the following command, set the Cluster Autoscaler image tag to the version you found in the previous step.</p>
<pre><code class="lang-bash">kubectl -n kube-system <span class="hljs-built_in">set</span> image deployment.apps/cluster-autoscaler cluster-autoscaler=us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:&lt;YOUR-VERSION-HERE&gt;
</code></pre>
</li>
</ol>
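<p>The version-matching rule from the last two steps can be sketched as a small helper. The release list here is example data, not the actual GitHub releases, and the function name is my own.</p>

```python
def pick_autoscaler_tag(cluster_version, releases):
    """Return the newest release whose major.minor prefix matches the
    cluster's Kubernetes version, e.g. "1.17" -> the latest 1.17.x."""
    prefix = ".".join(cluster_version.split(".")[:2]) + "."
    matching = [r for r in releases if r.startswith(prefix)]
    # compare by numeric patch number so "1.17.10" beats "1.17.4"
    return max(matching, key=lambda r: int(r.split(".")[2])) if matching else None

releases = ["1.16.7", "1.17.3", "1.17.4", "1.18.2"]  # example data
print(pick_autoscaler_tag("1.17", releases))  # 1.17.4
```

<p>The chosen tag is then what you would substitute for <code>&lt;YOUR-VERSION-HERE&gt;</code> in the <code>kubectl set image</code> command above.</p>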
<h2 id="heading-conclusion">Conclusion</h2>
<p>The Kubernetes Cluster Autoscaler is an easy way to prevent downtime by adjusting the number of nodes when pods fail or are rescheduled onto other nodes. It runs as a Deployment in the EKS cluster, as we installed above. Basic familiarity with it can save extensive manual effort.</p>
]]></content:encoded></item><item><title><![CDATA[Dynamic Volume Provisionining in AWS EKS using EBS]]></title><description><![CDATA[Pods in Kubernetes need to store data which gets lost if kubelet restarts the pod or pods are intentionally deleted or recreated. 
Empty Dir volume or Host path volume use the local disk of the node to mount the volume. Cloud volumes like AWS EBS mou...]]></description><link>https://blog.instructorchandra.com/dynamic-volume-provisionining-in-aws-eks-using-ebs</link><guid isPermaLink="true">https://blog.instructorchandra.com/dynamic-volume-provisionining-in-aws-eks-using-ebs</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[AWS]]></category><category><![CDATA[storage]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Chandradeo Arya]]></dc:creator><pubDate>Wed, 29 Sep 2021 07:10:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1643006351340/jxQpSjzA1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Pods in Kubernetes need to store data which gets lost if kubelet restarts the pod or pods are intentionally deleted or recreated. </p>
<p>Empty Dir volume or Host path volume use the local disk of the node to mount the volume. Cloud volumes like AWS EBS mount the disk in the same manner but with different implementation.</p>
<p>In this tutorial we will learn the concept of Persistent Volume and Persistent Volume Claim and its implementation on AWS EKS using EBS.</p>
<h2 id="heading-persistent-volume-pv">Persistent volume (PV)</h2>
<p>As we saw, emptyDir and hostPath volumes are tightly coupled to pods because they use the local disk of the node to mount the volume. A Persistent Volume, by contrast, is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.</p>
<p>Important things to remember:</p>
<ul>
<li>It is a resource in the cluster just like a node is a cluster resource. </li>
<li>PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.</li>
<li>PV is used in the same manner as emptyDir or hostPath but not provisioned by pods.</li>
<li>It shifts the provisioning burden from developers to administrators.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1643007075523/zhi5M8qFK.png" alt="Screenshot from 2022-01-24 12-20-33.png" /></p>
<h2 id="heading-persistent-volume-claim-pvc">Persistent volume claim (PVC)</h2>
<p>A cluster administrator creates the volumes, as we saw previously, and pods can now access them through a PVC. It is a level of abstraction between the volume and its storage mechanism. Once a PVC is created, only your application's requirements matter, e.g. size, type, and access permissions.</p>
<p>We can also specify different access modes in a PVC.</p>
<ul>
<li><strong>ReadWriteOnce</strong> - the volume can be mounted read-write by a single node.</li>
<li><strong>ReadOnlyMany</strong> - the volume can be mounted read-only by many nodes.</li>
<li><strong>ReadWriteMany</strong> - the volume can be mounted read-write by many nodes.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1643007328857/bqdqagcGA.png" alt="Screenshot from 2022-01-24 12-20-55.png" /></p>
<h2 id="heading-dynamic-volume-provisionining-in-aws-eks-using-ebs">Dynamic Volume Provisionining in AWS EKS using EBS</h2>
<ul>
<li>Create a StorageClass with the following settings.</li>
</ul>
<pre><code class="lang-bash">$ cat storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-standard
  annotations:
    storageclass.kubernetes.io/is-default-class: <span class="hljs-string">"true"</span>
provisioner: kubernetes.io/aws-ebs
parameters:
  <span class="hljs-built_in">type</span>: gp2
  fsType: ext4
</code></pre>
<ul>
<li>Create StorageClass with <code>kubectl apply</code> command.</li>
</ul>
<pre><code class="lang-bash">$ kubectl apply -f storage-class.yaml
storageclass.storage.k8s.io/aws-standard created
</code></pre>
<ul>
<li>Examine the default StorageClass.</li>
</ul>
<pre><code class="lang-bash">$ kubectl get storageclass
NAME                     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
aws-standard (default)   kubernetes.io/aws-ebs   Delete          Immediate              <span class="hljs-literal">false</span>                  37s
gp2 (default)            kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   <span class="hljs-literal">false</span>                  112m
</code></pre>
<ul>
<li>Create a PersistentVolumeClaim with the following settings, and confirm in the AWS Management Console that a new volume has been created.</li>
</ul>
<pre><code class="lang-bash">$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: aws-standard
</code></pre>
<pre><code class="lang-bash">$ kubectl apply -f pvc.yaml
persistentvolumeclaim/pv-claim created
</code></pre>
<ul>
<li>List the PV and PVC and examine how they are bound.</li>
</ul>
<pre><code class="lang-bash">$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-250719ae-36b8-441a-ad04-f6f69c1a11f0   3Gi        RWO            Delete           Bound    default/pv-claim   aws-standard            18s
$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-claim   Bound    pvc-250719ae-36b8-441a-ad04-f6f69c1a11f0   3Gi        RWO            aws-standard   2m30s
</code></pre>
<ul>
<li>Create a pod with the following settings.</li>
</ul>
<pre><code class="lang-bash">$ cat dynamic-storage-aws.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-eks-dynamic-storage
  labels:
    app : web-nginx
spec:
  containers:
  - image: nginx:latest
    ports:
    - containerPort: 80
    name: test-aws
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: aws-pd
  volumes:
  - name: aws-pd
    persistentVolumeClaim:
      claimName: pv-claim
</code></pre>
<pre><code class="lang-bash">$ kubectl apply -f dynamic-storage-aws.yaml
pod/test-eks-dynamic-storage created
</code></pre>
<ul>
<li><p>Enter the pod and see that the EBS volume is mounted at the /usr/share/nginx/html path. Once we are done, we can delete everything.</p>
<pre><code class="lang-bash">$ kubectl <span class="hljs-built_in">exec</span> -it test-eks-dynamic-storage -- bash
root@test-eks-dynamic-storage:/<span class="hljs-comment"># df -kh</span>
Filesystem      Size  Used Avail Use% Mounted on
overlay         8.0G  4.3G  3.7G  54% /
tmpfs            64M     0   64M   0% /dev
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/xvda1      8.0G  4.3G  3.7G  54% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/xvdbq      2.9G  9.0M  2.9G   1% /usr/share/nginx/html
tmpfs           2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           2.0G     0  2.0G   0% /proc/acpi
tmpfs           2.0G     0  2.0G   0% /proc/scsi
tmpfs           2.0G     0  2.0G   0% /sys/firmware
root@test-eks-dynamic-storage:/<span class="hljs-comment">#</span>
</code></pre>
</li>
<li><p>Once done, delete the duplicate default StorageClass so only one default remains.</p>
</li>
</ul>
<pre><code class="lang-bash">$ kubectl get storageclass
NAME                     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
aws-standard (default)   kubernetes.io/aws-ebs   Delete          Immediate              <span class="hljs-literal">false</span>                  16m
gp2 (default)            kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   <span class="hljs-literal">false</span>                  39m
$ kubectl delete storageclass gp2
storageclass.storage.k8s.io <span class="hljs-string">"gp2"</span> deleted
$ kubectl get storageclass
NAME                     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
aws-standard (default)   kubernetes.io/aws-ebs   Delete          Immediate           <span class="hljs-literal">false</span>                  16m
</code></pre>
<ul>
<li>Delete the pod as well once done</li>
</ul>
<pre><code class="lang-bash">$ kubectl delete -f dynamic-storage-aws.yaml 
pod <span class="hljs-string">"test-eks-dynamic-storage"</span> deleted
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>As we saw, Kubernetes volumes provide an easy solution to the ephemeral nature of container storage and also allow sharing files between containers.</p>
]]></content:encoded></item></channel></rss>