json=[
{
"title":"AWS Announcements Wishlist",
"body":"Before heading to AWS re:Invent this year, my team and I put together a wishlist of anticipated announcements\u2014features and services we hoped for. It was exciting to see some of them become reality! In this post, I'll walk through our wishlist for re:Invent 2024 but now for 2025 and discuss their real-world use cases. Amazon Q .NET (Announced) The previous .NET Modernization tool, AWS Por...",
"post_url":"https://www.kloia.com/blog/aws-2025-announcements-wishlist",
"author":"Derya (Dorian) Sezen",
"publish_date":"31-<span>Mar<\/span>-2025",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/AWS%202025%20Announcements%20Wishlist-1-1.png",
"topics":{ "aws":"AWS","cloud":"Cloud","software":"Software","map":"map","aws-partner":"AWS Partner","reinvent2023":"reinvent2023","reinvent":"reinvent","awsambassador":"awsambassador" },
"search":"17 <span>apr</span>, 2025aws announcements wishlist aws,cloud,software,map,aws partner,reinvent2023,reinvent,awsambassador derya (dorian) sezen before heading to aws re:invent this year, my team and i put together a wishlist of anticipated announcements\u2014features and services we hoped for. it was exciting to see some of them become reality! in this post, i'll walk through our wishlist for re:invent 2024 but now for 2025 and discuss their real-world use cases. amazon q .net (announced) the previous .net modernization tool, aws porting assistant, relied primarily on predefined rules, limiting its ability to modernize anything outside those constraints. with the introduction of generative ai, amazon q .net delivered more accurate results in one of our modernization projects. you can read more about our experience in the linked blog post. amazon q cobol (announced) mainframe\/cobol-based systems are still widely used and are among the few that require specialized ibm hardware, with no direct support from cloud providers. the only way to migrate these workloads is through modernization. aws previously acquired bluage, a company specializing in mainframe modernization, but its rule-based approach came with high costs. we explored it in the past but couldn\u2019t proceed with real customers due to its limitations. in contrast, amazon q cobol leverages generative ai, which we expect to offer better accuracy and a more budget-friendly solution. autonomic computing ibm introduced the concept of autonomic computing in 2001, and we continue to see advancements toward realizing this vision. ref: ibm autonomic computing and solution installation david cole ibm ac customer and partner programs the levels from 1 to 5 define the maturity of a computing system. currently, we mostly experience levels 2 and 3. despite more than two decades of progress, the advancement has been slow. aws is in a position to introduce additional functionality within existing services to help advance towards level 3 (predictive) and level 4 (adaptive). in particular, functions addressing level 4 could be implemented. here are some of the functions that come to mind for potential inclusion in existing aws services: level 4: scalability use case: the system learns the business's peak traffic or computing periods based on historical data and adapts its scalability accordingly. this can be influenced by external factors such as: weather conditions political climate sociocultural events public events or news that affect the business security use case: blocks the addition of annotations to kubernetes in response to the latest nginx vulnerability. several examples can be added to help a system evolve towards level 4 (adaptive). carbon calculator aws has several future plans to become more carbon neutral, including optimizing data center energy use. currently, aws data centers are more than 4 times more efficient than traditional on-premises data centers. some aws services are also known for their energy efficiency, such as arm-based graviton instances, which offer over 60% better energy efficiency. the carbon calculator could be a tool that calculates customer workloads based on the aws services they are using and provides additional discounts based on the carbon index (ci) calculated by aws. this could encourage aws customers to shift more towards carbon-neutral services. cloudfront built-in functions cdn (content delivery network) is a large market with several players. 
when examining the maturity of market products, we see various built-in features, such as: image resizing image framing image watermarking for a full list of features from a major competitor, akamai, you can view their offerings here. in cloudfront, these features are not built-in. however, some marketplace products can address these needs, or you can create custom lambda functions. while possible, this requires additional effort and is not ideal for teams with limited aws expertise. to better meet advanced cdn requirements, we would love to see these functions offered as built-in features in cloudfront. cqrs\/event sourcing scalecube, as defined, has three dimensions, and aws\/kubernetes currently addresses only one of those: horizontal scaling. the other two dimensions are closely tied to software architecture, but could aws do something to support them? one dimension is functional decomposition, which opens the door to distributed architectures but also introduces challenges in managing distributed transactions. one way to address this is by replacing strict consistency with eventual consistency. as we know, many distributed systems that require scaling, like aws, rely on eventual consistency. however, aws still lacks services or features that explicitly support this. this may be a controversial topic, as it heavily depends on software architecture, but aws could still be in a position to develop open-source libraries that help developers leverage eventual consistency. these libraries could be pre-built on selected event store and event stream services. db saving plans i know many of you are waiting for this, but aws customers who are benefiting from savings plans on compute are also looking for similar options for rds. evs (announced during re:invent) vmware customers are seeking alternatives following the recent licensing changes, prompting the industry to explore other options. aws has also announced the evs (preview) service to provide vmware customers with an alternative path to aws. tech community is now waiting the ga(generally available) version of this service. at kloia, our focus is on modernization, and our \"vmware exit\" strategy is based on kubevirt, which we'll be discussing next: kubevirt we have been following kubevirt since its release, and our evaluation of it was completed quickly. at kloia, we are currently migrating workloads from vmware directly to kubevirt on aws. here\u2019s a brief value proposition for kubevirt: running vms within pods like containers, which merges the distinct worlds of vms and containers under kubernetes, simplifies infrastructure management. sounds interesting, right? :) although not officially announced during re:invent, aws was the first cloud provider to announce support for openshift virtualization (kubevirt). the typical architecture with kubevirt looks like this: ref: openshift virtualization on red hat openshift service on aws (rosa) finops with the release of the latest finops framework by the finops foundation, organizational requirements and practices around finops have evolved. aws already offers cost-related tools such as tagging, cost explorer, and trusted advisor. while these support finops efforts to some extent, we believe aws could develop an end-to-end dashboard that fully supports all processes and requirements defined in the finops framework. managed backstage platform engineering has been evolving rapidly, addressing gaps that organizations still face in the devops movement. 
idp (internal developer platform) is a key tool in platform engineering, providing delivery teams (development teams) with all the necessary functions to consume the platform. backstage, developed by the spotify team and open-sourced to the community, is one of the most popular idps. during re:invent 2024, spotify independently announced a managed backstage service. we hope aws will recognize the demand from platform engineering teams and bring this managed service to aws as well. multi-control tower organizations as a constraint, an organization can only have one control tower on aws, which limits the ability to govern multiple financial accounts under a single account. additionally, services like sso, which are managed at the organization level, highlight the need for separating the organizational management account from the financial management account. i understand this has been a long-standing request from aws, and given its impact on financial and billing processes, it requires careful consideration. however, we are eager to see this functionality implemented soon on aws. rag retrieval-augmented generation (rag) is a well-known technique used in genai projects. but what if aws were to introduce rag-specific services, such as: amazon bedrock rag accelerator: a fully managed service to streamlineretrieval-augmented generation pipelines with native support foramazon kendra, opensearch, and vector dbs. aws rag agent for bedrock:a serverless function that automatically manages retrieval, embedding storage, and grounding to improve response accuracy. rag-aware api gateway: a managed api that can automatically handle retrieval latency, caching, and chunking strategies for llm applications. slm not everything requires an llm (large language model); we\u2019ve found that slms (small language models) are sufficient for many businesses. what if aws were to introduce slm-specific services, such as: amazon slm studioa low-cost model hosting and fine-tuning service optimized for slms like mistral or tinyllama, with on-demand or event-driven inference. aws lambda ai extensionsbring slm inference capabilities to aws lambda with low-memory optimizations and lightweight inference runtimes. bedrock model shrinkera tool that compresses and distills large models into slms while maintaining accuracy for specific use cases. vectordb serverless we frequently receive the same question from our customers: which vector db do you recommend? here are some vector db-related features that aws could develop: amazon aurora vector editionadds native vector storage and search capabilities to aurora with sql-based embedding queries. amazon dynamodb vector modea fully managed, serverless vector database optimized for fast, scalable, key-value retrieval with integrated similarity search. amazon opensearch serverless vector indexenhancements to opensearch serverless to natively support hnsw (hierarchical navigable small world) indexing and approximate nearest neighbor (ann) search. turkiye localzone aws is a growing community and business in t\u00FCrkiye, but has been waiting for a local zone to meet data residency requirements for some time. we are hopeful that aws will announce the t\u00FCrkiye local zone in 2025. --------- in summary, i am hopeful that aws will make significant announcements in 2025 that will address many of the features and improvements we've been wishing for. 
with the growing demand from the aws community and the evolving needs of businesses, it's clear that there are opportunities to enhance the platform in ways that would better support both technical and organizational requirements. from more advanced services to fulfill the needs of platform engineering teams, to introducing features that cater to emerging trends like retrieval-augmented generation or small language models, there is much potential. additionally, the long-awaited local zone in t\u00FCrkiye could be a game-changer for meeting data residency requirements in the region. let\u2019s see if our wishes come true in 2025! :)"
},
{
"title":"KubeVirt on AWS EKS: Unifying Containers and VMs in Kubernetes",
"body":"Kubernetes is the de facto standard for container orchestration, enabling developers to easily deploy, scale, and manage containerized applications. However, there are scenarios where running virtual machines (VMs) alongside containers is necessary. This is where KubeVirt comes into play. KubeVirt is a Kubernetes extension that allows you to manage VMs alongside containers within the sam...",
"post_url":"https://www.kloia.com/blog/kubevirt-on-aws-eks-unifying-containers-and-vms-in-kubernetes",
"author":"Ahmet Ayd\u0131n",
"publish_date":"25-<span>Mar<\/span>-2025",
"author_url":"https://www.kloia.com/blog/author/ahmet-aydın",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/KubeVirt%20on%20AWS%20EKS_%20Unifying%20Containers%20and%20VMs%20in%20Kubernetes-1.png",
"topics":{ "aws":"AWS","devops":"DevOps","kubernetes":"Kubernetes","eks":"EKS","containers":"Containers","kubevirt":"KubeVirt" },
"search":"11 <span>apr</span>, 2025kubevirt on aws eks: unifying containers and vms in kubernetes aws,devops,kubernetes,eks,containers,kubevirt ahmet ayd\u0131n kubernetes is the de facto standard for container orchestration, enabling developers to easily deploy, scale, and manage containerized applications. however, there are scenarios where running virtual machines (vms) alongside containers is necessary. this is where kubevirt comes into play. kubevirt is a kubernetes extension that allows you to manage vms alongside containers within the same kubernetes cluster. this blog post will guide you through the process of installing kubevirt on an aws elastic kubernetes service (eks) cluster, complete with examples and best practices. use cases for kubevirt kubevirt is particularly useful in scenarios where: legacy applications: many organizations still rely on legacy applications that are not easily containerized. kubevirt allows these applications to run as vms within a kubernetes cluster, enabling modernization without rewriting or refactoring. specialized workloads: certain workloads, such as those requiring specific kernel versions or custom drivers, are better suited to vms. kubevirt provides the flexibility to run these workloads alongside containerized applications. development and testing: developers can use kubevirt to create isolated environments for testing and development, ensuring consistency across different stages of the software lifecycle. hybrid cloud: organizations with hybrid cloud strategies can use kubevirt to manage vms across on-premises and cloud environments, providing a unified management plane. integration with other kubernetes tools kubevirt can be integrated with other kubernetes tools to enhance its capabilities: istio: by integrating kubevirt with istio, you can manage traffic between vms and containers, enabling advanced networking features like service mesh and traffic routing. prometheus and grafana: kubevirt can be monitored using prometheus and grafana, providing insights into vm performance and health. velero: velero can be used for backup and disaster recovery of vms, ensuring data integrity and availability. argo cd: for gitops workflows, argo cd can manage the deployment and lifecycle of kubevirt resources, ensuring consistency and version control. challenges and considerations while kubevirt offers numerous benefits, there are challenges and considerations to keep in mind: resource management: running vms alongside containers can lead to resource contention. proper resource allocation and monitoring are essential to ensure optimal performance. networking complexity: configuring networking for vms, especially in a multi-tenant environment, can be complex. tools like multus and cni plugins can help, but they require careful planning and configuration. security: vms introduce additional security considerations, such as hypervisor vulnerabilities and vm isolation. implementing robust security practices, including network policies and rbac, is crucial. operational overhead: managing both containers and vms within the same cluster can increase operational complexity. automation and orchestration tools can help mitigate this overhead. community and ecosystem kubevirt is backed by a vibrant community and ecosystem, with contributions from major players in the cloud-native space. the project is part of the cloud native computing foundation (cncf), ensuring ongoing development and support. 
engaging with the community through forums, mailing lists, and conferences can provide valuable insights and support. prerequisites before we dive into the installation process, ensure that you have the following prerequisites in place: an aws account with sufficient permissions to create and manage eks clusters. aws cli is installed and configured on your local machine. kubectl is installed and configured to interact with your eks cluster. helm is installed for managing kubernetes applications. virtctl is installed for managing virtual machines. setting up aws eks creating an eks cluster to get started, you need to create an eks cluster. you can do this using the aws management console, aws cli, or tools like eksctl. for simplicity, i\u2019ll use eksctl in this guide. 1. install eksctl: if you haven't already, install eksctl by following the official documentation. 2. create an eks cluster: use the following command to create a basic eks cluster: eksctl create cluster \\ --name kubevirt-test \\ --region eu-west-1 \\ --nodegroup-name standard-workers \\ --node-type c5.metal \\ --nodes 2 \\ --nodes-min 1 \\ --nodes-max 4 \\ --managed this command creates a cluster named kubevirt-test in the eu-west-1 region with two c5.metal worker nodes. 3. verify the cluster: once the cluster is created, verify its status using: eksctl get cluster --region eu-west-1 installing kubevirt with the eks cluster up and running, the next step is to install the kubevirt operator. deploying kubevirt operator kubevirt is typically installed using the kubevirt operator, which manages the lifecycle of kubevirt components. 1. add the kubevirt helm repository: helm repo add kubevirt https:\/\/kubevirt.io\/helm-charts helm repo update 2. install kubevirt operator: helm install kubevirt kubevirt\/kubevirt \\ --namespace kubevirt \\ --create-namespace \\ \u00A0--set operator.image.tag=v0.54.0 \\ --set operator.image.pullpolicy=ifnotpresent 3. verify the installation: kubectl get pods -n kubevirt you should see the kubevirt operator pod running. verifying the installation to ensure that kubevirt is installed correctly, deploy a simple vm and check its status. 1. create a virtualmachine instance: apiversion: kubevirt.io\/v1 kind: virtualmachine metadata: name: test-vm spec: running: false template: metadata: labels: kubevirt.io\/vm: test-vm spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio resources: requests: memory: 64m volumes: - name: containerdisk containerdisk: image: kubevirt\/cirros-registry-disk-demo - name: cloudinitdisk cloudinitnocloud: userdata: | #cloud-config password: password chpasswd: { expire: false } save this yaml to a file named test-vm.yaml and apply it: kubectl apply -f test-vm.yaml 2. start the vm: virtctl start test-vm 3. check the vm status: kubectl get vmi managing vm lifecycle kubevirt provides several commands to manage the lifecycle of vms: # access to the virtual machine virtctl console test-vm # stop the virtual machine virtctl stop test-vm # restart the virtual machine : virtctl restart test-vm # delete the virtual machine : virtctl delete test-vm advanced topics live migration: kubevirt supports live migration of vms, allowing you to move a running vm from one node to another without downtime. 
enable live migration: update the kubevirt cr to enable live migration: apiversion: kubevirt.io\/v1 kind: kubevirt metadata: name: kubevirt namespace: kubevirt spec: \u00A0 configuration: developerconfiguration: featuregates: \u00A0 \u00A0 \u00A0 \u00A0 - livemigration migrate a vm: use the virtctl command to migrate a vm: virtctl migrate test-vm snapshots and cloning kubevirt provides snapshot and cloning capabilities for vms. create a snapshot: virtctl snapshot create test-vm --name test-snapshot clone a vm: virtctl clone create test-vm --name test-clone best practices running kubevirt on aws eks introduces unique challenges due to the hybrid nature of managing both containers and virtual machines (vms) in a single kubernetes cluster. to ensure stability, security, and performance, follow these best practices: security considerations: network security & isolation use kubernetes network policies: restrict vm-to-vm and vm-to-container communication using networkpolicy rules. for example: apiversion: networking.k8s.io\/v1 kind: networkpolicy metadata: name: vm-isolation spec: podselector: matchlabels: kubevirt.io\/vm: \"\" policytypes: - ingress - egress ingress: [] egress: [] this denies all traffic by default, requiring explicit rules for allowed communication. use service meshes for advanced security: integrate istio or cilium to enforce mtls between vms and containers. enable pod security policies (psp) or opa gatekeeper: enforce security policies to prevent unauthorized access to kubevirt resources isolate sensitive vms: run vms with sensitive workloads in dedicated namespaces with restricted rbac permissions. authentication & authorization implement rbac (role-based access control): restrict virtctl and kubevirt api access to only authorized users. example minimal role: apiversion: rbac.authorization.k8s.io\/v1 kind: role metadata: namespace: vm-production rules: - apigroups: [\"kubevirt.io\"] resources: [\"virtualmachines\"] verbs: [\"get\", \"list\", \"start\", \"stop\"] use aws iam roles for service accounts (irsa): ensure secure access to aws services (ebs, ec2) from kubevirt-managed vms. enable kubevirt audit logging: monitor vm creation, modification, and deletion for compliance. vm hardening \u00A0 use read-only root disks: prevent unauthorized modifications to vm images. disable unnecessary services: minimize attack surfaces by disabling unused kernel modules and services inside vms. regularly patch vm images: apply security updates to base vm images (e.g., ubuntu, centos) just as you would for containers. conclusion kubevirt is a powerful tool that extends kubernetes capabilities to manage virtual machines alongside containers. by following this blogpost, you've learned how to install kubevirt on an aws eks cluster, create and manage vms, configure networking and storage, and explore advanced features like live migration and snapshots. with these skills, you can leverage kubevirt to run vm-based workloads in a kubernetes-native environment, unlocking new possibilities for your cloud infrastructure. as kubernetes continues to evolve, the integration of virtual machines through tools like kubevirt is becoming increasingly important. this convergence of containerized and vm-based workloads allows organizations to modernize their infrastructure without abandoning legacy applications. kubevirt bridges the gap between traditional virtualization and cloud-native technologies, enabling a seamless transition to a hybrid environment. 
as you continue to explore kubevirt, consider experimenting with its advanced features, integrating it with other kubernetes tools, and engaging with the community. whether you're running legacy applications, specialized workloads, or building a hybrid cloud environment, kubevirt on aws eks offers a powerful and flexible solution."
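For readers who prefer driving the same walkthrough from code rather than kubectl and virtctl, here is a minimal sketch using the official kubernetes Python client. It assumes a kubeconfig already pointing at the EKS cluster and KubeVirt installed as described above; the manifest mirrors the cirros test-vm example from the post, and setting spec.running to true is the API-level equivalent of "virtctl start test-vm".

from kubernetes import client, config

# Assumes the kubeconfig created by eksctl is active and KubeVirt is installed.
config.load_kube_config()
api = client.CustomObjectsApi()

# Same VirtualMachine as in the walkthrough: a cirros container disk plus cloud-init.
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "test-vm", "namespace": "default"},
    "spec": {
        "running": False,
        "template": {
            "metadata": {"labels": {"kubevirt.io/vm": "test-vm"}},
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [
                            {"name": "containerdisk", "disk": {"bus": "virtio"}},
                            {"name": "cloudinitdisk", "disk": {"bus": "virtio"}},
                        ]
                    },
                    "resources": {"requests": {"memory": "64M"}},
                },
                "volumes": [
                    {"name": "containerdisk",
                     "containerDisk": {"image": "kubevirt/cirros-registry-disk-demo"}},
                    {"name": "cloudinitdisk",
                     "cloudInitNoCloud": {"userData": "#cloud-config\npassword: password\nchpasswd: { expire: False }\n"}},
                ],
            },
        },
    },
}

# Create the VirtualMachine custom resource.
api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm)

# Equivalent of `virtctl start test-vm`: flip spec.running to true via a merge patch.
api.patch_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", name="test-vm", body={"spec": {"running": True}})

# Afterwards, `kubectl get vmi` should show the instance booting, as in the post.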
},
{
"title":"Retrieval Augmented Generation (RAG): Boosting AI Accuracy with Contextual Learning",
"body":"Retrieval Augmented Generation (RAG) is a AI methodology designed to enhance the accuracy and relevance of large language models (LLMs) by integrating real-time, external data. Unlike traditional LLMs that rely solely on static training data, RAG dynamically retrieves context from sources like documents, databases, or APIs, ensuring outputs are fact-based, up-to-date, and tailored to use...",
"post_url":"https://www.kloia.com/blog/retrieval-augmented-generation",
"author":"Mert Bozk\u0131r",
"publish_date":"09-<span>Feb<\/span>-2025",
"author_url":"https://www.kloia.com/blog/author/mert-bozkır",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/Retrieval%20Augmented%20Generation%20%28RAG%29-1.png",
"topics":{ },
"search":"09 <span>feb</span>, 2025retrieval augmented generation (rag): boosting ai accuracy with contextual learning mert bozk\u0131r retrieval augmented generation (rag) is a ai methodology designed to enhance the accuracy and relevance of large language models (llms) by integrating real-time, external data. unlike traditional llms that rely solely on static training data, rag dynamically retrieves context from sources like documents, databases, or apis, ensuring outputs are fact-based, up-to-date, and tailored to user needs. this two-step process\u2014retrieving relevant information and generating context-aware responses\u2014addresses critical llm limitations such as hallucinations (fabricated outputs) and outdated knowledge. industries like healthcare, legal, education, and content creation leverage rag to reduce errors, personalize solutions, and accelerate data-driven decision-making. by grounding ai in verified information, rag bridges the gap between raw data and actionable insights, making ai systems more reliable and trustworthy. the evolution of ai with rag artificial intelligence (ai) is advancing rapidly, and retrieval augmented generation (rag) has emerged as a game-changer for enhancing large language models (llms). by integrating external data sources, rag elevates the quality, relevance, and reliability of ai outputs. this guide explores rag\u2019s framework, tools, applications, and optimization techniques for it professionals seeking to harness its potential. what is retrieval augmented generation (rag)? rag is a hybrid ai methodology that combines information retrieval and contextual response generation to improve llm performance. here\u2019s how it works: information retrieval: rag queries external knowledge bases\u2014documents, databases, or apis\u2014to fetch real-time, domain-specific data. this bypasses llms\u2019 static training data limitations. response generation: the retrieved context is fed to the llm, enabling it to craft precise, fact-based answers through in-context learning. this dual process ensures ai outputs are accurate, relevant, and grounded in verified information. the rag ecosystem: core components to leverage rag effectively, it teams must master these key concepts: vector embeddings: convert unstructured data (text, images) into numerical representations using models like bert or gpt. these embeddings capture semantic meaning for efficient retrieval. vector databases: specialized databases (e.g., pinecone, milvus) store embeddings and enable lightning-fast similarity searches. semantic search: transform user queries into vectors to retrieve the most contextually aligned data from vector databases. why rag matters: solving llm limitations llms often struggle with hallucinations (fabricated outputs) and outdated knowledge. rag addresses these gaps by: improving accuracy: anchors responses in real-world data from trusted sources. enhancing relevance: tailors outputs to user-specific contexts (e.g., industry, use case). building trust: citations from retrieved data let users verify outputs, boosting credibility. top tools for implementing rag it teams can deploy rag using these critical tools: orchestration: frameworks like langchain streamline ai workflows. llm providers: apis from openai, anthropic, or hugging face. vector databases: solutions like weaviate or chroma for scalable similarity search. serving & inference: platforms like tensorflow serving for model deployment. 
llm observability: tools like arize ai monitor model performance and accuracy. rag in action: industry applications healthcare: accelerate diagnosis by retrieving patient history and medical research. legal: rapidly surface case law and compliance documents for litigation support. education: personalize learning paths using student performance analytics. content creation: generate fact-checked articles with cited sources. advanced rag techniques for it teams optimize rag performance with these strategies semantic chunking: split documents into context-rich segments (e.g., parent-child hierarchies) to improve retrieval precision. reranking models: use cross-encoders like cohere rerank to prioritize the most relevant documents post-retrieval. multi-step reasoning: decompose complex queries into sub-tasks routed to specialized models or databases. domain-specific embeddings: fine-tune embeddings on niche datasets (e.g., legal jargon) for better semantic alignment. self-reflection loops: enable ai models to validate outputs against retrieved data, reducing errors. we engineer rag solutions to drive efficiency, innovation, and roi for businesses. whether optimizing legal research or personalizing customer interactions, rag bridges the gap between static llms and dynamic data needs. got questions 5 faqs about agentic ai what is retrieval augmented generation (rag)? rag is an ai technique that combines information retrieval from external sources with contextual response generation using large language models. it enhances llm outputs by dynamically pulling real-time data to improve accuracy and relevance. why is rag important for ai systems? llms often produce outdated or incorrect answers due to their reliance on static training data. rag solves this by fetching real-world, domain-specific information during inference, reducing hallucinations and ensuring responses are grounded in verified facts. how do vector databases support rag? vector databases (e.g., pinecone, milvus) store data as numerical vectors, enabling rapid similarity searches. by converting text, images, or voice into vectors, rag systems efficiently retrieve the most contextually relevant information for llms. which industries benefit most from rag? healthcare: accelerates diagnosis by retrieving patient history and medical research. legal: streamlines case law and compliance document searches. education: personalizes learning materials based on student progress. content creation: generates fact-checked articles with cited sources. how can rag performance be optimized? semantic chunking: split documents into meaningful segments for precise retrieval. reranking: prioritize top results using models like cohere rerank. domain-specific embeddings: train embeddings on industry data (e.g., legal terms). multi-step reasoning: break complex queries into sub-tasks for specialized processing. have anything else to ask? share your question with us."
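To make the two-step retrieve-then-generate loop described above concrete, here is a deliberately minimal, self-contained sketch. The bag-of-words embed() function and the in-memory document list are toy stand-ins for a real embedding model (BERT, a hosted embedding API) and a vector database such as Pinecone or OpenSearch; build_prompt shows how retrieved context is placed in front of the user question before the prompt is sent to an LLM endpoint.

import math
from collections import Counter

# Toy corpus standing in for an external knowledge base (documents, DB rows, API output).
DOCS = [
    "KubeVirt lets Kubernetes manage virtual machines alongside containers.",
    "AWS Bedrock can import custom models such as DeepSeek R1 Distill-Llama 8B.",
    "RAG retrieves external context before the LLM generates its answer.",
]

def embed(text):
    # Toy bag-of-words "embedding"; swap in a real embedding model in practice.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    # Step 1: similarity search over the "vector store" (here, an in-memory list).
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query):
    # Step 2: ground the LLM with the retrieved context (in-context learning).
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    # In a real pipeline this prompt would be sent to an LLM endpoint (Bedrock, OpenAI, etc.).
    print(build_prompt("How does RAG reduce hallucinations?"))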
},
{
"title":"Top 8 AWS Generative AI Applications Driving Business Growth in 2025",
"body":"Generative AI (Gen AI) is no longer a futuristic concept\u2014it\u2019s a business imperative. By leveraging AWS\u2019s AI agents, companies across industries are automating workflows, accelerating innovation, and delivering hyper-personalized experiences. From marketing to healthcare, AWS tools like Amazon SageMaker, AWS Bedrock, and Amazon Rekognition are transforming how businesses operate. Let\u2019s ex...",
"post_url":"https://www.kloia.com/blog/https/www.kloia.com/blog/https/www.kloia.com/blog/retrieval-augmented-generation-0",
"author":"Yasemin Erinan\u00E7 Y\u0131ld\u0131z",
"publish_date":"09-<span>Feb<\/span>-2025",
"author_url":"https://www.kloia.com/blog/author/yasemin-erinanç-yıldız",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/AWS%20AI%20Agents%20in%20Generative%20AI%20for%20Businesses-1%20%281%29.png",
"topics":{ },
"search":"20 <span>mar</span>, 2025top 8 aws generative ai applications driving business growth in 2025 yasemin erinan\u00E7 y\u0131ld\u0131z generative ai (gen ai) is no longer a futuristic concept\u2014it\u2019s a business imperative. by leveraging aws\u2019s ai agents, companies across industries are automating workflows, accelerating innovation, and delivering hyper-personalized experiences. from marketing to healthcare, aws tools like amazon sagemaker, aws bedrock, and amazon rekognition are transforming how businesses operate. let\u2019s explore the top applications and their roi-driven impact. why aws generative ai aws gen ai agents combine scalability, security, and enterprise-grade infrastructure to deliver: cost efficiency: pay-as-you-go models eliminate upfront investments. speed to market: pre-built templates and apis reduce development time by 50%+. compliance: gdpr-ready frameworks ensure data privacy. multimodal flexibility: generate text, images, code, and simulations. top 8 business applications of aws generative ai 1. marketing & advertising: boost roi with personalized campaigns content personalization: aws sagemaker analyzes customer behavior to generate tailored ads, emails, and product descriptions, increasing conversion rates by up to 30%. dynamic visuals: amazon rekognition auto-generates banners and social media visuals, slashing design costs by 40%. 2. product development: accelerate innovation rapid prototyping: create 3d models and simulations with sagemaker, reducing physical prototyping costs by 60%. ai-driven design: optimize product designs using real-world feedback, cutting time-to-market by 25%. 3. healthcare: revolutionize patient outcomes drug discovery: analyze chemical datasets to identify viable drug candidates 10x faster. personalized medicine: generate custom treatment plans using patient genetics, improving recovery rates by 20%. 4. education: transform learning experiences adaptive content: convert static materials into interactive courses with amazon polly, boosting engagement by 35%. virtual training: simulate real-world scenarios for industries like aviation and healthcare, reducing training costs by 50%. 5. finance: enhance security & efficiency fraud detection: amazon fraud detector identifies suspicious transactions with 99% accuracy, saving millions in losses. automated reporting: generate real-time financial insights, cutting manual reporting hours by 70%. 6. supply chain: optimize operations inventory forecasting: amazon forecast predicts demand with 95% accuracy, minimizing stockouts. route optimization: reduce delivery costs by 15% with ai-generated logistics routes. 7. legal: streamline compliance contract automation: draft error-free legal documents in minutes, saving 20+ hours\/month. case research: summarize legal precedents 10x faster, accelerating case preparation. 8. cybersecurity: proactive defense threat simulation: test systems against ai-generated attacks, identifying vulnerabilities 50% faster. fraud prevention: detect anomalies in real-time, reducing breach risks by 40%. how to implement aws generative ai: a 5-step roadmap set up aws environment create an aws account and configure iam roles for sagemaker, lambda, and s3. aws cli installation guide. build & train models launch sagemaker studio and train models using jupyter notebooks. deploy models create sagemaker endpoints for real-time inference. automate workflows integrate aws lambda to trigger ai tasks (e.g., report generation). 
monitor & scale use amazon cloudwatch for performance tracking. enable auto-scaling to handle traffic spikes. explore aws sagemaker documentation why aws stands out in generative ai end-to-end ecosystem: from data lakes (s3) to ai\/ml tools (sagemaker, bedrock), aws offers unmatched integration. enterprise security: iso-certified infrastructure and encryption at rest. cost control: pay only for resources used, with reserved instances for long-term savings. aws generative ai isn\u2019t just about automation\u2014it\u2019s about strategic growth. by embedding ai into core workflows: 20-40% cost reduction in manual processes. 30% faster innovation cycles. enhanced customer loyalty through personalization. got questions 5 faqs about agentic ai what makes aws generative ai different from other ai platforms? aws gen ai integrates seamlessly with its cloud ecosystem (e.g., sagemaker, lambda) for end-to-end workflows, offers enterprise-grade security (gdpr\/iso compliance), and scales cost-effectively with pay-as-you-go pricing. which industries benefit most from aws generative ai? top use cases include: marketing: personalized campaigns via sagemaker. healthcare: drug discovery & patient-specific treatments. finance: fraud detection & automated reporting. supply chain: demand forecasting with amazon forecast. how does aws ensure data security in generative ai workflows? aws enforces encryption at rest\/in transit, iam role-based access, and compliance certifications (gdpr, hipaa). tools like aws kms safeguard sensitive data. can small businesses afford aws generative ai solutions? yes. aws\u2019s pay-as-you-go model and serverless options (lambda) minimize upfront costs. start with pre-built templates and scale as needed. what roi can businesses expect from aws generative ai? typical outcomes include: 20-40% cost savings from automated workflows. 30% faster product launches with ai-driven design. 50% reduction in fraud losses via real-time detection. have anything else to ask? share your question with us."
},
{
"title":"How to Deploy DeepSeek R1 Distill-Llama 8B on AWS",
"body":"In the ever-evolving landscape of artificial intelligence, DeepSeek R1 has emerged as a powerful contender, offering exceptional performance combined with cost efficiency. Developed by DeepSeek AI, this open-source model has made significant waves in the AI community, reshaping how we approach large language models. In this comprehensive guide, we\u2019ll walk you through the step-by-step pro...",
"post_url":"https://www.kloia.com/blog/how-to-deploy-deepseek-r1-distill-llama-8b-on-aws",
"author":"Ata A\u011Fr\u0131",
"publish_date":"09-<span>Feb<\/span>-2025",
"author_url":"https://www.kloia.com/blog/author/ata-ağrı",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/How%20to%20Deploy%20DeepSeek%20R1%20Distill-Llama%208B%20on%20AWS_%20A%20Comprehensive%20Guide-1.png",
"topics":{ },
"search":"09 <span>feb</span>, 2025how to deploy deepseek r1 distill-llama 8b on aws ata a\u011Fr\u0131 in the ever-evolving landscape of artificial intelligence, deepseek r1 has emerged as a powerful contender, offering exceptional performance combined with cost efficiency. developed by deepseek ai, this open-source model has made significant waves in the ai community, reshaping how we approach large language models. in this comprehensive guide, we\u2019ll walk you through the step-by-step process of deploying the distilled version of deepseek r1, known as deepseek r1 distill-llama 8b, on aws bedrock. but before diving into the technical details, let\u2019s understand a fundamental concept\u2014what is model distillation? what is knowledge distillation? knowledge distillation is transferring the knowledge of a larger model to a smaller model. by doing so, we are able to lower the computational cost with lower costs but without losing the validity. to give you a glimpse into how this is done, today, we\u2019ll be deploying deepseek r1 distill-llama 8b\u2014which is also offered by deepseek ai\u2014that distills llama 3.1 8b parameter model with deepseek r1 685b parameter model, which is a great reduction in size. before we begin, ensure you have the following: an aws account with the necessary iam roles configured (for bedrock and s3 access). the deepseek r1 distill-llama 8b model package. now, let\u2019s get started with the deployment process. step-by-step guide for deploy deepseek r1 distill-llama 8b on aws: step 1: download the deepseek r1 distill-llama 8b model to load the model package into your aws s3 bucket, you first need to download it locally from hugging face. download using git open your terminal. navigate to the directory where you want to download the model. run the following commands: git lfs install git clone git@hf.co:deepseek-ai\/deepseek-r1-distill-llama-8b step 2: upload the model package to aws s3 after downloading the model, it\u2019s time to upload it to an s3 bucket. create an s3 bucket log into the aws management console. in the search bar, type \u201Cs3\u201D and click on the service. click \u201Ccreate bucket\u201D. choose a unique name for your bucket (e.g., deepseek-r1-distill-llama-model-package). keep the default settings and click \u201Ccreate bucket\u201D. upload the model package go to your newly created bucket. click \u201Cupload\u201D. choose \u201Cadd folder\u201D since the model is in a folder. select the folder and click \u201Cupload\u201D. note: the upload process may take a few hours as the model folder is approximately 15gb. alternative method using aws cli if you encounter network issues during upload, use the aws cli: to find your s3 uri, open the s3 console, select the folder, and copy the uri (e.g., s3:\/\/my-bucket\/folder). step 3: import the model into aws bedrock with the model stored in s3, the next step is to import it using aws bedrock. import process in the aws console, search for \u201Cbedrock\u201D and open the service. from the left navigation pane, select \u201Cimported models\u201D. click \u201Cimport model\u201D. enter a model name (e.g., deepseek-r1-distill-llama-8b) and configure the job name if desired. in the \u201Cmodel import settings\u201D, click \u201Cbrowse s3\u201D. select the folder containing your model package (ensure you select the folder, not individual files). click \u201Cimport model\u201D. the import process may take some time. you can refresh the status periodically. 
once the status shows \u201Ccompleted\u201D, your model is ready. step 4: test the model in aws bedrock playground now that your model is imported, it\u2019s time to test its capabilities. access the playground in the bedrock console, navigate to \u201Cplayground\u201D. select \u201Csingle prompt mode\u201D. click \u201Cselect model\u201D and choose \u201Cimported models\u201D. select your imported deepseek r1 distill-llama 8b model. prompt structure for testing to interact with the model effectively, use the following prompt structure: <|begin\u2581of\u2581sentence|><|user|>your prompt here<|assistant|> this structure ensures the model understands the context correctly, as it requires role tags and sentence indicators for optimal performance. comparison mode if you wish to compare deepseek r1 distill-llama 8b with other models supported by aws: toggle \u201Ccompare mode\u201D in the top-right corner. select another model for side-by-side evaluation. key takeaways in this blog post, we\u2019ve explored the complete process of deploying deepseek r1 distill-llama 8b on aws bedrock, covering everything from downloading the model to importing it into aws and testing it in the bedrock playground. here are the key points to remember: deepseek r1 distill-llama 8b is a highly efficient ai model, distilled from the larger deepseek r1 685b, offering powerful performance with reduced computational costs. aws s3 and aws bedrock provide a seamless environment for storing, importing, and deploying large ai models securely and efficiently. the import process is straightforward, but optimizing your prompt structure is crucial to achieve the best model performance in the aws playground. while the model currently supports single prompt mode, it still delivers impressive results and can be compared with other models within aws bedrock for performance evaluation. by following these steps, you can deploy a production-ready ai solution that balances performance and cost-effectiveness, demonstrating the potential of open-source ai models in business environments. got questions 5 faqs what is deepseek r1 distill-llama 8b, and how is it different from the original deepseek r1? deepseek r1 distill-llama 8b is a distilled version of the original deepseek r1 model. while the original deepseek r1 has 685 billion parameters, the distilled version reduces this to 8 billion parameters through knowledge distillation. this makes it more computationally efficient while retaining much of the original model's performance. why should i use aws bedrock to deploy deepseek r1 distill-llama 8b? aws bedrock offers a fully managed environment that simplifies the deployment, scaling, and management of machine learning models. it provides easy integration with aws services like s3, robust security features, and a user-friendly interface for model testing and deployment. can i also deploy deepseek distill llama 70b instead of 8b with the same technique? yes, you can download the other distilled model and use it as your model package. what should i do if i encounter a network error while uploading the model to s3? if you face network issues during the upload, it\u2019s recommended to use the aws cli for a more reliable transfer. you can use the following command to upload your model package: aws s3 cp path\/to\/local\/model\/package s3:\/\/your-bucket-name --recursive is it possible to use deepseek r1 distill-llama 8b for multi-turn conversations in aws bedrock? 
currently, when tested in aws bedrock playground, the model operates in single prompt mode, meaning it doesn\u2019t retain context from previous interactions. however, for multi-turn conversations, you can implement custom session management in your application to maintain chat history and pass it as context with each new prompt. have anything else to ask? share your question with us."
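Once the import job shows Completed, the model can also be invoked outside the playground. Below is a minimal boto3 sketch: the model ARN is a placeholder copied from the Bedrock "Imported models" page, and the Llama-style request and response fields (prompt, max_gen_len, generation) are an assumption based on the model's Llama 8B architecture rather than something stated in the post, so verify them against the imported model's details.

import json
import boto3

# Placeholder ARN of the imported model from the Bedrock "Imported models" page.
MODEL_ARN = "arn:aws:bedrock:eu-west-1:123456789012:imported-model/abc123"

bedrock = boto3.client("bedrock-runtime", region_name="eu-west-1")

def ask(question: str) -> str:
    # Same role-tag prompt structure the playground section describes.
    prompt = f"<|begin▁of▁sentence|><|user|>{question}<|assistant|>"
    # Assumed Llama-style request body; check the model card if the schema differs.
    body = {"prompt": prompt, "max_gen_len": 512, "temperature": 0.6}
    response = bedrock.invoke_model(
        modelId=MODEL_ARN,
        body=json.dumps(body),
        contentType="application/json",
        accept="application/json",
    )
    # Assumed Llama-style response field.
    return json.loads(response["body"].read())["generation"]

if __name__ == "__main__":
    print(ask("Summarize what knowledge distillation is in one sentence."))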
},
{
"title":"Harness CI\/CD | Modern Automation for DevOps Excellence",
"body":"Harness is a CI\/CD platform that unifies automation, security, and efficiency in software delivery. Designed for DevOps teams, it accelerates workflows with low-code pipeline design, Kubernetes-native support, and GitOps (Argo CD\/Flux) integration. The platform minimizes risks through automated code quality scans, security testing, and rollback-ready deployments. Tailored for medium-to-l...",
"post_url":"https://www.kloia.com/blog/harness-ci/cd",
"author":"S\u00FCleyman \u0130dinak",
"publish_date":"09-<span>Feb<\/span>-2025",
"author_url":"https://www.kloia.com/blog/author/süleyman-i̇dinak",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/CD%20_%20Modern%20Automation%20for%20DevOps%20Excellence-1.png",
"topics":{ },
"search":"09 <span>feb</span>, 2025harness ci\/cd | modern automation for devops excellence s\u00FCleyman i\u0307dinak harness is a ci\/cd platform that unifies automation, security, and efficiency in software delivery. designed for devops teams, it accelerates workflows with low-code pipeline design, kubernetes-native support, and gitops (argo cd\/flux) integration. the platform minimizes risks through automated code quality scans, security testing, and rollback-ready deployments. tailored for medium-to-large enterprises managing microservices and multi-cloud environments, harness reduces operational overhead while speeding releases. monitor pipelines via a unified dashboard, ai-driven insights for optimization, and scale confidently with enterprise-grade governance. ideal for teams prioritizing speed without compromising compliance or reliability. what is harness? harness is a next-generation ci\/cd platform designed to automate and optimize software delivery pipelines. by combining intuitive automation, robust integrations, and gitops-driven deployments, harness empowers teams to accelerate releases while ensuring code quality, security, and reliability. harness key features low-code pipeline builder: visually design workflows with drag-and-drop simplicity. kubernetes & cloud-native support: integrated with aws eks, azure aks, docker, and hybrid environments. gitops (argo cd\/flux): automate deployments using git as the single source of truth. ai-powered insights: proactively identify risks, optimize pipelines, and reduce downtime. pipeline setup repository configuration: store pipelines in harness\u2019s native repository or a custom git repo (ensure the yaml path starts with .harness\/). create a git connector to link your repository (e.g., github, gitlab). infrastructure selection: choose execution environments: harness cloud, kubernetes clusters (aws eks, azure aks, on-prem), or docker containers. for kubernetes, deploy a harness delegate (lightweight agent) via helm to enable cluster communication. pipeline execution define workflow steps: build: compile code, create docker images, and publish artifacts. test: run unit, integration, and security scans (sonarqube, snyk). custom actions: integrate third-party tools from harness\u2019s extensive catalog. for details click here. run & monitor: execute pipelines via the delegate on your kubernetes cluster. track real-time logs, metrics, and statuses in harness\u2019s unified dashboard. deploying applications with gitops gitops agent setup install argo cd or flux: download the override.yaml file for agent configurations (argocd version, secrets, gitops agent settings). deploy the agent using helm: bash copy helm install gitops-agent -f override.yaml harness\/gitops-agent gitops deployment workflow sync policies: define how often the agent checks git for updates (e.g., every 5 minutes). source: link your git repo containing kubernetes manifests (e.g., helm charts, kustomize). destination: target cluster namespaces (production, staging). automated sync: click sync to deploy the latest git state to your cluster. beyond automation: comprehensive ci\/cd governance harness extends ci\/cd beyond basic automation to ensure end-to-end software excellence: code quality & security: integrate sast\/dast tools (checkmarx, owasp) and enforce governance gates. auto-fail pipelines on critical vulnerabilities. version control & rollbacks: tag artifacts with semantic versioning. one-click rollbacks to stable versions during incidents. 
observability & monitoring: correlate deployment metrics with apm tools (datadog, new relic). set alerts for performance degradation or errors. feature management: enable canary releases, a\/b testing, and feature flags for risk-free rollouts. why harness for medium & large enterprises? as organizations scale, managing microservices, multi-cloud environments, and compliance becomes complex. harness addresses this by: reducing operational overhead: pre-built templates, ai-driven optimizations, and automated governance. enhancing collaboration: unified visibility for dev, sec, and ops teams. future-proofing workflows: native support for emerging technologies (ai\/ml, serverless) got questions 5 faqs about harness ci\/cd how is harness different from traditional ci\/cd tools like jenkins? harness simplifies pipeline creation with a low-code, drag-and-drop interface, reducing reliance on scripting. it offers built-in ai-driven insights for risk prediction, gitops integration (argo cd\/flux), and enterprise-grade security features like automated vulnerability scanning\u2014capabilities that traditional tools often lack or require extensive plugins to achieve. can harness integrate with kubernetes and cloud-native environments? yes. harness natively supports kubernetes (aws eks, azure aks, on-prem) and cloud-native workflows. its lightweight harness delegate connects to clusters for seamless pipeline execution, while gitops automates deployments using your existing manifests (helm, kustomize). what security measures does harness provide for ci\/cd pipelines? harness enforces security via automated code scans (sonarqube, snyk), secrets management, and governance gates (e.g., blocking deployments on critical vulnerabilities). it also supports role-based access control (rbac) and compliance standards like soc2. is harness suitable for small teams or startups? while harness excels in complex environments (microservices, multi-cloud), its modular design allows scalability. smaller teams can adopt specific features (e.g., pipeline automation), but the platform\u2019s full value shines in medium-to-large enterprises needing advanced governance and scalability. how does harness handle failed deployments or rollbacks? harness enables one-click rollbacks to previous stable versions, minimizing downtime. it also provides real-time monitoring and automated triggers to revert deployments if anomalies (e.g., error spikes) are detected post-release. have anything else to ask? share your question with us."
},
{
"title":"Accelerate .NET Modernization with AWS Porting Assistant",
"body":"Migrating legacy applications built on the.NET Framework to modern.NET Core (or.NET 5+) is crucial for businesses aiming to boost performance, cut costs, and stay ahead in today's competitive landscape. However, this migration often involves extensive code modifications, dependency resolution, and configuration updates. Enter AWS Porting Assistant for.NET\u2014a powerful tool to streamline an...",
"post_url":"https://www.kloia.com/blog/accelerate-.net-modernization-with-aws-porting-assistant",
"author":"Tural Arda",
"publish_date":"03-<span>Jan<\/span>-2025",
"author_url":"https://www.kloia.com/blog/author/tural-arda",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/accelerate_net_modernization_with_aws_porting_assistant.webp",
"topics":{ "net-core":".NET Core","net-5-0":".NET 5.0" },
"search":"07 <span>jan</span>, 2025accelerate .net modernization with aws porting assistant .net core,.net 5.0 tural arda migrating legacy applications built on the.net framework to modern.net core (or.net 5+) is crucial for businesses aiming to boost performance, cut costs, and stay ahead in today's competitive landscape. however, this migration often involves extensive code modifications, dependency resolution, and configuration updates. enter aws porting assistant for.net\u2014a powerful tool to streamline and accelerate migration. by simplifying the transition from .net framework to .net core or .net 5+, aws porting assistant empowers developers to modernize applications with ease. this article explores the tool's features, the challenges it overcomes, and the unparalleled advantages it delivers to businesses embracing modernization. advantages of utilizing .net core for modernization the .net framework once set the standard for robust application development, but in today\u2019s fast-paced technological landscape, its limitations have become more apparent. migrating to .net core or .net 5+ offers transformative benefits, ensuring applications remain competitive, efficient, and cost-effective. enhanced cross-platform support: unlike the windows-exclusive .net framework, .net core allows applications to run seamlessly on linux, macos, and windows. this reduces reliance on windows server environments and eliminates vendor lock-in. improved performance: .net core delivers up to 30\u201350% better throughput for specific applications compared to the legacy .net framework. this improvement ensures faster response times, better scalability, and an optimized user experience. cost reduction: applications on the .net framework depend on costly windows-based virtual machines or servers. .net core supports deployment on linux environments, which significantly cuts cloud infrastructure costs by eliminating windows licensing fees. migrating to .net core not only modernizes applications but also sets the stage for improved performance, cross-platform compatibility, and long-term cost savings. businesses aiming to future-proof their applications will find .net core the ideal solution for navigating today\u2019s dynamic digital environment. advantages of .net modernization migrating legacy applications to .net core or .net 5+ unlocks a host of benefits that elevate performance, scalability, and cost efficiency while preparing businesses for the future. here are the key advantages cross-platform compatibility (dewindowsification): modern .net versions allow applications to run on linux and macos, freeing them from dependence on windows environments. this flexibility prevents vendor lock-in and broadens deployment options. cloud-readiness the lightweight architecture of .net core is tailored for cloud-native deployments and containerization, making it ideal for modern application environments. improved performance applications running on .net core demonstrate significantly higher throughput, reduced latency, and enhanced user experiences. performance benchmarks highlight its capabilities: framework requests per second cpu utilization .net framework 4.8 8,000 70% .net core 3.1 12,500 50% .net 6 15,000 45% .net 7 20,000\u201325,000 30\u201340% .net 8 30,000 20\u201330% cost efficiency by leveraging linux-based infrastructure, businesses can drastically reduce cloud costs. unlike windows environments, linux eliminates licensing fees, further enhancing cost savings. 
with tools like aws porting assistant, the migration process is simplified, enabling businesses to capitalize on the advantages of modernization with minimal effort. by upgrading to .net core or .net 5+, companies can position themselves for greater efficiency, flexibility, and long-term success in the digital age. challenges of code modernization migrating a large .net framework application to modern .net core or .net 5+ is a complex task. developers often encounter several obstacles that can slow progress and introduce risks. here's a breakdown of the most common challenges: code complexity: legacy applications can have thousands of lines of code, often requiring extensive refactoring to ensure compatibility with .net core. the process can be time-consuming and resource-intensive. nuget package incompatibilities: many third-party libraries or nuget packages used in the .net framework may lack direct replacements or compatibility with modern .net versions. identifying alternatives and updating these dependencies is a critical but challenging step. dependency management: managing external dependencies is another significant hurdle. outdated or mismatched dependencies can create cascading issues, making the migration process error-prone and difficult to manage manually. manual effort and human error: assessing compatibility and making changes manually often leads to inefficiencies. the risk of human error increases as developers handle large-scale applications, which can further delay project timelines. optimizing the process: using tools like aws porting assistant for .net, businesses can overcome these challenges by automating code analysis, identifying dependency issues, and providing actionable recommendations. this streamlined approach reduces the complexity and time involved, enabling faster and more reliable migrations. addressing these challenges with the right strategies and tools ensures a smoother transition to modern .net platforms, unlocking improved performance, cost savings, and scalability for applications. so what's the magic of aws porting assistant? the aws porting assistant for .net revolutionizes the migration process by automating and simplifying the complexities of transitioning legacy .net framework applications to modern .net core or .net 5+. here's how it works: automated code analysis: the tool conducts a comprehensive scan of the .net framework codebase, pinpointing areas that are incompatible with modern .net versions. this automated analysis saves time and reduces the risk of overlooking critical issues. dependency mapping: by identifying dependencies between projects within the application, the tool simplifies dependency management. it provides a clear roadmap, helping developers prioritize tasks and determine the optimal starting point for their modernization efforts. streamlined workflow: the assistant automates much of the migration process, from assessing compatibility to suggesting actionable fixes. this minimizes manual intervention, accelerates timelines, and enhances accuracy. with the aws porting assistant, developers gain a powerful ally in their modernization journey, enabling a seamless transition to modern .net platforms with minimal disruption. this tool transforms a traditionally complex process into an efficient, manageable, and cost-effective solution. detailed recommendations the aws porting assistant for .net simplifies migration by offering actionable insights tailored to modernizing legacy applications. 
it reduces manual labor by automating key steps in the migration process: actionable code recommendations: the tool identifies incompatible code and provides clear suggestions for modifying or replacing it. these actionable insights minimize guesswork and manual intervention, speeding up the migration. prioritization of changes: developers can focus on the most critical compatibility issues first. the assistant prioritizes changes based on severity, allowing teams to address significant challenges before tackling minor issues. key features of aws porting assistant compatibility assessment: the tool thoroughly evaluates the application's compatibility with modern .net versions. it generates a detailed report highlighting areas that require updates or refactoring. automated dependency analysis: aws porting assistant identifies incompatible nuget packages and third-party libraries, suggesting suitable replacements or alternatives. this eliminates the need for tedious manual searches. migration path suggestions: a clear migration plan is provided, including required code updates and recommended refactorings. this ensures a structured and efficient modernization process. with its intuitive interface, aws porting assistant highlights compatibility issues, streamlining the entire migration process. this powerful tool enables developers to modernize legacy applications confidently, ensuring faster adoption of modern .net core or .net 5+ platforms. here\u2019s an example of the tool\u2019s interface, highlighting compatibility issues: conclusion migrating legacy .net framework applications to modern .net core or .net 5+ offers transformative benefits, including enhanced performance, reduced infrastructure costs, and greater scalability. by automating critical tasks such as code analysis, dependency management, and refactoring, aws porting assistant for .net simplifies the modernization process and minimizes the challenges developers face. with this powerful tool, businesses can unlock the full potential of their applications in the cloud, significantly reduce migration timelines, and streamline their journey toward modernization. by leveraging aws porting assistant, organizations can achieve a cost-effective, efficient, and future-ready application infrastructure. start your migration today with aws porting assistant and accelerate your path to a more scalable, high-performing, and competitive future."
},
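The Porting Assistant article above describes how the tool flags incompatible code and suggests replacements. As a rough, hand-written illustration of the kind of change such a port involves (this is not output from the tool, and the key, file, and class names are made up), here is a classic Windows-only configuration lookup rewritten against the cross-platform Microsoft.Extensions.Configuration packages:

```csharp
// Illustrative only: not output from AWS Porting Assistant. File, key,
// and class names are made up for the example.
//
// Before (classic .NET Framework, Windows-centric configuration):
//   var endpoint = System.Configuration.ConfigurationManager.AppSettings["ServiceEndpoint"];
//
// After (cross-platform .NET, using the Microsoft.Extensions.Configuration,
// .Json and .EnvironmentVariables NuGet packages):
using System;
using Microsoft.Extensions.Configuration;

public static class AppConfig
{
    public static string GetServiceEndpoint()
    {
        // appsettings.json replaces the old web.config/app.config appSettings.
        IConfigurationRoot config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: false)
            .AddEnvironmentVariables() // convenient for Linux containers
            .Build();

        return config["ServiceEndpoint"]
            ?? throw new InvalidOperationException("ServiceEndpoint is not configured.");
    }
}
```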
{
"title":"Generative AI in Action",
"body":"In today\u2019s rapidly evolving technological landscape, modernization is no longer optional. For businesses to remain competitive, they must embrace scalable, maintainable systems while transitioning away from outdated frameworks. At Kloia, we undertook a Proof of Concept (PoC) to explore the capabilities of Amazon Q Developer, a Generative AI-powered tool, in transforming a legacy applicat...",
"post_url":"https://www.kloia.com/blog/generative-ai-in-action",
"author":"Or\u00E7un Hanay",
"publish_date":"16-<span>Dec<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/orçun-hanay",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/generative_ai_in_action%20%281%29-1.webp",
"topics":{ "genai":"genai","amazon-q":"Amazon Q" },
"search":"04 <span>apr</span>, 2025generative ai in action genai,amazon q or\u00E7un hanay in today\u2019s rapidly evolving technological landscape, modernization is no longer optional. for businesses to remain competitive, they must embrace scalable, maintainable systems while transitioning away from outdated frameworks. at kloia, we undertook a proof of concept (poc) to explore the capabilities of amazon q developer, a generative ai-powered tool, in transforming a legacy application. the results were both promising and eye-opening, demonstrating the immense potential of ai in software modernization while highlighting areas where improvement is needed. modernization: the bridge to future-ready systems legacy systems, often foundational to business operations, pose challenges in agility and scalability. this poc focused on transitioning an application from the legacy.net framework to a modern stack, specifically.net 8, while improving maintainability, testability, and overall performance. amazon q developer in action: the poc breakdown amazon q developer\u2019s features\u2014porting, refactoring, explaining, fixing, and optimizing\u2014were tested on key components of the legacy system. let\u2019s examine the results. 1. porting: bridging the legacy gap one of the primary objectives was to convert a .net framework 4.7.2 library to .net 8. amazon q successfully handled the conversion, ensuring the library could be built and function in the new environment. however, we found that amazon q lacked support for.net standard\u2014a critical framework for maintaining compatibility with legacy systems. modernization projects often require such intermediary steps to facilitate a gradual transition. amazon q handled the transition smoothly but would benefit from broader compatibility options. 2. refactoring: a leap toward cleaner code the refactoring capabilities of amazon q stood out in this poc. it successfully decomposed complex, monolithic methods into smaller, maintainable units. for instance, a background job responsible for importing files was rewritten with proper separation of concerns. extracted error handling logic private void handleerror(fileimportparameters jobparameters, exception exception, linecounter counter) { using (unitofwork.start(unitofworknestingoption.createnewornestunitofwork)) { with.transaction(() => { var header = fileheaderrepository.get(jobparameters.fileimportheaderid); updateheaderwitherror(header, exception, counter); fileheaderrepository.update(header); }); } } private void updateheaderwitherror(fileheader header, exception exception, linecounter counter) { header.status = fileheaderstatusoptions.failed; header.message = \"unexpected error importing file\"; header.systemmessage = formaterrormessage(counter.linenumber, exception); } private string formaterrormessage(int linenumber, exception exception) { var message = $\"line {linenumber}-{exception.message}-{exception.stacktrace}\"; if (exception.innerexception != null) { message += $\"{environment.newline}{environment.newline}innerexception - {exception.innerexception.message}-{exception.innerexception.stacktrace}\"; } return message; } breaking down large methods into focused units not only improved readability but also enhanced testability, making the code more resilient to future changes. 3. explaining complex logic generative ai\u2019s ability to explain intricate code was another area where amazon q shined. 
it detailed the logic of a multi-threaded installation function, showcasing how it preserved important contexts like security and logging. code snippet: multi-threaded installation with contextual preservation parallel.foreach(apps, (app, loopstate) => { commons.threading.with.iothread(data, culture, principal, log4netproperties, httpcontext, () => setup(app, ioadminemail, address)); }); } explanation by amazon q: data retrievalcaptures application data, including host information and email settings. context capturepreserves security principles, culture settings, and logging configurations. parallel processingutilizes parallel.foreach for concurrent execution of application installations. this level of detail made onboarding developers easier and facilitated a better understanding of legacy code. 4. fixing inefficiencies amazon q also demonstrated its capability to identify and resolve inefficiencies. for example, it improved debugging logic by replacing verbose string concatenations with stringbuilder, optimizing resource management. code snippet: optimized debugging logic catch (exception ex) { logger.error(ex.message, ex); using (unitofwork.start(unitofworknestingoption.createnewornestunitofwork)) { with.transaction(() => { var header = fileheaderrepository.get(jobparameters.fileheaderid); header.status = fileheaderstatusoptions.failed; header.systemmessage = new stringbuilder() .appendformat(\"line {0}-{1}-{2}\", counter.linenumber, exception.message, exception.stacktrace) .appendline() .appendformat(\"innerexception - {0}-{1}\", exception.innerexception?.message, exception.innerexception?.stacktrace) .tostring(); fileheaderrepository.update(header); }); } throw; } this reduced memory consumption and improved error reporting. optimization: maximizing performance lastly, amazon q\u2019s optimization feature addressed performance bottlenecks. by introducing better resource management techniques, it ensured smoother execution of file processing tasks. code snippet: enhanced file processing logic private void processfilerows( ifilehandler handler, fileparameters jobparameters, ifileparser parser, fileitembulkinsertrepository itembulkrepository, linecounter counter, filetheader header) { var datatable = itembulkrepository.createemptyfileitemdatatable(); while (handler.movenext()) { processsinglerow(handler, jobparameters, parser, itembulkrepository, counter, datatable); if (counter.isbatchfull()) { saveandresetbatch(itembulkrepository, ref datatable, counter); } } handleremainingrows(itembulkrepository, datatable, counter); finalize(header, counter, jobparameters, parser); } this modular approach streamlined row-by-row processing, making the system more efficient and easier to debug. challenges faced despite its strengths, the poc revealed key challenges: performance limitationslarger codebases caused high memory usage and stalled processes. compatibility gapslack of support for .net standard restricted its use in scenarios requiring legacy compatibility. regulatory compliancesecurity and privacy policies are needed to address industry-specific requirements. recommendations for amazon q developer to make amazon q more robust for enterprise use: expand compatibilityinclude support for .net standard to facilitate gradual modernization. enhance scalabilityoptimize memory usage to handle larger projects seamlessly. transparent policiesprovide clear security and privacy guidelines tailored for regulated industries. 
conclusion: unlocking modernization with generative ai the poc with amazon q developer showcased the potential of ai in transforming legacy systems. while it excelled in automating repetitive tasks and offering actionable insights, addressing its limitations will unlock even greater value. at kloia, we see tools like amazon q as the future of software modernization\u2014empowering businesses to innovate faster and stay competitive in an ever-changing world. what\u2019s your experience with ai-driven tools in modernization? share your thoughts below! learn more about our modernization strategies at kloia."
},
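The refactoring example in the Amazon Q post credits the extracted FormatErrorMessage helper with better testability. As a quick sketch of what that buys, here is a hand-written xUnit test against a stand-in copy of that helper; the class name, the static modifier, and the public accessibility are assumptions for illustration, since the PoC's code keeps it as a private instance method:

```csharp
// A hand-written xUnit sketch (not from the PoC) showing why the extracted
// error-formatting helper is easy to test in isolation. FileImportJob is a
// stand-in: treat the class name and accessibility as assumptions.
using System;
using Xunit;

public static class FileImportJob
{
    public static string FormatErrorMessage(int lineNumber, Exception exception)
    {
        // Mirrors the logic shown in the article's refactoring example.
        var message = $"line {lineNumber}-{exception.Message}-{exception.StackTrace}";
        if (exception.InnerException != null)
        {
            message += $"{Environment.NewLine}{Environment.NewLine}" +
                       $"innerexception - {exception.InnerException.Message}-{exception.InnerException.StackTrace}";
        }
        return message;
    }
}

public class FormatErrorMessageTests
{
    [Fact]
    public void Includes_line_number_outer_message_and_inner_exception()
    {
        var inner = new InvalidOperationException("inner boom");
        var outer = new Exception("outer boom", inner);

        var result = FileImportJob.FormatErrorMessage(42, outer);

        Assert.Contains("line 42", result);
        Assert.Contains("outer boom", result);
        Assert.Contains("inner boom", result); // inner-exception details are appended
    }
}
```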
{
"title":"AWS Aurora DSQL: General Overview",
"body":"Amazon just dropped Aurora DSQL, and if you're like us, the first question is: What exactly is this, and where does it fit? Aurora DSQL is AWS's shiny new distributed SQL database, claiming to bring PostgreSQL compatibility into a serverless, highly scalable environment. Sounds great, right? But let\u2019s dig in\u2014this database has some strengths and a few quirks that you\u2019ll want to know befor...",
"post_url":"https://www.kloia.com/blog/aws-aurora-dsql",
"author":"Emre Kasgur",
"publish_date":"05-<span>Dec<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/emre-kasgur",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws_aurora_dsql_general_overview.webp",
"topics":{ "aws":"AWS","aurora":"Aurora" },
"search":"20 <span>dec</span>, 2024aws aurora dsql: general overview aws,aurora emre kasgur amazon just dropped aurora dsql, and if you're like us, the first question is: what exactly is this, and where does it fit? aurora dsql is aws's shiny new distributed sql database, claiming to bring postgresql compatibility into a serverless, highly scalable environment. sounds great, right? but let\u2019s dig in\u2014this database has some strengths and a few quirks that you\u2019ll want to know before you dive in. what is aurora dsql anyway? aurora dsql is amazon\u2019s innovative take on combining the familiarity of relational databases with the scalability and flexibility of serverless distributed systems. it\u2019s designed to deliver the power of postgresql while shedding the traditional infrastructure management overhead. in short, it\u2019s a database built for modern, cloud-native applications that need to handle unpredictable workloads with ease. here\u2019s the high-level pitch: postgresql-compatible (but not fully)aurora dsql supports postgresql\u2019s wire protocol, so you can use sql queries and familiar tools. however, some advanced features like triggers, views, and nested transactions aren\u2019t supported, so it\u2019s not a one-to-one replacement for postgresql. truly serverlessforget about provisioning, scaling, or patching servers. aurora dsql scales automatically to match your application\u2019s needs, whether you're dealing with low traffic or a massive spike. distributed and fault-tolerantdata is distributed across multiple nodes, providing high availability, fast read performance, and resilience to node failures. aurora dsql is amazon\u2019s answer for teams who want relational database capabilities in a serverless package, making it an attractive option for elastic, high-concurrency workloads. but as with any innovation, it comes with its own set of trade-offs, which we\u2019ll explore further. the features we love to see: serverless you don't manage, scale, and patch any servers, you only get an endpoint. pay-as-you-go you only pay for what you use. it\u2019s postgresql-ishif you\u2019ve ever worked with postgresql, the learning curve for aurora dsql is almost nonexistent. you can use familiar syntax and tools, making migrations relatively straightforward. that said, don\u2019t expect a full postgresql feature set\u2014there are a few \u201Cmissing pieces\u201D we\u2019ll cover in a bit. aws ecosystem integration being part of aws means aurora dsql plays nicely with other aws services. iam authentication, cloudwatch monitoring, and kms for encryption are all baked in, simplifying your devops workflows. but... there are some quirks let\u2019s be real\u2014aurora dsql isn\u2019t perfect - like every product. there are some limitations and \u201Cgotchas\u201D you need to know before jumping in. missing postgresql features: aurora dsql might claim postgresql compatibility, but let's be honest\u2014it\u2019s more like postgre-wire protocol compatible. why? while it supports the syntax and apis for basic sql operations, many of postgresql's powerful features are notably absent. here's the rundown: no viewsif you rely on views for simplifying complex queries, you\u2019ll need to think a different way. no triggers aurora dsql skips one of postgresql's most powerful automation tools. if you want to find an event-driven database logic, you will not find it here. no sequencesauto-incrementing primary keys is a thing of the past. uuids are your new best friend. 
no nested transactions. no jsonb: if you're storing semi-structured data, you\u2019ll need to adapt or look elsewhere. no enforcement of foreign key constraints: this is where you lose one of the core benefits of relational databases: referential integrity. while this helps with scalability, it shifts the responsibility of data integrity to the application layer. almost no extensions: forget adding your favorite postgresql extensions like pgcrypto, postgis, pgvector or hstore. no geospatial or vector support: geospatial data types (geometry, geography) and vector types (useful for machine learning models or embeddings) are completely unsupported. this makes it a poor choice for applications like mapping, logistics, or ai-driven systems. no support for listen\/notify: if your application relies on real-time notifications or pub\/sub messaging through postgresql's listen and notify commands, aurora dsql doesn\u2019t offer any equivalent functionality. indexing trade-offs: while dsql supports basic indexing, it lacks advanced indexing features like partial indexes and covering indexes (include in postgresql). this can lead to less optimized query performance for complex workloads. vendor lock-in: while not a technical quirk, it\u2019s worth noting that aurora dsql\u2019s heavy reliance on aws services like iam, together with its differences from postgresql, means you\u2019re deeply tied to the aws ecosystem. migrating to another provider in the future could be a challenging task. reality check calling aurora dsql \"postgresql-compatible\" feels a bit generous. there\u2019s a more accurate description for that: \u201Cpostgresql wire-protocol compatible.\u201D it can speak postgresql, but it doesn't deliver the full experience of a traditional postgresql database. if your app relies heavily on postgresql\u2019s advanced features, you\u2019ll need to refactor\u2014or rethink entirely. optimistic concurrency control: aurora dsql uses optimistic concurrency control to manage simultaneous updates to the same data. instead of locking rows, it detects conflicts when they happen. if two processes try to update the same data at the same time, one will succeed while the other gets an error and needs to retry. this design removes the risk of deadlocks entirely, but it does mean your application needs to handle these errors and implement retry logic. transaction row limit: aurora dsql enforces a 10,000-row limit per transaction. if your app relies on bulk operations, you\u2019ll need to rethink how you structure those tasks. deleting 10,000 rows? you\u2019ll have to break the work into smaller chunks and commit them separately. where aurora dsql fits (and where it doesn\u2019t) aurora dsql shines in scenarios where scalability, elasticity, and integration with aws services are key. it\u2019s a strong contender for: gaming platforms: handle low-latency transactions for real-time multiplayer games with ease. e-commerce: manage unpredictable traffic spikes during seasonal events like black friday. modern saas applications: support multi-tenant environments with complex query requirements. real-time analytics: process high-concurrency oltp workloads while supporting olap-style queries. cost-sensitive projects: the pay-as-you-go model ensures you only pay for what you use, making it attractive for startups or variable workloads. where it falls short there are also clear situations where aurora dsql isn\u2019t the best fit: feature-rich postgresql applications: apps relying on views, triggers, sequences, or extensions will require heavy refactoring.
high-batch processing workloads: the 10,000-row transaction limit can disrupt workflows like bulk data imports or exports. geospatial and ai-driven applications: without geospatial or vector support, it\u2019s not suitable for mapping, logistics, or ml-based recommendation engines. aurora dsql vs. the competition when evaluating aurora dsql, it\u2019s important to see how it compares to other modern database solutions regarding features, scalability, and best-fit use cases. to help you make an informed decision, here\u2019s a quick comparison of aurora dsql alongside google cloud alloydb, dynamodb, and auroradb serverless. feature aws aurora dsql google cloud alloydb dynamodb auroradb serverless data model relational (postgresql-compatible) relational (postgresql-compatible) nosql (key-value\/document) relational (postgresql\/mysql) scaling serverless, distributed sql managed global sql scaling serverless, globally scalable serverless, vertical scaling transaction model acid (10,000-row limit) strong global acid limited acid (25-item limit) acid postgresql compatibility partial full none full extensions partial full none full no connection pooling required yes yes yes no az fault tolerance high (distributed nodes) high (replicated globally) high medium (multi-az replication) best fit oltp + olap complex, distributed oltp nosql workloads general relational workloads geospatial support no yes no yes (via postgis for postgresql) authentication iam-only google iam iam-like iam, native user credentials conclusion: aurora dsql \u2013 a step forward with room to grow aurora dsql represents aws\u2019s vision for a distributed, serverless sql database that blends scalability with simplicity. by combining postgresql compatibility with serverless architecture, it offers an efficient solution for teams looking to focus on development rather than the operational complexities of managing infrastructure. however, aurora dsql is not a drop-in replacement for postgresql. while it supports the postgresql wire protocol and basic sql operations, it lacks advanced features like triggers, views, extensions, and certain data types such as jsonb. these omissions mean it\u2019s better suited for new, cloud-native projects designed with its strengths and limitations in mind, rather than migrating feature-rich postgresql applications. instead of thinking of aurora dsql as \u201Cpostgresql in the cloud,\u201D consider it for its serverless, multipurpose capabilities. it\u2019s an oltp database that can also handle olap workloads, making it a strong alternative to dynamodb for teams that need more powerful querying capabilities but don\u2019t have predefined query patterns. aurora dsql feels like postgres but drives like serverless\u2014and we like that. the serverless design remains its biggest advantage, offering dynamic scaling and cost efficiency. however, the quirks require careful consideration during the design phase. we specialize in helping organizations navigate the complexities of modern databases. whether you\u2019re exploring aurora dsql for a new project or rethinking your database strategy, we\u2019re here to provide practical insights and solutions tailored to your needs. get in touch with us to learn how we can help you unlock the potential of aurora dsql and aws\u2019s database offerings."
},
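The Aurora DSQL post notes that applications must retry on optimistic-concurrency conflicts and break bulk deletes into smaller committed chunks. The sketch below shows one way those two patterns could look from C# with the Npgsql driver; the table and column names, chunk size, SQL shape, and the decision to retry on any PostgresException are assumptions (check the DSQL documentation for the exact conflict error codes it returns), and IAM-token authentication is omitted for brevity:

```csharp
// A rough sketch (not AWS sample code) of two patterns the article calls for:
// retrying on optimistic-concurrency conflicts and chunking deletes to stay
// under the per-transaction row limit.
using System;
using System.Threading.Tasks;
using Npgsql;

public static class DsqlPatterns
{
    public static async Task<T> WithOccRetryAsync<T>(Func<Task<T>> work, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await work();
            }
            catch (PostgresException ex) when (attempt < maxAttempts)
            {
                // Assumption: treat the reported SQLSTATE as a concurrency
                // conflict and retry with a simple backoff. In production,
                // filter on the exact codes documented for Aurora DSQL.
                Console.WriteLine($"conflict ({ex.SqlState}), retrying attempt {attempt}");
                await Task.Delay(TimeSpan.FromMilliseconds(100 * attempt));
            }
        }
    }

    public static async Task DeleteInChunksAsync(NpgsqlDataSource db, int chunkSize = 1000)
    {
        while (true)
        {
            await using var conn = await db.OpenConnectionAsync();
            // Delete a bounded batch per transaction to stay under the row limit.
            // Table, column, and the subquery shape are illustrative only.
            await using var cmd = new NpgsqlCommand(
                @"DELETE FROM import_rows
                  WHERE id IN (SELECT id FROM import_rows WHERE processed = true LIMIT @n)",
                conn);
            cmd.Parameters.AddWithValue("n", chunkSize);

            var deleted = await cmd.ExecuteNonQueryAsync();
            if (deleted == 0) break; // nothing left to delete
        }
    }
}
```

In practice you would wrap each conflict-prone transaction in a helper like WithOccRetryAsync and tune the chunk size against the documented per-transaction limits.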
{
"title":"How to Enable Live Migration in KubeVirt with AWS FSx and OpenShift",
"body":"In today's computing landscape, ensuring live migration of virtual machines (VMs) is essential for maintaining high availability and minimizing downtime during maintenance tasks. KubeVirt, an extension of Kubernetes, integrates VM management into the containerized world, enabling unified control over both virtualized and containerized workloads. In this guide, we\u2019ll walk you through sett...",
"post_url":"https://www.kloia.com/blog/how-to-enable-live-migration-in-kubevirt-with-aws-fsx-and-openshift",
"author":"Bilal Unal",
"publish_date":"17-<span>Oct<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/bilal-unal",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/enabling_live_migration_in_kubevirt_with_aws_fsx_and_openshift.webp",
"topics":{ "aws":"AWS","devops":"DevOps","cloud":"Cloud","kubernetes":"Kubernetes","openshift":"openshift","containers":"Containers","migrationtocloud":"migrationtocloud","aws-fsx":"AWS FSx","kubevirt":"KubeVirt" },
"search":"24 <span>oct</span>, 2024how to enable live migration in kubevirt with aws fsx and openshift aws,devops,cloud,kubernetes,openshift,containers,migrationtocloud,aws fsx,kubevirt bilal unal in today's computing landscape, ensuring live migration of virtual machines (vms) is essential for maintaining high availability and minimizing downtime during maintenance tasks. kubevirt, an extension of kubernetes, integrates vm management into the containerized world, enabling unified control over both virtualized and containerized workloads. in this guide, we\u2019ll walk you through setting up a scalable infrastructure on aws that supports the live migration feature in kubevirt, utilizing aws fsx for netapp ontap and openshift container platform (ocp). by the end of this tutorial, you\u2019ll have the tools to live migrate vms effortlessly within your kubernetes cluster, ensuring high availability and reliability in your cloud environment. tech stack overview the following technologies are crucial to this setup: aws bare metal ec2 instances provide the physical hardware required for running kubevirt, as kubevirt requires deployment on metal instances. aws fsx for netapp ontap offers a fully managed shared file system with readwritemany access, essential for live migration. openshift container platform (ocp) a kubernetes-based container orchestration platform that simplifies application deployment and management. kubevirt extends kubernetes by allowing it to manage virtual machines as native kubernetes resources. trident csi driver a container storage interface (csi) driver from netapp that integrates with kubernetes to manage storage provisioning. prerequisites before proceeding, ensure you have: basic knowledge of kubernetes, openshift, and aws services. a clone of the project repository, which contains all the necessary configuration files and templates: git clone https:\/\/github.com\/kloia\/aws-ocp-kubevirt-fsx.git cd aws-ocp-kubevirt-fsx an aws account with the necessary permissions to create ec2 instances, fsx file systems, and vpc configurations. the openshift installer and kubectl command-line tools installed on your workstation. deployment setting up the environment installation deploying the trident csi driver deploying kubevirt live migration of vms with kubevirtsetting up the environment download the openshift installer first, download the openshift installer for your platform: curl -fsslo https:\/\/mirror.openshift.com\/pub\/openshift-v4\/x86_64\/clients\/ocp\/4.14.4\/openshift-install-mac-arm64-4.14.4.tar.gz tar xzf openshift-install-mac-arm64-4.14.4.tar.gz .\/openshift-install --help create the installation configuration create a directory for your openshift manifests: mkdir -p ocp-manifests-dir\/ first, download the openshift installer for your platform: save the following install-config.yaml file inside ocp-manifests-dir\/: apiversion: v1 basedomain: yourdomain.com compute: - name: worker platform: aws: type: c5.metal replicas: 2 controlplane: name: master platform: {} replicas: 3 metadata: name: ocp-demo networking: networktype: ovnkubernetes platform: aws: region: your-aws-region publish: external pullsecret: 'your-pull-secret' sshkey: 'your-ssh-key' note: replace placeholders like yourdomain.com, your-aws-region, your-pull-secret, and your-ssh-key with your actual values. 
generate manifests generate the openshift manifests: .\/openshift-install create manifests --dir ocp-manifests-dir installation install the openshift cluster backup your installation configuration: cp -r ocp-manifests-dir\/ ocp-manifests-dir-bkp start the cluster installation: .\/openshift-install create cluster --dir ocp-manifests-dir --log-level debug provision aws fsx for netapp ontap we need a multi-az file system to support readwritemany access. navigate to the fsx directory and create the fsx file system using aws cloudformation: cd fsx aws cloudformation create-stack \\ --stack-name fsxontap \\ --template-body file:\/\/.\/netapp-cf-template.yaml \\ --region your-aws-region \\ --parameters \\ parameterkey=subnet1id,parametervalue=subnet-xxxxxxxx \\ parameterkey=subnet2id,parametervalue=subnet-yyyyyyyy \\ parameterkey=myvpc,parametervalue=vpc-zzzzzzzz \\ parameterkey=fsxontaproutetable,parametervalue=rtb-aaaaaaa,rtb-bbbbbbb \\ parameterkey=filesystemname,parametervalue=myfsxontap \\ parameterkey=throughputcapacity,parametervalue=256 \\ parameterkey=fsxallowedcidr,parametervalue=0.0.0.0\/0 \\ parameterkey=fsxadminpassword,parametervalue=yourfsxadminpassword \\ parameterkey=svmadminpassword,parametervalue=yoursvmadminpassword \\ --capabilities capability_named_iam note: replace the parameter values with your actual aws resource ids and desired passwords. deploying the trident csi driver set kubeconfig environment variable export kubeconfig=$(pwd)\/ocp-manifests-dir\/auth\/kubeconfig kubectl get nodes install trident operator create the trident namespace and install the trident csi driver: oc create ns trident curl -l -o trident-installer.tar.gz https:\/\/github.com\/netapp\/trident\/releases\/download\/v22.10.0\/trident-installer-22.10.0.tar.gz tar -xvf trident-installer.tar.gz cd trident-installer\/helm helm install trident -n trident trident-operator-22.10.0.tgz create secrets for backend access create a svm_secret.yaml file with the following content: apiversion: v1 kind: secret metadata: name: backend-fsx-ontap-nas-secret namespace: trident type: opaque stringdata: username: vsadmin password: yoursvmadminpassword apply the secret: oc apply -f svm_secret.yaml deploy the trident backend configuration edit backend-ontap-nas.yaml in the fsx directory, replacing placeholders with your fsx for ontap details: version: 1 storagedrivername: ontap-nas managementlif: management-dns-name datalif: nfs-dns-name svm: svm-name username: vsadmin password: yoursvmadminpassword apply the backend configuration: oc apply -f fsx\/backend-ontap-nas.yaml verify the backend status: oc get tridentbackends -n trident create a storage class create a storage class by applying storage-class-csi-nas.yaml: oc apply -f fsx\/storage-class-csi-nas.yaml verify the storage class: oc get sc deploying kubevirt install kubevirt in the openshift-cnv namespace: echo ' apiversion: v1 kind: namespace metadata: name: openshift-cnv --- apiversion: operators.coreos.com\/v1 kind: operatorgroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetnamespaces: - openshift-cnv --- apiversion: operators.coreos.com\/v1alpha1 kind: subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourcenamespace: openshift-marketplace name: kubevirt-hyperconverged startingcsv: kubevirt-hyperconverged-operator.v4.14.0 channel: \"stable\"' | k apply -f- wait for all pods in openshift-cnv to be ready, then create the hyperconverged resource: apiversion: 
hco.kubevirt.io\/v1beta1 kind: hyperconverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:' | k apply -f- verify the installation: oc get csv -n openshift-cnv oc get kubevirt -n openshift-cnv oc get hyperconverged -n openshift-cnv live migration of vms with kubevirt scenario you have two vms running on separate bare-metal worker nodes in your openshift cluster. you need to perform maintenance on workera and want to live-migrate its vm to workerb without downtime. challenge kubevirt vms are essentially kubernetes pods. when a pod moves to a different node, it gets a new ip address, disrupting connectivity. solution to maintain continuous network connectivity during migration, we'll add a second network interface to the vms using a networkattachmentdefinition (nad). this secondary interface will have a static ip, ensuring seamless communication post-migration. create networkattachmentdefinition (nad) create a namespace for your vms oc create ns vm-test apply the nad configuration: apiversion: k8s.cni.cncf.io\/v1 kind: networkattachmentdefinition metadata: name: static-eth1 namespace: vm-test spec: config: '{ \"cniversion\": \"0.3.1\", \"type\": \"bridge\", \"bridge\": \"br1\", \"ipam\": { \"type\": \"static\" } }' apply the nad: oc apply -f virtualization\/nad.yaml create vms with dual nics create two vms, each with two network interfaces: oc apply -f virtualization\/vm-rhel-9-dual-nic.yaml verify the vms are running: oc get vm -n vm-test assign ip addresses to secondary nics access each vm console and assign static ips to eth1: vm a: virtctl console -n vm-test rhel9-dual-nic-a inside the vm: sudo ip addr add 192.168.1.10\/24 dev eth1 vm b: virtctl console -n vm-test rhel9-dual-nic-b inside the vm: sudo ip addr add 192.168.1.11\/24 dev eth1 connectivity test from vm a, ping vm b: ping 192.168.1.11 connectivity test from vm b, ping vm a: ping 192.168.1.10 successful replies confirm network connectivity over the secondary interfaces. live migration now, initiate live migration of vm a to workerb: oc migrate vm rhel9-dual-nic-a -n vm-test monitor the migration status: oc get vmim -n vm-test in conclusion, by integrating aws fsx for netapp ontap with openshift and kubevirt, we've successfully enabled live migration of virtual machines (vms) within a kubernetes cluster. utilizing a secondary network interface with a static ip ensured continuous network connectivity during migrations, allowing for seamless maintenance and scaling operations without disrupting running applications. this robust setup harnesses the power of aws managed services and open-source technologies to deliver a scalable, resilient infrastructure ideal for modern cloud-native workloads, ensuring high availability and operational efficiency."
},
{
"title":"Software Testing Trends To Look Out For In 2025",
"body":"The software development landscape is undergoing rapid transformation, with 2025 introducing promising trends in software testing. What began as a manual, time-consuming process has been revolutionized by automation and AI-driven tools. Testing is no longer just about checking if software works or finding bugs; it\u2019s about enabling software to optimize itself, driving innovation, and empo...",
"post_url":"https://www.kloia.com/blog/software-testing-trends-to-look-out-for-in-2025",
"author":"Acelya Gul",
"publish_date":"08-<span>Oct<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/acelya-gul",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/software_testing_trends_to_look_out_for_in_2024%20%281%29.webp",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","mobile-testing":"mobile testing","qa":"QA","performance-testing":"Performance Testing","qateam":"qateam","testops":"TestOps","low-code-and-no-code-automation":"Low-Code and No-Code Automation","ai-and-machine-learning-in-testing":"AI and Machine Learning in Testing","iot-testing":"IoT Testing","cyber-security-testing":"Cyber Security Testing","shift-left-testing":"Shift-Left Testing" },
"search":"08 <span>oct</span>, 2024software testing trends to look out for in 2025 test automation,software testing,mobile testing,qa,performance testing,qateam,testops,low-code and no-code automation,ai and machine learning in testing,iot testing,cyber security testing,shift-left testing acelya gul the software development landscape is undergoing rapid transformation, with 2025 introducing promising trends in software testing. what began as a manual, time-consuming process has been revolutionized by automation and ai-driven tools. testing is no longer just about checking if software works or finding bugs; it\u2019s about enabling software to optimize itself, driving innovation, and empowering teams to rapidly adapt to changing needs. automation accelerates testing, while ai and machine learning identify bugs earlier , predict system behaviors, and reduce the need for human intervention. tools like diffblue cover (which generates automated test cases for java projects and predicts bugs in code changes), testim.io (which prioritizes high-risk areas using ai), deepcode (snyk code) (which detects potential issues by analyzing code patterns) enable these capabilities. these trends shift the focus from reactive testing to proactive quality assurance, enabling developers to catch potential issues before they impact users. as a result, software testing is becoming an indispensable element in delivering high-quality products faster. what are the latest trends in software testing? as we move into 2025, emerging trends in the world of software testing are highlighting the directions in which the industry's transformation will accelerate. here are ten major trends set to shape software testing in 2025: 1. low-code and no-code automation low-code and no-code automation tools make software development processes more accessible and manageable. these tools accelerate workflows by automating tasks without requiring complex coding knowledge. speed and efficiencythese tools use drag-and-drop interfaces and templates to automate workflows, speeding up task completion quickly. cost savingsby requiring less technical knowledge, they reduce software development costs and enable more efficient budget use. collaboration and flexibilityallowing team members without technical expertise to contribute enhances collaboration and helps businesses adapt quickly to changing needs. 2. ai and machine learning in testing artificial intelligence (ai) and machine learning (ml) have become increasingly popular, enhancing the efficiency of software testing processes. these technologies detect errors in advance, automate test processes, and expand test coverage. data analysis and automationai analyzes large datasets to identify potential errors and automates repetitive tasks. anomaly detectionai identifies errors that might be missed by human testers, improving software quality. enhanced test coverageai and ml broaden test coverage and maintain accuracy even in dynamic environments. proactive maintenance and self-healingai-powered tools help detect and resolve issues early and adapt automatically to changing conditions. visual testingai-powered visual testing tools automatically compare screenshots or ui elements to detect visual changes in layout, design, or appearance. this ensures consistency across different devices and platforms, identifying visual regressions that might impact the user experience. ethical aias ai use in testing expands, it's crucial to ensure these systems operate ethically. 
this means upholding data privacy, preventing bias, and maintaining security. ethical ai ensures that ai-driven decisions are fair, transparent, and responsible, building trust in testing outcomes. 3. end-to-end testing end-to-end (e2e) testing validates an application against real-world scenarios across an entire workflow, from the user interface down to the backend services. as microservices, cloud-native applications, and distributed architectures become the norm, e2e testing increasingly relies on automation and synthetic data to represent realistic user interactions and business processes. the trend also goes hand in hand with observability: monitoring the system's performance and reliability during testing helps teams confirm that the software is not only functioning correctly but also meeting the quality and performance standards required for production. automation for complex scenarios: automated e2e tests simulate real-world user interactions across distributed systems, surfacing the problems that only appear when components fail to communicate correctly. improved quality assurance: integrating observability tools into e2e testing gives full visibility into the system under test, ensuring it is reliable, scalable, and ready for production. if you are interested in e2e testing, you can take a look at our blogpost series called \u201Ccreating end-to-end web test automation project from scratch\u201D. 4. testops testops is a trend in software testing that emphasizes the integration of testing into the broader devops framework. it highlights the automation, management, and optimization of the entire test cycle throughout the software development life cycle. the following three elements are the pillars of testops: seamless devops integration: testops embeds the testing platform into the ci\/cd pipeline so that it works in line with devops, allowing faster and more reliable releases. automation and scalability: it automates key activities such as environment and infrastructure management, test data preparation and loading, and test execution, which is essential for scalable and efficient testing processes. continuous feedback and monitoring: continuous feedback flows from testing through to deployment, facilitated by real-time monitoring and feedback loops, resulting in better test strategies and faster responsiveness to change. 5. api and service test automation according to a world quality report, api tests have been among the most widely used testing practices in recent decades and are expected to make up around 80% of all tests performed during this decade. api testing is also the area most amenable to automation, with human intervention reduced to a minimum. automation and ai: ai-driven tools that automatically generate, execute, and optimize api test cases improve the efficiency and coverage of the tests. contract testing: it checks that api changes have not broken existing integrations, ensuring that old and new services remain compatible.
service virtualization: this enables the validation of apis without downtime by emulating dependent services in an isolated environment, supporting continuous testing. 6. mobile test automation with the rise in mobile device usage, mobile test automation has become increasingly crucial for software development teams. these tests ensure that applications work seamlessly across various devices and operating systems. device and platform variety: automated tests confirm that applications function correctly on different devices and platforms, providing a consistent user experience. adaptation to network conditions: tests evaluate application performance under different network conditions to ensure consistent functionality across all connection speeds. speed and quality: mobile test automation accelerates development and ensures high-quality releases. 7. shift-left testing there is only one thing more important than finding defects - finding them earlier! shift-left testing is expected to be a dominant trend in 2025, emphasizing early-stage defect detection and prevention. with shift-left testing, testing activities move to earlier stages of the sdlc and are integrated tightly with development processes. by 2025, this approach is projected to become highly automated, using ai-powered tools to perform static code analysis, security checks, and unit testing at the beginning of the software development lifecycle. let's summarise the advantages of this approach: detecting and fixing bugs early is cheaper than fixing them later. testing integrated into the development process gives developers instantaneous feedback on their code, allowing them to fix issues quickly and efficiently. through early engagement of testers, shift-left testing creates a culture of shared responsibility for quality, resulting in stronger and more efficient teams. continuous and timely testing reduces the chance of regressions to a minimum. shift-left testing also complements agile practices perfectly, which demand that testing be done continually throughout the system's life cycle. 8. iot testing with the increasing number of connected devices and the need for interoperability across iot ecosystems, iot testing in 2025 will be more complex than ever. the focus is on the functionality, safety, and operation of diverse iot devices and the way they interact within a complex network. security must be paramount and addressed at both the physical and virtual levels of the system. testing strategies used to rely on manually testing every device or circuit that came into the lab, but now the real device can be cloned with a digital twin\u2014a virtual representation of the physical device. simulated scenarios allow for thorough testing before field integration, minimizing the risk of failures surfacing in production. digital twins: virtual prototypes of real-world iot scenarios, including devices, wireless access points, and core networks, are built for testing. security focus: rigorous security testing of iot devices and of the network as a whole remains the main priority. 9. performance testing as applications become more complex, performance testing is essential in software development.
these tests assess how applications perform under increased user loads and ensure adaptability to dynamic user demands. realistic user scenariosperformance tests simulate real user behaviors and load conditions to measure system stability and performance. cloud and distributed systemsevaluate scalability and reliability in cloud and distributed environments. continuous monitoringperform continuous monitoring in live environments to detect performance issues early and resolve them promptly. for more detailed information on performance testing, you can explore our blog post on a beginner's guide to performance testing and find more insights on our blog page. these resources provide deeper insights into best practices and methodologies for ensuring application scalability and reliability. 10. cyber security testing as cyberattacks become more frequent and sophisticated, cybersecurity testing has become a critical component of software development. this type of testing focuses on identifying and mitigating vulnerabilities that could be exploited by malicious actors. vulnerability scanningregularly scanning software for potential security weaknesses helps identify areas that could be targeted by hackers. penetration testingthis involves simulating cyberattacks to assess how a system might fare under a real threat, providing insights into vulnerabilities that might not be obvious during routine checks. continuous security monitoringfocuses on the constant surveillance of the software environment to detect and respond to potential threats in real time. this proactive approach helps prevent breaches by addressing vulnerabilities as soon as they arise. security auditsperform detailed assessments of a software\u2019s security controls and practices to verify alignment with industry standards and regulatory requirements. these audits aim to uncover weaknesses in security protocols and enhance the overall security posture by addressing any vulnerabilities found. get ahead with confidence: start using 2025\u2019s top testing trends as 2025 approaches, adopting advanced software testing practices is more critical than ever. the rapidly evolving demands of software development require businesses to leverage automation tools and ai-driven systems while rethinking their entire approach to quality assurance. each emerging trend\u2014whether focused on improving functionality, security, or scalability\u2014represents a significant shift towards building more resilient and high-performing software. these trends are not just about keeping up with technology; they represent a transformation in ensuring software quality. by embracing these innovative testing strategies, businesses can deliver applications that meet and exceed user expectations in terms of security, performance, and reliability. staying up to date with these trends is vital for organizations looking to stay competitive and innovative in the fast-evolving digital world. for more insights on software testing trends and how to stay ahead in 2025, be sure to read more on the kloia blog!"
},
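To make the API and service test automation trend a little more concrete, here is a minimal, hypothetical contract-style check in C#/xUnit that pins the pieces of an API a consumer depends on; the endpoint URL and field names are placeholders, not a real service:

```csharp
// A minimal, hypothetical example of the kind of automated API check the
// "API and service test automation" trend describes: it pins the contract a
// consumer depends on (status code, content type, a required field).
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

public class OrdersApiContractTests
{
    private static readonly HttpClient Client = new()
    {
        BaseAddress = new Uri("https://api.example.com/") // placeholder endpoint
    };

    [Fact]
    public async Task Get_order_returns_json_with_an_id_field()
    {
        var response = await Client.GetAsync("orders/42");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        Assert.Equal("application/json", response.Content.Headers.ContentType?.MediaType);

        var body = await response.Content.ReadFromJsonAsync<OrderDto>();
        Assert.NotNull(body);
        Assert.False(string.IsNullOrEmpty(body!.Id)); // breaking this breaks the contract
    }

    private sealed record OrderDto(string Id, decimal Total);
}
```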
{
"title":"Creating End-to-End Web Test Automation Project from Scratch \u2014 Part 6",
"body":"Welcome to the 6th part of the blog post series called \u201CCreating an End-to-End Web Test Automation Project from Scratch.\u201D It was a long series, I know. But we are in the final round! So please, just bear with me! \uD83D\uDC3B So far, we have covered many topics, from the creation of a project to integrating it with CI\/CD pipeline. If you need to review the previous chapters, you can find the artic...",
"post_url":"https://www.kloia.com/blog/creating-end-to-end-web-test-automation-project-from-scratch-part-6",
"author":"Muhammet Topcu",
"publish_date":"04-<span>Oct<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/end_to_end_web_test_automation_blog.webp",
"topics":{ "test-automation":"Test Automation","kubernetes":"Kubernetes","qa":"QA","ci-cd":"CI\/CD","keda":"keda","qateam":"qateam","endtoend":"endtoend","ci-cd-pipeline-integration":"CI\/CD Pipeline Integration","web":"web" },
"search":"04 <span>oct</span>, 2024creating end-to-end web test automation project from scratch \u2014 part 6 test automation,kubernetes,qa,ci\/cd,keda,qateam,endtoend,ci\/cd pipeline integration,web muhammet topcu welcome to the 6th part of the blog post series called \u201Ccreating an end-to-end web test automation project from scratch.\u201D it was a long series, i know. but we are in the final round! so please, just bear with me! \uD83D\uDC3B so far, we have covered many topics, from the creation of a project to integrating it with ci\/cd pipeline. if you need to review the previous chapters, you can find the articles at the links below. let\u2019s create and configure our web test automation project! let\u2019s write our test scenarios! bonus: recording failed scenario runs in ruby let\u2019s configure our web test automation project for remote browsers and parallel execution let\u2019s dockerize our web test automation project bonus: recording scenario runs on docker with selenium video! let\u2019s integrate our dockerized web test automation project with ci\/cd pipeline! auto-scaling and kubernetes integration with keda auto-scaling and kubernetes integration with keda you will make a magic touch, which will enable you to auto-scale your selenium grid with the help of keda, kubernetes event-driven autoscaling. keda will help you scale your nodes not according to cpu or ram usage but according to the queue size of our tests to be run! now, if you are ready, let's get down to it! minikube installation first you are going to install minikube to create a kubernetes instance in your local machine. let\u2019s install minikube with brew: brew install minikube if you do not want to use brew or if you have another os, you can go to the official minikube site and install it by choosing our os and cpu architecture type. now let\u2019s start minikube: minikube start --vm-driver=docker here, the \u2013vm-driver is the driver type of which a minikube can be deployed. for macos, docker is recommended. see the full list of drivers for minikube. after successfully starting your minikube, browse your dashboard with: minikube dashboard it will automatically open the dashboard on your default browser: now since you have your kubernetes up and running, let\u2019s add your selenium grid and your nodes to your cluster! grid and node yaml for your grid and node configurations, you will use example yaml files for these in the official kubernetes github repository. let\u2019s navigate to this repository. you need to download these three yaml files: - selenium-hub-deployment.yaml => to deploy selenium hub. - selenium-hub-svc.yaml => a service for nodes to connect to the hub. - selenium-node-chrome-deployment.yaml => to deploy chrome node. in selenium-node-chrome-deployment.yaml, let\u2019s change replica count to `1`. note: if you have a computer which has apple silicon with arm64 architecture, you need to make these small changes in the following files as well: in selenium-hub-deployment.yaml file, change the container image to `seleniarm\/hub`. in selenium-node-chrome-deployment.yaml file, change the container image to `seleniarm\/node-chromium`. now let\u2019s connect them to your k8s cluster. \u00A0 1. hub deployment: kubectl create -f selenium-hub-deployment.yaml 2. service deployment: kubectl create -f selenium-hub-svc.yaml 3. node deployment: kubectl create -f selenium-node-chrome-deployment.yaml now let\u2019s check if your deployments are successful with the command below. 
kubectl get deploy if your deployments were successful, you should see something like below. now let\u2019s check your service with: kubectl get services you can see your service as well: now let\u2019s find out the url of your grid with the command below: minikube service selenium-hub --url your grid is on one of these ports below. usually the first one: after entering correct port, you should see something like this: next, you will configure keda. keda yaml configuration from the keda releases page, download the latest keda version. at the moment of writing this blogpost, the latest version is 2.10.1. and you need to create a scaled-object for your chrome node. let\u2019s create a file named scaled-object-chrome.yaml and populate it with the code below: apiversion: keda.sh\/v1alpha1 kind: scaledobject metadata: name: selenium-chrome-scaledobject namespace: default labels: deploymentname: selenium-node-chrome spec: minreplicacount: 0 maxreplicacount: 4 scaletargetref: name: selenium-node-chrome pollinginterval: 1 triggers: - type: selenium-grid metadata: url: 'http:\/\/selenium-hub.default:4444\/graphql' browsername: 'chrome' in here: - minreplicacount is the number of replicas when there are not any tests in the queue. - maxreplicacount is the max number of replicas when the activation threshold is reached. in this instance, this means that you won\u2019t have more than 4 chrome nodes no matter how many tests are in queue. - activationthreshold is the threshold to scale up your chrome node. if the queue reaches 5, it initiates a replication. now let\u2019s integrate them into your cluster. kubectl apply -f keda-2.10.1.yaml after that, let\u2019s apply your scaled-object. since your chrome deployment in the default namespace, you will apply your scaled-object to there too. kubectl apply -f .\/scaled-object-chrome.yaml --namespace=default now give it a few moments and check your selenium grid again: no nodes!? why? because you configured your minreplicacount as 0. so when there aren\u2019t any tests in the queue, keda scales down your nodes to zero. now i call this resource management! okay, let\u2019s change the remote_url variable in the test automation project to our current selenium grid port, which is 64824 in this instance, and run 8 tests in parallel with the `parallel_cucumber -n 8` command. and let\u2019s see what happens. now that\u2019s it! you have a grid that is auto-scaled with the help of kubernetes and keda! this is the end of a long journey. thanks for bearing with me with this eight-chapter-long blogpost series. before closing up, let\u2019s summarise what you have accomplished so far: \u2705you created a test automation project using ruby - capybara - cucumber and installed all dependencies. \u2705you learned how to write css selectors. \u2705you wrote test scenarios to be used on your test automation project. \u2705you recorded failed test scenarios as video files in ruby. \u2705you used remote connections to run your tests in different machines. \u2705you used selenium grid and ran your tests in parallel. \u2705you dockerized your test automation project so that it can be run in any environment. \u2705you recorded failed test runs in a docker container using selenium video image. \u2705you integrated ci\/cd pipeline to your project by using jenkins. \u2705you configured auto-scaling for your project by using keda. hope to see you in upcoming sprints! \u00A0"
},
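For readability, the ScaledObject that is flattened inside the KEDA section above can be laid out as conventional YAML. This is a sketch of the same configuration; the commented activationThreshold entry is an assumption added to match the "queue reaches 5" threshold described in the prose, since the flattened text does not show that field.

```yaml
# scaled-object-chrome.yaml -- readable form of the config described above
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: selenium-chrome-scaledobject
  namespace: default
  labels:
    deploymentName: selenium-node-chrome
spec:
  minReplicaCount: 0          # scale Chrome nodes down to zero when the test queue is empty
  maxReplicaCount: 4          # never run more than 4 Chrome nodes
  pollingInterval: 1          # check the Selenium Grid queue every second
  scaleTargetRef:
    name: selenium-node-chrome
  triggers:
    - type: selenium-grid
      metadata:
        url: 'http://selenium-hub.default:4444/graphql'
        browserName: 'chrome'
        # activationThreshold: '5'   # assumption: mirrors the threshold of 5 mentioned in the prose
```

Applied with `kubectl apply -f ./scaled-object-chrome.yaml --namespace=default`, as the post does, this lets KEDA scale the Chrome node deployment from the Grid's GraphQL queue size rather than from CPU or memory usage.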
{
"title":"Managing Kubernetes Clusters with the GitOps",
"body":"Kubernetes is a container management platform that includes many components. There are many problems to be solved under different headings such as installation, configuration, maintenance, and observability in K8S management. In this article, we will talk about how the necessary components such as add-ons, tools, etc. can be managed with GitOps management for the cluster to become produc...",
"post_url":"https://www.kloia.com/blog/managing-kubernetes-clusters-with-the-gitops",
"author":"Omer Faruk Urhan",
"publish_date":"04-<span>Oct<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/omer-faruk-urhan",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/managing_kubernetes_clusters_with_the_gitops-1.webp",
"topics":{ "devops":"DevOps","cloud":"Cloud","kubernetes":"Kubernetes","cluster":"cluster","argo-cd":"argo cd","gitops":"gitops","kustomize":"kustomize" },
"search":"20 <span>dec</span>, 2024managing kubernetes clusters with the gitops devops,cloud,kubernetes,cluster,argo cd,gitops,kustomize omer faruk urhan kubernetes is a container management platform that includes many components. there are many problems to be solved under different headings such as installation, configuration, maintenance, and observability in k8s management. in this article, we will talk about how the necessary components such as add-ons, tools, etc. can be managed with gitops management for the cluster to become production-ready after installation. what is gitops? gitops uses git repositories as a single source of truth to deliver applications. submitted code checks the ci process. all code changes are tracked, making updates easy while also providing version control should a rollback be needed. gitops delivers: a standard workflow for application development increased security for setting application requirements upfront improved reliability with visibility and version control through git consistency across any cluster, any cloud, and any on-premise environment gitops tools constantly check the git repos you define and ensure that the relevant environments are synchronized with the git repos. why do we need to manage kubernetes with gitops patterns? if the scale you are working on is small, you can choose any method for kubernetes add-on installation, maintenance, and configuration. you can even manage them manually. however, when the scale grows and the systems you have to manage start to be expressed in 10s and 100s, things start to change. at this point, you can start to benefit from the blessings provided by gitops. thanks to the gitops tool, you can simultaneously deploy the same code or configuration to n clusters. environment provisioning operations can be done with just a few code changes. you can follow the status of the systems you work with and all changes via git. since the gitops tool will constantly synchronize, you will be able to make sure that the production configuration is stable\/reliable. since you can create all components with bootstrap logic, you can manage all installation and maintenance processes automatically. setting up gitops repository directory structure first of all, argocd was preferred as a gitops tool. argocd is a kubernetes-native continuous deployment (cd) tool. unlike other cd tools that only enable push-based deployments, argocd can pull updated code from git repositories and deploy it directly to kubernetes resources. argocd was preferred for many reasons such as having a user interface, easy gitops implementation, being a scalable and stable product, etc. two issues need to be decided to determine the repository structure. these are: how to position gitops tool? monorepo or multirepo? 1. how to position gitops tool? 2 different positions can be used when using argocd: one argocd to rule them all clusters with this architecture, it is possible to control all clusters with a central argocd. advantages: single view for deployment activity across all clusters. single control plane, simplifying the installation and maintenance. single server for easy api\/cli integration. great integration with the applicationset cluster generator. disadvantages: scaling requires tuning of the individual components. single point of failure for deployments. admin credentials for all clusters in one place. significant network traffic between argo cd and clusters. 
argocd instance per cluster (the separation of concern) in this architecture, a new argocd is configured on each cluster to be managed. advantages: distributes load per cluster. no direct external access is required. eliminates the argo cd traffic leaving the cluster. an outage in one cluster won't affect other clusters. credentials are scoped per cluster disadvantages: requires maintaining multiple instances and duplicating configuration. api\/cli integrations need to specify which instance. limited integration with the applicationset cluster generator. 2. monorepo and multirepo monorepo monorepo means using a single git repository for clusters or environments. it becomes difficult to manage when systems are scaled. when you want to make changes to any environment, argocd will re-render the entire structure, which can cause performance problems. on the other hand, the advantage is that this pattern provides a centralized location for all your configuration changes and deployment. multirepo in the multirepo model, different git repos are used for clusters or environments. in the meantime, repositories can be divided for each cluster and environment, as well as for the separation of concerns and organizational boundaries. in addition, the multirepo approach can be preferred in multi-tenant structures. the main drawback of using this pattern is that it creates a large number of git repositories, with each having its release process that needs to be coordinated. this makes it a challenge to manage, and deployments can become complex. however, this pattern is flexible and scales incredibly well. as a result, the answer to the question of what type of git repo to choose is simple: it depends. solutions can be produced in mono-repo or multi-repo structures by considering criteria such as customer, business, performance, management, etc. in our study, each cluster is configured to be managed by the argocd instances on it. in addition, a single git repo and directory structure is created for clusters in different environments using the mono-repo approach. implementation in the implementation to be made, the installation and management of cluster tools will be done declaratively using helm templates and kustomize. argocd support helm and kustomize. standardized deployments of a wide range of kubernetes add-ons and tools will be made using helm charts. with the use of kustomize, the structure created by the dry (do not repeat yourself) principles can be easily implemented for each environment\/cluster. in this context, it becomes important to set up the correct directory structure in the git repo when managing kubernetes with argocd. directory structure in the mono-repo approach, when creating a directory structure using kustomize, two main folders are created. these are the bootstrap and components files. . \u251C\u2500\u2500 readme.md \u251C\u2500\u2500 bootstrap \u2502 \u251C\u2500\u2500 app-of-apps \u2502 \u2514\u2500\u2500 initial \u2514\u2500\u2500 components \u251C\u2500\u2500 argocd \u251C\u2500\u2500 backing-services \u251C\u2500\u2500 cluster-certificates \u251C\u2500\u2500 cluster-core \u251C\u2500\u2500 cluster-logging \u251C\u2500\u2500 cluster-repo-creds \u2514\u2500\u2500 namespaces - the bootstrap folder contains the initial folder, which contains the resources that need to be installed in advance for the installation of components into the cluster. the resources in this folder are the resources that need to be deployed manually and once, in a specific order. 
the app-of-apps folder contains the resources that will trigger the installations like dominoes, so to speak, and then continuously monitor and synchronize them. - the components folder is the file that contains all the components that will be installed in the clusters. each topic heading in cluster management is grouped under subfolders under this folder according to the scope it is located in. for example, the argocd folder customizes the argocd application, which is first installed manually and without customization. the cluster-core file installs the components that each cluster may need after the initial installation. for example, components such as ingress controller, secret operator, monitoring, and logging tools are installed and managed with this structure. (the subdirectories of this file will be explained when appropriate.) folders such as cluster-certificates have been created for the certificate management that needs of the applications in the cluster, cluster-repo-creds for the credential templates needed to access git repos, and backing-services for the backing-services (databases, cache tools, queues, etc.) that the applications running in the cluster will need. as many components as desired can be added to this structure according to needs. for example, let's consider the argocd folder to examine the folder structure of a component. . \u251C\u2500\u2500 base \u2502 \u251C\u2500\u2500 kustomization.yaml \u2502 \u2514\u2500\u2500 patches \u2502 \u2514\u2500\u2500 argocd-cm.yaml \u2514\u2500\u2500 overlays \u2514\u2500\u2500 prod \u251C\u2500\u2500 ingress.yaml \u251C\u2500\u2500 kustomization.yaml \u251C\u2500\u2500 patches \u2502 \u251C\u2500\u2500 argocd-cm.yaml \u2502 \u251C\u2500\u2500 argocd-cmd-params-cm.yaml \u2502 \u2514\u2500\u2500 argocd-rbac-cm.yaml \u2514\u2500\u2500 service-monitors.yaml the component structure is completely organized according to kustomize\u2019s declarative folder structure. while the resources are defined in the base folder, all in yaml format, only the fields that will patch according to the environment\/cluster can be defined under the overlays folder. with kustomize, files can be referenced at the directory level paths as well as url-based. the kustomization.yaml file under the base file in the structure above shows this example. apiversion: kustomize.config.k8s.io\/v1beta1 kind: kustomization namespace: argocd resources: - https:\/\/raw.githubusercontent.com\/argoproj\/argo-cd\/v2.11.0\/manifests\/install.yaml patches: - path: patches\/argocd-cm.yaml the argocd installation is defined in the structure above. since argocd can be customized with configmaps, the resource to be customized while installing argocd is defined in the patches section. by opening a file according to the environment or cluster to be established under overlays, the patches to be made specific to the relevant environment are added to the files under the patches directory. for example, the patches to be made for argocd to be deployed to the production environment are defined in overlays\/prod\/kustomization.yaml. 
# components\/argocd\/overlays\/prod\/kustomization.yaml apiversion: kustomize.config.k8s.io\/v1beta1 kind: kustomization namespace: argocd resources: - ..\/..\/base\/ - ingress.yaml - service-monitors.yaml patches: - path: patches\/argocd-cm.yaml - path: patches\/argocd-cmd-params-cm.yaml - path: patches\/argocd-rbac-cm.yaml thus, the relevant component can be deployed by making customizations to n environments with the folder structure created for the component to be installed. bootstrapping installations begin with a one-time manual installation of argocd and dependent components that will manage everything with the initial folder located under the bootstrap folder. . \u251C\u2500\u2500 readme.md \u251C\u2500\u2500 bootstrap \u2502 \u2514\u2500\u2500 initial \u2502 \u251C\u2500\u2500 00-namespace \u2502 \u2502 \u2514\u2500\u2500 kustomization.yaml \u2502 \u251C\u2500\u2500 01-argocd \u2502 \u2502 \u2514\u2500\u2500 kustomization.yaml \u2502 \u251C\u2500\u2500 02-cluster-core \u2502 \u2502 \u2514\u2500\u2500 kustomization.yaml \u2502 \u251C\u2500\u2500 03-cluster-repo-creds \u2502 \u2502 \u2514\u2500\u2500 kustomization.yaml \u2502 \u2514\u2500\u2500 04-app-of-apps \u2502 \u2514\u2500\u2500 kustomization.yaml we have to do the installations here manually, one-time. because argocd and the tools it requires, which will later manage everything, must first be deployed once. installations will be carried out one by one according to the numbered order in the structure shown above. for example, firstly, kustomization.yaml under 00-namespace folder will be deployed to create the relevant namespaces. # bootstrap\/initial\/00-namespace\/kustomization.yaml apiversion: kustomize.config.k8s.io\/v1beta1 kind: kustomization resources: - ..\/..\/..\/components\/namespaces\/base --- # components\/namespaces\/base\/kustomization.yaml. apiversion: kustomize.config.k8s.io\/v1beta1 kind: kustomization resources: - argocd-ns.yaml - external-secrets-ns.yaml - ingress-nginx-ns.yaml the first kustomization file above references the namespace resources in the namespace folder under components. by applying this file, the namespaces will be installed. kubectl apply -k bootstrap\/initial\/00-namespace note: kustomization resources can be deployed using kubectl with the \u201C-k\u201D parameter without the need for a different tool. all resources in the initial folder are deployed in order and the installation of the requirements required to automate the structure is completed. kubectl apply -k bootstrap\/initial\/00-namespace kubectl apply -k bootstrap\/initial\/01-argocd kubectl apply -k bootstrap\/initial\/02-cluster-core kubectl apply -k bootstrap\/initial\/03-cluster-repo-creds in summary, to install argocd and make the created git repository accessible to argocd, setting up namespaces installing argocd (in its most basic form) the external-secret operator will generate the secret that argocd will use to access the private git repo. cluster-repo-creds and external-secret templates that will generate private git repo secrets for argocd. after manual installations, app-of-apps deployment should be done which will automate everything. let's examine the app-of-apps structure. # bootstrap\/app-of-apps . 
\u251C\u2500\u2500 base \u2502 \u251C\u2500\u2500 argocd.yaml \u2502 \u251C\u2500\u2500 backing-services.yaml \u2502 \u251C\u2500\u2500 bootstrap.yaml \u2502 \u251C\u2500\u2500 cluster-certificates.yaml \u2502 \u251C\u2500\u2500 cluster-core.yaml \u2502 \u251C\u2500\u2500 cluster-logging.yaml \u2502 \u251C\u2500\u2500 cluster-repo-creds.yaml \u2502 \u251C\u2500\u2500 cluster-traffic.yaml \u2502 \u251C\u2500\u2500 kustomization.yaml \u2502 \u2514\u2500\u2500 namespaces.yaml \u2514\u2500\u2500 overlays \u251C\u2500\u2500 preprod \u2502 \u2514\u2500\u2500 kustomization.yaml \u2514\u2500\u2500 prod \u2514\u2500\u2500 kustomization.yaml a directory structure was created under the app-of-apps file using kustomize. under the base file, argocd applications are defined for each component to be deployed in the cluster. for example, let's examine the cluster-core.yaml file. # bootstrap\/app-of-apps\/base\/argocd.yaml apiversion: argoproj.io\/v1alpha1 kind: application metadata: name: cluster-core namespace: argocd annotations: argocd.argoproj.io\/sync-wave: \"-1000\" spec: destination: namespace: argocd server: \"https:\/\/kubernetes.default.svc\" source: repourl: \"https:\/\/github.com\/kloia\/platform-gitops.git\" targetrevision: \"head\" path: \"components\/cluster-core\/patch_me\" project: default syncpolicy: automated: prune: true selfheal: true syncoptions: - createnamespace=true with the argocd application defined in this file, the cluster-core components in the components\/cluster-core path in the platform-gitops repo are deployed and then continuously monitored by argocd. thus, the relevant component will be continuously synchronized with the repo. in a similar logic, an argocd application is defined for each component under the base folder that will manage them. another application has been defined that will manage all applications of the system (app of apps) and also control itself. this application is the argocd bootstrap application and is defined in bootstrap.yaml under the base directory. # base\/bootstrap.yaml apiversion: argoproj.io\/v1alpha1 kind: application metadata: name: bootstrap namespace: argocd annotations: argocd.argoproj.io\/sync-wave: \"-997\" spec: destination: namespace: argocd server: \"https:\/\/kubernetes.default.svc\" source: repourl: \"https:\/\/github.com\/kloia\/platform-gitops.git\" targetrevision: \"head\" path: \"bootstrap\/app-of-apps\/patch_me\" project: default syncpolicy: automated: prune: true selfheal: true syncoptions: - createnamespace=true this application will ensure that all applications located under the bootstrap\/app-of-apps directory are deployed and then automatically synchronized with the auto sync feature. since this application is located in the same directory, it also synchronizes itself. a mechanism has been developed here to be able to deploy applications to multiple environments\/clusters without repeating code. to explain with an example, for the cluster-core component to be deployed to the pre-prod environment, the expression indicating the component path in the form of \"components\/cluster-core\/patch_me\" in the argocd application must be customized specifically for each environment\/cluster under the overlay folder. in the application to be deployed to the preprod environment, the path must be \"components\/cluster-core\/preprod\". to perform the mentioned changes in the application yaml files, the bootstrap\/app-of-apps\/overlays\/preprod\/customization.yaml file was created. 
let's examine this file: apiversion: kustomize.config.k8s.io\/v1beta1 kind: kustomization namespace: argocd resources: - ..\/..\/base\/ commonannotations: gitops.kloia\/overlay-path: \"overlays\/preprod\" replacements: - source: group: argoproj.io version: v1alpha1 kind: application name: bootstrap namespace: argocd fieldpath: \"metadata.annotations.[gitops.kloia\/overlay-path]\" targets: - select: group: argoproj.io version: v1alpha1 kind: application fieldpaths: - \"spec.source.path\" options: delimiter: \"\/\" index: 2 in this file, \u201Ccommonannotation\u201D is added as an annotation to each application located under base directory and annotation defines the environment (for example overlays\/preprod). afterward, using the \u201Creplacements\u201D feature of kustomize, the last part of the \u201Cspec.source.path\u201D field of the bootstraper application is replaced with the \u201Coverlay\/preprod\u201D expression in the previously added annotation. as a result, the expression \"bootstrap\/app-of-apps\/patch_me\" in bootstrapper.yaml as the path where the applications will be deployed becomes \"bootstrap\/app-of-apps\/overlays\/preprod\". thus, we can trigger the environment you will initialize with a single command. after making all these adjustments, for example, to start the configuration of the preprod cluster, it will be sufficient to set the context to be preprod with kubectl and apply the bootstrapper application with a single command. kubectl apply -k bootstrap\/app-of-apps\/overlays\/preprod to trigger the installation of the prod environment, apply the prod overlay with kubectl after running previously mentioned manual steps. kubectl apply -k bootstrap\/app-of-apps\/overlays\/prod the bootstrapper application will deploy all applications in order. and the triggered applications will trigger the installation of the components they are responsible for. in a short time, the entire structure will rise to govern itself. once the installation is complete, the applications will appear on the argocd ui as follows. for example, other applications installed with the cluster-core application will look like this. conclusion in this study, how kubernetes addon and tool management can be done using the gitops method, how gitops tools can be positioned in the application architecture and git repository structures are discussed. afterwards, it was shown how a self-initiating and self-managing structure can be implemented on argocd using helm and kustomize with the created directory structure. you can access all the codes used in this article at https:\/\/github.com\/kloia\/platform-gitops. references 1. gitops best practices 2. app of apps"
},
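The replacements mechanism described in the GitOps post above is easier to follow when the flattened kustomization is laid out as YAML. This is a sketch of the same overlays/preprod file; field names follow Kustomize's replacements API, and the comments are editorial additions.

```yaml
# bootstrap/app-of-apps/overlays/preprod/kustomization.yaml -- readable form of the file described above
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  - ../../base/
commonAnnotations:
  gitops.kloia/overlay-path: "overlays/preprod"   # records which overlay is being built
replacements:
  - source:
      group: argoproj.io
      version: v1alpha1
      kind: Application
      name: bootstrap
      namespace: argocd
      fieldPath: metadata.annotations.[gitops.kloia/overlay-path]
    targets:
      - select:
          group: argoproj.io
          version: v1alpha1
          kind: Application
        fieldPaths:
          - spec.source.path     # rewrite the "patch_me" segment of every Application's source path
        options:
          delimiter: "/"
          index: 2               # third path segment, e.g. components/cluster-core/patch_me
```

With this in place, `kubectl apply -k bootstrap/app-of-apps/overlays/preprod` turns every `.../patch_me` path into `.../overlays/preprod` before the Applications reach Argo CD, which is what lets a single command bootstrap a whole environment.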
{
"title":"AWS Site-to-Site VPN vs. Transit VPC for Multi-VPC Connectivity",
"body":"By design, securely connecting more than one VPC and on-premise network is a basic need for enterprise architectures in cloud networking. Within AWS, you can do this with AWS Site-to-Site VPN automatically or with Transit VPC manually. While both solutions enable communication between the on-prem center and the VPC, there are mild discrepancies between them in terms of design, management...",
"post_url":"https://www.kloia.com/blog/aws-site-to-site-vpn-vs-transit-vpc-for-multi-vpc-connectivity",
"author":"Ahmet Arif \u00D6z\u00E7elik",
"publish_date":"04-<span>Oct<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/ahmet-arif-özçelik",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws_site_to_site_vpn_vs_transit_vpc_for_multi_vpc_connectivity.webp",
"topics":{ "aws":"AWS","devops":"DevOps","gateway":"gateway","vpc":"vpc","site-to-site":"Site to Site","vpn":"VPN","aws-site-to-site-vpn":"AWS Site-to-Site VPN","transit-vpc":"Transit VPC" },
"search":"20 <span>dec</span>, 2024aws site-to-site vpn vs. transit vpc for multi-vpc connectivity aws,devops,gateway,vpc,site to site,vpn,aws site-to-site vpn,transit vpc ahmet arif \u00F6z\u00E7elik by design, securely connecting more than one vpc and on-premise network is a basic need for enterprise architectures in cloud networking. within aws, you can do this with aws site-to-site vpn automatically or with transit vpc manually. while both solutions enable communication between the on-prem center and the vpc, there are mild discrepancies between them in terms of design, management, scalability, and costs. i will discuss the pros and cons of each method in this blog and highlight some of the factors that you would need to consider to make an informed decision about which one better fits into your architecture. tl;dr; aws site-to-site vpn is a managed solution offering: easy scaling across multiple vpcs and regions, built-in high availability (ha) with automatic failover, lower management overhead and cost compared to third-party solutions, simplified routing and network management via transit gateway transit vpc makes use of third-party vpn appliances: manual setup and management, complicated and costly due to the requirements set for licensing and automation, flexible but brings operational complexity, security or some compliance requirements. conclusion: for most modern architectures, aws site-to-site vpn is simpler, more scalable, and cost-effective while transit vpc is better suited for specialized or legacy setups. what is aws site-to-site vpn? aws site-to-site vpn enables you to set up an encrypted ipsec connection among your on-premises network (or any other cloud provider) and an aws vpc. this connection is configured at the vpc degree. site-to-site vpn is designed for environments where you need to join an on-premises network to aws, or maybe connect to more than one aws region. in other words, aws site-to-site vpn is a managed solution that provides secure, scalable, and cost-effective cloud connectivity between your on-premises networks and aws. it includes the following: - encrypted ipsec tunnels for secure communication. - flexibility in routing choices, including static and dynamic routing. - simplified management when integrated with aws transit gateway. - built-in high availability for reliable, fault-tolerant connections. - dead peer detection (dpd) for automatic failover in case of network issues. - cloudwatch monitoring for real-time visibility and alerts. - cost savings related to removing third-party appliances and simplifying management. i mentioned only one-to-one vpn connection but you can manage multiple customer data centers to the same vpc or make your aws vpc a main router for routing between multiple customer sites (vpn cloudhub). what is transit vpc? the transit vpc is an architectural pattern that was developed to route traffic between multiple vpcs and on-premises networks through a central vpc hub. all of these hubs use third-party vpn appliances like cisco or palo alto to manage network traffic. looking closely, the basic site-to-site vpn architecture involves the connection of one vpc with multiple data centers. in the real world, there may also be several vpcs connecting to one data center or even multiple data centers. using site-to-site vpn in such cases introduces additional management complexities in maintaining connectivity and routing between these networks. 
we are looking for a simplified solution where only one vpc hosts an on-premises vpn application on an ec2 instance. all vpn connections will need to terminate at this ec2 instance, effectively combining an aws managed vpn and an ec2-based vpn solution. additionally, you would maintain vpn connections with your customer gateway. this setup represents a transit vpc architecture, which centrally connects multiple networks. transit vpc provides; - transit vpc uses a hub-and-spoke architecture where a central vpc routes traffic between multiple vpcs and on-premises networks, simplifying connectivity. - it relies on third-party vpn appliances like cisco or palo alto, which provide advanced routing and security features not natively available in aws. you can: * install and use advanced threat protection softwares where traffic is routed. * use a different vpn protocol than ipsec like gre or dmvpn. - by centralizing routing, transit vpc reduces the complexity of managing multiple vpn connections through a single, central vpc for all traffic. - you can use overlapping network addresses with transit vpc which acts as a nat translation ips to different ranges to enable communication. - you can create a client-to-site vpn where client devices can connect to transit vpc ec2 instances by establishing vpn connection. feature aws site-to-site vpn transit vpc architecture managed by aws, no need for third-party appliances. requires third-party vpn appliances in a hub vpc. scalability it can scale, but it would be better with transit gateway, as it supports thousands of vpcs. scaling is manual and complex, often requiring lambda automation. cost costs include vpn data transfer fees and transit gateway charges (if applicable). high cost due to vpn appliances (license fees) and data transfer charges. high availability (ha) built-in ha through multiple aws availability zones (azs). ha requires setting up redundant vpn appliances, adding complexity. performance managed vpn performance, typically 1.25 gbps per vpn tunnel. dependent on third-party appliances, which can limit performance. security fully managed by aws with ipsec encryption. security is managed by the third-party appliance (firewall\/vpn). management easy to manage through aws console and transit gateway. requires managing third-party appliances, routing, and complex automation scripts. routing flexibility supports dynamic routing (bgp) provides flexible routing but requires manual configuration. cross-region support natively supported with aws inter-region peering. supported, but manual configuration and appliance setup are required. key considerations management aws site-to-site vpn: when integrated with aws transit gateway, the management overhead is significantly reduced. aws handles much of the complexity, and vpn connections can be easily monitored, scaled, and automated using native aws tools like the console, cloudwatch, and route 53. transit vpc: transit vpc is more manual in nature. you\u2019ll need to deploy and manage third-party vpn appliances, which often require complex automation scripts (like aws lambda) to create and tear down vpn connections as needed. if your vpn appliance experiences issues or goes down, you\u2019ll need to troubleshoot both aws and third-party services. cost efficiency aws site-to-site vpn: costs for aws site-to-site vpn primarily come from data transfer charges and the fees associated with aws transit gateway (if used). 
since there\u2019s no need to maintain third-party vpn appliances, overall management and licensing costs are lower. transit gateway also reduces cross-vpc data transfer costs compared to vpc peering. transit vpc: transit vpc architecture can be significantly more expensive, especially with vpn appliances that require additional licensing, maintenance, and support. performance aws site-to-site vpn: aws vpn performance is relatively straightforward\u2014each vpn tunnel offers up to 1.25 gbps of throughput. transit vpc: the performance in transit vpc architecture depends heavily on the performance and capabilities of the third-party vpn appliances. high availability aws site-to-site vpn: high availability is a native feature of aws site-to-site vpn when used with aws transit gateway. vpn connections automatically failover between availability zones if an issue arises, providing seamless availability. transit vpc: you must manually configure high availability by deploying redundant vpn appliances across multiple azs. this adds complexity to the architecture and requires additional automation to ensure failover works properly. when to use what? aws site-to-site vpn, especially when used with aws transit gateway, is ideal if you\u2019re looking for a fully managed solution with built-in scalability and availability. it is the best choice for enterprises that need to scale vpn connections across multiple vpcs and regions, without the overhead of managing third-party appliances. the transit vpc architecture might still be a good choice in specialized cases where third-party vpn appliances are necessary due to specific security requirements, compliance needs, or if you already have significant investments in third-party networking tools, or you need to accept overlapping network addresses for requirment. conclusion while both aws site-to-site vpn and transit vpc can be used for connecting multiple vpcs and on-premises networks, aws site-to-site vpn\u2014particularly when paired with aws transit gateway\u2014offers a more streamlined, scalable, and cost-effective solution for most scenarios. transit vpc, while flexible, often brings increased complexity and higher costs due to the need for third-party appliances and manual configuration. choosing the right solution depends on your specific networking requirements, performance needs, and operational preferences. for most modern cloud architectures, aws site-to-site vpn with transit gateway provides a simpler, more cost-efficient way to manage multi-vpc connectivity at scale."
},
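Neither excerpt above includes code, so as an illustration only, here is a minimal CloudFormation sketch of the managed Site-to-Site VPN building blocks the comparison refers to: a customer gateway, a virtual private gateway, and the VPN connection between them. The ASN, IP address, and VPC ID are placeholders, and a Transit Gateway-based design would attach the VPN to a transit gateway instead of a VPC-attached virtual private gateway.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal Site-to-Site VPN sketch (illustrative placeholder values only)

Resources:
  CustomerGateway:
    Type: AWS::EC2::CustomerGateway
    Properties:
      Type: ipsec.1
      BgpAsn: 65000              # placeholder: ASN of the on-premises VPN device
      IpAddress: 203.0.113.10    # placeholder: public IP of the on-premises VPN device

  VpnGateway:
    Type: AWS::EC2::VPNGateway
    Properties:
      Type: ipsec.1

  VpnGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: vpc-0123456789abcdef0   # placeholder: the VPC where the VPN terminates
      VpnGatewayId: !Ref VpnGateway

  VpnConnection:
    Type: AWS::EC2::VPNConnection
    Properties:
      Type: ipsec.1
      StaticRoutesOnly: true         # set false and configure BGP for dynamic routing
      CustomerGatewayId: !Ref CustomerGateway
      VpnGatewayId: !Ref VpnGateway
```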
{
"title":"What's New in Kubernetes v1.31: Key Updates, Deprecations,and Features",
"body":"Kubernetes continues its rapid evolution as the leading container orchestration platform, with each release bringing enhancements that refine its performance, security, and user experience. The latest version, Kubernetes v1.31, builds on this progress by introducing a series of removals, deprecations, and significant updates designed to streamline container management. In this post, we\u2019l...",
"post_url":"https://www.kloia.com/blog/whats-new-in-kubernetes-v1.31-key-updates-deprecations-and-features",
"author":"Enes Cetinkaya",
"publish_date":"10-<span>Sep<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/enes-cetinkaya",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/what_s_new_in_kubernetes_v1_31_key_updates_deprecations_and_features.webp",
"topics":{ "devops":"DevOps","cloud":"Cloud","kubernetes":"Kubernetes","k8s":"k8s","containers":"Containers","kubernetes-v1-31":"Kubernetes v1.31","flag":"Flag","cephfs-csi-driver":"CephFS CSI driver" },
"search":"11 <span>sep</span>, 2024what's new in kubernetes v1.31: key updates, deprecations,and features devops,cloud,kubernetes,k8s,containers,kubernetes v1.31,flag,cephfs csi driver enes cetinkaya kubernetes continues its rapid evolution as the leading container orchestration platform, with each release bringing enhancements that refine its performance, security, and user experience. the latest version, kubernetes v1.31, builds on this progress by introducing a series of removals, deprecations, and significant updates designed to streamline container management. in this post, we\u2019ll explore the major changes in kubernetes v1.31, comparing them with previous versions to highlight key updates, new features, and what\u2019s been deprecated or removed. if you\u2019re looking to stay ahead of the curve in kubernetes developments, read on for everything you need to know about v1.31 editor's highlights of kubernetes v1.31 - deprecations and removals: complete phase-out of older storage plugins like cephfs and ceph rbd in favor of csi drivers; deprecation of apis and security features such as sha-1 signatures to enhance security. - enhanced security: transition to more robust cryptographic standards, aiming to strengthen the security infrastructure of kubernetes clusters. - vendor-neutral cloud integration: final removal of all in-tree cloud provider integrations, supporting kubernetes' goal to maintain a vendor-neutral platform. 1. overview of kubernetes v1.31 updates kubernetes v1.31 brings crucial updates that affect key components such as apis, storage plugins, and cloud integrations. these changes are designed to enhance the platform\u2019s scalability, security, and functionality. staying informed about these updates is critical for maintaining a modern and efficient kubernetes environment. in this post, we\u2019ll highlight the most impactful changes in kubernetes v1.31, comparing them with previous versions to help you plan for necessary upgrades, migrations, and optimizations 2. the kubernetes api removal and deprecation process kubernetes follows a strict deprecation policy to manage the lifecycle of its apis and features. - stable (ga) apis: these can be marked as deprecated only when a newer, stable version is available. once deprecated, they remain functional for at least one year but will eventually be removed. - beta apis: supported for three releases after deprecation. if not promoted to stable, they will be removed. - alpha apis: these can be removed at any time without prior deprecation. comparison with previous versions: - v1.30 and earlier: the approach was similar, but v1.31 brings a stronger emphasis on the timely removal of deprecated apis, ensuring kubernetes evolves towards more efficient and secure implementations. 3. major removals and deprecations in kubernetes v1.31 kubernetes v1.31 sees several significant removals and deprecations. here\u2019s a closer look at these changes compared to v1.30: 3.1. deprecation of status.nodeinfo.kubeproxyversion field - v1.30 and earlier: this field was present but was recognized as unreliable since the kubelet lacked accurate information about kube-proxy versions. - v1.31: the status.nodeinfo.kubeproxyversion field has been deprecated and will be removed in future releases. the disablenodekubeproxyversion feature gate is enabled by default to avoid setting this field. impact: users should stop relying on this field for monitoring or configuration. 3.2. 
removal of all in-tree integrations with cloud providers - v1.30: partial removal of in-tree cloud provider integrations, with the recommendation to use external integrations. - v1.31: the final removal of all in-tree integrations marks the completion of this externalization process. kubernetes aims to be a fully vendor-neutral platform. action required: users must migrate to external cloud provider integrations, following kubernetes' cloud provider integrations guide. 3.3. removal of kubelet --keep-terminated-pod-volumes flag - v1.30 and earlier: this flag was deprecated for a long time (since 2017), but still existed. - v1.31: the flag has been removed entirely. impact: users should ensure that their configurations do not depend on this flag. further details can be found in the pull request #122082. 4. changes in storage plugins and recommendations storage management in kubernetes is undergoing significant transformations with v1.31. the removal of non-csi storage plugins and the push towards container storage interface (csi) drivers are central to this. 4.1. removal of cephfs volume plugin - v1.30 and earlier: cephfs was marked as deprecated, but still functional. - v1.31: cephfs is completely removed, making the type non-functional. users must migrate to the cephfs csi driver, a third-party storage solution. action required: applications using cephfs need to be re-deployed using the new csi driver. 4.2. removal of ceph rbd volume plugin - v1.30 and earlier: similar to cephfs, ceph rbd was marked as deprecated. - v1.31: ceph rbd volume plugin and its csi migration support have been removed. migration to the rbd csi driver is necessary. impact: clusters using ceph rbd must reconfigure to use the updated storage solution. 4.3. deprecation of non-csi volume limit plugins in kube-scheduler - v1.30 and earlier: non-csi plugins like azuredisklimits, cinderlimits, ebslimits, gcepdlimits were still part of the default scheduler plugins. - v1.31: these plugins are deprecated. the nodevolumelimits plugin is recommended as it supports csi functionality. action required: replace deprecated plugins in the scheduler config with nodevolumelimits. 5. important security changes: sha-1 signature deprecation kubernetes v1.31 introduces a critical security change regarding sha-1 signatures: - v1.30 and earlier: sha-1 support existed but was not recommended due to security vulnerabilities. - v1.31: the support for sha-1 is being deprecated, and it will be fully removed in go 1.24, expected in 2025. action required: migrate to stronger cryptographic standards. check kubernetes issue #125689 for more details. 6. upcoming changes in kubernetes v1.32: preparing ahead looking forward, kubernetes v1.32 will continue the trend of refining and optimizing its apis and integrations: - flowschema and prioritylevelconfiguration removals: users are encouraged to update their manifests to use the flowcontrol.apiserver.k8s.io\/v1 api version, which has been available since v1.29. preparing for v1.32: - v1.31 and earlier: users should start transitioning to the newer api versions to avoid any service interruptions. - v1.32: removal of older apis will require that all systems be updated to comply with the newer standards. 7. action steps for kubernetes users and administrators to ensure a smooth transition to kubernetes v1.31, follow these steps: review deprecations: identify any deprecated fields, apis, or plugins in use and plan for their removal or replacement. 
migrate to external integrations: ensure all in-tree cloud provider integrations are replaced with the recommended external integrations. update storage solutions: migrate from deprecated volume plugins to the corresponding csi drivers. adopt stronger security practices: replace sha-1 certificates and implement stronger cryptographic standards. 8. conclusion: embracing change for a robust kubernetes experience kubernetes v1.31 represents a major milestone in enhancing the platform\u2019s security, functionality, and vendor neutrality. by adapting to these updates, such as api deprecations, storage changes, and security enhancements, kubernetes users can continue to leverage a flexible and powerful container orchestration system. the proactive adoption of these improvements ensures that your infrastructure remains modern, secure, and optimized for the future. stay ahead by continuously refining your kubernetes environment, and embrace these changes to maintain a seamless and robust experience. for more expert insights and the latest updates on kubernetes, follow the kloia blog and stay informed on industry trends and best practices! 9. faq what are the major deprecations in kubernetes v1.31? major deprecations include the status.nodeinfo.kubeproxyversion field, sha-1 signature support, and several non-csi volume limit plugins. how can i migrate to the cephfs csi driver? refer to the official kubernetes csi documentation for steps on migrating from the cephfs volume plugin to the csi driver. what is the recommended replacement for the kubelet --keep-terminated-pod-volumes flag? users should remove any dependencies on this flag, as it has been fully removed in v1.31."
},
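As a hedged illustration of the CephFS migration called out above (not taken from the article), a cluster that used the removed in-tree cephfs plugin would typically switch to a StorageClass backed by the external ceph-csi driver. The cluster ID, filesystem name, and secret references below are placeholders that depend on how ceph-csi is deployed in a given cluster.

```yaml
# Sketch only: StorageClass for the CephFS CSI driver that replaces the removed in-tree plugin
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs-csi
provisioner: cephfs.csi.ceph.com              # default driver name registered by ceph-csi
parameters:
  clusterID: my-ceph-cluster-id               # placeholder: Ceph cluster ID from the ceph-csi config
  fsName: myfs                                # placeholder: CephFS filesystem name
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
```

Existing PersistentVolumes created by the in-tree plugin still need to be re-created or migrated onto the CSI driver, in line with the re-deployment note in the post.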
{
"title":"Benchmarking Java Virtual Threads: A Comprehensive Analysis",
"body":"A groundbreaking innovation called Java Virtual Threads (Project Loom) was designed to improve the concurrency mechanism and boost Java application performance. Since virtual threads are more lightweight than traditional threads, a larger number of concurrent threads can be managed more effectively. What Are Java Virtual Threads? Project Loom includes Java Virtual Threads in an effort to...",
"post_url":"https://www.kloia.com/blog/benchmarking-java-virtual-threads-a-comprehensive-analysis",
"author":"Baran Gayretli",
"publish_date":"19-<span>Aug<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/barangayretli",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/benchmarking_java_virtual_threads_a_comprehensive_analysis_blog.webp",
"topics":{ "java":"java","performance":"performance","software":"Software","benchmarking":"benchmarking","resource-utilization":"Resource Utilization","scalability":"Scalability","java-virtual-threads":"java virtual threads","project-loom":"Project Loom" },
"search":"19 <span>aug</span>, 2024benchmarking java virtual threads: a comprehensive analysis java,performance,software,benchmarking,resource utilization,scalability,java virtual threads,project loom baran gayretli a groundbreaking innovation called java virtual threads (project loom) was designed to improve the concurrency mechanism and boost java application performance. since virtual threads are more lightweight than traditional threads, a larger number of concurrent threads can be managed more effectively. what are java virtual threads? project loom includes java virtual threads in an effort to simplify the development, maintenance, and observation of high-throughput concurrent applications. virtual threads are far lighter than traditional threads and can be created in large numbers without the overhead of system threads. why use virtual threads? - scalability: the amount of concurrent operations your program is able to manage can be greatly increased by using virtual threads. - simplicity: virtual threads simplify the concurrency model, which makes it easier to develop and maintain code. - performance: virtual threads allow for better performance as a result of less resource consumption and context switching. benchmarking java virtual threads i will compare virtual threads with traditional threads to demonstrate their advantages. here's how the benchmark can be set up. setting up the benchmark environment: - jdk version supporting virtual threads (jdk 17+ with project loom enabled). - a benchmarking library like jmh (java microbenchmark harness). benchmark code: i will build a benchmarking test that compares the performance and resource efficiency of tasks carried out with traditional threads and virtual threads by measuring execution time, cpu load, and memory usage. 
sample benchmark code import org.openjdk.jmh.annotations.*; import java.lang.management.managementfactory; import java.lang.management.operatingsystemmxbean; import java.util.concurrent.*; @state(scope.benchmark) public class extendedvirtualthreadsbenchmark { private static final int light_load_task_count = 1000; private static final int heavy_load_task_count = 100000; private operatingsystemmxbean osbean = managementfactory.getoperatingsystemmxbean(); @benchmark public void traditionalthreadslightload() throws interruptedexception { runbenchmark(light_load_task_count, executors.newfixedthreadpool(100)); } @benchmark public void virtualthreadslightload() throws interruptedexception { runbenchmark(light_load_task_count, executors.newvirtualthreadpertaskexecutor()); } @benchmark public void traditionalthreadsheavyload() throws interruptedexception { runbenchmark(heavy_load_task_count, executors.newfixedthreadpool(100)); } @benchmark public void virtualthreadsheavyload() throws interruptedexception { runbenchmark(heavy_load_task_count, executors.newvirtualthreadpertaskexecutor()); } private void runbenchmark(int taskcount, executorservice executorservice) throws interruptedexception { countdownlatch latch = new countdownlatch(taskcount); long starttime = system.nanotime(); double startcpuload = osbean.getsystemloadaverage(); long startmemoryusage = runtime.getruntime().totalmemory() - runtime.getruntime().freememory(); for (int i = 0; i < taskcount; i++) { executorservice.submit(() -> { \/\/ simulate work performtask(); latch.countdown(); }); } latch.await(); executorservice.shutdown(); long endtime = system.nanotime(); double endcpuload = osbean.getsystemloadaverage(); long endmemoryusage = runtime.getruntime().totalmemory() - runtime.getruntime().freememory(); system.out.println(\"execution time: \" + (endtime - starttime) \/ 1_000_000 + \" ms\"); system.out.println(\"cpu load: \" + (endcpuload - startcpuload)); system.out.println(\"memory usage: \" + (endmemoryusage - startmemoryusage) \/ (1024 * 1024) + \" mb\"); } private void performtask() { \/\/ simulate a task by sleeping try { thread.sleep(10); } catch (interruptedexception e) { thread.currentthread().interrupt(); } } } use jmh to run the benchmarks: java -jar target\/benchmarks.jar virtualthreadsbenchmark visualization of the results benchmark comparison: traditional threads vs virtual threads the benchmark results highlight the performance benefits of virtual threads, particularly under heavy load conditions. - execution time: when compared to traditional threads, virtual threads offer a significant decrease in execution time, particularly in situations with high loads this demonstrates the efficiency and speed of virtual threads in handling a large number of concurrent tasks. - cpu load: virtual threads use less cpu power than traditional threads do. this lower cpu load means more efficient processing and lower overhead related to thread management. memory usage: virtual threads require a significantly lower share of memory than traditional threads. this is particularly evident under heavy load, where virtual threads use nearly half the memory of traditional threads. key takeaways and practical implications for java applications, java virtual threads indicate an important advancement in concurrent programming. they simplify the development process and provide better performance and resource utilization. 
in terms of execution time, cpu load, and memory utilization, our benchmarks clearly demonstrate the benefits of virtual threads over traditional threads. the findings have significant practical results: - simplified concurrency management: without having to worry about the overhead and complexity of using traditional threads, developers may design code that is easier to understand and maintain. - improved application performance: this is important for apps that depend on performance since it allows for greater throughput and reduced latency. - resource efficiency: improved resource utilization translates to cost savings in both development and production environments, making virtual threads an economically attractive choice. by adopting java virtual threads, developers can create highly scalable and efficient applications with reduced complexity. this benchmark provides concrete evidence of the benefits of virtual threads, making a strong case for their use in modern java development. references project loom documentation jmh (java microbenchmark harness)"
},
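As a compact companion to the benchmark embedded above, here is a minimal, self-contained Java sketch of the same executor-per-task API, assuming JDK 21 where virtual threads are generally available; the class name and task count are illustrative and not from the original post.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

// Minimal illustration: one virtual thread per task via the same
// Executors.newVirtualThreadPerTaskExecutor() API used in the benchmark above.
public class VirtualThreadsDemo {
    public static void main(String[] args) {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    // Simulated blocking work; the carrier thread is released while sleeping.
                    Thread.sleep(Duration.ofMillis(10));
                    return i;
                }));
        } // ExecutorService is AutoCloseable since JDK 19; close() waits for submitted tasks to finish
    }
}
```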
{
"title":"Manage Terraform AWS resources with Ease: Scalr.io",
"body":"The cloud computing landscape has witnessed exponential growth, with Amazon Web Services (AWS) emerging as the undisputed leader. While AWS offers unparalleled flexibility and scalability, managing its complexity can be a daunting task. This is where cloud management platforms like scalr.io come into play. In this comprehensive guide, we will delve into scalr.io, exploring its features, ...",
"post_url":"https://www.kloia.com/blog/manage-terraform-aws-resources-with-ease-scalr-io",
"author":"Ahmet Ayd\u0131n",
"publish_date":"05-<span>Aug<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/ahmet-aydın",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/terraform_aws_scalr_io.webp",
"topics":{ "aws":"AWS","devops":"DevOps","cloud":"Cloud","terraform":"terraform","infrastructure":"infrastructure","cloud-computing":"Cloud Computing","scalr-io":"Scalr.io","security-group-management":"Security group management","aws-management":"AWS management","iam-management":"IAM management" },
"search":"06 <span>aug</span>, 2024manage terraform aws resources with ease: scalr.io aws,devops,cloud,terraform,infrastructure,cloud computing,scalr.io,security group management,aws management,iam management ahmet ayd\u0131n the cloud computing landscape has witnessed exponential growth, with amazon web services (aws) emerging as the undisputed leader. while aws offers unparalleled flexibility and scalability, managing its complexity can be a daunting task. this is where cloud management platforms like scalr.io come into play. in this comprehensive guide, we will delve into scalr.io, exploring its features, benefits, and practical use cases specifically tailored for aws environments with terraform. understanding scalr.io scalr.io is a cloud management platform that specializes in simplifying the management of aws infrastructure. centralizing run execution and state storage, scalr.io enhances collaboration, accelerates provisioning, and offers unmatched flexibility. whether you\u2019re a seasoned infrastructure engineer or just getting started, scalr.io empowers you to confidently build and manage your infrastructure. scalr structure the scalr organizational model is broken up into the three components listed below. the model allows for object inheritance and assignment as well as visibility of terraform or opentofu workspaces and runs from an admin perspective. account scope: primarily an administrative control plane where environments, iam, policies, modules, variables, integrations, and more are maintained. administrators have the ability to create objects at the account scope and then assign them to underlying environments to provide an inheritance model. the account scope also serves as a global dashboard with views of all workspaces, run operations, and reports across environments. environment scope: environments enable self-service for platform teams looking to decentralize their operations and enable development teams. environments are isolated groupings of workspaces, teams, policies, variables, and more. users and teams can only access environments they have been given explicit permissions to. workspaces: workspaces are the child of an environment. this is where terraform and opentofu runs are executed, the state is stored, and all objects related to the deployments are linked. each workspace is linked to a single state file. deep dive into scalr.io features infrastructure as code (iac) define and manage your aws infrastructure using code, ensuring consistency and reproducibility. consistency: by defining infrastructure as code, you ensure that the same configurations are applied every time you deploy, reducing human errors and discrepancies. reproducibility: infrastructure as code allows you to recreate environments seamlessly, which is particularly useful for scaling applications, testing, and disaster recovery. scalr.io supports iac principles, allowing you to define your infrastructure using code. this approach offers several advantages: version control: track changes to your infrastructure over time. by using version control systems like git, you can keep a history of changes, roll back to previous versions, and collaborate more effectively. collaboration: multiple team members can work on the same infrastructure code. this fosters collaboration and ensures that changes are reviewed and tested before being deployed. testing: validate infrastructure changes before deployment. by testing changes in a staging environment, you can catch issues before they affect production. 
automation: automate infrastructure provisioning and updates. this reduces the time and effort required to manage your infrastructure and ensures that changes are applied consistently. automation automate routine tasks such as provisioning, configuration, and deployment, freeing up valuable time for strategic initiatives. task automation: automate mundane and repetitive tasks such as server provisioning, software installations, and updates. workflow automation: define workflows that streamline complex processes, ensuring tasks are executed in the correct sequence. scheduled automation: set up tasks to run on a regular schedule, ensuring that maintenance tasks are performed consistently. scalr.io provides powerful automation capabilities to streamline your workflows: custom scripts: create custom scripts to automate tasks. this allows you to tailor automation to your specific needs and workflows. scheduled tasks: run tasks on a regular schedule. this ensures that routine maintenance tasks are performed consistently and on time. event-driven automation: trigger actions based on specific events. for example, you can automatically scale resources in response to changes in demand. configuration management maintain desired state configurations for your aws resources, ensuring compliance and reducing errors. desired state configuration: ensure that all resources are in the desired state, and automatically correct deviations. configuration templates: use pre-defined templates to ensure consistency across environments. drift management: detect and correct configuration drift, ensuring that your environments remain consistent over time. ensure your aws resources are always in the desired state with scalr.io\u2019s configuration management features: configuration drift detection: identify discrepancies between the desired and actual state. this helps you detect and correct configuration drift before it causes issues. automatic remediation: automatically fix configuration issues. scalr.io can automatically apply corrections to ensure that resources remain in the desired state. compliance checks: verify compliance with organizational policies and industry standards. this ensures that your configurations meet required standards and helps you pass audits. security and compliance implement robust security measures and compliance frameworks to protect your sensitive data. security policies: define and enforce security policies across your aws environment. compliance frameworks: implement industry-standard compliance frameworks like hipaa, gdpr, and pci-dss. real-time monitoring: continuously monitor your environment for security threats and compliance violations. protect your aws environment with scalr.io\u2019s robust security features: iam role management: create and manage iam roles with granular permissions. this ensures that users have the appropriate level of access and reduces the risk of unauthorized access. security group management: define and manage security groups to control network traffic. this helps you protect your instances from unauthorized access and attacks. vulnerability scanning: identify and address vulnerabilities. scalr.io helps you detect vulnerabilities in your environment and provides tools to remediate them. compliance reporting: generate reports on compliance status. these reports provide insights into your compliance posture and help you demonstrate compliance to auditors and stakeholders. 
cost management track and optimize cloud spending, identifying cost-saving opportunities. cost tracking: monitor your cloud spending in real-time, and get detailed reports on usage and costs. budget alerts: set up alerts to notify you when spending exceeds predefined thresholds. optimization recommendations: receive recommendations for optimizing your cloud spending, such as rightsizing instances and eliminating unused resources. optimize your cloud spending with scalr.io\u2019s cost management capabilities: cost tracking: monitor cloud usage and costs. scalr.io provides detailed reports on your cloud spending, helping you understand where your money is going. cost allocation: allocate costs to different departments or projects. this helps you understand the cost of different initiatives and manage budgets more effectively. cost optimization recommendations: identify opportunities to reduce costs. scalr.io provides recommendations for optimizing your cloud spending, such as rightsizing instances and eliminating unused resources. monitoring and alerting monitor the health and performance of your aws resources, receiving timely notifications for critical issues. custom metrics: define and track custom metrics that are critical to your business. real-time alerts: set up real-time alerts for critical issues, ensuring that you can respond quickly. performance dashboards: use dashboards to visualize the health and performance of your infrastructure. keep track of your aws resources\u2019 health and performance with scalr.io\u2019s monitoring and alerting features: custom metrics: define custom metrics to monitor specific aspects of your infrastructure. this allows you to track the metrics that matter most to your business. real-time monitoring: monitor resources in real time. this helps you detect issues as they occur and respond quickly use cases for scalr.io in aws environments accelerating development and deployment infrastructure provisioning: rapidly create and provision aws resources (ec2 instances, vpcs, s3 buckets, etc.) using pre-defined templates. this allows teams to get environments up and running quickly, reducing the time to market for new features. environment management: manage multiple development, testing, and production environments consistently. scalr.io ensures that all environments are configured the same way, reducing discrepancies and the \u201Cit works on my machine\u201D problem. continuous integration and continuous delivery (ci\/cd): integrate scalr.io with ci\/cd pipelines to automate the deployment process. this integration allows for faster and more reliable deployments, reducing downtime and increasing productivity. infrastructure as code (iac): define infrastructure as code using tools like terraform or cloudformation, and manage it efficiently through scalr.io. this practice allows teams to manage and provision resources consistently and at scale. enhancing operational efficiency configuration management: maintain consistent configurations across multiple aws environments. this reduces configuration drift and ensures that environments remain stable and predictable. patch management: automate the application of security patches and updates. this ensures that all systems are up to date and reduces the risk of security vulnerabilities. cost optimization: analyze cloud usage patterns and identify cost-saving opportunities. 
scalr.io provides insights into usage patterns and suggests ways to optimize costs, such as shutting down unused instances or choosing cost-effective storage options. capacity planning: forecast resource needs and optimize resource allocation. this helps prevent resource shortages or over-provisioning, ensuring that you only pay for what you need. compliance management: ensure adherence to industry regulations and standards. scalr.io helps you implement and monitor compliance with various regulatory frameworks, reducing the risk of non-compliance penalties. improving security and compliance iam management: manage iam roles and policies effectively to control access to aws resources. this ensures that users only have access to the resources they need, reducing the risk of unauthorized access. security group management: define and manage security groups to protect your network. security groups act as virtual firewalls, controlling inbound and outbound traffic to your instances. vulnerability management: scan for vulnerabilities and remediate them promptly. scalr.io helps you identify and fix security vulnerabilities before they can be exploited. compliance reporting: generate reports on compliance status. these reports provide insights into your compliance posture and help you demonstrate compliance to auditors and stakeholders. enabling devops practices collaboration: foster collaboration among development, operations, and security teams. scalr.io provides tools and workflows that facilitate collaboration and ensure that everyone is on the same page. self-service provisioning: empower developers to provision resources independently. this reduces bottlenecks and allows developers to work more efficiently. conclusion scalr.io is a valuable tool for managing aws environments, offering a range of features to simplify operations, enhance efficiency, and improve security. by leveraging scalr.io\u2019s capabilities, organizations can optimize their cloud investments and achieve greater business agility. whether you\u2019re looking to accelerate development, enhance operational efficiency, improve security, or enable devops practices, scalr.io provides the tools and insights you need to succeed. strategic benefits of using scalr.io increased agility scalr.io enables organizations to respond quickly to changing business requirements. by automating infrastructure provisioning and management, development teams can rapidly spin up new environments, deploy updates, and scale resources in response to demand. this agility helps businesses stay competitive in a fast-paced market. enhanced collaboration scalr.io fosters collaboration across development, operations, and security teams. by providing a centralized platform for managing infrastructure as code, automating workflows, and monitoring resources, scalr.io ensures that all team members have access to the same information and tools. this collaboration leads to more efficient workflows, fewer miscommunications, and faster problem resolution. improved governance and compliance maintaining compliance with industry regulations and organizational policies is crucial for many businesses. scalr.io\u2019s robust security and compliance features help organizations implement and enforce compliance frameworks, conduct regular audits, and generate compliance reports. this not only reduces the risk of non-compliance penalties but also builds trust with customers and stakeholders. 
cost efficiency cloud spending can quickly spiral out of control without proper management. scalr.io\u2019s cost management capabilities provide detailed insights into cloud usage and spending, helping organizations identify cost-saving opportunities and optimize resource allocation. by implementing cost tracking, budget alerts, and optimization recommendations, businesses can achieve significant savings and allocate resources more effectively. reliability and performance ensuring the reliability and performance of aws resources is essential for maintaining business continuity and delivering a positive user experience. scalr.io\u2019s monitoring and alerting features allow organizations to track the health and performance of their infrastructure in real-time, receive timely notifications of issues, and take proactive measures to prevent downtime. this focus on reliability and performance helps maintain high levels of customer satisfaction and operational efficiency. final thoughts in today\u2019s digital age, managing cloud infrastructure effectively is a critical component of business success. scalr.io offers a comprehensive solution that addresses the complexities of aws management, empowering organizations to harness the full potential of their cloud investments. by adopting scalr.io, businesses can streamline operations, enhance security, optimize costs, and drive innovation. investing in scalr.io is not just about managing your aws resources more efficiently\u2014it\u2019s about transforming your approach to cloud management, enabling your organization to achieve its strategic objectives, and positioning yourself for long-term success in an increasingly competitive market. scalr.io is more than just a cloud management tool; it\u2019s a strategic asset that can propel your business forward."
},
{
"title":"Experience the Power of a Managed Service: Customize Your Oracle Database with AWS RDS Custom",
"body":"In the evolving landscape of cloud computing, Amazon Web Services (AWS) continues to innovate, offering solutions tailored to specific needs. One such solution is AWS RDS Custom for Oracle, which provides users with greater flexibility and control over their Oracle databases. This post will explore AWS RDS Custom for Oracle in depth, covering its features, benefits, use cases, and provid...",
"post_url":"https://www.kloia.com/blog/experience-the-power-of-a-managed-service-customize-your-oracle-database-with-aws-rds-custom",
"author":"Ahmet Ayd\u0131n",
"publish_date":"26-<span>Jul<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/ahmet-aydın",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/blog.png",
"topics":{ "aws":"AWS","rds-custom":"RDS Custom","performance-optimization":"Performance Optimization","oracle-database":"Oracle Database","cloud-computing":"Cloud Computing","managed-service":"Managed Service","database-management":"Database Management" },
"search":"30 <span>jul</span>, 2024experience the power of a managed service: customize your oracle database with aws rds custom aws,rds custom,performance optimization,oracle database,cloud computing,managed service,database management ahmet ayd\u0131n in the evolving landscape of cloud computing, amazon web services (aws) continues to innovate, offering solutions tailored to specific needs. one such solution is aws rds custom for oracle, which provides users with greater flexibility and control over their oracle databases. this post will explore aws rds custom for oracle in depth, covering its features, benefits, use cases, and providing practical examples, including how to create and use a custom engine version (cev). understanding aws rds custom for oracle aws rds custom for oracle is a managed database service that combines the ease of use and scalability of amazon rds with the flexibility and control of self-managed oracle databases. it allows customers to run oracle databases with custom configurations and applications that require elevated privileges, which are not possible with standard rds offerings. key features of aws rds custom for oracle elevated privileges: rds custom allows users to have root access to the underlying ec2 instances, providing the ability to customize the database environment. custom configurations: users can install and configure third-party software and custom scripts, enabling a more tailored database environment. full oracle database features: rds custom supports all oracle database features, including oracle rac, data guard, and others, which might not be fully supported in standard rds. automated backups and patching: while users have more control, rds custom still offers automated backups and patching, simplifying management tasks. integration with aws services: rds custom integrates seamlessly with other aws services such as aws cloudwatch, aws iam, and aws cloudtrail, providing comprehensive monitoring and security. key differences between standard rds and rds custom customization: rds custom allows for deeper customization of the database environment, including the ability to install custom software and make changes to the os and database settings. access: users have ssh access to the underlying ec2 instances, providing greater control and flexibility. management: while rds custom provides more control, it also requires more responsibility from the user for tasks such as patching, backups, and high availability configurations. benefits of using aws rds custom for oracle flexibility: rds custom provides unparalleled flexibility, allowing users to tailor the database environment to their specific needs. this is particularly beneficial for applications that require custom configurations or third-party integrations. control: with root access to the underlying instances, users have complete control over the database environment. this includes the ability to apply patches, install software, and make configuration changes that are not possible with standard rds instances. scalability: like all aws services, rds custom is designed to scale. users can easily scale their database instances up or down based on demand, ensuring optimal performance and cost-efficiency. reliability: aws's robust infrastructure ensures high availability and durability for rds custom instances. additionally, users can configure their own high availability and disaster recovery solutions using oracle features such as data guard. 
cost efficiency: rds custom can be more cost-effective for users who require advanced configurations and customizations that would otherwise require self-managed environments. by leveraging aws's infrastructure, users can reduce the overhead associated with managing hardware and data centers. use cases legacy application support: many legacy applications rely on specific oracle configurations and customizations. rds custom enables organizations to migrate these applications to the cloud without extensive refactoring, preserving their existing investments. high-performance applications: applications that require high performance and custom optimizations can benefit from the control and flexibility provided by rds custom. users can fine-tune the database environment to meet their specific performance requirements. compliance and security: organizations with strict compliance and security requirements can leverage rds custom to implement custom security measures and compliance configurations. this is particularly useful for industries such as finance and healthcare, where regulatory requirements are stringent. development and testing: rds custom is ideal for development and testing environments where developers need the ability to experiment with different configurations and software installations. it provides a controlled environment that can be easily reset or reconfigured as needed. getting started with aws rds custom for oracle creating a custom engine version (cev) for aws rds custom for oracle creating and using a custom engine version (cev) allows you to specify the exact oracle database version you need, including custom patches or configurations. prepare oracle installation files download the oracle installation files and any required patches. package these files into a zip file. upload installation files to s3 upload the zip file to an s3 bucket in your aws account. ensure the s3 bucket permissions allow access to the rds service. create an iam role for s3 access create an iam role that grants access to the s3 bucket. attach the amazons3readonlyaccess policy to this role. create the custom engine version use the aws management console, aws cli, or aws sdk to create the cev. using the aws management console for cev creation. go to the rds section in the aws management console. select \"custom engine versions\" from the sidebar. click on \"create custom engine version\". fill in the details: name: provide a name for your cev. description: optionally, add a description. engine type: select oracle. engine version: specify the oracle version. s3 bucket: provide the s3 bucket name and path to the installation files. iam role: select the iam role created for s3 access. cev manifest: generate related manifest file creating an rds custom for oracle instance with cev log in to your aws management console and navigate to the rds dashboard. click on \"create database\" and select \"rds custom\" as the database creation method. choose oracle as the database engine and select your custom engine version. configure the instance specifications, including instance class, storage, and network settings. configure the database settings such as db name, master username, and password. customize additional settings such as backup retention, maintenance windows, and monitoring. 
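The console walkthrough above can also be scripted. The sketch below uses the AWS CLI to register a custom engine version from installation files already uploaded to S3 and then to launch an RDS Custom instance from it. Bucket, KMS key, version string, and identifiers are placeholders, and the exact values depend on your Oracle media and manifest.

```bash
# Hypothetical sketch of the CEV + instance creation flow with the AWS CLI.
# Bucket names, KMS key ARN, version string, and identifiers are placeholders.

# 1. Register the custom engine version from the installation files in S3.
aws rds create-custom-db-engine-version \
  --engine custom-oracle-ee \
  --engine-version 19.custom_cev1 \
  --database-installation-files-s3-bucket-name my-oracle-install-files \
  --database-installation-files-s3-prefix 19c/ \
  --kms-key-id arn:aws:kms:eu-west-1:111122223333:key/example \
  --manifest file://manifest.json \
  --description "Oracle 19c with custom patches"

# 2. Create the RDS Custom for Oracle instance from that CEV.
#    RDS Custom requires a customer-managed KMS key and an instance profile.
aws rds create-db-instance \
  --db-instance-identifier my-rds-custom-oracle \
  --engine custom-oracle-ee \
  --engine-version 19.custom_cev1 \
  --db-instance-class db.m5.xlarge \
  --allocated-storage 200 \
  --master-username admin \
  --master-user-password 'ChangeMe123!' \
  --kms-key-id arn:aws:kms:eu-west-1:111122223333:key/example \
  --custom-iam-instance-profile AWSRDSCustomInstanceProfile \
  --backup-retention-period 7 \
  --no-multi-az
```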
customizing and managing your rds custom instance once the rds custom instance is created with the cev, you can connect via ssh and perform additional customizations as needed, such as applying patches, installing additional software, and tuning the database settings. best practices security: implement robust security measures, including vpc isolation, security groups, and iam roles. monitoring: continuously monitor the performance and health of your rds custom instances. backup and recovery: regularly back up your data and test your recovery procedures. optimization: regularly review and optimize your database configurations and queries. documentation: maintain thorough documentation of your custom configurations and procedures. challenges and considerations management overhead: rds custom requires more hands-on management compared to standard rds. cost: the flexibility of rds custom may come with higher costs due to the need for larger instances and additional storage. complexity: customizing and managing an rds custom environment can be complex and may require specialized knowledge of oracle databases and aws services. conclusion aws rds custom for oracle provides a robust, flexible, and scalable solution for businesses with advanced oracle database requirements. by adopting rds custom, organizations can achieve a balance between control and convenience, optimizing their database environments to meet their specific needs while benefiting from the reliability and scalability of aws's infrastructure. with proper management and adherence to best practices, rds custom for oracle can significantly enhance your database operations, driving business growth and efficiency. as aws continues to innovate, we can expect further enhancements and new features for rds custom for oracle. staying updated with aws announcements and participating in aws training and certification programs can help businesses stay ahead of the curve and fully utilize the capabilities of their rds custom instances."
},
{
"title":"AI Code Generators Comparison",
"body":"Introduction In recent years, software development has become increasingly complex and time-consuming due to the ever-increasing demand for high-quality, scalable, and efficient applications. To address this challenge, many developers turn to artificial intelligence (AI) and machine learning (ML) technologies to automate various aspects of software development, such as coding tasks, test...",
"post_url":"https://www.kloia.com/blog/ai-code-generators-comparison",
"author":"Bilal Unal",
"publish_date":"23-<span>Jul<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/bilal-unal",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/AI-Code-Generators-Comparison-blog.webp",
"topics":{ "aws":"AWS","infrastructure":"infrastructure","ai":"ai","mistral-ai":"Mistral AI","aws-codewhisperer":"AWS CodeWhisperer","github-copilot":"GitHub Copilot","google-bard":"Google Bard","chatgpt":"ChatGPT","ai-code-generators":"AI Code Generators" },
"search":"23 <span>jul</span>, 2024ai code generators comparison aws,infrastructure,ai,mistral ai,aws codewhisperer,github copilot,google bard,chatgpt,ai code generators bilal unal introduction in recent years, software development has become increasingly complex and time-consuming due to the ever-increasing demand for high-quality, scalable, and efficient applications. to address this challenge, many developers turn to artificial intelligence (ai) and machine learning (ml) technologies to automate various aspects of software development, such as coding tasks, testing, and debugging. one specific area where ai is gaining popularity is code generation. in this report, we will compare and contrast different code generator ai tools available in the market and evaluate their key features, functionalities, performance metrics, ease of use, integration capabilities, security and privacy considerations, cost and pricing structure, customer support and technical resources, and future outlook. by doing so, we aim to help readers make informed decisions when selecting the best code generator ai tool for their needs. background and overview of code generator ai tools code generators powered by ai or ml technologies have been around for several decades now. however, the advent of cloud computing and big data analytics has accelerated the adoption of such tools among software developers. these days, various types of code generation tools exist, including low-code tools, no-code tools, code completion tools, code refactoring tools, and auto-generated apis. each of these tools uses different techniques and algorithms to generate code automatically, but they all share one common goal: to speed up the software development process while ensuring quality, maintainability, and compatibility across multiple programming languages, frameworks, and environments. the rise of ai-powered code generators has sparked significant interest from both academia and industry. researchers continue to explore new ways to improve the accuracy, efficiency, robustness, and creativity of these tools, often using deep learning architectures like recurrent neural networks (rnn), transformer models, or generative adversarial networks (gan). meanwhile, companies such as microsoft, google, ibm, amazon, and sap offer their own code generation services or integrations key features and functionalities comparison one of the most important factors to consider when comparing code generator ai tools is their key features and functionalities. below are some of the most commonly evaluated criteria: code completion: this feature allows users to type partial code snippets and get suggestions for completing them automatically. code completion is particularly useful when working with frequently used constructs, libraries, or modules. code generation: this feature generates entire pieces of code based on user input, usually in response to a prompt or query. code generation can be achieved through templates, patterns, or rules that define how the generated code should look and behave. code translation: this feature translates code written in one language to another language automatically. code translation is especially helpful when working with cross-platform or multi-lingual projects. language support: different code generation ai tools may have varying levels of support for different programming languages, frameworks, libraries, and platforms. it is essential to choose a tool that supports the languages and technologies you plan to work with. 
for instance, if you are building a web application using javascript and react, you would want to select a code generator tool that provides excellent support for those technologies. integrations: many code generator ai tools integrate with other development tools, cloud platforms, and collaboration platforms to enhance their usability and functionality. integrating with external tools and services can streamline the development process, facilitate teamwork, and ensure consistency and reliability across different stages of development. overall, the choice of code generator ai tools depends on your specific requirements, preferences, constraints, and priorities. by evaluating the key features and functionalities of different tools against your criteria, you can identify the ones that best fit your needs and goals, and prioritize them accordingly. by conducting a thorough comparison of code generator ai tools based on their key features and functionalities, you can gain insights into their strengths, weaknesses, trade-offs, and opportunities, and make informed decisions about which tools to adopt and leverage for your projects. in the following sections, we will dive deeper into each of these criteria and analyze them in more detail. chatgpt chatgpt is a model from openai and is based on a large language model trained on diverse text from the internet. it was introduced in late 2021 and quickly gained popularity due to its ability to generate human-like responses. while it wasn't designed specifically for coding, it can assist with various tasks such as code snippets generation, explaining concepts, debugging, and more. capabilities syntax assistance: chatgpt can suggest complete or partial lines of code to fill in the gaps when writing a program. it understands different programming paradigms and syntaxes. debugging: if you provide an error message, it can help you identify possible causes and propose solutions. explanation of concepts: when asked about complex coding topics or algorithms, chatgpt provides detailed explanations that are easy to understand for learners and developers alike. multilingual support: chatgpt can generate code in various languages including python, javascript, c++, etc. interactive learning: with conversational capabilities, it makes the learning experience interactive and engaging. limitations context-dependent: chatgpt may require more context to provide accurate suggestions or explanations compared to other code generation ai tools like github copilot. dependency on prompts: it relies heavily on how the question is framed, and it might not fully understand the context of a given programming problem without clear instructions. scalability: while chatgpt has shown remarkable progress in generating human-like text, it may struggle with larger codebases or complex projects. github copilot github copilot, developed by microsoft, uses a deep learning model for generating code suggestions in real-time as you type. it was released in 2021 in preview form and has since gained significant attention among developers due to its potential for enhancing productivity. github copilot is integrated into popular editors like visual studio code, atom, and sublime text. capabilities intelligent autocompletion: github copilot offers more accurate code suggestions based on the context of your project and the surrounding code. multilingual support: it supports multiple programming languages such as javascript, typescript, python, c++, go, ruby, rust, html\/css, and php. 
machine learning model: github copilot leverages machine learning to learn from millions of public repositories on github, making its suggestions more contextually relevant. real-time integration: it is seamlessly integrated with popular ides like visual studio code, allowing you to directly utilize its suggestions in your development workflow. continuous learning: as you use it, the ai continually learns and adapts to your coding style and preferences, making its suggestions increasingly accurate over time. limitations dependence on github: since github copilot is an extension of github, it requires an active internet connection to function properly. this might not be ideal for developers working in areas with limited connectivity or those who prefer offline development. ethical and privacy concerns: github copilot's deep learning model is trained on a vast dataset of open-source projects, raising questions about intellectual property and the ethical implications of using others' work for generating suggestions. google bard google bard is an experimental conversational ai tool developed by google research, and it uses the large language model (llm) named gemini to generate human-like responses to text inputs. google bard was released in 2023, and its primary focus is on natural language understanding and generation rather than code generation specifically. capabilities syntax assistance: although not its main focus, gemini can help generate code snippets or fill in missing parts of a line for python programming. it may have limited support for other languages. however, it does not have any integration with popular ides or visual studio code (vscode). debugging and explaining concepts: similar to chatgpt, google bard can debug simple issues and explain various coding concepts in detail. interactive learning: with its conversational capabilities, google bard makes the learning experience interactive and engaging. continuous improvement: as a research project, google bard is continually evolving, with updates and improvements being made regularly to enhance its capabilities. limitations limited coding-specific functionality: while google bard can generate code snippets or assist with simple coding tasks for python, it may not be as effective as specialized ai tools like github copilot or mistral when working with other programming languages or directly within ides and vscode. contextual understanding: although gemini is a powerful llm, it might struggle to fully understand complex programming problems and provide accurate solutions without clear instructions. dependence on prompts: similar to chatgpt, google bard relies heavily on how the question is framed to generate appropriate responses. lack of integration: google bard does not have any official integration with popular ides or vscode, limiting its accessibility for developers working in these environments. mistral mistral is a french company specializing in artificial intelligence, founded by researchers arthur mensch, timoth\u00E9e lacroix, and guillaume lample in april 2023. previously employed at meta and google, they have experience in the development of large language models (llms). capabilities multilingual support: mistral supports various popular programming languages such as java, python, javascript, and c++. 
integration with popular ides and vscode: mistral can be easily integrated into popular ides (integrated development environments) and visual studio code (vscode), making it an accessible option for developers working in these environments. context awareness: mistral is designed to understand the context of your codebase, enabling it to generate relevant and accurate code snippets suggestions based on your project's requirements. continuous improvement: being a company with dedicated resources, mistral can invest heavily in research, development, and updates to ensure its capabilities remain cutting-edge. limitations dependence on correct prompts: similar to other ai tools like chatgpt and google bard, mistral relies heavily on the correctness and specificity of prompts for generating appropriate code suggestions. potential integration challenges: mistral might face challenges integrating with specific ides or niche coding environments due to their unique features and requirements. aws codewhisperer aws codewhisperer, developed by amazon web services (aws), is an ai-driven tool that aims to enhance the process of developing applications specifically for the aws cloud environment. it assists developers in writing aws-specific code, optimizing performance, and ensuring best practices. capabilities aws integration: codewhisperer is deeply integrated with aws services, providing context-aware suggestions and code snippets tailored to aws cloud development, making it an ideal choice for developers working in the aws ecosystem. architecture recommendations: the tool suggests architecture patterns that align with aws best practices, helping developers create scalable, secure, and optimized applications. performance optimization: codewhisperer can identify potential performance bottlenecks in your code and recommend optimizations to enhance application efficiency. security best practices: the tool offers guidance on security-related code implementations, ensuring that applications adhere to aws security standards, maintaining the security of your aws infrastructure and data. infrastructure as code (iac): codewhisperer assists in writing infrastructure as code templates for provisioning aws resources, making it easier to manage and deploy infrastructure using popular iac tools like terraform, cloudformation, or serverless application model (sam). conclusion in this comparison, we evaluated ai-powered code generators including chatgpt, github copilot, google bard, and mistral. each tool has unique capabilities, strengths, and limitations. when selecting a code generator ai tool, consider factors such as code completion, generation, translation, language support, and integrations. thoroughly test and experiment to assess performance and compatibility with your coding style. while ai-powered code generators offer benefits, be aware of their limitations and ethical considerations. as ai advances, expect further improvements in code generation tools. carefully evaluate and select the appropriate tool for your needs, and exercise critical thinking when using ai-generated code."
},
{
"title":"Creating End-to-End Web Test Automation Project from Scratch \u2014 Part 5",
"body":"Let\u2019s Integrate Our Dockerized Web Project with CI\/CD Pipeline! Welcome to the 5th part of the blog post series called \u201CCreating an End-to-End Web Test Automation Project from Scratch.\u201D So far, we have covered many topics, from the creation of a project to containerizing it with Docker. If you need to review the previous chapters, you can find the articles at the links below. Let\u2019s Creat...",
"post_url":"https://www.kloia.com/blog/creating-end-to-end-web-test-automation-project-from-scratch-part-5",
"author":"Muhammet Topcu",
"publish_date":"11-<span>Jul<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/creating-end-to-end-web-test-automation-project-from-scratch.webp",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","docker":"Docker","continuous-integration":"Continuous Integration","continuous-delivery":"Continuous Delivery","quality-assurance":"Quality Assurance","selenium":"Selenium","ruby":"Ruby","hooks":"hooks","qa":"QA","qateam":"qateam","endtoend":"endtoend","webhook-configuration":"Webhook Configuration","jenkins-plugins":"Jenkins Plugins","jenkinsfile":"Jenkinsfile","ci-cd-pipeline-integration":"CI\/CD Pipeline Integration" },
"search":"12 <span>jul</span>, 2024creating end-to-end web test automation project from scratch \u2014 part 5 test automation,software testing,docker,continuous integration,continuous delivery,quality assurance,selenium,ruby,hooks,qa,qateam,endtoend,webhook configuration,jenkins plugins,jenkinsfile,ci\/cd pipeline integration muhammet topcu let\u2019s integrate our dockerized web project with ci\/cd pipeline! welcome to the 5th part of the blog post series called \u201Ccreating an end-to-end web test automation project from scratch.\u201D so far, we have covered many topics, from the creation of a project to containerizing it with docker. if you need to review the previous chapters, you can find the articles at the links below. let\u2019s create and configure our web test automation project! let\u2019s write our test scenarios! bonus: recording failed scenario runs in ruby let\u2019s configure our web test automation project for remote browsers and parallel execution let\u2019s dockerize our web test automation project bonus: recording scenario runs on docker with selenium video! let\u2019s integrate our dockerized web test automation project with ci\/cd pipeline! auto-scaling and kubernetes integration with keda in this chapter, you will integrate ci\/cd into your project with jenkins and execute your test in a docker container on a scheduled basis. also, you will create a job that rebuilds and pushes your docker image to the hub! let\u2019s start! jenkins installation first, install jenkins via homebrew: brew install jenkins-lts after the installation, start jenkins: brew services start jenkins-lts the default port for jenkins is 8080. go to the jenkins page at \u201Chttp:\/\/localhost:8080\u201D jenkins will create a password for you in a file and state its location. go to that location copy the password paste it into the input field on jenkins page and continue. then create a username\/password pair for yourself. for the official installation guide, please refer to jenkins documentation. configurations environment before creating jobs for jenkins, you need to configure it first. go to dashboard -> manage jenkins -> configure system. under the global properties, check environment variables. by stating your path here, you enable jenkins to run ruby, docker and other applications on your machine. you can get all your path variables via the command below: echo $path note that you need to replace blanks with colons. for example: your path: \/users\/muhammettopcu\/library\/android\/sdk\/tools \/users\/muhammettopcu\/library\/android\/sdk\/platform-tools \/users\/muhammettopcu\/.jbang\/bin \/opt\/homebrew\/bin \/opt\/homebrew\/sbi your path for jenkins: \/users\/muhammettopcu\/library\/android\/sdk\/tools:\/users\/muhammettopcu\/library\/android\/sdk\/platform-tools:\/users\/muhammettopcu\/.jbang\/bin:\/opt\/homebrew\/bin note: if you are on windows, concatenate them with semicolons. plug-ins now go to dashboard -> manage jenkins -> manage plugins. in this screen, you will install the plugins that you need: cucumber reports: this enables you to have test reports visually. git parameter: this lets you pick a branch on your project and run the code inside. github (if not installed already): this one enables you to integrate with github features such as webhooks. cloudbees docker build and publish: this is to build a docker image and publish it. credentials now you will configure credentials for both github and dockerhub. you need to navigate to dashboard -> manage jenkins -> credentials. 
then click \u201C(global)\u201D domain. click +add credentials button to add your credentials. for dockerhub: kind: username with password. username: username of your dockerhub account. password: access token generated on dockerhub. to create your access token, log in to your dockerhub account. go to account settings: then go to the security panel. click on the new access token button. give your access token a name and r-w-d rights. then click on generate. id: id to be used in jenkinsfile or for other configurations: dockerhub. description: description of the credential. here, dockerhub again. for github: kind: username with password. username: username of your github account. password: access token generated on github. to create your access token, log in to your github account. go to settings -> developer settings -> personal access tokens and click the generate new token button. give a name to your token and generate it. id: id to be used in jenkinsfile or for other configurations: github. description: description of the credential. here, github again. ruby web project also, let\u2019s make a small addition to your code to generate cucumber reports: first, create a new folder named \u201Creports\u201D under your project directory. then in the cucumber.yml file (if you don\u2019t have yet, create one under the project directory), add `--format json --out reports\/reports.json` to your default profile. this will let you create test execution reports under the reports folder in json format, named reports.json. `default: \"--format progress --format json --out reports\/reports.json\"` dockerfile this time, you will not build your docker image with the files residing in your local. instead, you will get the files directly from the github repository. so you need to alter your dockerfile! the first couple of lines are the same: from ruby:3.0 run apt update && apt install git #run apk update && apk add --no-cache build-base && apk add git run apt-get install -y ffmpeg && apt-get install bc workdir \/usr\/src\/app you will make changes starting from this line: run curl -o \/usr\/src\/app\/gemfile https:\/\/raw.githubusercontent.com\/kloia\/dockerize-ruby-web-project\/master\/gemfile with the `curl` command, you download your gemfile from your remote directory to the workdir. note that https:\/\/raw.githubusercontent.com enables you to download individual files as plain text. run gem install bundler && bundle install first, install your gems. run cd \/usr\/src\/app && git init && git remote add origin https:\/\/github.com\/kloia\/dockerize-ruby-web-project.git && git fetch && git checkout -f origin\/master now let\u2019s look at the above run command bit by bit: - cd \/usr\/src\/app` navigates to workdir. - `git init` initializes git. - `git remote add origin https:\/\/github.com\/kloia\/dockerize-ruby-web-project.git` adds your existing github project as a remote repository to git. - `git fetch` fetches everything from the remote repository. - `git checkout -f origin\/master` forces checkout to origin\/master. \u201Cwhy force it?\u201D otherwise, you can\u2019t checkout since the gemfile you downloaded in the earlier step conflicts with this command. the rest of the file is the same. 
cmd parallel_cucumber -n 2 expose 5000:5000 so your final dockerfile looks like this below: from ruby:3.0 run apt update && apt install git #run apk update && apk add --no-cache build-base && apk add git run apt-get install -y ffmpeg && apt-get install bc workdir \/usr\/src\/app run curl -o \/usr\/src\/app\/gemfile https:\/\/raw.githubusercontent.com\/kloia\/dockerize-ruby-web-project\/master\/gemfile run gem install bundler && bundle install run cd \/usr\/src\/app && git init && git remote add origin https:\/\/github.com\/kloia\/dockerize-ruby-web-project.git && git fetch && git checkout -f origin\/master cmd parallel_cucumber -n 2 expose 5000:5000 jenkins job creations test run job with parameters let\u2019s create a jenkinsfile with your favourite file editor. the first part of your file is: pipeline { agent any parameters { choice(name: 'headless', choices: ['true', 'false'], description: '') choice(name: 'browser', choices: ['remote-chrome', 'remote-firefox'], description: '') string(name: 'threadcount', defaultvalue: '1', description: '') string(name: 'retry', defaultvalue: '1', description: '') gitparameter name: 'branch_tag', type: 'pt_branch', defaultvalue: 'master', selectedvalue: 'default', quickfilterenabled: true, sortmode: 'descending_smart', tagfilter: '*', branchfilter: 'origin\/(.*)', userepository: '.*.git', description: 'select your branch' } - in the parameters section, you create your sections for the parameters, which you will use while running parallel_cucumber command in the command line. - the gitparameter lets you choose different branches on which you run your code. note that you also have a video_recording branch with the configurations of chapter 2.1. now next, you are going to define your stages. stages { stage('checkout & run tests') { steps { sh \"docker run muhammettopcu\/dockerize-ruby-web:latest git checkout ${params.branch_tag} && parallel_cucumber -n ${params.threadcount.tointeger()} -o 'headless=${headless} browser=${browser} --retry ${params.retry.tointeger()}'\" } } } } you use the `docker run` command with two parts: - `git checkout ${params.branch_tag}` checkouts to a specified branch. - `parallel_cucumber -n ${params.threadcount.tointeger()} -o 'headless=${headless} browser=${browser} --retry ${params.retry.tointeger()}'` runs cucumber with the specified arguments in jenkins build. and last, let\u2019s define your post actions. post { always { cucumber([ buildstatus: 'null', customcssfiles: '', customjsfiles: '', failedfeaturesnumber: -1, failedscenariosnumber: -1, failedstepsnumber: -1, fileincludepattern: '**\/*.json', jsonreportdirectory: 'reports', pendingstepsnumber: -1, reporttitle: 'cucumber report', skippedstepsnumber: -1, sortingmethod: 'alphabetical', undefinedstepsnumber: -1 ]) } success{ script{ sh \"echo successful\" } } failure{ script{ sh \"echo failed\" } } aborted{ script{ sh \"echo aborted\" } } } } in the always action, you define your cucumber plug-in. note that `jsonreportdirectory: 'reports'` is the directory that you created for the report files in your project. you might ask \u201Cbut i am running my tests in a docker container. the reports would be generated inside the container. how will jenkins reach these?\u201D a valid point, indeed! it works because docker plugin automatically syncs jenkins\u2019 workspace and the container\u2019s workdir. if it didn\u2019t, you would have needed to do it manually with volumes when executing `docker run` command. 
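If that synchronization did not happen automatically, the manual alternative hinted at above would look roughly like the sketch below. The image name comes from this post; the mount path assumes the reports folder sits under the container's WORKDIR, and `$WORKSPACE` is the standard Jenkins build-workspace variable.

```bash
# Hypothetical manual alternative: mount the Jenkins workspace's reports folder
# into the container so the Cucumber JSON ends up where the report plugin expects it.
docker run \
  -v "$WORKSPACE/reports:/usr/src/app/reports" \
  muhammettopcu/dockerize-ruby-web:latest \
  parallel_cucumber -n 2
```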
for the success, failure and aborted events, there is only a simple echo command. you may want to implement a notification script for your favourite messaging applications such as discord or slack. if you are interested in doing this, me and my colleagues have covered this topic in a recent webinar, here is the link for the recording. this is it. let\u2019s save and move it to your project directory. it is named jenkins_cucumber without any extensions. do not forget to push it to your github repository since you\u2019ll make jenkins pull it from there. now go to jenkins and create your first job with the jenkinsfile that you created. first, you need to click the new item button on the dashboard. now write your job name and choose pipeline option. now configure your job. if you want it to run every hour such as 07:00, 08:00, you need to use `0 * * * *` this is called \u201Ccron expression\u201D. you can configure it according to your own needs. for detailed information, please refer to jenkins documentation. now in the pipeline section, choose pipeline script from scm and as scm choose git. now state your github repository and credentials. and finally, you need to give the path of your jenkinsfile here: note that this path is of your `jenkinsfile` residing in your github repository. if you did not put it in the root directory, then change the path accordingly. now you can see your job in the dashboard! let\u2019s spin up your selenium server with docker compose and run your job manually, just for this time. choose your parameters and click \u201Cbuild\u201D button. the tests are running: let\u2019s see your results: some of them failed. open the build on the left side by clicking on it: on the opened panel, click cucumber reports: hey, your test results are visualized! automated docker image build with github webhook now let\u2019s create a jenkins job that rebuilds your project image and pushes it to dockerhub. you will trigger this job with github webhook, which pings jenkins only when there is a push action in your repository. pipeline { agent any options { builddiscarder(logrotator(numtokeepstr: '5')) } environment { dockerhub_credentials = credentials('dockerhub') } stages { stage('build') { steps { sh \"docker build -t muhammettopcu\/dockerize-ruby-web:latest https:\/\/github.com\/kloia\/dockerize-ruby-web-project.git\" } } stage('login') { steps { sh \"echo $dockerhub_credentials_psw | docker login -u $dockerhub_credentials_usr --password-stdin\" } } stage('push') { steps { sh \"docker push muhammettopcu\/dockerize-ruby-web:latest\" } } } post { always { sh 'docker logout' } } } 1. in the options block, you restrict your job to keep only the latest five builds. the older ones are discarded. 2. in the environment block, you assign dockerhub credentials to a variable named dockerhub_credentials. 3. you have three stages this time: - build: it builds your project with a tag number. (in this instance, i just wanted to overwrite my builds, so i used `latest` as a tag number. you may want to implement an algorithm to track your version numbers.) - login: it logins to dockerhub account with the credentials you created in the previous steps. - push: it pushes your newly created image to your dockerhub repository. 4. and in the always block, you log out from docker. now let\u2019s create your job. you know the drill this time. the only thing different from the previous job is the build triggers section. choose \u201Cgithub hook trigger for gitscm polling\u201D option. 
note: the jenkins file name for this job should be different from the first one. now go to the directory of your project on github. in there, navigate to settings -> webhooks. on this page, click `add webhook` button: the payload url should be in this form: https:\/\/your-jenkins-domain\/github-webhook\/ since i am testing jenkins on my local machine, i need to expose my 8080 port to the internet. either i use \u201Cport forwarding\u201D or \u201Creverse proxy\u201D for this. my internet provider uses cgnat, so the first one is not an option for me. therefore i chose localhost.run service. why did i prefer this service? ngrok is another option but the free version puts an intermediate layer that gives warning to the user who wants to reach the website. it is for phishing protection, but because of this layer webhook can not reach jenkins. if you already own a premium ngrok service or if you run your jenkins in any other public server, you may not need to use this option. so, you need to create an ssh key pair for this: type `ssh-keygen -b 4096 -t rsa` and press enter. a file name prompt will be shown. simply press enter to use the default value. you will enter the passphrase for the ssh key pair in the upcoming step. now expose your port with the command below: `ssh -r 80:localhost:8080 nokey@localhost.run` type your passphrase and press enter. it gave me the \u201Chttps:\/\/a82b03292e9fbd.lhr.life\u201D url to reach my port. write your own to webhook\u2019s payload section: then choose \u201Cjust the push event.\u201D option and click `add webhook` button. note that you can specify your webhook\u2019s triggers with the \u201Clet me select individual events.\u201D option. now you have your webhook ready. let\u2019s make a change in your project file and push it to your repository! for demonstration purposes, i changed readme.md file by adding just a whitespace character. and jenkins job got triggered automatically! now let\u2019s check the dockerhub repository: the new build went through as well! with this, we concluded ci\/cd integration. so far, we covered how to install and configure jenkins, create jenkinsfiles, create and configure webhooks, create pipelines, and set up reverse proxying for your port. in the next and final part of this blog post series, you will integrate kubernetes with your project! see you :)"
},
{
"title":"AI Assistant for Enterprise: My Experience with Amazon Q Connectors",
"body":"In today\u2019s dynamic business environment, quick and easy access to information is essential for maintaining efficiency. Companies often rely on multiple data sources\u2014like public websites, Google Drive, Slack, and GitHub\u2014to store and manage documents. But what if you could bring these sources together into a single, centralized AI assistant? Amazon Q, an innovative tool, brings together mu...",
"post_url":"https://www.kloia.com/blog/ai-assistant-for-enterprise-my-experience-with-amazon-q-connectors",
"author":"Derya (Dorian) Sezen",
"publish_date":"30-<span>Jun<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/ai_assistant.webp",
"topics":{ "genai":"genai" },
"search":"21 <span>aug</span>, 2024ai assistant for enterprise: my experience with amazon q connectors genai derya (dorian) sezen in today\u2019s dynamic business environment, quick and easy access to information is essential for maintaining efficiency. companies often rely on multiple data sources\u2014like public websites, google drive, slack, and github\u2014to store and manage documents. but what if you could bring these sources together into a single, centralized ai assistant? amazon q, an innovative tool, brings together multiple data sources into a single, centralized ai assistant, revolutionizing the way enterprises manage and access information. an ai assistant, also known as a chatbot, is a software application that uses artificial intelligence (ai) to simulate human-like conversations with users. these assistants are becoming indispensable in enterprise settings, streamlining processes and enhancing user experiences. today, i\u2019m sharing my journey with amazon q, an ai assistant announced during re:invent 2023, focusing on its connectors to create an enterprise-specific chatbot. what is amazon q? amazon q is an ai-powered assistant designed to simulate human-like conversations, streamlining processes and enhancing user experiences in enterprise settings. during my journey with amazon q, i explored its ability to integrate with various data sources to create a highly efficient, enterprise-specific chatbot. public website google drive slack github why centralization matters: the amazon q advantage amazon q serves as more than just a chatbot\u2014it\u2019s a replacement for traditional intranets, offering a more user-friendly and interactive way to access company data. by bringing everything into one place, your organization can achieve greater efficiency and productivity. my journey with amazon q: setting up your enterprise chatbot step 1: creating amazon q and setting up authentication to get started, i created amazon q and defined the authentication mechanism, opting for google login to streamline access for employees. step 2: connecting data sources integrating various data sources required unique authentication methods. despite some challenges due to limited documentation, i managed to securely store credentials in aws secrets manager for smooth integration. credentials for each data source were securely stored in aws secrets manager. step 3: integrating github github integration was straightforward with the right token, simplifying the process.. step 4: tackling google drive google drive integration posed some challenges, requiring the creation of an application under the google admin console with a read-only role. credentials were securely managed using the aws secrets manager. store the credentials again under aws secrets manager: step 5: adding slack finally, i integrated slack, ensuring careful handling of tokens and permissions to complete the setup. overcoming challenges: key insights and resolutions while working with amazon q, i encountered a few hurdles, particularly around synchronization and connector limitations. here\u2019s what i learned: interval management: keep the sync interval narrow. google api and slack api may block high-volume document crawls. connector limitations: current connectors are not optimized for high-volume crawling, something that will need to be addressed in future updates. 
the chatbot experience: a glimpse of the future once the documents were indexed, the chatbot started taking shape, offering a seamless experience for employees seeking information across various platforms. conclusion: is amazon q right for your enterprise? amazon q shows great promise as a tool for centralizing enterprise data, potentially replacing traditional intranet portals. however, it\u2019s important to be aware of certain limitations: language support: amazon q business is currently optimized for english. connector documentation: the documentation could be more comprehensive, especially for high-volume environments. despite these challenges, amazon q is a powerful tool for enterprise ai assistants, and with future updates, it\u2019s likely to become even more robust and user-friendly. stay tuned for more updates and insights on how to leverage ai assistants like amazon q to boost your enterprise\u2019s productivity."
},
{
"title":"Load Test Game-Changer: k6 Browser",
"body":"You are reading the fourth post in the performance testing series. In case you missed the previous post, here they are: A Beginner\u2019s Guide to Performance Testing New Era in Performance Testing: k6 k6 Report Guideline: Understanding Metrics and Output Types k6 is one of the first performance testing programs to enable both protocol and browser testing. You may now execute your web applica...",
"post_url":"https://www.kloia.com/blog/load-test-game-changer-k6-browser",
"author":"Elif Ozcan",
"publish_date":"27-<span>Jun<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/elif-ozcan",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/load-test-game-changer-k6-browser-blog.webp",
"topics":{ "test-automation":"Test Automation","k6":"k6","qa":"QA","performance-testing":"Performance Testing","load-testing":"Load Testing","qateam":"qateam","browser-testing":"Browser Testing","javascript-testing":"JavaScript Testing","user-experience":"user experience","load-test-game":"Load Test Game" },
"search":"27 <span>jun</span>, 2024load test game-changer: k6 browser test automation,k6,qa,performance testing,load testing,qateam,browser testing,javascript testing,user experience,load test game elif ozcan you are reading the fourth post in the performance testing series. in case you missed the previous post, here they are: a beginner\u2019s guide to performance testing new era in performance testing: k6 k6 report guideline: understanding metrics and output types k6 is one of the first performance testing programs to enable both protocol and browser testing. you may now execute your web application's performance test in the web browser, just like any other user. it is a game-changer in the load testing arena. this functionality allows for simulating real user scenarios more accurately, capturing performance metrics, and analysing web application behaviour comprehensively. why should you use this feature? using this feature, you can receive browser-specific metrics like total page load time. it makes sure that all elements are interactive, checks for loading spinners that take a long time to disappear, and monitors how the front end responds to thousands of simultaneous protocol-level requests. \u2139take notice! this functionality is currently in the experimental stage, and k6 says it is still working to make the module stable. to use this feature, make sure you are using the latest k6 version and have installed a chromium-based browser. knowing this, let's move on to exploration. a simple browser test before anything else, create a new js file and import the browser module. in the options topic, the executor and browser type are mandatory, and the browser type must be set to 'chromium'. you can select a variety of executors; for the first example, i used 'shared-iterations'. const page = browser.newpage(); if you want to change the size of the browser window, you can add the following piece of code. page.setviewportsize({ width: 1425, height: 1818 }); after getting the page, you can interact with it. in this example, i visit a url, take a screenshot of the page, and close the page. export default async function(){ const page = browser.newpage(); page.setviewportsize({ width: 1425, height: 1818 }); await page.goto('https:\/\/hotel.testplanisphere.dev\/en-us\/login.html'); page.screenshot({path: 'screenshot.png'}); page.close(); } let's run the script with this command: k6 run script.js you didn\u2019t see the browser, did you? k6 has default arguments when launching the browser and the headless default value is true. if you want to change this you can use this command: k6_browser_headless=false k6 run script.js it\u2019s time to improve the script to interact with elements on the page such as click, select, and type. the browser module supports css and xpath selectors. page.locator(\"input[name='email']\").type('clark@example.com'); page.locator(\"#password\").type('pasword'); page.locator('#login-button').click(); pass the selector of the element you want to find on the page to page.locator(). page.locator() will create and return a locator object, which you can later use to interact with the element. example of the above, it finds the email element and writes \u2018clark@example.com\u2019, finds the password element and writes \u2018password\u2019 and clicks the login button. you can also write these scripts using the syntax below. the functionality is the same, but i find the markup simpler and more reusable. 
const emailtextbox = page.locator(\"input[name='email']\"); emailtextbox.type(\"clark@example.com\"); const passwordtextbox = page.locator(\"#password\"); passwordtextbox.type(\"pasword\"); const loginbutton = page.locator('#login-button'); loginbutton.click(); let\u2019s complete the script by adding some validations. k6 provides an assertion structure with 'check' that is similar to other framework assertions. unlike others, failed checks do not cause the test to abort or end with a failed status. k6 keeps track of the number of failed checks as the test runs. check(page, { 'login is successful': page.locator(\"#logout-form\").isvisible() === true, 'page title is correct': page.title().includes('mypage') }) after running the script, you can see the results of the logout element being visible on the page and the assertions about the page title among its outputs. \/\\ |\u203E\u203E| \/\u203E\u203E\/ \/\u203E\u203E\/ \/\\ \/ \\ | |\/ \/ \/ \/ \/ \\\/ \\ | ( \/ \u203E\u203E\\ \/ \\ | |\\ \\ | (\u203E) | \/ __________ \\ |__| \\__\\ \\_____\/ .io execution: local script: script.js output: - scenarios: (100.00%) 1 scenario, 1 max vus, 10m30s max duration (incl. graceful stop): * ui: 1 iterations shared among 1 vus (maxduration: 10m0s, gracefulstop: 30s) \u2713 login is successful \u2713 page title is correct if the checker gets an error, you can see it in the terminal as follows. \/\\ |\u203E\u203E| \/\u203E\u203E\/ \/\u203E\u203E\/ \/\\ \/ \\ | |\/ \/ \/ \/ \/ \\\/ \\ | ( \/ \u203E\u203E\\ \/ \\ | |\\ \\ | (\u203E) | \/ __________ \\ |__| \\__\\ \\_____\/ .io execution: local script: scripts\/browsertest.js output: - scenarios: (100.00%) 1 scenario, 1 max vus, 10m30s max duration (incl. graceful stop): * ui: 1 iterations shared among 1 vus (maxduration: 10m0s, gracefulstop: 30s) \u2713 login is successful \u2717 page title is correct \u21B3 0% -- \u2713 0 \/ \u2717 1 combine tests with api and browser you can run both browser-level and protocol-level tests in a single script by using the same steps. in this way, you can track the condition of your apis under load and learn about how the front end performs. import { browser } from 'k6\/experimental\/browser'; import { check } from 'k6'; import http from 'k6\/http'; export const options = { scenarios: { browser: { executor: 'constant-vus', exec: 'browsertest', vus: 3, duration: '10s', options: { browser: { type: 'chromium', }, }, }, api: { executor: 'constant-vus', exec: 'api', vus: 20, duration: '1m', }, }, }; export async function browsertest() { const page = browser.newpage(); try { await page.goto('https:\/\/test.k6.io\/browser.php'); page.locator('#checkbox1').check(); check(page, { 'checkbox is checked': page.locator('#checkbox-info-display').textcontent() === 'thanks for checking the box', }); } finally { page.close(); } } export function api() { const res = http.get('https:\/\/test.k6.io\/news.php'); check(res, { 'status is 200': (r) => r.status === 200, }); } understanding the browser metrics in the previous blog post, i examined the metrics in detail. now, i will explain the metrics specific to the browser. 
browser_data_received.......: 712 kb 281 kb\/s browser_data_sent...........: 4.4 kb 1.7 kb\/s browser_http_req_duration...: avg=46ms min=4.01ms med=26.11ms max=138.57ms p(90)=129.53ms p(95)=134.05ms browser_http_req_failed.....: 9.09% \u2713 1 \u2717 10 browser_web_vital_cls.......: avg=0.042967 min=0.042967 med=0.042967 max=0.042967 p(90)=0.042967 p(95)=0.042967 browser_web_vital_fcp.......: avg=188.75ms min=168.8ms med=188.75ms max=208.7ms p(90)=204.71ms p(95)=206.7ms browser_web_vital_fid.......: avg=1.7ms min=1.7ms med=1.7ms max=1.7ms p(90)=1.7ms p(95)=1.7ms browser_web_vital_lcp.......: avg=188.75ms min=168.8ms med=188.75ms max=208.7ms p(90)=204.71ms p(95)=206.7ms browser_web_vital_ttfb......: avg=55.1ms min=26.29ms med=55.1ms max=83.9ms p(90)=78.14ms p(95)=81.02ms checks......................: 100.00% \u2713 2 \u2717 0 data_received...............: 0 b 0 b\/s data_sent...................: 0 b 0 b\/s iteration_duration..........: avg=1.53s min=1.53s med=1.53s max=1.53s p(90)=1.53s p(95)=1.53s iterations..................: 1 0.394187\/s vus.........................: 1 min=1 max=1 vus_max.....................: 1 min=1 max=1 - browser_data_received: the amount of data received by the browser. - browser_data_sent: the amount of data sent by the browser. - browser_http_req_duration: average duration of http requests - browser_http_req_failed: the percentage of failed http requests. - browser_web_vital_cls: cumulative layout shift, a measure of visual stability. - browser_web_vital_fcp: first contentful paint, the time it takes for the first piece of content to be rendered. - browser_web_vital_fid: first input delay, the time it takes for the page to respond to the first user interaction. - browser_web_vital_lcp: largest contentful paint, the time it takes for the largest content element to be rendered. - browser_web_vital_ttfb: time to first byte, the time it takes for the browser to receive the first byte of the response from the server. conclusion k6 represents a new stage in the field of load testing with the browser module. by bridging the gap between protocol-level and browser-level testing, k6 enables organisations to gain insight into the end-user experience. you can collect browser-specific metrics and ensure that everything is performed properly on the frontend, even under load. while the k6 browser module is still in its early stages, it is possible to perform comprehensive load testing while also combining protocol-level and browser-level tests into a single script. as the demand for high-performance, user-friendly web applications continues to grow, the k6 browser module positions itself as a toolkit that equips developers, performance engineers and qa professionals with the tools necessary to deliver digital ecosystems."
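A natural next step, not covered in the post above, is to turn these browser metrics into pass/fail criteria. The sketch below assumes the same experimental browser module used throughout this post and adds k6 thresholds on two of the Web Vitals listed above; the threshold values are arbitrary examples, not recommendations.

// Minimal sketch: failing the run when browser Web Vitals degrade.
import { browser } from 'k6/experimental/browser';

export const options = {
  scenarios: {
    ui: {
      executor: 'shared-iterations',
      options: {
        browser: { type: 'chromium' }, // mandatory for browser scenarios
      },
    },
  },
  thresholds: {
    // Largest Contentful Paint: 90% of samples under 2.5 seconds
    browser_web_vital_lcp: ['p(90)<2500'],
    // Cumulative Layout Shift: 90% of samples under 0.1
    browser_web_vital_cls: ['p(90)<0.1'],
  },
};

export default async function () {
  const page = browser.newPage();
  try {
    await page.goto('https://test.k6.io/browser.php');
  } finally {
    page.close();
  }
}

If a threshold is crossed, k6 marks the run as failed and exits with a non-zero code, which makes these budgets straightforward to enforce in a CI pipeline.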
},
{
"title":"k6 Report Guideline: Understanding Metrics and Output Types",
"body":"k6 is a powerful open-source tool that offers robust reporting capabilities. One of its strengths is the ability to enhance performance reports and convert them into various formats based on your specific needs. Depending on the requirement\/preference, ways to export include CSV, JSON, or even sending them directly to the cloud. In this blog post, I will go over the rich set of metrics p...",
"post_url":"https://www.kloia.com/blog/k6-report-guideline-understanding-metrics-and-output-types",
"author":"Elif Ozcan",
"publish_date":"27-<span>Jun<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/elif-ozcan",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/k6-report-guideline-blog.webp",
"topics":{ "test-automation":"Test Automation","cloud":"Cloud","k6":"k6","qa":"QA","open-source-framework":"Open Source Framework","json":"JSON","qateam":"qateam","csv":"CSV","protocol-level-metrics":"Protocol-Level Metrics","browser-level-metrics":"Browser-Level Metrics" },
"search":"24 <span>sep</span>, 2024k6 report guideline: understanding metrics and output types test automation,cloud,k6,qa,open source framework,json,qateam,csv,protocol-level metrics,browser-level metrics elif ozcan k6 is a powerful open-source tool that offers robust reporting capabilities. one of its strengths is the ability to enhance performance reports and convert them into various formats based on your specific needs. depending on the requirement\/preference, ways to export include csv, json, or even sending them directly to the cloud. in this blog post, i will go over the rich set of metrics provided by k6 and explore these features in detail. this is the third post in our performance testing series. if you are new to performance testing, you can learn more about why performance testing matters, and you can find out about the basics of k6 in the second post. explaining metrics after performing tests, understanding the reports is an important step. k6 offers a rich set of metrics as a report that helps provide insights into performance at the protocol-level and browser-level. details about response times, throughput, error rates, and many other relevant metrics offer a detailed understanding of how the system behaves. first, i will look at the metrics in the protocol-level, and in the next blog post, i will go over browser specific metrics. data_received..................: 823 kb 27 kb\/s data_sent......................: 102 kb 3.4 kb\/s http_req_blocked...............: avg=2.45ms min=0s med=1\u00B5s max=549.97ms p(90)=2\u00B5s p(95)=2\u00B5s http_req_connecting............: avg=635.74\u00B5s min=0s med=0s max=138.95ms p(90)=0s p(95)=0s http_req_duration..............: avg=138.37ms min=133.07ms med=138.43ms max=187.68ms p(90)=140.87ms p(95)=142.4ms { expected_response:true }...: avg=138.37ms min=133.07ms med=138.43ms max=187.68ms p(90)=140.87ms p(95)=142.4ms http_req_failed................: 0.00% \u2713 0 \u2717 2132 http_req_receiving.............: avg=119.57\u00B5s min=8\u00B5s med=63\u00B5s max=2.97ms p(90)=134\u00B5s p(95)=707.69\u00B5s http_req_sending...............: avg=83.91\u00B5s min=13\u00B5s med=79\u00B5s max=2.56ms p(90)=119\u00B5s p(95)=137.44\u00B5s http_req_tls_handshaking.......: avg=1.61ms min=0s med=0s max=370.14ms p(90)=0s p(95)=0s http_req_waiting...............: avg=138.16ms min=132.89ms med=138.26ms max=187.55ms p(90)=140.69ms p(95)=142.12ms http_reqs......................: 2132 70.735814\/s iteration_duration.............: avg=140.97ms min=133.11ms med=138.56ms max=690.81ms p(90)=141.08ms p(95)=143.11ms iterations.....................: 2132 70.735814\/s vus............................: 10 min=10 max=10 vus_max........................: 10 min=10 max=10 - data_received: the amount of data received. - data_sent: the amount of data sent. - http_req_blocked: the amount of time requests were blocked before starting. this metric indicates delays before the request is sent, which can include time taken for setting up a network connection or waiting due to prior requests. - http_req_connecting: time spent establishing a tcp connection. it specifically measures the time taken to establish a tcp connection between the test machine and the server. - http_req_duration: duration of http requests. this metric measures the time it takes from initiating an http request to when the response is fully received. - http_req_failed: percentage of the requests that failed due to errors. this is helpful to see how many requests were unsuccessful during the test. 
- http_req_receiving: time spent receiving the http response. this measures the duration of receiving the response data from the server. - http_req_sending: time spent sending the http request. this includes the time taken to send all the http request data to the server, including headers and body. - http_req_tls_handshaking: time spent in tls\/ssl handshaking. this metric reports the duration of the cryptographic handshake process used to establish a secure communication channel. - http_req_waiting: time spent waiting for a response from the server. this is often the largest portion of the request duration and measures the time from the end of sending the request to the beginning of receiving the response. - http_reqs: total number of http requests. this metric represents the total number of http requests made during the test. - iteration_duration: total time taken for one complete iteration of the test script, including setups and teardowns. this reflects the time from the start to the end of each user scenario executed. - vus: number of virtual users. this metric shows the number of concurrent virtual users active at any point during the test. - vus_max: maximum number of virtual users. this indicates the peak number of virtual users that were active during the test. this helps in understanding the scale of the test. adding specific metrics i have mentioned that k6 already has a rich set of metrics. however, in some cases, you may need to add custom metrics. this allows you to measure details about system behavior that might be critical for certain test scenarios. in this way, you can even get some deeper insights about your system's performance. in the following example, you can see custom-added metrics using trend, rate, and counter. you can refer to the k6 documentation to get more information about k6\/metrics module. import {trend, rate, counter} from 'k6\/metrics'; trend is used to track the trend of the benchmark over a certain period of time. in the example, i added a metric called \u2018customtrend\u2019 that tracks the waiting time of the request defined with the \u2018custom_waiting_time\u2019 variable (this name represents the name that will appear in the report). const customtrend = new trend('custom_waiting_time'); customtrend.add(response.timings.waiting); rate is used to track the ratio of a particular event to total events. in the example, i added a metric called \u2018failurerate\u2019, which shows the ratio of failed requests to total requests, defined with the \u2018custom_failure_rate\u2019 variable (this name represents the name that will appear in the report). const failurerate = new rate('custom_failure_rate'); failurerate.add(response.status !== 200); counter is used to track how many times a particular event occurs. in the example, i added a metric called \u2018requestcount\u2019 that shows the number of requests with the \u2018custom_request_count\u2019 variable (this name represents the name that will appear in the report). 
const requestcount = new counter('custom_request_count'); requestcount.add(1); the final version of the script is as follows: import http from 'k6\/http'; import {trend, rate, counter} from 'k6\/metrics'; export const options = { vus: 3, duration: '5s' }; const customtrend = new trend('custom_waiting_time'); const requestcount = new counter('custom_request_count'); const failurerate = new rate('custom_failure_rate'); export default function () { const url = 'https:\/\/petstore.swagger.io\/v2\/store\/inventory' const header = { headers: {accept: 'application\/json'} } const response = http.get(url, header); customtrend.add(response.timings.waiting); requestcount.add(1); failurerate.add(response.status !== 200); } custom_failure_rate............: 0.00% \u2713 0 \u2717 100 custom_request_count...........: 100 19.635491\/s custom_waiting_time............: avg=137.43243 min=134.138 med=137.5885 max=143.734 p(90)=139.3735 p(95)=139.93505 data_received..................: 56 kb 11 kb\/s data_sent......................: 6.4 kb 1.2 kb\/s http_req_blocked...............: avg=13.87ms min=0s med=1\u00B5s max=464.79ms p(90)=2\u00B5s p(95)=3.04\u00B5s http_req_connecting............: avg=4.02ms min=0s med=0s max=135.71ms p(90)=0s p(95)=0s http_req_duration..............: avg=137.68ms min=134.34ms med=137.88ms max=143.84ms p(90)=139.55ms p(95)=140.1ms { expected_response:true }...: avg=137.68ms min=134.34ms med=137.88ms max=143.84ms p(90)=139.55ms p(95)=140.1ms http_req_failed................: 0.00% \u2713 0 \u2717 100 http_req_receiving.............: avg=108.75\u00B5s min=14\u00B5s med=100.5\u00B5s max=842\u00B5s p(90)=134.1\u00B5s p(95)=155.94\u00B5s http_req_sending...............: avg=143.84\u00B5s min=23\u00B5s med=109.5\u00B5s max=2.27ms p(90)=158.29\u00B5s p(95)=186.49\u00B5s http_req_tls_handshaking.......: avg=9.09ms min=0s med=0s max=305ms p(90)=0s p(95)=0s http_req_waiting...............: avg=137.43ms min=134.13ms med=137.58ms max=143.73ms p(90)=139.37ms p(95)=139.93ms http_reqs......................: 100 19.635491\/s iteration_duration.............: avg=151.8ms min=134.61ms med=138.2ms max=603.24ms p(90)=140.1ms p(95)=141.72ms iterations.....................: 100 19.635491\/s vus............................: 3 min=3 max=3 vus_max........................: 3 min=3 max=3 different output as you can see in the examples above, k6 reports are output as terminal output on the local machine. when you want to share these reports with your teammates, k6\u2019s flexible structure allows you to create reports in the format you need. to receive reports in json format, it will be sufficient to add \u2013out json=reportname.json to the run command. with this command, a json report with the desired name will be created at the root level of the project. k6 run --out json=results.json check.js for report outputs in csv format, it will be sufficient to follow the same way as json. k6 run --out csv=results.csv check.js apart from this, k6 v0.49.0 provides a web dashboard feature that allows for real-time monitoring. when you run the tests and set the k6_web_dashboard environment variable to true, you will be able to see the results in real-time on the dashboard. k6_web_dashboard=true k6 run check.js after running this command, you can access the url for the web dashboard from the terminal. 
\/\\ |\u203E\u203E| \/\u203E\u203E\/ \/\u203E\u203E\/ \/\\ \/ \\ | |\/ \/ \/ \/ \/ \\\/ \\ | ( \/ \u203E\u203E\\ \/ \\ | |\\ \\ | (\u203E) | \/ __________ \\ |__| \\__\\ \\_____\/ .io execution: local script: scripts\/thresholds.js web dashboard: http:\/\/127.0.0.1:5665 output: - scenarios: (100.00%) 1 scenario, 10 max vus, 1m30s max duration (incl. graceful stop): * default: 10 looping vus for 1m0s (gracefulstop: 30s) you can access the environment variables with which you can configure the web dashboard and the default values of these variables here. web dashboard consists of 3 tabs, overview, timings and summary. overview the overview tab provides a high-level summary of the test execution. it contains metrics such as the total number of requests, average response time. this part also includes a trend chart that shows performance over time during the test run. timings the timings tab provides detailed timing metrics for various aspects of the test. this detailed timing analysis helps in identifying specific problems in network or server activities. summary the summary tab shows the key results of the test in a simple format. it presents aggregated metrics such as the total number of requests, the number of failed requests, and response time distribution metrics(percentiles). this part provides a quick snapshot of the overall performance. you can obtain the report containing these results by clicking the report button at the top right of the page. also, you can generate the same report at the root level of the project by specifying it through the k6_web_dashboard_export environment variable. k6_web_dashboard=true k6_web_dashboard_export=report.html k6 run check.js the steps i have to take to get the reports in html format are a bit more than json or csv. you can completely customize the output with k6 handlesummary(). in this example, i will proceed with an example of creating an html report using the k6 html report exporter v2 extension. import { htmlreport } from \"https:\/\/raw.githubusercontent.com\/benc-uk\/k6-reporter\/main\/dist\/bundle.js\"; after importing this extension, i create a function that will use handlesummary(). export function handlesummary(data) { console.log('preparing the end-of-test summary...'); return { 'test-summary-report.html': htmlreport(data), }; } after running the script, an html file named test-summary-report is created at the root level of my project, as i specified in the handlesummary function. \u00A0 \u00A0 when you open this resulting html with your browser, you will see a report consisting of 3 tabs (request metrics, other stats, checks & groups) as in the example below. \u00A0 \u00A0 cloud output with k6, you can run the tests locally and store the results in the cloud. k6 has many options in this regard, you can examine the options in the stream to service list. let's examine the cloud in this list with a simple example. k6 cloud offers the opportunity to run these tests in the cloud as well as store the test results in the cloud. after completing the registration process, you can use the token you have and upload your results to the cloud and see them. k6_cloud_token= k6 run --out cloud script.js \u00A0 k6 cloud has a powerful and customizable dashboard. it is possible to create customizable charts to compare and analyze metrics with a single click. while it can keep all reports historical, it offers the opportunity to compare between these reports. 
comparing the reports throughout the process is already part of the process, but this can be done very easily thanks to this ui. conclusion k6's ability to develop and customize reports is one of its most powerful features. rich set of metrics: k6 provides a rich set of metrics, and you can gain valuable insights into various aspects of your system\u2019s performance. custom-added metrics: k6 allows you to add custom metrics to track specific aspects of your system. flexible output formats: k6 offers flexible output formats such as csv, json, and html, making it easy to share and analyze reports. real-time monitoring with web-dashboard: k6 provides its own web dashboard, which has a real-time monitoring feature without the need for additional tools. this intuitive interface allows you to visualize and analyze performance data as it's being generated, enabling rapid identification and resolution of performance bottlenecks. easily stream metrics: k6 can easily stream metrics to various external systems such as influxdb, prometheus, k6 cloud, and more, allowing for easy integration with your existing monitoring infrastructure.this integration ensures that your performance data is readily available within your preferred monitoring and analysis tools."
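To tie the custom metrics above back into reporting, they can also drive thresholds, so a breached budget fails the run instead of only appearing in the output. This is a minimal sketch based on the script shown above; the threshold values are arbitrary examples.

// Minimal sketch: combining the post's custom metrics with thresholds.
import http from 'k6/http';
import { Trend, Rate, Counter } from 'k6/metrics';

export const options = {
  vus: 3,
  duration: '5s',
  thresholds: {
    custom_failure_rate: ['rate<0.01'], // fewer than 1% failed requests
    custom_waiting_time: ['p(95)<200'], // 95% of waits under 200ms
    http_req_duration: ['p(95)<300'],   // built-in metrics work the same way
  },
};

const customTrend = new Trend('custom_waiting_time');
const requestCount = new Counter('custom_request_count');
const failureRate = new Rate('custom_failure_rate');

export default function () {
  const response = http.get('https://petstore.swagger.io/v2/store/inventory', {
    headers: { accept: 'application/json' },
  });
  customTrend.add(response.timings.waiting);
  requestCount.add(1);
  failureRate.add(response.status !== 200);
}

The script is run the same way as the one above, and any of the output options described in this post (JSON, CSV, web dashboard, cloud) can be combined with it; a crossed threshold additionally makes k6 exit with a non-zero code.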
},
{
"title":"Creating End-to-End Web Test Automation Project from Scratch\u2014Part 4.1",
"body":"In the previous blog post, you have dockerized your test automation project and Selenium Grid. In this bonus chapter, you are going to record scenario runs on Docker with Selenium Video image! Let\u2019s Create and Configure Our Web Test Automation Project! Let\u2019s Write Our Test Scenarios! Bonus: Recording Failed Scenario Runs in Ruby Let\u2019s Configure Our Web Test Automation Project for Remote ...",
"post_url":"https://www.kloia.com/blog/creating-end-to-end-web-test-automation-project-from-scratch-part-4.1",
"author":"Muhammet Topcu",
"publish_date":"13-<span>Jun<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/end-to-end-web-test-automation-blog%20%285%29.png",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","docker":"Docker","selenium":"Selenium","qa":"QA","test-driven-development":"Test Driven Development","qateam":"qateam","endtoend":"endtoend" },
"search":"01 <span>oct</span>, 2024creating end-to-end web test automation project from scratch\u2014part 4.1 test automation,software testing,docker,selenium,qa,test driven development,qateam,endtoend muhammet topcu in the previous blog post, you have dockerized your test automation project and selenium grid. in this bonus chapter, you are going to record scenario runs on docker with selenium video image! let\u2019s create and configure our web test automation project! let\u2019s write our test scenarios! bonus: recording failed scenario runs in ruby let\u2019s configure our web test automation project for remote browsers and parallel execution let\u2019s dockerize our web test automation project bonus: recording scenario runs on docker with selenium video! let\u2019s integrate our dockerized web test automation project with ci\/cd pipeline! auto-scaling and kubernetes integration with keda recording scenario runs on docker with selenium video! selenium provides a docker image to record test executions on selenium nodes in docker. you are going to configure your docker compose file to utilize this feature. note that selenium video docker image is pretty new, so there are a few setbacks: selenium video image supports amd64 architecture only. so you can not use it on macbooks with apple silicon or on raspberry pi, for example. the experimental seleniarm images do not have selenium video support currently, unfortunately. for each selenium node, there has to be one and only one video container. so the mapping is 1:1. every node needs to have only one browser instance, since video image records the desktop. you can not specifically record an application separately. the whole test run is recorded as one video. so you can not have individual videos of each test scenario. the recording starts right after the browser node is connected to the grid and it stops when the containers are terminated. it may result in long recordings having mostly empty desktop scenes. i am going to use my windows machine throughout this walkthrough since its cpu supports selenium video image. since you are going to use official selenium images, you need to change the following lines in your seleniarm based compose file, which you created in part 4: changing selenium-hub image: seleniarm\/hub => selenium\/hub changing chrome image: seleniarm\/node-chromium => selenium\/node-chrome and adding video image to your file as shown below: chrome_video: image: selenium\/video volumes: - .\/videos:\/videos depends_on: - chrome environment: - display_container_name=chrome - file_name=chrome_video.mp4 here, `volumes:` section lets you have a copy of the files created in the docker container in your host machine. the first directory is the host directory and the second one is the container\u2019s directory. so when the video container is terminated, the video file in the \/videos directory of the container will be copied to your specified directory. in this case, it will create a video folder in the directory of your compose file. 
so your final compose file looks like this: version: \"3\" services: selenium-hub: image: selenium\/hub container_name: selenium-hub ports: - \"4442:4442\" - \"4443:4443\" - \"4444:4444\" networks: - dockerize-network chrome: image: selenium\/node-chrome shm_size: 2gb depends_on: - selenium-hub environment: - se_event_bus_host=selenium-hub - se_event_bus_publish_port=4442 - se_event_bus_subscribe_port=4443 - se_node_max_instances=4 - se_node_max_sessions=4 - se_node_session_timeout=180 networks: - dockerize-network chrome_video: image: selenium\/video volumes: - \/tmp\/videos:\/videos depends_on: - chrome environment: - display_container_name=chrome - file_name=chrome_video.mp4 dockerize-network: name: dockerize-network driver: bridge note that i have removed firefox node from the compose file since my windows machine is not powerful enough :d you can leave it as it is if you want. now you can start execution! start the grid with `docker compose -f docker-compose-selenium.yml up` command. run your web project via `docker run --network dockerize-network muhammettopcu\/dockerize-ruby-web:1.0` after the execution, terminate your grid with the `docker compose -f docker-compose-selenium.yml down` command. then you will see that your video is now available in the folder you stated! now the thing is, as the number of scenarios in your project grows, this video will also grow and it would be hard to find individual scenarios. and i have a solution for this. you are going to split this video with ffmpeg software! first, you are going to create a bash file for this purpose, and then dockerize it! splitting video to individual parts with ffmpeg first let\u2019s examine your video: the video starts with an empty desktop like below: then, when a scenario run starts, your browser shows up. and between the scenario runs, you see this black screen again. so here is the algorithm to find the beginning and the ending of all test scenarios: take a screenshot of the first frame of your video, which is basically a screenshot of your container\u2019s desktop. find every frame resembling this frame throughout the video. compare every frame with the next frame. if the time difference between these two frames is bigger than 1 second, then it means the first frame is the beginning of a scenario and the second one is the ending of it. get a list of all beginning and ending frames. split the video according to these timestamps. first things first: you are going to use ffmpeg to process your video. install it. brew install ffmpeg brew!? why brew? wasn\u2019t i working on a windows machine? yes, i am going to run this docker image on my windows machine but i will develop it on my mac! that\u2019s the power of docker! now, make sure that your bash is updated. for installing the newer bash version, follow this article. and finally, install bc (basic calculator) to make arithmetic operations in bash. you are going to need it. brew install bc before you start, you might want to grab your favourite beverage. it will be a long walkthrough :) let\u2019s create an empty file with your preferred text editor. you are going to write a shell script! #!\/bin\/bash #take the screenshot of the first frame of the video ffmpeg -i \/users\/muhammettopcu\/desktop\/video_edit\/chrome_video.mp4 -vframes 1 \/users\/muhammettopcu\/desktop\/video_edit\/screenshot.png fi the above code let\u2019s us take a screenshot of the first frame of your video: #!\/bin\/bash makes your system regard this file as a shell script. 
\/users\/muhammettopcu\/desktop\/video_edit\/chrome_video.mp4 is the path of your video file. \/users\/muhammettopcu\/desktop\/video_edit\/screenshot.png is the path where the screenshot will be saved. #get the list of all frames resembling the image with a %85 ratio ffmpeg -i \/users\/muhammettopcu\/desktop\/video_edit\/chrome_video.mp4 -loop 1 -i \/users\/muhammettopcu\/desktop\/video_edit\/screenshot.png -an -filter_complex \"blend=difference:shortest=1,blackframe=85:32\" -f null - > \/users\/muhammettopcu\/desktop\/video_edit\/output.txt 2>&1 this is one of the most critical lines. it gets every frame of your video file and blends it with the screenshot that you have just taken. when two frames are blended together, if two identical pixels overlap, the result is a black pixel. there are two variables for this comparison: amount: the percentage of the pixels that have to be below the threshold; it is 85 for in this instance. threshold: the threshold below which a pixel value is considered black; it is 32 in this example. by changing these values, you can make more precise comparisons. after filtering these similar pixels, you save them to file with \/users\/muhammettopcu\/desktop\/video_edit\/output.txt path. here is a sample of what output.txt: [parsed_blackframe_1 @ 0x600000f2c0b0] frame:243 pblack:97 pts:1244160 t:16.200000 type:p last_keyframe:0 [parsed_blackframe_1 @ 0x600000f2c0b0] frame:244 pblack:97 pts:1249280 t:16.266667 type:p last_keyframe:0 frame= 291 fps=193 q=-0.0 size=n\/a time=00:00:19.33 bitrate=n\/a speed=12.8x frame= 397 fps=197 q=-0.0 size=n\/a time=00:00:26.46 bitrate=n\/a speed=13.2x [parsed_blackframe_1 @ 0x600000f2c0b0] frame:490 pblack:97 pts:2508800 t:32.666667 type:p last_keyframe:250 [parsed_blackframe_1 @ 0x600000f2c0b0] frame:491 pblack:97 pts:2513920 t:32.733333 type:p last_keyframe:250 here the \u201Ct:\u201D is the second that a frame is shown. \"frame\" indicates which frame it is. so in this example, as you can see, there is not any blackframe between the 244th and 490th frames. which basically means that between them, a scenario is executed! so now, you need to process this file and get the timestamp of these blackframe pairs which have more than 1-second time difference between them! file='\/users\/muhammettopcu\/desktop\/video_edit\/output.txt' regex='t:([0-9\\.]+)' detected='\/users\/muhammettopcu\/desktop\/video_edit\/detected.txt' global_rematch() { local file=$1 regex=$2 timestamps=() while ifs= read -r line; do [[ $line =~ $regex ]] if [[ \"${bash_rematch[1]}\" == \"\" ]]; then : else timestamps+=(\"${bash_rematch[1]}\") fi done < $file } global_rematch \"$file\" \"$regex\" note: in this blogpost, we had to use screenshots of the code snippets since some bash functions behaved unexpectedly in our website. you can copy the whole script from this link. let\u2019s examine this code bit together: - `file` is the location of your output.txt file. - `regex` is the pattern of your frame time. it will catch every value after `t:` value of each frame. check it here. - `detected` is the file in which you will store these blackframe pairs. - `global_rematch()` is a function that you try to match every line in your output.txt file with the regex pattern you have. if there is a match, you store the value of `t:` in an array called timestamps. so now you have an array filled with timestamps of these blackframes. the values inside timestamps look like this => (... 
16.200000 16.266667 32.666667 32.733333 \u2026) now you need to compare these values and cherry-pick the pairs which have more than 1-second difference! # function to convert seconds to hh:mm:ss format using awk seconds_to_hms() { local seconds=$1 local hours=$(awk -v secs=\"$seconds\" 'begin { printf \"%02d\", int(secs \/ 3600) }') local minutes=$(awk -v secs=\"$seconds\" 'begin { printf \"%02d\", int((secs \/ 60) % 60) }') local seconds=$(awk -v secs=\"$seconds\" 'begin { printf \"%06.3f\", secs % 60 }') echo \"$hours:$minutes:$seconds\" } before comparison, you need to convert the seconds to hh:mm:ss format. so you are going to use the above function for the elected timestamps to convert them. now let\u2019s compare! - here you have a new array named newtime. - you loop through every element (which are timestamps of each blackframe) of timestamps array and compare them with the next frame\u2019s timestamp by using bc. if the difference between them is more than 1, then it converts both time stamps into hh:mm:ss format by using seconds_to_hms function and append them to the newtime array! # write array elements to file for element in \"${newtime[@]}\"; do echo \"$element\" >> \"$detected\" done with the above code, you write each element of newtime array into the file named \u201Cdetected.txt\u201D the file looks like this: 00:00:16.267 00:00:32.667 00:00:33.467 00:01:12.000 00:01:13.133 00:02:00.000 00:02:00.733 00:02:47.667 00:02:48.333 00:03:04.200 00:03:04.800 00:03:19.600 00:03:20.267 00:03:33.867 00:03:34.600 00:03:45.667 and your bash script so far looks like this: now here, you are going to use two scripts created by a github user with a username napoleonwils0n. the scripts you are going to use are: scene-time: adds duration to the timestamps and creates a new file with this. scene-cut: cuts the video according to the above-mentioned file. you can download them from his directory. so by running scene time, you will convert your time stamps , duos. run the script with the command below: scene-time -i \/users\/muhammettopcu\/desktop\/video_edit\/detected.txt -o \/users\/muhammettopcu\/desktop\/video_edit\/cutlist.txt in here, * -i is input file\u2019s path. * -o is output files path. after the execution, the cutlist file which is created by scene-time looks like below: 00:00:16.267,00:00:16.4 00:00:32.667,00:00:00.8 00:00:33.467,00:00:38.533 00:01:12,00:00:01.133 00:01:13.133,00:00:46.867 00:02:00,00:00:00.733 00:02:00.733,00:00:46.934 00:02:47.667,00:00:00.666 00:02:48.333,00:00:15.867 00:03:04.2,00:00:00.6 00:03:04.8,00:00:14.8 00:03:19.6,00:00:00.667 00:03:20.267,00:00:13.6 00:03:33.867,00:00:00.733 00:03:34.6,00:00:11.067 now let\u2019s run scene-cut script with the cutlist.txt file that you have created: bash \/usr\/local\/bin\/scene-cut -i \/users\/muhammettopcu\/desktop\/video_edit\/chrome_video.mp4 -c \/users\/muhammettopcu\/desktop\/video_edit\/cutlist.txt here: \u00A0 * -i is input file\u2019s path which is your video. * -c is cutlist file\u2019s path. now you can see that main video is split and separate videos are created according to time stamps: but the thing is, as you can see from thumbnails, while every odd-numbered videos are actual test runs, the even-named ones are short videos containing only the desktop screen. why is that? because the script named scene-cut creates videos for each time stamp, not for each pair of them. then you need to tweak the scene-cut script a little bit. let\u2019s open the script with a text editor. below is the code creating videos. 
#=============================================================================== # read file and set ifs=, read = input before , duration = input after , #=============================================================================== count=1 while ifs=, read -r start duration; do trim_video count=\"$((count+1))\" done < \"${cutfile}\" let\u2019s modify it like below: count=1 name=1 while ifs=, read -r start duration; do if [ $((count%2)) -eq 1 ];then trim_video name=\"$((name+1))\" fi count=\"$((count+1))\" done < \"${cutfile}\" with this modification, you skip all the lines that are multiples of two. and replace the `count` below with `name`, so even though you skip some of them, the videos are named correctly. before: output=\"${input_name}-${count}.mp4\" after: output=\"${input_name}-${name}.mp4\" done. let\u2019s run again! now you can see that only 8 videos are created. everything works as intended! you might have realised up to this point that i installed and configured everything on macos. and the video that i created via selenium video was on my host machine. do i need to repeat everything on every device to accomplish this? not necessarily. all we need to do is dockerize this script. then i can run it on my windows machine without all this hassle! dockerizing video split script let\u2019s create a dockerfile. from ubuntu:latest run apt update && apt-get install -y ffmpeg && apt-get install -y bc workdir \/usr\/src\/app copy . \/usr\/src\/app env path \"$path:\/usr\/src\/app\" cmd \/usr\/src\/app\/split-video-docker expose 5000:5000 - using ubuntu:latest base image for your script. you can use lightweight one if you want. - updating apt, install ffmpeg and bc with -y flags so that any prompt are answered as \u201Cyes\u201D. - adding \"\/usr\/src\/app\" to your path environment, so that you can run your scripts there. - the first thing that will run when your container spins up is the \u201Csplit-video-docker\u201D script, which is your bash script. but before building your image, you need to configure your \u201C split-video-docker \u201D according to container\u2019s structure: #!\/bin\/bash #find the video file video_path=$(find \/usr\/src\/app\/video -type f -name \"*.mp4\") #take the screenshot of the first frame of the video ffmpeg -i $video_path -vframes 1 \/usr\/src\/app\/video\/screenshot.png #get the list of of all frames resembling to the image with %85 ratio ffmpeg -i $video_path -loop 1 -i \/usr\/src\/app\/video\/screenshot.png -an -filter_complex \"blend=difference:shortest=1,blackframe=85:32\" -f null - > \/usr\/src\/app\/video\/output.txt 2>&1 file='\/usr\/src\/app\/video\/output.txt' regex='t:([0-9\\.]+)' detected='\/usr\/src\/app\/video\/detected.txt' here you added video_path=$(find \/usr\/src\/app\/video -type f -name \"*.mp4\") line to find your video file in the on `\/usr\/src\/app\/video` directory of the container. i almost hear you saying \u201Cbut the video file won\u2019t be in the container but in your host machine!\u201D. indeed, that\u2019s why you are going to mount a volume on it! but for now, bear with me \uD83D\uDE42 as you can see, the paths of each file are changed accordingly as well. note that all scripts will be located in the \/usr\/src\/app directory and every file to be used with scripts or created via scripts will be in the \/usr\/src\/app\/video directory. below you can find your image\u2019s structure: root@236a6678c313:\/usr\/src\/app# where your script files are located . 
|-- dockerfile |-- scene-cut |-- scene-time |-- split-video-docker `-- video `-- chrome_video.mp4 #video file mounted with volume let\u2019s continue: scene-time -i \/usr\/src\/app\/video\/detected.txt -o \/usr\/src\/app\/video\/cutlist.txt scene-cut -i $video_path -c \/usr\/src\/app\/video\/cutlist.txt rm \/usr\/src\/app\/video\/cutlist.txt \/usr\/src\/app\/video\/detected.txt \/usr\/src\/app\/video\/output.txt \/usr\/src\/app\/video\/screenshot.png here, the destinations are changed as well and the files other than the videos that get removed at the end of the script. next, in scene-cut script, let\u2019s change the output path like this: trim_video () { output=\"\/usr\/src\/app\/video\/${input_name}-${name}.mp4\" ffmpeg \\ -nostdin \\ with this, you ensure that the split video files will be saved in the \/usr\/src\/app\/video\/ directory. the final bash script should look like this: if everything is correct, let\u2019s build your image. i am going to use buildx since i want this image to be compatible with both my macos m1 machine and windows amd64 machine. you can change the platform according to your needs. docker buildx build -t muhammettopcu\/video-splitter:1.0 --push --platform linux\/amd64,linux\/arm64 . with this, i build my images and push them to my repository on dockerhub. good, both of them pushed successfully. now let\u2019s try it on windows! the video file that i obtained by selenium video is in this location: c:\\users\\muhammettopcy\\desktop\\docker\\videos now i am going to use -v command when i run my image since i want to bind this location with docker container\u2019s video location: docker run -v c:\\users\\muhammettopcy\\desktop\\docker\\videos:\/usr\/src\/app\/video muhammettopcu\/video-splitter:1.0 it works! now you can use your dockerized web project and this docker image to get individual test run videos! with this, this blog post is concluded! i hope you get a basic knowledge about manipulating video files according to your needs. next, we are going to integrate a ci\/cd pipeline to your ruby web test automation project with jenkins! see you soon! :)"
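For readers who would rather post-process the ffmpeg blackframe log in JavaScript than in bash, here is a small Node.js sketch of the same pairing idea described above: collect every "t:" timestamp, keep consecutive pairs that are more than one second apart, and write them out in hh:mm:ss.mmm form like detected.txt. The file paths are placeholders.

// A sketch of the blackframe-pairing logic from the walkthrough, in Node.js.
import { readFileSync, writeFileSync } from 'node:fs';

const log = readFileSync('output.txt', 'utf8');
// Pull every "t:<seconds>" value out of ffmpeg's blackframe output.
const timestamps = [...log.matchAll(/t:([0-9.]+)/g)].map((m) => parseFloat(m[1]));

// Mirror of the seconds_to_hms helper: 16.266667 -> "00:00:16.267"
function secondsToHms(secs) {
  const h = String(Math.floor(secs / 3600)).padStart(2, '0');
  const m = String(Math.floor((secs / 60) % 60)).padStart(2, '0');
  const s = (secs % 60).toFixed(3).padStart(6, '0');
  return `${h}:${m}:${s}`;
}

// A gap larger than one second between two blackframes means the first one
// marks the start of a scenario recording and the second one its end.
const detected = [];
for (let i = 0; i < timestamps.length - 1; i++) {
  if (timestamps[i + 1] - timestamps[i] > 1) {
    detected.push(secondsToHms(timestamps[i]), secondsToHms(timestamps[i + 1]));
  }
}

// One timestamp per line, like detected.txt in the walkthrough.
writeFileSync('detected.txt', detected.join('\n') + '\n');

The resulting file can then be fed to scene-time and scene-cut exactly as in the walkthrough above.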
},
{
"title":"Redis to ValkeyDB migration guide",
"body":"Redis is an open-source project beloved by developers worldwide, thanks to its performance and ease of use. Its popularity is clear. But recently, changes in Redis's licensing have made people worried about its future and how sustainable it will be. These developments have also highlighted ValkeyDB as a promising fork alternative to Redis that upholds the true spirit of open-source. The ...",
"post_url":"https://www.kloia.com/blog/redis-to-valkeydb-migration-guide",
"author":"Emre Kasgur",
"publish_date":"04-<span>Jun<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/emre-kasgur",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/redi-valkeydb-migraiton-guide-blog.png",
"topics":{ "devops":"DevOps","open-source-framework":"Open Source Framework","migration-guide":"migration guide","redis":"redis","monitoring-tools":"monitoring tools","amazon-elasticache":"Amazon ElastiCache","valkeydb":"ValkeyDB" },
"search":"04 <span>jun</span>, 2024redis to valkeydb migration guide devops,open source framework,migration guide,redis,monitoring tools,amazon elasticache,valkeydb emre kasgur redis is an open-source project beloved by developers worldwide, thanks to its performance and ease of use. its popularity is clear. but recently, changes in redis's licensing have made people worried about its future and how sustainable it will be. these developments have also highlighted valkeydb as a promising fork alternative to redis that upholds the true spirit of open-source. the licensing shift in redis: on march 20, 2024, redis announced a change in its licensing model, moving away from the bsd 3-clause license to a more restrictive dual-license arrangement (rsalv2 and ssplv1). this change means that companies using redis to power their services now face stricter rules and may need to either adjust to these new terms or look for other options. this change marks a significant move away from the original, freer ethos of redis, which was all about providing open and unrestricted access to its software. rsalv2 and ssplv1 come with restrictions that significantly impact the previous use cases for redis: rsalv2 (redis source available license version 2): this license allows users to use, modify, and distribute the software, but it restricts commercial use. specifically, it limits the ability to offer redis as a part of commercial managed service offerings, which means companies can't use redis in their cloud services without compliance. ssplv1 (server side public license version 1): developed by mongodb, this license is similar to rsalv2 but goes a step further. it requires that if a company offers redis as a service, they must also release the entire source code of their service, including all modifications and underlying software that interacts with the database, under the same license. while it is not articulated directly, it seems like these changes are actually and only targeting the companies that offer competitive services, like amazon elasticache or google cloud memorystore. however, it's important for all users, including those not directly competing in the service space, to understand the broader implications of these changes. broader implications for general users: future feature limitations: the new licenses may restrict the availability of new features to the broader community under more restrictive terms. community and support changes: the more restrictive nature of these license may potentially reduce contributions from the community. ecosystem fragmentation: as service providers respond to these changes, we may see a fragmentation of the redis ecosystem. different providers may fork or create alternate versions of redis (like valkeydb), leading to inconsistencies and compatibility challenges for end-users. why valkeydb? valkeydb is launched by linux foundation and has emerged as a significant fork of redis. it maintains the principles of open-source software that many developers and organizations value. valkeydb is fully compatible with redis and also aims to remain that way. it supports the same commands, data types, and features as redis, making it easy for existing redis users to switch to valkeydb with minimum operational overhead. valkeydb not only aims to maintain feature parity with redis but also introduces enhancements and performance improvements. here is a video snippet that summarizes these improvements and differences. 
migration from redis to valkeydb since, valkeydb doesn\u2019t have any major updates in its repository,it\u2019s actually still a redis. so, this similarity simplifies the migration process, allowing you to treat valkeydb as if it were redis, ensuring ease of transition. prepare your redis instance first, configure your redis instance to allow replication. this involves modifying your redis configuration file to include the replicaof line, pointing it towards your valkeydb instance. 1. modify redis configuration: - open the redis configuration file (typically named redis.conf). - locate the replicaof line and configure it to point to your valkeydb instance: replicaof valkeydb_host valkeydb_port - if authentication is used, set the masterauth directive to the master's password to allow the replica to authenticate with the master. 2. backup redis data: - run redis-cli bgsave to save the dataset to disk asynchronously. since it operates in the background, it doesn\u2019t affect any write or read requests. - to find out the folder of the saved rdb file, run config get dir in redis-cli. warning: during a bgsave, redis forks a new process, which initially shares the same memory as the parent process due to the copy-on-write method. however, as the database continues to change after the fork, additional memory and disk i\/o will be used. this is due to the copy-on-write mechanism, where changes to any data that the child process also references cause the modified data to be copied, thus increasing memory usage. if you\u2019re doing this operation in a busy and operational workload, you need to pick a time when write requests per second are at their lowest. 3- restart redis: - restart your redis instance for the changes to take effect. - verify that replication is functioning properly by checking the redis logs or using the info replication command in the redis cli. this command will show the status of the replication, including the connection to the master. configure valkeydb as a replica in the second step, you will set up valkeydb to operate as a replica. this setup requires configuring valkeydb to sync data from your redis master, mirroring all data handling and storage configurations. 1- install valkeydb: ensure that valkeydb is installed on a suitable server that can communicate with your redis instance. follow the official valkeydb installation guidelines. 2. configure valkeydb settings: edit the valkeydb configuration file (valkey.conf). add the following lines to enable replication: slaveof masterauth - start valkeydb: launch valkeydb with the updated configuration. you can typically start valkeydb using a command like: valkey-server \/path\/to\/your\/valkey.conf initiate and monitor data synchronization verify connection: once valkeydb starts, it should automatically begin syncing data from the redis master. check the logs of valkeydb to ensure it has successfully connected to redis and started the replication process. data synchronization: valkeydb will replicate all existing data from redis as well as any new data changes. this process ensures that valkeydb has a complete and up-to-date copy of the redis dataset. monitor replication: ensure that data remains consistent across both databases throughout this process. the command redis-cli info replication can provide insights into the replication status and health. gradual cutover to valkeydb switching from redis to valkeydb is an important step that needs careful planning to avoid disrupting your applications. 
the idea of a gradual cutover is to slowly move your operations from redis to valkeydb, allowing you to monitor and fix any issues that come up along the way. this process involves gradually shifting the read and write operations, checking data consistency, and making sure that both databases work well together until the final switch over. plan the cutover: -identify critical operations: categorize your redis operations by their criticality. start with non-critical operations (e.g., cache lookups) and move towards more critical operations (e.g., transactional data). - define success metrics: establish metrics to determine the success of the migration. these could include response times, error rates, and data consistency checks. - create a rollback plan: ensure you have a detailed rollback plan in case of any issues during the cutover. this plan should include steps to revert traffic back to redis seamlessly. redirect traffic gradually: -initial read operations: - implement read proxy: use a proxy layer (e.g., haproxy, nginx) or modify your application to redirect a small fraction (e.g., 5-10%) of read operations to valkeydb. - monitor performance: track the performance and behavior of valkeydb under this initial load. use monitoring tools to check for response times, error rates, and data consistency. - incremental increase: gradually increase the percentage of read traffic to valkeydb (e.g., 10-20% increments) while continuously monitoring system performance. write operations: - enable dual writes: start by writing data to both redis and valkeydb. this ensures that valkeydb receives all new data without interrupting redis operations. you can implement this in your application logic. or if you already\u2019ve, you can use middleware like a queue like kafka or or a proxy like twemproxy. -monitor write consistency : ensure that data written to valkeydb matches the data in redis. use data validation tools or custom scripts to verify consistency. - incremental write shift : gradually shift write operations to valkeydb. start with a small portion (e.g., 5-10%) and increase incrementally, similar to the read operations. monitor for any discrepancies or performance issues. monitoring and adjustments: - performance monitoring: use monitoring tools like prometheus, grafana, or built-in tools in valkeydb to track metrics such as latency, throughput, and error rates. - data consistency checks: regularly compare data between redis and valkeydb to ensure consistency. use tools like redis-diff or custom scripts for this purpose. - configuration adjustments: based on performance metrics and feedback, adjust configurations in valkeydb or your application. this might include tweaking connection settings, optimizing queries, or scaling resources. - feedback loop: establish a feedback loop with your team to quickly address any issues that arise. this could involve daily stand-up meetings or real-time monitoring dashboards. decommission redis after successfully migrating to valkeydb and ensuring that everything runs smoothly, it's time to say goodbye to redis. but before you do, there are a few steps to make sure everything is in place and to avoid any issues. think of it like a final cleanup after a big move. you've packed and moved all your stuff, double-checked that nothing's left behind, and now it's time to close the old door for good. 1. validate valkeydb operations read-only mode for redis:transition redis to read-only and monitor valkeydb\u2019s performance. 
decommission redis after successfully migrating to valkeydb and ensuring that everything runs smoothly, it's time to say goodbye to redis. but before you do, there are a few steps to make sure everything is in place and to avoid any issues. think of it like a final cleanup after a big move. you've packed and moved all your stuff, double-checked that nothing's left behind, and now it's time to close the old door for good. 1. validate valkeydb operations - read-only mode for redis: transition redis to read-only and monitor valkeydb\u2019s performance. - extended testing: maintain this setup for a period to ensure stability. 2. decommission process - shutdown plan and final backup: develop a detailed plan and perform a final backup. - server shutdown: gradually shut down redis servers, monitoring system impact. 3. post-decommission monitoring - continuous monitoring and alerting: implement comprehensive monitoring and alerting mechanisms. by following these steps, you can transition from redis to valkeydb smoothly, ensuring your applications remain robust and performant while adhering to open-source principles. considerations for a smooth transition in our migration from redis to valkeydb, we took several crucial steps to ensure a seamless transition and maintain data integrity. what we did: we began by setting up valkeydb to act as a replica of our redis instance. this involved configuring redis to allow replication and ensuring valkeydb could communicate with it. we then carefully monitored the synchronization process to make sure all data was accurately replicated from redis to valkeydb. this step was vital to ensure that valkeydb had an up-to-date copy of our dataset. why we did it: the primary reason for this replication setup was to minimize downtime and avoid data loss during the migration. by replicating data to valkeydb while keeping redis operational, we ensured that our services remained available and responsive throughout the process. this approach also allowed us to validate that valkeydb could handle our data and workload effectively before fully committing to the switch. how it is now: after successfully synchronizing the data, we gradually redirected traffic from redis to valkeydb. initially, we started with read operations to test the performance and consistency of valkeydb under actual usage conditions. once we were confident in its stability, we began shifting write operations as well. this gradual cutover helped us monitor and address any issues in real time without impacting our users. the final step was to decommission redis. we took this step cautiously, transitioning redis to read-only mode first to ensure all data changes were captured by valkeydb. after an extended period of validation and monitoring, we fully decommissioned redis, relying solely on valkeydb for our database needs. this migration process was guided by our commitment to maintaining high availability, data integrity, and performance. by carefully planning and executing each step, we were able to achieve a smooth transition with minimal disruption to our services. valkeydb now serves as our primary database, providing open-source flexibility and enhanced performance. conclusion at kloia, a leading software, devops, qa, and ai consulting company, we understand the importance of staying ahead with cutting-edge technology while ensuring seamless operations. migrating from redis to valkeydb not only aligns with open-source principles but also offers enhanced performance and flexibility. by following this comprehensive, zero-downtime migration guide, you can confidently transition to valkeydb, leveraging its robust capabilities for your business needs. for further assistance and tailored solutions, feel free to reach out to our team."
},
{
"title":"Work From Anywhere Anytime: Kloia Villa Bali is an Unparalleled Team Building Experience",
"body":"This year, Kloia facilitated the deployment of Kloian\u2019s to Bali with the objective of reinforcing Kloia's remote working culture and enhancing the employee experience. Kloia\u2019s company culture always adheres to the motto \"work from anywhere, anytime.\" Bali, with its natural beauty, cultural diversity, and tranquil atmosphere, proved to be an ideal location for our three-week work and expl...",
"post_url":"https://www.kloia.com/blog/work-from-anywhere-anytime-kloia-villa-bali-is-an-unparalleled-team-building-experience",
"author":"Neslihan Kaya",
"publish_date":"23-<span>May<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/neslihan-kaya",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/kloia-villa-bali-unparalleled-team-building-experience.webp",
"topics":{ "workanywhere":"workanywhere","working-remote":"working remote","hrmarketing":"HRMarketing","kloia-villa":"Kloia Villa","kloia-villa-bali":"Kloia Villa Bali" },
"search":"01 <span>aug</span>, 2024work from anywhere anytime: kloia villa bali is an unparalleled team building experience workanywhere,working remote,hrmarketing,kloia villa,kloia villa bali neslihan kaya this year, kloia facilitated the deployment of kloian\u2019s to bali with the objective of reinforcing kloia's remote working culture and enhancing the employee experience. kloia\u2019s company culture always adheres to the motto \"work from anywhere, anytime.\" bali, with its natural beauty, cultural diversity, and tranquil atmosphere, proved to be an ideal location for our three-week work and exploration trip. this experience demonstrated the immense potential of the \"work from anywhere, anytime\" concept. let's jump into my and other kloian's villa bali experiences: upon kloian\u2019s arrival in bali on the first day, kloian\u2019s were greeted by warm tropical air and a cordial welcome. the villa was a haven of tranquility, nestled amidst verdant foliage and offering a vast swimming pool, sumptuous furnishings, and even a private chauffeur. gade, our driver, was well-versed in the island's geography, ensuring seamless transportation. (we were unable to ascertain gade's preferred musical selection, but this is a matter for another occasion.) the villa which kloian\u2019s accommodate is a high level of luxury, with all cleaning and breakfast service provided by dedicated housekeepers. the villa also included a laundry service, which was a convenient additional feature. the prices were competitive, and the service was excellent. in the mornings, we woke up to watch the sunrise and then set out to explore bali's magical temples or artisan markets. we walked through rice paddies and strolled along stunning beaches. thanks to the time difference, we spent our mornings exploring and our evenings working. as we worked, our thoughts seemed to flow more freely; perhaps bali's tranquil air had something to do with it. bali's sunset views are an essential component of the island's alluring ambience. in particular, the temple-dotted landscapes of alas harum evoke the beauty of temple run. the impressive cliffs of uluwatu offer unparalleled ocean vistas and are ideal for observing the sunset. seminyak is renowned for its lively beaches and vibrant nightlife, while ubud, the cultural heart of bali, captivates visitors with its art galleries, craft shops, and lush forests. canggu is the preferred destination for surfers and caf\u00E9 aficionados. these regions exemplify the diverse facets of bali, offering countless opportunities for both exploration and relaxation. one day, you could be practising yoga in the peaceful surroundings of ubud, and the next, you could be surfing the waves in canggu. the variety and sense of freedom are what make every moment in bali special. however, be careful not to become too attached to those sunset views, colors, and the overall atmosphere, as you might fall in love with them and miss out on other experiences. of course, a trip to bali wouldn't be complete without experiencing its renowned massages. bali's natural beauty and cultural richness are matched by its renowned massages. after a long workday or a day of exploring, a massage in bali is the perfect way to relax both body and mind. when we had our first massage, we immediately felt the stress and fatigue melt away. traditional balinese massage techniques range from deep tissue to aromatherapy. as a team, we opted for massages with bali's unique aromatic oils, and it was a highly satisfactory experience. 
whether on the beach with a sea breeze or in a tranquil spa, massage can be enjoyed in a variety of settings. many team members have commented that their massage sessions were exactly what they needed. after the massage, both our minds and bodies felt refreshed, and we were ready to resume our activities in bali. massages in bali are not just relaxation sessions; they are an integral part of the island's cultural heritage. i would highly recommend setting aside an hour to experience one for yourself. naturally, after our massages, we couldn't resist sampling the tropical fruits that are a staple of balinese cuisine. the island's fruit is an integral part of daily life, with an astonishing variety on offer. whether it's breakfast, a midday snack, or just a treat, fresh tropical fruits are always within reach. a visit to bali's vibrant fruit markets is an experience in itself. in these markets, where colors and scents intertwine, you can find a wide range of fruits, from strawberries to coconuts, papayas to dragon fruit. dragon fruit and mangosteen are among the island's most popular fruits, and their juicy, sweet flavors create a tropical festival in your mouth. what's more, these fruits are not only delicious but also healthy. bali's famous young coconut, with its refreshing water straight from the shell, is the ideal way to cool off in the heat while reaping the benefits of vitamins and minerals. mangoes and pineapples are another bali pride point; they are so juicy and sweet that they do not even compare to those from other places. in short, bali's fruit variety is as rich and impressive as the island's other beauties. these fresh and vibrant fruits, available year-round, serve as a testament to bali's allure and generosity. we now move on to one of our most popular activities, atv tours. as we raced through bali's natural beauty, we all felt like kids again, shouting \"one more time, one more time!\" as we zipped through the forest trails, got a little wet, and had the time of our lives. if we'd had more time, we'd definitely have gone back for more. finally, we would be remiss if we did not mention luwak coffee, which is a must-try for anyone visiting bali. while it is the most expensive coffee in the world, it is definitely worth the experience. we were somewhat surprised by the price, but we enjoyed the coffee. bali is home to a great deal of biodiversity, not just in terms of landscapes, but also in the animal kingdom. monkeys are a common sight in the forests and around the temples. the ubud monkey forest is home to a variety of monkeys, some of which are known for their playful antics. while these creatures are undoubtedly charming, it is important to exercise caution and keep a close eye on your belongings. bali is home to a number of impressive animals, including elephants. these majestic creatures can be seen on safari tours around the island. there are various ways to interact with these majestic creatures without causing them harm, allowing visitors to observe their gentleness in their natural habitat. being in the presence of these gentle giants is both peaceful and awe-inspiring, a reminder of nature's beauty and power. it is also worth noting that bali is home to a diverse range of reptiles. komodo dragons, with their impressive size and imposing presence, are a particular highlight for nature enthusiasts. in contrast, lizards can be found in a wide variety of habitats and are typically harmless. they play an important role in maintaining the ecosystem's health. 
bali is not only a place of cultural and natural beauty, but also home to a vibrant array of wildlife. one day, you might be watching energetic monkeys swinging through the trees, and the next, you could be walking alongside an elephant or encountering exotic reptiles. bali is a paradise for nature and animal lovers, with endless opportunities to explore and connect with the wild side of life. \u201Cthe kloia villa event was not just a work trip; it was a team-building experience.\u201D while exploring bali's beauty, we also managed to complete our work. this demonstrated the potential for a harmonious combination of work and vacation. whether it was temple visits, night gatherings by the pool, or business meetings in the villa, everything came together to create an unforgettable experience. we are now looking forward to our next destination. please advise on the next destination."
},
{
"title":"A Beginner's Guide to Performance Testing",
"body":"Imagine this scenario: You've created an awesome application. Developers have systematically created a thorough set of unit tests for all features, the testing team has successfully conducted user acceptance tests, and you have extensively tested your application in a variety of scenarios. You've also run cross-browser testing and tested your application on a variety of mobile devices to...",
"post_url":"https://www.kloia.com/blog/a-beginners-guide-to-performance-testing",
"author":"Elif Ozcan",
"publish_date":"21-<span>May<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/elif-ozcan",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/begginers-guide-to-performance-testing-blog.png",
"topics":{ "test-automation":"Test Automation","behavior-driven-development":"Behavior Driven Development","performance":"performance","qa":"QA","performance-testing":"Performance Testing","qateam":"qateam" },
"search":"21 <span>may</span>, 2024a beginner's guide to performance testing test automation,behavior driven development,performance,qa,performance testing,qateam elif ozcan imagine this scenario: you've created an awesome application. developers have systematically created a thorough set of unit tests for all features, the testing team has successfully conducted user acceptance tests, and you have extensively tested your application in a variety of scenarios. you've also run cross-browser testing and tested your application on a variety of mobile devices to ensure that it works smoothly across platforms. everything looks perfect, and as soon as you launch the app, the users begin to arrive as expected. suddenly, calamity strikes! the application crashes unexpectedly and refuses to launch. you are surprised by this unexpected finding, after all that exhaustive testing. at this point, you realise the value of non-functional tests, especially performance tests. for this reason, i will focus on performance testing and its importance in this blog post. i will examine the common misconceptions and overlooked aspects that may surface before, during, and following the process of performance testing. understanding performance testing in software testing, many new technologies, tools, and test methods have emerged in recent years. especially in the process from the beginning to the end user, many new test principles and test tools have been developed for the acceptance tests of applications. we briefly define these tests and test approaches as functional tests. however, as technology advances and the user base grows, this presents a big challenge to the system\u2019s ability to reliably operate under high loads and long response times. considering these issues, you can conclude that the system is not operating at its optimal level. but to assess a system's performance, you want more than just conjecture\u2014you also need data. to obtain this data, performance tests, which are non-functional tests, must be conducted in conjunction with functional tests during the application testing process. if you can not measure it, you can not improve it! performance tests put the system through rigorous tests to assess how well it performs. it is an important and difficult step in the software development process. it's like giving your software a speed, capacity, scalability, and stability check-up instead of just looking for functionality issues! in this extensive assessment of a system\u2019s performance, different methods are used to exercise specific aspects of each system. load testing reveals how the system performs under high-traffic scenarios. endurance testing measures the system\u2019s durability over extended periods. scalability testing looks into how well the system can adapt to the changing load required by managing scaling processes, either up or down. stress testing and volume testing are other examples of these methods. all of these methods provide crucial information that helps improve system performance overall. performance testing is more necessary than ever the rapid speed of technical developments and the regular stream of system changes have made performance testing more important than ever. in the rush to push features out to keep up with the rapidly changing technological scene, the emphasis on performance might be unintentionally overlooked. 
neglecting performance testing in the face of rapid technological development can lead to unexpected issues, such as application performance bottlenecks and system vulnerabilities. performance testing can forecast product behaviour and the product's response to user activities under workload in terms of responsiveness and stability when implemented correctly. through performance testing, you can find out about application performance bottlenecks, system benchmarks to see which ones perform the best, and improvements to the infrastructure. more traffic from more devices: as your user base expands, so does the demand for apps. users can access your app from a variety of locations with varying network settings. performance testing ensures that your application can handle more users without sacrificing reliability or speed, and it also finds and fixes latency issues, providing a smooth experience to users worldwide. resource optimization: cost-effectiveness depends on the effective use of resources. when resource-intensive areas are identified through performance testing, cloud infrastructure or server deployments can be optimised and cost-effectively reduced. competitive edge: in a competitive market, poor performance may lead customers to look for alternatives. thorough performance testing helps to maintain a competitive advantage by improving the user experience. studies show that users tend to abandon a website or app if the page loading time increases. this impatience highlights the critical importance of performance testing, as it directly impacts user retention. user satisfaction and retention: in today's digital environment, users expect fast response times and easy interaction with software. users are less tolerant of slow or unreliable applications, and their level of satisfaction has a direct impact on retention. performance testing is critical for meeting and exceeding user expectations, ensuring that apps continuously offer a pleasant, responsive, and smooth user experience. cost saving: performance testing helps in the detection and resolution of problems before they turn into significant ones. in the long term, finding and fixing performance issues through testing can save a lot of money. fixing performance issues during the development process is usually less expensive than correcting them after the application has been released into production. common misconceptions and overlooked aspects mistake: focusing solely on functionality despite its criticality, performance testing is sometimes overlooked or ignored during the software development lifecycle. many project teams spend a great deal of resources testing the functionality of the system but spend little or no time doing performance testing. teams usually focus on functional tests to ensure that the product is functioning properly while overlooking or paying less attention to performance tests. overemphasising functional testing may create a false sense of security, as a well-functioning feature does not guarantee excellent performance in real-world scenarios. misconception: performance testing is cheap, easy, and quick the common misconception that performance testing is quick, easy, and cheap frequently results in important mistakes being made during the software development process. despite popular opinion to the contrary, successful performance testing needs careful preparation, methodical execution, and an in-depth understanding of the architecture of the application. 
this is not just a task to be completed on a development checklist; rather, it is an essential procedure that calls for resources, time, and knowledge. the difficulty of data preparation, in particular, emphasises the need to allocate the resources required to precisely simulate the many scenarios under which the system operates while ensuring that the performance testing findings are accurate and reliable. ignoring this factor might result in a variety of problems, such as subpar user experience and system crashes during heavy usage. it is critical to acknowledge the myth that performance testing is a simple, low-cost operation to guarantee the overall success and dependability of a software product. misconception: no need for a production-like environment furthermore, the environment fallacy in performance testing highlights the misconception that testing with an environment different from actual production can produce accurate results. it is best practice to set up dedicated environments for real-time performance testing, ensuring that they are instantiated when needed and destroyed as soon as the tests are completed. testing under conditions that differ from the production environment frequently produces inaccurate results because it ignores critical performance factors such as operating system settings, hardware configurations, and concurrently running apps. achieving dependable performance testing demands a meticulous effort to closely match the test and production environments, ensuring that the findings appropriately reflect the system's performance under real-world conditions. misconception: adding more hardware will solve performance issues it is commonly believed that performance testing is unnecessary because any issues found can be fixed with additional hardware - by adding servers, memory, etc. this belief frequently surfaces when teams experience system slowdowns or performance limitations. it is easy to jump to the conclusion that adding more cpus, memory, or storage will instantly address the issue at hand rather than thoroughly analysing and improving the current system. upgrading hardware may help temporarily, but it will not solve underlying software or architectural issues that are the root of the performance issues. effective performance improvements require an approach that is deeper than just adding more hardware, one that takes into account software architecture, bottleneck identification, code optimisation, and behaviour analysis of the system. mistake: scenarios that do not represent real user behaviour a frequently overlooked aspect of performance testing is its inability to simulate real-world scenarios accurately. understanding and recognizing your customers' actions is critical while doing performance testing. each user exhibits their own behaviours, interacts differently, and has different requests. tailoring performance tests to imitate different user personas and their distinct behaviours helps you see how the system will perform under various usage patterns. performance testing becomes more comprehensive and realistic by modelling scenarios that match actual user behaviours, such as variable traffic loads, transaction types, or geographic locations. qa engineer walks into a bar. orders a beer. orders 0 beers. orders 999999999 beers. orders a lizard. orders -1 beer. orders a sfdeljknesv. but the first customer comes in and asks where the toilet is. the bar bursts into flames and the customer kills everyone. 
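to illustrate the point above about modelling real user behaviour, here is a rough ruby sketch (not part of the original post) that drives a weighted mix of user actions against a placeholder base url; the url, paths, weights, and user counts are all assumptions, and a real load test would normally use a dedicated tool such as jmeter, k6, or gatling, so treat this only as a minimal illustration of weighting scenarios by how often real users perform them:

require 'net/http'
require 'uri'

BASE_URL = 'https://example.com'   # placeholder system under test

# weights describe how often real users perform each action
ACTIONS = [
  { path: '/',         weight: 6 },  # most users just browse the home page
  { path: '/search',   weight: 3 },  # some of them search
  { path: '/checkout', weight: 1 }   # only a few reach checkout
]

def pick_action
  pool = ACTIONS.flat_map { |a| [a] * a[:weight] }
  pool.sample
end

threads = Array.new(10) do           # 10 concurrent simulated users
  Thread.new do
    20.times do                      # each simulated user sends 20 requests
      action = pick_action
      started = Time.now
      response = Net::HTTP.get_response(URI(BASE_URL + action[:path]))
      elapsed = ((Time.now - started) * 1000).round
      puts action[:path] + ' -> ' + response.code + ' in ' + elapsed.to_s + ' ms'
    end
  end
end
threads.each(&:join)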
conclusion in this blog post, i have talked about performance testing in general terms and the misconceptions in this process. performance testing plays a vital role in delivering high-quality software applications that meet user expectations. by evaluating performance characteristics, identifying bottlenecks, and optimising system components, performance testing ensures an application's responsiveness, scalability, stability, and reliability."
},
{
"title":"Creating End-to-End Web Test Automation Project from Scratch \u2014 Part 4",
"body":"In the previous blog post, you have configured and executed your tests in parallel with Selenium Grid. Now it is time to dockerize your web project! Let\u2019s start with installing docker on your machine. Let\u2019s Create and Configure Our Web Test Automation Project! Let\u2019s Write Our Test Scenarios! Bonus: Recording Failed Scenario Runs in Ruby Let\u2019s Configure Our Web Test Automation Project for...",
"post_url":"https://www.kloia.com/blog/creating-end-to-end-web-test-automation-project-from-scratch-part-4",
"author":"Muhammet Topcu",
"publish_date":"20-<span>Apr<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/end-to-end-web-test-automation-blog.webp",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","docker":"Docker","behavior-driven-development":"Behavior Driven Development","selenium":"Selenium","qa":"QA","performance-testing":"Performance Testing","qateam":"qateam","manual-testing":"manual testing","endtoend":"endtoend" },
"search":"01 <span>oct</span>, 2024creating end-to-end web test automation project from scratch \u2014 part 4 test automation,software testing,docker,behavior driven development,selenium,qa,performance testing,qateam,manual testing,endtoend muhammet topcu in the previous blog post, you have configured and executed your tests in parallel with selenium grid. now it is time to dockerize your web project! let\u2019s start with installing docker on your machine. let\u2019s create and configure our web test automation project! let\u2019s write our test scenarios! bonus: recording failed scenario runs in ruby let\u2019s configure our web test automation project for remote browsers and parallel execution let\u2019s dockerize our web test automation project bonus: recording scenario runs on docker with selenium video! let\u2019s integrate our dockerized web test automation project with ci\/cd pipeline! auto-scaling and kubernetes integration with keda installing docker docker\u2019s official page gives elaborate instructions about installing docker on various machine types, so i won\u2019t get into details. you can download and install the corresponding docker package to your device: docker for macos docker for windows docker for linux let\u2019s start docker desktop and open the terminal. if everything is installed successfully, you should see a similar response when you write `docker` in the terminal: then let\u2019s create your project image! create an image of your project docker images are configured and created via a `dockerfile`. this is the blueprint of your image. let\u2019s create a file named `dockerfile` without a file extension in your project folder and start populating the file with your configurations: base image: base images are self-explanatory. they form the basis of your image. the base image comes with os distribution, some programs, and dependencies. you can search and find images in the dockerhub. from ruby:3.0 using the from keyword, you chose your ruby:3.0 as your base image with all the dependencies you need to execute your web project. so all the things you install and configure will be upon this base image. i decided on this ruby image since it has all you need (and probably more) to run your project. but if you want to create more lightweight images, you may want to use ruby-slim or alpine images as your base image. but beware that you need to install more packages manually. run apt update && apt install git #run apk update && apk add git with run keyword, you can execute commands. with apt update && apt install git, you install git to your image. since the ruby:3.0 is a debian distribution, you use apt command. if you use an alpine based distro, you will need apk command. now let\u2019s specify your working directory. workdir \/usr\/src\/app the workdir instruction sets the working directory for any run, cmd, entrypoint, copy and add instructions that follow it in the dockerfile. copy gemfile \/usr\/src\/app then let's copy your gemfile to your working directory. copy gemfile \/usr\/src\/app and then install the gems that your projects require. run gem install bundler && bundle install --jobs=3 --retry=3 now let\u2019s copy the rest of the files of your project. copy . \/usr\/src\/app note: you might ask why you separately added your gemfile to the directory, instead of copying all of your project files first and then installing gemfile contents. i will touch on this subject in the next section. 
now you state with which command your image will be started: cmd parallel_cucumber -n 2 and finally, you state which ports you will expose: expose 5000:5000 then putting everything together: from ruby:3.0 run apt update && apt install git #run apk update && apk add git workdir \/usr\/src\/app copy gemfile \/usr\/src\/app run gem install bundler && bundle install --jobs=3 --retry=3 copy . \/usr\/src\/app cmd parallel_cucumber -n 2 -o '-p parallel' expose 5000:5000 building docker image now you have your dockerfile ready, let\u2019s build your image! first, navigate to your project directory in the terminal. docker build -t muhammettopcu\/dockerize-ruby-web:1.0 . let\u2019s look at the above code where: `docker build` is your main command `-t` is an option flag that enables you to tag your image `muhammettopcu` is my user name in dockerhub. you change it with yours. `dockerize-ruby-web` is your image\u2019s name `1.0` is the version of your image `.` refers to the current directory, in which your dockerfile resides. and your docker image is successfully created! let\u2019s check it with the following command: docker images building multi-arch images if you want your image to work on host machines with different cpu architectures, you need to build images compatible with different architectures. docker buildx build -t muhammettopcu\/dockerize-ruby-web:1.0 --push --platform linux\/amd64,linux\/arm64 . as you can see, you added `buildx` command and your platform types with the `--platform` option. `--push` lets you push your images to dockerhub. now you can see your image has different arch types! note that you can use different processor types as well, such as linux\/arm64\/v8 and linux\/arm\/v7. now let\u2019s talk about why the order of the command is important in a dockerfile and how you benefit from them. docker layers docker builds the images layers upon layers. every layer has a unique hash id. this layered structure makes it possible to re-build, download or upload images faster by only writing the changed layers and getting the other layers from the cache. to make it more clear, let me write it in a list: docker uses cached files. in a docker file, every line creates a layer. when re-building, it uses cached files from top to bottom until it finds a changed layer. then docker builds the rest of the layers from scratch. when creating the dockerfile, you should write the lines from the least likely to change to the most likely to change. let\u2019s demonstrate this to make it more clear: first, i am going to make a small change to your project file and rebuild your image: as you can see, i fully utilized my cache and didn\u2019t need to download or install anything else other than updating your source code. now i am going to change the order of your dockerfile like below. note that with this configuration, first, i copy all your project files to your image and then install the gems. from ruby:3.0 run apt update && apt install git workdir \/usr\/src\/app copy . \/usr\/src\/app run gem install bundler && bundle install #run apk update && apk add --no-cache build-base && apk add git cmd parallel_cucumber -n 2 -o '-p parallel' expose 5000:5000 now let\u2019s say i made a change in your code and want to rebuild your image. (i added a space to one of your files for this purpose.) as you can see, even though i only changed the source code, i needed to re-download and install the gem dependencies. 
that is because when building an image, docker cancels cache usage completely after the first changed layer. so it is best to locate your code source near the end of the file since it is the most likely to change. now let\u2019s run your project and see if your code is executed or not: first, with the `docker images` command, list the images: and copy the id of the latest version of your image from here, which is \u201C25f5ff8c731a\u201D in my case. then type `docker run 25f5ff8c731a`. you see that your scenarios do not run since it can not find drivers, but do not worry. you will dockerize selenium grid as well! :) dockerize selenium grid let\u2019s download selenium grid images to your machine. if you use a macbook with apple silicon (m1\/m2) download the below images: seleniarm\/hub seleniarm\/node-chromium seleniarm\/node-firefox otherwise, download these: selenium\/hub selenium\/node-chrome selenium\/node-firefox to download these images, you need to use `docker image pull ` command. so since i use macbook with m1 chip, i will use: `docker image pull seleniarm\/hub` now if you downloaded three of them, let\u2019s configure them with docker compose. docker compose configuration docker compose is a yml file to configure more than one image for them to be run in coordination. it allows you to write simple command lines without specifying everything with long strings. let's create your file bit by bit. version: \"3\" services: selenium-hub: image: seleniarm\/hub container_name: selenium-hub ports: - \"4442:4442\" - \"4443:4443\" - \"4444:4444\" networks: - dockerize-network `version` is the version of your docker compose `services` where you list your service name and the image it will use. `selenium-hub` is the service name `image` is your image `container_name` is optional. if you do not state, docker would generate randomly. `ports` is to map your host\u2019s ports with the container\u2019s. it is in the host:container format. note: when mapping ports in the host:container format, you may experience erroneous results when using a container port lower than 60, because yaml parses numbers in the format xx:yy as a base-60 value. so it is better to map them as strings. `networks` defines the network this container will be connected to. now your nodes: chrome: image: seleniarm\/node-chromium container_name: selenium-chrome shm_size: 2gb depends_on: - selenium-hub environment: - se_event_bus_host=selenium-hub - se_event_bus_publish_port=4442 - se_event_bus_subscribe_port=4443 - se_node_max_instances=4 - se_node_max_sessions=4 - se_node_session_timeout=180 networks: - dockerize-network `shm_size` is the shared memory size. you need a larger one since you run multiple browser instances on these nodes. `depends_on` allows you to prioritize the execution of services. if a service depends on another service, it will not start until the service it depends on has started. `environment` lets you define environment variables. most of the variables are self-explanatory: `se_node_max_instances` defines how many instances of the same version of the browser can run. `se_node_max_sessions` defines the maximum number of concurrent sessions that will be allowed. 
\u00A0 the firefox node: firefox: image: seleniarm\/node-firefox container_name: selenium-firefox shm_size: 2gb depends_on: - selenium-hub environment: - se_event_bus_host=selenium-hub - se_event_bus_publish_port=4442 - se_event_bus_subscribe_port=4443 - se_node_max_instances=4 - se_node_max_sessions=4 - se_node_session_timeout=180 networks: - dockerize-network now let\u2019s save your file and spin up your grid with the command below: and the network configuration: networks: dockerize-network: name: dockerize-network driver: bridge `network` is for the network configuration. `dockerize-network` is the network declaration. `name` is the name of your network. `driver` is the type of your network. so your final yml file looks like this # to execute this docker-compose yml file use `docker compose -f docker-compose-seleniarm.yml up` # add the `-d` flag at the end for detached execution # to stop the execution, hit ctrl+c, and then `docker compose -f docker-compose-seleniarm.yml down` version: \"3\" services: selenium-hub: image: seleniarm\/hub container_name: selenium-hub ports: - \"4442:4442\" - \"4443:4443\" - \"4444:4444\" networks: - dockerize-network chrome: image: seleniarm\/node-chromium shm_size: 2gb depends_on: - selenium-hub environment: - se_event_bus_host=selenium-hub - se_event_bus_publish_port=4442 - se_event_bus_subscribe_port=4443 - se_node_max_instances=4 - se_node_max_sessions=4 - se_node_session_timeout=180 networks: - dockerize-network firefox: image: seleniarm\/node-firefox shm_size: 2gb depends_on: - selenium-hub environment: - se_event_bus_host=selenium-hub - se_event_bus_publish_port=4442 - se_event_bus_subscribe_port=4443 - se_node_max_instances=4 - se_node_max_sessions=4 - se_node_session_timeout=180 networks: - dockerize-network networks: dockerize-network: name: dockerize-network driver: bridge now let\u2019s save your file and spin up your grid with the command below: docker compose -f docker-compose-seleniarm.yml up note that here -f option means file and docker-compose-seleniarm.yml is the name of your compose file. now your grid is up and running. you can check it with http:\/\/localhost:4444\/. let\u2019s see which networks you have with `docker network ls` command: your network is here. let\u2019s inspect it with docker network inspect to see which containers are connected to it: okay, then let\u2019s run your project image using this network! docker run --network dockerize-network muhammettopcu\/dockerize-ruby-web:1.0 it looks like 2 of 8 scenarios failed. since these node images include debugging packages, you can watch the browsers in your container! first, click on the sessions tab, then the camera icon beside the session you want to see. the password for the vnc is \u201Csecret\u201D by default. now you can see what is going on in your container! as a final note, you can scale the node number of your browsers by using `--scale` option. for example, let\u2019s say that you want to spin up the grid with 4 firefox nodes and 2 chrome nodes. then you can use: \u00A0 docker compose -f docker-compose-seleniarm.yml up --scale chrome=2 --scale firefox=4 with this, we completed dockerizing selenium grid and our project. in the next chapter, we will look at how to integrate a ci\/cd pipeline into our project with jenkins. but before that, we will have a bonus chapter as well! stay tuned :)"
},
{
"title":"CSI Performans Benchmark",
"body":"Abstract: The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Using CSI third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code. There are many solutio...",
"post_url":"https://www.kloia.com/blog/csi-performance-benchmark",
"author":"Omer Faruk Urhan",
"publish_date":"14-<span>Mar<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/omer-faruk-urhan",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/CSI-Performans-Benchmark-blog.png",
"topics":{ "kubernetes":"Kubernetes","longhorn":"Longhorn","container-storage-interface":"Container Storage Interface","csi":"CSI","nfs":"NFS","vsphere":"vSphere" },
"search":"29 <span>mar</span>, 2024csi performans benchmark kubernetes,longhorn,container storage interface,csi,nfs,vsphere omer faruk urhan abstract: the container storage interface (csi) is a standard for exposing arbitrary block and file storage systems to containerized workloads on container orchestration systems (cos) like kubernetes. using csi third-party storage providers can write and deploy plugins exposing new storage systems in kubernetes without ever having to touch the core kubernetes code. there are many solutions developed as csi driver plugins. however, solutions that can be easily integrated with on-prem environments are limited. one must perform some tests to choose between these solutions and see their pros\/cons. this article proposes a benchmark test to discover reliable container storage interface (csi) drivers for kubernetes clusters in on-premises environments. the test assesses the performance of longhorn, nfs, and vsphere csi drivers. the experiment entailed deploying virtual machines in a vmware architecture and configuring a kubernetes cluster (v1.24.9). using best practices, longhorn, csi driver nfs, and vsphere csi driver were installed on these kubernetes clusters. methodology: cluster configuration: virtual machines were established using the vmware architecture the virtual machines were configured to run a kubernetes cluster (v1.24.9). driver installations: by recommended standards, longhorn , nfs csi driver , and vsphere csi driver were deployed on the kubernetes cluster. storageclass configuration: each storage solution was assigned a unique \"storageclass\" on the cluster. the \"storageclass\" setups were used during the tests. testing environment: statefulset deployments were established. testing was carried out in pods within the statefulsets. persistent volume claims (pvcs) were produced in read-write-once (rwo) access mode. test parameters: test parameters included a \"random rw\" pattern. to obtain accurate results, the disc cache was disabled. tests were run concurrently for all csi kinds on several hosts. tool usage: fio (flexible io tester): fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=4g --filename=\/var\/data\/testfile ioping ioping -c 200 test outputs: benchmark testing yielded throughput, io, and latency values, displayed in three separate graphs. throughput: throughput is a metric that describes the amount of data able to flow through a point in the data path over a given time. throughput is typically the best storage metric when measuring data that needs to be streamed rapidly. input\/output (i\/o): iops, or input\/output operations per second, measures the number of storage transactions processed through a system each second. this metric is a great way to measure smaller data objects like web traffic logs. i\/o latency: latency describes the time required for a sub-system to process a single data request or transaction. latency also includes the time it takes to find the required data blocks and prepare to transfer data. \u00A0 conclusion: the benchmark test results show how longhorn, nfs, and vsphere csi drivers perform in a kubernetes cluster on vmware infrastructure. these findings can help you choose the best csi driver for your on-premises kubernetes infrastructure. each csi driver has different pros\/cons besides performance. it is necessary to consider these values as well as performance when making a choice."
},
{
"title":"Creating End-to-End Web Test Automation Project from Scratch \u2014 Part 3",
"body":"Let\u2019s Configure Our Web Project for Remote Browsers and Parallel Execution In the previous parts of this blog post series, we created our project and wrote a scenario together. I hope you were able to write 4 scenarios. Because you are going to run them in parallel and remotely! Let\u2019s Create and Configure Our Web Test Automation Project! Let\u2019s Write Our Test Scenarios! Bonus: Recording F...",
"post_url":"https://www.kloia.com/blog/creating-end-to-end-web-test-automation-project-from-scratch-part-3",
"author":"Muhammet Topcu",
"publish_date":"12-<span>Feb<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/end-to-end-web-test-automation-blog%20%284%29.png",
"topics":{ "test-automation":"Test Automation","selenium":"Selenium","ddd":"DDD","java":"java","qa":"QA","test-driven-development":"Test Driven Development","data-driven-test":"Data-Driven Test","endtoend":"endtoend" },
"search":"12 <span>feb</span>, 2024creating end-to-end web test automation project from scratch \u2014 part 3 test automation,selenium,ddd,java,qa,test driven development,data-driven test,endtoend muhammet topcu let\u2019s configure our web project for remote browsers and parallel execution in the previous parts of this blog post series, we created our project and wrote a scenario together. i hope you were able to write 4 scenarios. because you are going to run them in parallel and remotely! let\u2019s create and configure our web test automation project! let\u2019s write our test scenarios! bonus: recording failed scenario runs in ruby let\u2019s configure our web test automation project for remote browsers and parallel execution let\u2019s dockerize our web test automation project bonus: recording scenario runs on docker with selenium video! let\u2019s integrate our dockerized web test automation project with ci\/cd pipeline! auto-scaling and kubernetes integration with keda remote driver configuration in part 1, you configured your drivers to run your tests on your local machine. now you need to configure remote drivers. first, you need a remote url variable. as you did with all general configurations, you are going to state this inside your base_config.rb file. @remote_url = env['remote_url'] || 'http:\/\/localhost:4444' def self.remote_url @remote_url end with this code, you create a variable named remote_url. it means that if there is an environment variable called remote_url, your code uses the value of that variable. if not, then it uses \u201Chttp:\/\/localhost:4444\u201D. why did we choose it as your default remote_url? you will see it in the following section. note: you can pass these environment variables with your custom values when executing this test script on the command line. you will see an example of this shortly. now go to the file where your driver configurations are. in your driver.rb file, add your remote driver configurations under your case block. when 'remote-chrome' capybara.register_driver :selenium do |app| options = selenium::webdriver::chrome::options.new options.add_argument('--window-size=1280,720') add_default_values(options) capybara::selenium::driver.new( app, browser: :remote, url: baseconfig.remote_url, :options => options ) end when 'remote-firefox' capybara.register_driver :selenium do |app| options = selenium::webdriver::firefox::options.new options.add_argument('--window-size=1280,720') add_default_values(options) capybara::selenium::driver.new( app, browser: :remote, url: baseconfig.remote_url, :options => options ) end end as you can see, the only thing that differs from normal drivers is that when you are creating a new driver instance, you state your browser as \u201Cremote\u201D and give a url that your script is to be run through. note that this url is the one that you configured in your base_config.rb file a second ago. selenium grid configuration now is the time to configure selenium grid! selenium grid is what enables us to run your tests in parallel across multiple devices with grid, you can easily run tests in parallel on multiple machines it is possible to run your tests on different browser versions it enables cross-platform testing (e.g.: windows, macos) if you are ready, let\u2019s get down to it! first things first, go and download selenium server (grid) from its official website. selenium server is a .jar file, so you need java installed on your machine to use it. you can download java from its official website. 
at the time i wrote this blog post, selenium 4 hadn\u2019t been released yet. you do not need to download webdrivers any more, but if you need to use earlier versions of selenium for a reason, you can find them below: you need to download the web drivers that you are going to use in your tests. these web drivers can be in the same directory as your selenium-server.jar file, or you can state their directory in the path. for this, please download the suitable version for your browser below. note that the version of the driver should be the same as your browser\u2019s. chromedriver: http:\/\/chromedriver.chromium.org\/ geckodriver (mozilla): https:\/\/github.com\/mozilla\/geckodriver\/releases for macos, follow these steps: move the chromedriver file to the \"\/usr\/local\/bin\/\u201D folder, which you can reach by opening finder, and using cmd+shift+g combination. open the terminal and type `nano .bash_profile` command to edit .bash_profile file. add export path=\"\/usr\/local\/bin\/chromedriver\u201D to the last line. press control+x, y and enter respectively. now you can initialize your selenium grid! you can start selenium grid either in standalone mode or with hub & node configuration. standalone: this mode combines all grid components into one. with a single command, you can have a fully functional grid in a single process. but standalone mode can only run on a single machine. hub & node: hub & node is the most preferred mode because it allows you to: combine different machines (with different os and browser versions) in a single grid have a single entry point to run webdriver tests in different environments scaling capacity up or down without tearing down the grid let\u2019s demonstrate how the standalone works. spinning up in selenium standalone mode first, you need to spin up the grid in standalone mode by typing the command below in the terminal (use the version of your own jar file): java -jar selenium-server-4.9.0.jar standalone note that you need to be in the same directory as the jar file or state its full directory to start it from another directory. as you can see, the selenium server detects the drivers you downloaded and added to your path automatically. it then connects these drivers as nodes to your hub. (if you use selenium >4.0, it automatically detects browsers and downloads suitable webdrivers for them.) let\u2019s see how your grid looks by going to http:\/\/localhost:4444 via any browser. does this url look familiar? yep, that\u2019s your default remote-url. selenium server sets port 4444 for grid by default. let\u2019s examine the grid and its components: this is your node. you can consider nodes as machines or containers. these are the drivers that are installed on your node. you can run your tests on these browsers. sessions are the initialized browsers on which your tests are run. currently, no tests are running, so it is 0. this is the maximum number of sessions that can be opened at the same time. default is 8, so you cannot run more than eight tests on this machine simultaneously. you can override it with --max-sessions cli option. you can find complete cli options for selenium grid here. now let\u2019s run one of your tests and see if the session counter increases or not\u2026 \u2026 \u2026did it increase? no? you did not change your default driver; that\u2019s why! let\u2019s go to base_config.rb and change your driver to \u201Cremote-chrome\u201D. 
@browser = env['browser'] || 'remote-chrome' # available options # * chrome # * firefox def self.browser @browser end now run it again: \u00A0 now you can see that your machine is recognised by the grid! now let\u2019s try the hub & node mode. spinning up selenium in hub & node mode for demonstration purposes, i am going to use my macbook as hub and a node. and then connect my windows laptop as a node as well. if you have another device at your disposal, use it as well for this example. to start selenium server in hub mode, you need to type the following command in your terminal: java -jar selenium-server-4.9.0.jar hub you can see that you have no nodes connected yet. now by typing the below code, i connect my macbook as a node to it. note that since it is the same machine, i don\u2019t need to state my hub address. you do it as well. java -jar selenium-server-4.9.0.jar node now you can see your device as a node. i am going to add my windows machine as a node as well. to do this, download the jar file to the node machine, and type the below command into the command line of the node machine: java -jar selenium-server-4.9.0.jar node --hub http:\/\/192.168.0.27:4444\/grid\/register --port 5555 the bold numbers should be your host\u2019s ip number. you can see it on the terminal where you initiated your hub. now you can see both machines on your grid! but in order to use them in parallel, you need to configure your project for concurrent runs. so, let\u2019s get down to it! note: for every node, 2 cpus are recommended, and one chrome instance that is up and running uses approximately 120 mb of ram. but you can find the optimal node and instance ratio for your device through trial and error. parallel test configuration let\u2019s open your gemfile and add 'parallel_tests' gem to it. # gemfile source 'https:\/\/rubygems.org' gem 'capybara' gem 'cucumber' gem 'selenium-webdriver' gem 'rspec' gem 'webdrivers' gem 'parallel_tests' then run `bundle update` command on the terminal. done! you need to define a default profile for cucumber to give execute commands for your tests from the terminal. now create a cucumber.yml file under the root directory. \u00A0 and write the below line inside it: default: \"--format pretty\" this makes cucumber shell reports more readable. okay, now you can run your tests in parallel. there are several variations for this. let\u2019s see them: 1. run your code in parallel by stating the number of sessions with -n argument. the below code will start two browser instances. parallel_cucumber -n 2 2. run your code in parallel by stating the browser type as well. parallel_cucumber -n 2 -o 'browser=remote-firefox' 3. start multiple parallel execution processes running the same tests but in different browsers. parallel_cucumber -n 2 -o 'browser=remote-chrome' & parallel_cucumber -n 2 -o 'browser=remote-firefox' note that in the second and third examples, you are changing the values of environmental variables with custom values with the -o option. now let\u2019s run your code in parallel! and that\u2019s it! now you can run your code remotely and in parallel! in the 4th part, you are going to dockerize your project and run it in containers in parallel. see you in the next chapter!"
},
{
"title":"Cloud Revolution in Local Infrastructure with AWS Outposts Family",
"body":"AWS Outposts represents a critical piece in the evolving landscape of hybrid cloud strategies. This service, by seamlessly integrating cloud flexibility and scalability with the security and control of on-premises infrastructure, addresses the complex requirements of modern businesses. What is the AWS Outposts Family? AWS Outposts is a service by Amazon Web Services that brings the AWS c...",
"post_url":"https://www.kloia.com/blog/cloud-revolution-in-local-infrastructure-with-aws-outposts-family",
"author":"Enes Cetinkaya",
"publish_date":"09-<span>Feb<\/span>-2024",
"author_url":"https://www.kloia.com/blog/author/enes-cetinkaya",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/Cloud_Revolution_in_Local_Infrastructure_with_AWS_Outposts_Family-2.png",
"topics":{ "aws":"AWS","devops":"DevOps","cloud":"Cloud","aws-outposts":"AWS Outposts","infrastructure":"infrastructure" },
"search":"09 <span>feb</span>, 2024cloud revolution in local infrastructure with aws outposts family aws,devops,cloud,aws outposts,infrastructure enes cetinkaya aws outposts represents a critical piece in the evolving landscape of hybrid cloud strategies. this service, by seamlessly integrating cloud flexibility and scalability with the security and control of on-premises infrastructure, addresses the complex requirements of modern businesses. what is the aws outposts family? aws outposts is a service by amazon web services that brings the aws cloud infrastructure and services directly to the customer's data centre. this service comprises fully managed and customisable physical servers and networking equipment, functioning as an extension of aws\u2019s cloud infrastructure. rack: a rack is a metal frame that neatly and securely holds hardware like multiple servers, network devices, and storage units. in aws outposts, this rack contains the equipment that runs aws's cloud infrastructure and services. server: a server is a powerful computer that performs tasks like data processing, storage, and network services. the servers in aws outposts provide the necessary power and storage to run aws cloud services locally. reasons for using aws outposts low latency: provides low latency for critical applications when processed locally. data sovereignty and local processing: meets requirements for local data storage and processing. hybrid cloud solution: offers a flexible and scalable hybrid cloud solution by integrating cloud and on-premises resources. how do aws outposts work? hardware integration: aws physically places servers with computing and storage capacity in the customer\u2019s data center. network connection: outposts connect to local networks and the aws cloud network, facilitating continuous data synchronization and processing capabilities. management and orchestration: managed through the aws management console, it integrates with aws services and streamlines the orchestration of resources across cloud and local environments. when are aws outposts used? critical workloads: for high-speed and low latency-dependent workloads. legal and local processing requirements: when data needs to be stored or processed locally due to regulatory requirements. hybrid cloud needs: when integration of local and cloud resources is required,. expanded use cases for aws outposts gaming sector - data latency: aws outposts plays a critical role in the gaming industry. for instance, for popular games like \"league of legends\" by riot games, managing servers across different countries is essential. such games require low latency, and aws outposts meets this need with its local data processing capabilities. using local servers to provide a faster and uninterrupted gaming experience significantly enhances game performance. manufacturing sector - data residency: in manufacturing, the intense data flow from iot devices in factory environments can be processed locally with aws outposts. considering the need for data security and rapid processing, aws outposts facilitates the local storage and processing of data generated in factories, meeting data residency requirements. this not only increases data security but also provides real-time data processing and analysis, contributing to more efficient production processes. healthcare sector - data security and compliance: in healthcare, the security and privacy of patient data are of utmost importance. 
aws outposts enables hospitals and healthcare organisations to process and store data locally in compliance with regulations like hipaa. this enhances the security of patient data while still leveraging the benefits of cloud-based analytics and processing capabilities. financial services - regulatory compliance and data analysis: the finance sector is subject to strict regulatory requirements, often necessitating local data storage. aws outposts allow banks and financial institutions to comply with these regulations while benefiting from data analysis and processing capabilities. this enables faster and more effective decision-making in areas such as risk management and customer service. education - remote learning and research: universities and educational institutions can use aws outposts to support remote learning platforms and research projects. processing data locally provides students with low-latency access and allows researchers to quickly process large data sets. retail - customer experience and inventory management: in retail, aws outposts can help stores process customer data locally to provide personalized shopping experiences. it also offers real-time data processing capabilities for inventory management and logistics operations, leading to more efficient inventory control and increased customer satisfaction. energy and resources - data processing and monitoring: in the energy sector, particularly in oil and gas production, aws outposts can perform data-intensive monitoring and analysis locally. this enhances production efficiency while providing the necessary real-time data processing for equipment maintenance and fault detection. aws management console integration aws outposts is seamlessly integrated with aws's central management interface, the aws management console. this integration simplifies the management of aws outposts and offers several advantages: centralized management: users can manage cloud and local resources on aws outposts via the aws management console. this allows for centralized viewing, monitoring, and management of resources. coordination and control: the aws management console facilitates the configuration and updating of aws outposts. users can manage application deployments, security settings, and network configurations all from one interface. automatic updates and maintenance: aws outposts receive automatic updates from the aws cloud. this ensures a constantly updated infrastructure and access to the latest aws features. metadata management aws outposts effectively processes the metadata required for application and data management: metadata transfer and processing: aws outposts collects and processes metadata from running applications and databases. this metadata is used to monitor application performance, resource usage, and security status. efficiency and consistency: the management of metadata maintains continuous consistency and efficiency across cloud and on-premises resources. this ensures the smooth operation of applications even in complex hybrid environments. integration and automation: aws outposts can integrate with services like aws lambda and automatically share metadata with these services. this allows for advanced automation and smarter application management. conclusion aws outposts offers dynamic and flexible solutions to the technological needs of businesses. this service addresses low latency, data sovereignty, and local processing requirements while also providing the advantages of cloud computing flexibility and scalability. 
aws outposts has become an integral part of modern businesses' hybrid cloud strategies, adapting to their changing needs and maximizing the benefits of cloud computing technologies."
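The management and orchestration described above can also be driven programmatically. As a minimal, illustrative sketch (not from the original post), the snippet below lists the Outposts in an account and launches an EC2 instance into a subnet associated with one of them, using the AWS SDK for Ruby; the region, AMI ID and subnet ID are placeholder assumptions.

# sketch: enumerate Outposts and place a workload on one of them (AWS SDK for Ruby v3)
require 'aws-sdk-outposts' # Aws::Outposts::Client
require 'aws-sdk-ec2'      # Aws::EC2::Client

outposts = Aws::Outposts::Client.new(region: 'eu-west-1')
outposts.list_outposts.outposts.each do |op|
  puts "#{op.name} (#{op.outpost_id}) in #{op.availability_zone}"
end

# Workloads land on an Outpost by targeting a subnet that belongs to it.
ec2 = Aws::EC2::Client.new(region: 'eu-west-1')
resp = ec2.run_instances(
  image_id:      'ami-0123456789abcdef0',    # placeholder AMI
  instance_type: 'm5.large',
  min_count:     1,
  max_count:     1,
  subnet_id:     'subnet-0123456789abcdef0'  # placeholder subnet on the Outpost
)
puts "launched #{resp.instances.first.instance_id} on the Outpost subnet"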
},
{
"title":"Effective Use of Hooks in Cucumber",
"body":"Test automation comes with some prerequisites and there are follow-up actions that need to be taken after testing. Cucumber, one of the popular tools for Behavior-Driven Development (BDD), enhances these processes more effectively. Specifically, the hooks feature in Cucumber automates pre and post actions. With the hooks feature, the test automation process becomes more repeatable, consi...",
"post_url":"https://www.kloia.com/blog/effective-use-of-hooks-in-cucumber",
"author":"Acelya Gul",
"publish_date":"12-<span>Dec<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/acelya-gul",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/Effective-Use-of-Hooks-in-Cucumber-BLOG%20%282%29.png",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","bdd":"BDD","behavior-driven-development":"Behavior Driven Development","cucumber":"Cucumber","hooks":"hooks","qa":"QA","qateam":"qateam" },
"search":"08 <span>mar</span>, 2024effective use of hooks in cucumber test automation,software testing,bdd,behavior driven development,cucumber,hooks,qa,qateam acelya gul test automation comes with some prerequisites and there are follow-up actions that need to be taken after testing. cucumber, one of the popular tools for behavior-driven development (bdd), enhances these processes more effectively. specifically, the hooks feature in cucumber automates pre and post actions. with the hooks feature, the test automation process becomes more repeatable, consistent, and efficient. hooks in cucumber are code blocks triggered automatically during the execution of a specific test scenario. these code blocks are executed either just before the scenarios start or immediately after they are completed. in testing terminology, such conditions are commonly referred to as test setup and test teardown. in cucumber, these conditions are referred as 'before hooks' and 'after hooks' identified with @before and @after tags. 'before hooks' are use for preparation tasks such as initiating the webdriver, using specific cookie values, while 'after hooks' are usedto automate tasks after the test such as saving screenshots, performing clean-up, or generating reports. using hooks ensures a smooth flow throughout the test process, leading to a more systematic and organized execution from start to finish. what are the advantages of using hooks? given the complex nature of test automation, we need tools that make processes simple and manageable. the hook feature in cucumber serves this very purpose. so, what are the advantages of using hooks? here are some key benefits: modularity and reuse: with hooks, steps that are common to test processes, such as specific start and end steps, are centralized and can be easily reused in different test cases. this encourages a modular structure of test cases and code reusability. consistency and efficiency: hooks ensure that scenarios start or end on a standardized basis and provide a base environment for tests. resource optimization: with hooks, it may be possible to use system or application-side resources more efficiently. because hooks help with reuse at every level, they help prevent unnecessary and frequent use of resources. what are the different types of cucumber hooks? hooks are defined as blocks of code that are automatically triggered at the start and end of test cases and play a critical role in improving the efficiency and streamlining of test processes. there are four main hook categories defined in cucumber. these are: scenario hooks, step hooks, conditional hooks, and global hooks. these four different types of hooks allow test processes to be managed more effectively, while at the same time enabling tests to be performed in a more consistent and controlled manner. especially in complex test scenarios, the structural advantages provided by hooks contribute to the execution of test processes with fewer errors and higher efficiency. scenario hooks: scenario hooks are designed to be executed for each scenario within a test suite. there are two main types of scenario hooks: before and after. these hooks are used to set preconditions before a scenario starts and to perform clean-up activities after a scenario has finished. before: this hook runs just before each scenario starts. it is used to set prerequisites such as preparing the test environment, initializing required data structures, or configuring dependencies. 
@before public void dosomethingbefore() { \/\/ do something before each scenario } after: this hook, which runs after each scenario is completed, is used to perform the necessary cleaning operations afterward. for example, closing opened resources, deleting temporary data, or saving test results can be done with this hook. @after public void dosomethingafter(scenario scenario) { \/\/ do something after the scenario } step hooks: step hooks are activated before and after each step of the test cases. these hooks work on the principle of 'invoke around', which means that if a beforestep hook is triggered, the afterstep hook will be triggered regardless of the result of the associated step. this feature allows special operations to be performed at the beginning and end of each test step, providing detailed control and customization at each stage of the test process. beforestep: @beforestep public void dosomethingbeforestep(scenario scenario){ } afterstep: @afterstep public void dosomethingafterstep(scenario scenario){ } conditional hooks: conditional hooks are selected based on the labels of the scenarios and run only under certain conditions. these hooks are not generic to every scenario as they are specific to certain tags and can be defined as @before(\u201Ctagname\u201D) or @after(\u201Ctagname\u201D). @after(\"@loginrequired and not @guestuser\") public void teardownlogin(scenario scenario){ \/\/ this hook only works at the end of scenarios with the 'loginrequired' tag and no 'guestuser' tag; for example, user logouts can be performed here. } global hooks: global hooks are special hooks that run at the very beginning and the very end of the test process. these hooks are triggered only once before all scenarios start or after all scenarios are completed, thus managing the global start and end actions of the test process. beforeall beforeall runs before all scenarios start. this hook is used for a broad setup at the beginning of the test process or to set initial conditions. @beforeall public static void beforeall() { \/\/ runs before all scenarios } afterall afterall runs after all scenarios have been completed. this hook is used for general cleanup at the end of the test process or for operations such as releasing resources. @afterall public static void afterall() { \/\/ runs after all scenarios } what are hooks example use cases? the key to success in test automation is to use the right tools in the right places. hooks is one of these tools and when applied correctly, it adds value to different stages of automation. here are some enlightening examples of hooks application scenarios: starting and closing the browser: hooks are ideal for steps that need to be performed in common at the beginning and end of tests. for example, if a web application is being tested, hooks can be used to manage the launch of a web browser before the test starts and the shutdown of the browser at the end of the test. \u00A0 public class driverhooksexample { webdriver driver; @before public void setup() { system.setproperty(\"webdriver.chrome.driver\", \"driverpath\/chromedriver\"); driver = new chromedriver(); } @after public void teardown() { driver.quit(); } } \u00A0 database operations: certain database operations may need to be performed during tests. for example, creating a specific database state before testing or undoing changes in the database after testing can be automated with hooks. 
public class databasehooks { private connection connection; @before public void connecttodatabase() { try { connection = drivermanager.getconnection(\"jdbc:mysql:\/\/localhost:3306\/kloiadb\", \"kloia\", \"secret\"); } catch (exception e) { e.printstacktrace(); } } @after public void closedatabaseconnection() { try { if (connection != null && !connection.isclosed()) { connection.close(); } } catch (exception e) { e.printstacktrace(); } } } automatic reporting and screen capturing: sometimes unexpected errors are encountered in test automation processes. in such cases, automatic reporting or taking screenshots at the time of the error can be very useful to analyze the problem later. with hooks, it is possible to configure such operations to be performed automatically at the end of each test case. public class reportinghooks { webdriver driver; @after public void capturescreenshotonfailure(scenario scenario) { if (scenario.isfailed()) { try { takesscreenshot ts = (takesscreenshot) driver; file source = ts.getscreenshotas(outputtype.file); fileutils.copyfile(source, new file(\".\/kloiatestscreenshots\/\" + scenario.getname() + \".png\")); system.out.println(\u201Cerror detected, screenshot taken \"); } catch (exception e) { system.out.println(\"exception while taking screenshot: \" + e.getmessage()); } } } } hooks' efficient utilization strategies some features are available to use cucumber hooks more efficiently. below are some examples: using customized tags: when using hooks, it is possible to determine which tests will be run with which hooks by creating customized tags. this feature allows hooks to run only on certain tests and prevents unnecessary processing. public class hooks { @before(\"@smoke\") public void test1() { system.out.println(\"it will only start before @smoke.\u201D); } @after(\"@smoke\") public void test2() { system.out.println(\"it will only start after @smoke.\"); } } avoiding complex operations: performing complex and time-consuming operations on hooks can adversely affect the overall performance of the tests. therefore, care should be taken to perform only necessary and fast operations on hooks. @before public void complexsetup() { application.lengthysetupmethod(); driver = new chromedriver(); } logging for debugging: in test automation, there are usually many test cases, and each scenario may fail at different stages and for different reasons. therefore, it is very important to keep a log record in order to quickly identify failed steps and understand what went wrong with these steps. hooks are an excellent way to keep detailed logs with the right level of granularity: public class logginghooks { private static final logger logger = logger.getlogger(logginghooks.class.getname()); webdriver driver; @before public void setup() { logger.info(\"test is starting..\"); driver = new chromedriver(); logger.info(\"browser successfully started.\"); } } \u00A0 taking actions according to scenario situations: in test automation, certain actions may need to be taken depending on the results of scenarios. especially in integrated test processes, the failure of one test may affect others or cause some tests to be skipped. with hooks, different actions can be performed automatically according to the result of the scenario. 
public class scenariooutcomehooks { webdriver driver; @after public void handlescenariooutcome(scenario scenario) { if (scenario.isfailed()) { system.out.println(\"the scenario failed: \" + scenario.getname()); \/\/ you can also add extra information or screenshots here if you wish. } else if (scenario.getstatus() == status.skipped) { system.out.println(\"the scenario skipped: \" + scenario.getname()); } else { system.out.println(\"the scenario was successful: \" + scenario.getname()); } } } aspects to be considered in the use of hooks 1. avoid overuse harnessing the power of hooks is important, but overuse should be avoided. unnecessarily defining too many hooks for each scenario can complicate your scenarios and make them difficult to maintain. avoid repetitive code when using hooks, and define common operations within a shared function to avoid code repetition. it is also important to choose the location of each hook well. 2. avoid complex operations hooks should generally be used for low-level and repetitive operations. putting complex operations into hooks can make scenarios difficult to understand and maintain. additional complexity also hurts test execution performance. 3. proper naming naming hooks with descriptive names makes it easier to understand when and why each hook runs. 4. managing dependencies correctly when using hooks, be careful to manage the dependencies between scenarios well. take care to maintain the principle of independence so that one scenario does not affect others. 5. adaptability and flexibility use hooks to meet the needs of your scenarios and create simple structures that you can change as needed. conclusion hooks help us optimize testing processes and catch errors earlier. this leads to higher-quality, more successful projects. the level of automation and coordination that goes into testing has a direct impact on the quality and delivery time of the application. using hooks well makes testing processes more effective and efficient. therefore, knowing how to use hooks effectively is a critical part of a test engineer's skill set."
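The examples in this post are Java; for readers following the Ruby-based web test automation series further down this list, the same ideas translate to cucumber-ruby. Below is a minimal, assumed sketch (not from the original post) showing a shared setup helper that keeps hooks free of repeated code, plus a conditional hook bound to a tag expression, on top of a Capybara/Selenium setup.

# features/support/hooks.rb -- cucumber-ruby sketch (assumed project layout)
require 'capybara'
require 'selenium-webdriver'

# common setup lives in one helper so individual hooks stay short and non-repetitive
def start_browser
  Capybara.register_driver :selenium_chrome do |app|
    Capybara::Selenium::Driver.new(app, browser: :chrome)
  end
  Capybara.default_driver = :selenium_chrome
end

Before do
  start_browser
end

# conditional hook: runs only for scenarios tagged @loginrequired and not @guestuser
After('@loginrequired and not @guestuser') do |scenario|
  puts "logging out after: #{scenario.name}"
end

After do |scenario|
  puts(scenario.failed? ? "failed ==> #{scenario.name}" : "passed ==> #{scenario.name}")
  Capybara.current_session.driver.quit
end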
},
{
"title":"re:Invent 2023 Recap",
"body":"As a seasoned re:Invent attendee since 2014, I've successfully navigated the intricate landscape of this premier event. Overcoming the notorious Fear-of-missing-out (FoMO) after a few years, my annual mission has evolved into assisting newcomers to sidestep the pitfalls of FoMO and strategically guide them in extracting maximum value from re:Invent, all while savoring the experience. Wer...",
"post_url":"https://www.kloia.com/blog/reinvent-2023-recap",
"author":"Derya (Dorian) Sezen",
"publish_date":"11-<span>Dec<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/re-Invent-2023-Recap-blog%20%281%29.png",
"topics":{ "aws":"AWS","cloud":"Cloud","software":"Software","map":"map","aws-partner":"AWS Partner","reinvent2023":"reinvent2023","reinvent":"reinvent","awsambassador":"awsambassador" },
"search":"13 <span>dec</span>, 2023re:invent 2023 recap aws,cloud,software,map,aws partner,reinvent2023,reinvent,awsambassador derya (dorian) sezen as a seasoned re:invent attendee since 2014, i've successfully navigated the intricate landscape of this premier event. overcoming the notorious fear-of-missing-out (fomo) after a few years, my annual mission has evolved into assisting newcomers to sidestep the pitfalls of fomo and strategically guide them in extracting maximum value from re:invent, all while savoring the experience. werner's keynote in 2022 last year delved into the central theme of modernization. his discourse revolved around the dichotomy of the asynchronous and synchronous worlds. this emphasis stems from his acute awareness of the prevalent synchronous nature of software architecture in the industry. recognizing this as a hurdle to optimizing the use of aws services\u2014like eventbridge, serverless, and mqs\u2014he underscored the importance of transitioning towards more asynchronous software structures. the focal point of this year's 2023 keynote by werner centered around the critical theme of cost. the prevailing economic landscape likely attributes the prominence given to cost. following an extensive discussion, werner shifted gears to delve into the realm of ai, a move anticipated by the audience that, ultimately became the main theme as the keynote unfolded. below are the key takeaways that i found particularly valuable from this re:invent. i'll primarily focus on highlights related to modernization announcements, given that this is my primary area of interest: cost focus in werner's keynote: the keynote predominantly centered around the theme of cost. werner intriguingly connected the aspect of cost to sustainability, a perspective that, in my humble opinion, added a noteworthy dimension to raising awareness about cost considerations: werner touched upon the concept of \"architecting with cost in mind,\" citing the \"frugal architect\" principles, accessible at thefrugalarchitect.com: the frugal architect's three design principles, highlighted by werner: cost as a non-functional requirement aligning cost to business for enduring systems architecting as a series of strategic trade-offs werner discussed amazon.com's cost approach and referenced various aws customer use cases. here are the key highlights influencing cost: eventual consistency vs. strong consistency: werner delved into the cost implications during the development of the dynamodb service, emphasizing that this approach is applicable across all software architectures where strong consistency is excessively utilized, resulting in higher costs. programming language effect: each programming language comes with its own unique approach to managing memory, runtime engines, and other factors. whether a language is interpreted or compiled, these distinctions impact the resources they require, consequently affecting costs. here's a breakdown of the differences between them, directly influencing overall expenses: this table illustrates that c, rust, c++, ada, and java emerge as the most compute cost-effective languages. additionally, widely used languages like go and c# (.net) demonstrate commendable efficiency. however, prominent interpreted languages such as python, ruby, and php exhibit substantial cost disparities when compared to the top-performing languages. z garbage collector for java was highlighted in the context of the nubank case study, a feature now default in the latest java versions. 
tiered architecture: werner introduced a tiered structure at amazon.com, outlining different cost and resilience approaches for various tiers. he also referenced a map of amazon services, wherein each node represents a service, and the yellow areas denote hub services. aws management console myapplications: werner unveiled a fresh dashboard in the aws management console called myapplications, providing visibility into the costs of individual applications. while this introduction holds promise, it's worth noting that a significant portion of industry applications still operate on a non-microservices architecture. some are labeled as microservices but share common components like databases, cache layers, message queues, or services, making it challenging, and in some cases, nearly impossible, to accurately gauge the cost of each application. tunable architecture: werner introduced the concept of tunable architecture, emphasizing the flexibility of software architecture to be dynamically adjusted during runtime. this necessitates close monitoring of both runtime conditions and business requirements. codeguru profiler:werner also recommended leveraging tools such as codeguru profiler to identify the most resource-intensive code blocks and receive suggestions on potential refactoring strategies. the latter segment of werner's keynote delved into the realm of artificial intelligence (ai). werner underscored various revenue-generating ai use cases, highlighting applications like image processing and prediction. interestingly, genai was briefly touched upon in the final nine minutes of his keynote. the use cases included: aws generative ai cdk constructs code generation with sagemaker studio code editor amazon q: ai assistant featuring approximately 40 integrations aws application composer in vs code wrapping up his keynote, werner introduced the amazon inspector ci\/cd container scanner, a tool referenced in the preceding videos. additional highlights from re:invent: mainframe modernization: being an advocate for modernization, one of my key areas of interest lies in mainframe modernization. the predominant reason for this lies in the enduring prevalence of mainframes in the industry, acting as a hurdle to fully harnessing the potential of cloud computing and innovation. similar to the last re:invent, this edition also placed significant emphasis on mainframe topics. for detailed sessions related to mainframe discussions, you can find the link here: excitingly, it has now been seamlessly incorporated into the aws map. aws ambassadors @ re:invent as an aws ambassador, i cannot finish my blog post without mentioning the ambassadors, who are industry professionals from aws partners. i again had the chance to meet with several existing faces and new faces and exchanged information. as usual, there have been aws ambassador meetings after the sessions where we socialized and had the chance to have a chat together. kloia's impact at re:invent: kloia garnered attention during a significant application modernization session, aligning closely with our commitment to modernization initiatives: two kloia trucks made a presence on the re:invent campus, drawing considerable attention from attendees. congratulations to all the lucky prize winners! in summary, much like every re:invent, i had the opportunity to connect with spontaneous and valuable new individuals, establish new contacts, and acquire fresh insights from these connections. that's what i truly appreciate about re:invents!"
},
{
"title":"Creating End-to-End Web Test Automation Project from Scratch - Part 2.1.",
"body":"Previously, we created the web test automation project and wrote scenarios. Before configuring the project to be run in parallel, let\u2019s see how to save screen recordings of failed scenarios in this bonus chapter! Let\u2019s Create and Configure Our Web Test Automation Project! Let\u2019s Write Our Test Scenarios! Bonus: Recording Failed Scenario Runs in Ruby Let\u2019s Configure Our Web Test Automation...",
"post_url":"https://www.kloia.com/blog/dockerize-your-web-test-automation-project-from-scratch-part-2.1",
"author":"Muhammet Topcu",
"publish_date":"08-<span>Dec<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/end-to-end-web-test-automation-blog%20%283%29.png",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","docker":"Docker","bdd":"BDD","selenium":"Selenium","ruby":"Ruby","qa":"QA","test-driven-development":"Test Driven Development","performance-testing":"Performance Testing","ci-cd":"CI\/CD","qateam":"qateam","manual-testing":"manual testing","endtoend":"endtoend" },
"search":"26 <span>jan</span>, 2024creating end-to-end web test automation project from scratch - part 2.1. test automation,software testing,docker,bdd,selenium,ruby,qa,test driven development,performance testing,ci\/cd,qateam,manual testing,endtoend muhammet topcu previously, we created the web test automation project and wrote scenarios. before configuring the project to be run in parallel, let\u2019s see how to save screen recordings of failed scenarios in this bonus chapter! let\u2019s create and configure our web test automation project! let\u2019s write our test scenarios! bonus: recording failed scenario runs in ruby let\u2019s configure our web test automation project for remote browsers and parallel execution let\u2019s dockerize our web test automation project bonus: recording scenario runs on docker with selenium video! let\u2019s integrate our dockerized web test automation project with ci\/cd pipeline! auto-scaling and kubernetes integration with keda recording failed scenario runs on ruby! not every scenario runs as it should, unfortunately. to debug your code or pinpoint bugs, it\u2019s best to have some hard evidence. yes, i am talking about video recordings for your failed test scenarios! let\u2019s install \u201Cscreen-recorder\u201D gem by adding it to your gemfile and running the bundle update command: gem 'screen-recorder' bundle update \u00A0 and then let\u2019s add it to your env file. require 'screen-recorder' \u00A0 the gem you are going to use is dependent on the ffmpeg tool. install it via brew. brew install ffmpeg \u00A0 with the command `which ffmpeg`, you can find the bin directory for the ffmpeg: \u279C ~ which ffmpeg \/opt\/homebrew\/bin\/ffmpeg \u00A0 now add the directory of this binary to your env file as well: screenrecorder.ffmpeg_binary = '\/opt\/homebrew\/bin\/ffmpeg' \u00A0 and integrate the gem with your hook file. as you remember, you created your hook.rb file in part 1 of this blog series, and it looks like this: before do driver.get_driver page.driver.browser.manage.window.maximize end after do |scenario| begin if scenario.failed? puts \"failed ==> #{scenario.name}\\n#{scenario.exception}:#{scenario.exception.message}\" else puts \"passed ==> #{scenario.name}\" end capybara.current_session.driver.quit rescue exception => exception puts \"failed ==> #{exception}\" capybara.current_session.driver.quit end end \u00A0 first, you need to configure your before hook. you already know that the before hook is initiated before your scenarios up to this point. change your first line to this: before do |scenario| cucumber provides you with a scenario object, with which you can reach the name of your scenario. let\u2019s replace the blank lines in your scenario name with underscores using the command below. using underscores instead of spaces helps you avoid file name related issues, since you are going to save the videos with scenario names. scenario_name = scenario.name.gsub(\/[^a-za-z0-9 ]\/, \"\").gsub(\/\\s+\/, \"_\") \u00A0 with below line, print the scenario name on the console: puts \"started ==> #{scenario_name}\" \u00A0 then create a recorder object and state the file name and location with the command below: @recorder = screenrecorder::desktop.new(output: \"output\/failed_tests\/#{scenario_name}.mkv\") \u00A0 and start your recorder. 
@recorder.start now put everything together: \u00A0 before do |scenario| scenario_name = scenario.name.gsub(\/[^a-za-z0-9 ]\/, \"\").gsub(\/\\s+\/, \"_\") driver.get_driver page.driver.browser.manage.window.maximize puts \"started ==> #{scenario_name}\" @recorder = screenrecorder::desktop.new(output: \"output\/failed_tests\/#{scenario_name}.mkv\") @recorder.start end \u00A0 up to this point, you\u2019ve re-organized the before hook. after the before hook, your scenario will run and then the after hook will be executed. now configure the after hook. what you initially have is: after do |scenario| begin if scenario.failed? puts \"failed ==> #{scenario.name}\\n#{scenario.exception}:#{scenario.exception.message}\" else puts \"passed ==> #{scenario.name}\" end capybara.current_session.driver.quit rescue exception => exception puts \"failed ==> #{exception}\" capybara.current_session.driver.quit end end \u00A0 now alter your scenario name here as well: scenario_name = scenario.name.gsub(\/[^a-za-z0-9 ]\/, \"\").gsub(\/\\s+\/, \"_\") \u00A0 since you executed your scenario, you need to stop your recorder first: @recorder.stop \u00A0 and you need to delete the video file if the scenario passes since you will only need the failed ones. you can delete your video file with the command below: file.delete(\"outputs\/failed_tests\/#{scenario_name}.mkv\") \u00A0 since you are going to save your files to this directory, you need to create it first: now re-organize your if-else and begin-rescue blocks to save only the failed scenario recordings: after do |scenario| begin scenario_name = scenario.name.gsub(\/[^a-za-z0-9 ]\/, \"\").gsub(\/\\s+\/, \"_\") capybara.current_session.driver.quit @recorder.stop if scenario.failed? puts \"failed ==> #{scenario_name}\\n#{scenario.exception}:#{scenario.exception.message}\" else puts \"passed ==> #{scenario_name}\" file.delete(\"outputs\/failed_tests\/#{scenario_name}.mkv\") end rescue exception => exception puts \"failed ==> #{exception}\" capybara.current_session.driver.quit @recorder.stop end end let\u2019s run your scenarios by running the cucumber command on your terminal and see what happens: failed scenario recordings have been saved, indeed! note that in macos and linux, this gem records the whole desktop, and you cannot specify specific applications such as chrome or firefox. therefore, you need to run your test scenarios one at a time. since this was a bonus feature, i am not going to include these changes in the upcoming parts and continue from where we left off in the second part."
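Putting the bonus chapter together, here is one possible consolidated hooks.rb, assembled from the snippets above (the Driver class and env.rb come from Part 1 of the series, and the ffmpeg path is assumed to be configured there as shown). It creates the output directory before the first recording and uses a single directory constant so the recorder and the clean-up step always point at the same files.

# features/support/hooks.rb -- one possible consolidated version (sketch)
require 'fileutils'
require 'screen-recorder'

OUTPUT_DIR = 'output/failed_tests' # one directory shared by recorder and clean-up
FileUtils.mkdir_p(OUTPUT_DIR)      # create it before the first recording

Before do |scenario|
  scenario_name = scenario.name.gsub(/[^A-Za-z0-9 ]/, '').gsub(/\s+/, '_')
  Driver.get_driver
  page.driver.browser.manage.window.maximize
  puts "started ==> #{scenario_name}"
  @recorder = ScreenRecorder::Desktop.new(output: "#{OUTPUT_DIR}/#{scenario_name}.mkv")
  @recorder.start
end

After do |scenario|
  begin
    scenario_name = scenario.name.gsub(/[^A-Za-z0-9 ]/, '').gsub(/\s+/, '_')
    Capybara.current_session.driver.quit
    @recorder.stop
    if scenario.failed?
      puts "failed ==> #{scenario_name}\n#{scenario.exception}:#{scenario.exception.message}"
    else
      puts "passed ==> #{scenario_name}"
      File.delete("#{OUTPUT_DIR}/#{scenario_name}.mkv") # keep recordings of failed scenarios only
    end
  rescue Exception => exception
    puts "failed ==> #{exception}"
    Capybara.current_session.driver.quit
    @recorder&.stop
  end
end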
},
{
"title":"Creating End-to-End Web Test Automation Project from Scratch \u2014 Part 2",
"body":"Welcome to the second part of a blog post series called Creating End-to-End Web Test Automation Project from Scratch. The series consists of 8 posts in total: Let\u2019s Create and Configure Our Web Test Automation Project! Let\u2019s Write Our Test Scenarios! Bonus: Recording Failed Scenario Runs in Ruby Let\u2019s Configure Our Web Test Automation Project for Remote Browsers and Parallel Execution Le...",
"post_url":"https://www.kloia.com/blog/creating-end-to-end-web-test-automation-project-from-scratch-part-2",
"author":"Muhammet Topcu",
"publish_date":"22-<span>Nov<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/end-to-end-web-test-automation-blog%20%281%29.png",
"topics":{ "test-automation":"Test Automation","aws":"AWS","bdd":"BDD","behavior-driven-development":"Behavior Driven Development","cucumber":"Cucumber","gherkin":"Gherkin","ruby":"Ruby","qa":"QA","paralleltest":"paralleltest","test-driven-development":"Test Driven Development","manual-testing":"manual testing","endtoend":"endtoend" },
"search":"11 <span>jan</span>, 2024creating end-to-end web test automation project from scratch \u2014 part 2 test automation,aws,bdd,behavior driven development,cucumber,gherkin,ruby,qa,paralleltest,test driven development,manual testing,endtoend muhammet topcu welcome to the second part of a blog post series called creating end-to-end web test automation project from scratch. the series consists of 8 posts in total: let\u2019s create and configure our web test automation project! let\u2019s write our test scenarios! bonus: recording failed scenario runs in ruby let\u2019s configure our web test automation project for remote browsers and parallel execution let\u2019s dockerize our web test automation project bonus: recording scenario runs on docker with selenium video! let\u2019s integrate our dockerized web test automation project with ci\/cd pipeline! auto-scaling and kubernetes integration with keda let\u2019s write our test scenarios! in the previous post, we created our web test automation project and configured it. now we are going to write our test scripts by covering a few topics, including behaviour driven development and page object model and finish our basic project. but hey, this is just the beginning. :) behaviour driven development (bdd) in a nutshell, behaviour driven development is a method to write test scripts in an everyday language so that every stakeholder understands what the system should do without needing any coding background. you will implement this approach with the cucumber framework by using gherkin syntax. before you dive into the code, please read my blog post titled \u201Cgherkin keywords and cucumber expression\u201D. it will give you a fundamental understanding of what you are going to do. the first test feature create a folder called \u201Ctests\u201D under the features folder, and then create a file named \u201Csearch.feature\u201D under it. rubymine ide recognises .feature files as cucumber files and gives specific icons to them. note: sometimes the ide may not recognise feature files. if you encounter this issue, you need to add the file type to the ide. to do this, go to rubymine -> settings -> file types -> cucumber scenario, then click on the \u201C+\u201D sign and add \u201C*.feature\u201D there. now, your first test is going to be about the search feature on amazon.com. the test steps are: navigate to www.amazon.com website. fill the search bar with a keyword and click on the search icon. verify that the listed results are related to that keyword. here is the test expressed in gherkin syntax for cucumber: feature: search related scenarios. scenario: search an item on amazon.com given go to home page when search \"computer\" on search bar on home page then verify \"computer\" search results on search page if you don\u2019t understand what these steps mean, it means that you didn\u2019t read the post i mentioned earlier :) please read it. if you have followed the steps above, this is what you should be seeing: why? why these errors? these are just simple words, that\u2019s why. to give these steps a purpose, you need to define them through code in the background. to do that, create your first step definition file called \u201Cbase_steps\u201D by: right-clicking on your first step (the line that starts with given) show context actions create step definition create a new file you will see that your page is created under the \u201Cstep_definitions\u201D folder which is a default cucumber folder. 
in case your ide does this automatically, you can just manually create a file under the step_definitions folder. this is where we ended up: given(\/^go to home page$\/) do pending end to get through this, let me talk about page object model first. page object model (pom) with the page object model, we consider each web page as a class file. all the objects that exist on that page should be inside this class and nowhere else. by following this approach we reduce code duplication and make code maintenance much easier since a person can easily figure out where to look for the elements they are working with. let\u2019s look at the schema below. in the project structure, you can see that page-specific elements and functions reside in respective files. also, note that there are elements named \u201Csearch_bar\u201D on both pages. since these are different pages, it is possible to use the same name for different elements having similar properties. we use them in page-specific contexts, so they do not conflict. \u201Cthen, why did we name our step file base_page?\u201D since visiting urls is a general step, i prefer to put them under a step definition file named base_page. all objects and functions that cannot be categorised can be put there. note that this is just a convention i use and not a universal rule. with time, you may also develop your own approach. let's create your first page class base_page.rb file, in which you will write your navigating method! your first page now create a folder called \u201Cpages\u201D under the features folder, and then create a file named \u201Cbase_page.rb\u201D under it. then define your class as basepage and write your first class method. since you defined your default host in the environment file with app_host parameter in the previous blog post, you can just write visit function and it navigates to 'https:\/\/www.amazon.com' by default. class basepage def go_to_home_page visit end end \u00A0 but if you want to visit a different webpage, you can pass a parameter: class basepage def go_to_home_page visit('https:\/\/www.kloia.com') end end \u00A0 now that you have your method, you can define your step with it. base_page = basepage.new given(\/^go to home page$\/) do base_page.go_to_home_page end what does the first line do? basically, you need to call your go_to_home_page method from a different file. in order to do that, you need an instance of that class that has the objects and methods. so you create an instance called base_page with base_page = basepage.new line and access your methods through this instance. now you have your first step ready. run your scenario and see what happens! when you run the feature, you will see that your driver will navigate your browser to www.amazon.com but then close it without doing anything else since you did not define the other steps yet. note that the browser has terminated after the execution of steps. this is because we defined this behaviour in our hooks. see the previous blog post in this series. it\u2019s time to define your second step! create your second step definition like the first one, but this time name your file as home_steps.rb, since the objects and functions will be about the home page. now create your home_page.rb under the pages folder as well: and modify your file: initialize is the constructor method in ruby. you are going to define all your objects as variables inside of initialize method. now let\u2019s find your first object on amazon, search bar! 
you need to locate your objects with their css selector. for this: open the www.amazon.com right-click on the page and click inspect on the right-click menu. click the element selection button in the upper left corner. then click on the element that you want to inspect. in this instance, it is the search bar. copy the id of the object. which is \u201Ctwotabsearchtextbox\u201D. append # to use this id as a css selector: #twotabsearchtextbox. it is important to make sure that you write a css selector that matches only a single element on the page. tip: you can search your css selector with cmd+f to see if it matches or not. note that the syntax for css selectors are critical in picking the right elements: . means class => .nav-progressive-attribute # means id => #twotabsearchtextbox for the comprehensive css selector sheet, please refer here. now create your object variable with the selector you picked: class homepage def initialize @txt_search_bar_css=\"#twotabsearchtextbox\" end end \u00A0 now try to create a unique selector for the search button without looking below. class homepage def initialize @txt_search_bar_css=\"#twotabsearchtextbox\" @btn_search_submit_css=\"#nav-search-submit-button\" end end \u00A0 now you have all your objects, write a method to search a keyword. def search_item(item) find(@txt_search_bar_css).send_keys(item) find(@btn_search_submit_css).click end first, you need to create your function search_item(item), which takes one argument named \"item,\u201D which is the product that you want to find in a string format. find method finds an object with the selector, and by appending the send_keys function to it, you can send text into the text fields. the item that you send through here is the product name you want to search for. note that capybara provides you with useful functions to give commands to your webdriver such as fill_in and click_button functions, which take id, text or title as arguments. class homepage def initialize @txt_search_bar_id=\"twotabsearchtextbox\" @btn_search_submit_id=\"nav-search-submit-button\" end end def search_item(item) fill_in(@txt_search_bar_id, with: item) click_button(@btn_search_submit_id) end note that while naming object selector, i used three-charactered labels as a prefix to make it easier to know what type that object is. so when you see an object starting with a txt prefix, you can understand that it is a textbox, and you can send words into it with suitable methods. as a suffix, i stated in which type selector is written such as css, xpath, or id. you can find my naming conventions for object locators in the table below. you can create and use your own convention as well. just make sure you stick to it. | prefix | example | locator | | ------------- |-------------------|------------- | | btn | btn_login_id | button | | chk | chk_status_css | checkbox | | cbx | cbx_english_xpath | combo box | | lbl | lbl_username_css | label | | drp | drp_list_xpath | drop down | | slc | slc_list_css | selectbox | | txt | txt_email_css | textbox | | img | img_logo_xpath | image | | rdx | rdx_female_xpath | radiobox | also adding locator type as suffix can help you to specify locator type when using certain methods. sometimes you may need to pass the locator type to a function as an argument. you did not need to specify in your previous elements and that is because the default selector is css. do you remember that you chose your default selector as \u201C:css\u201D in the env.rb file? 
if you want to use xpath locator instead, you specify it like this in the find method: find(:xpath, @btn_search_submit_xpath) \u00A0 since you sent your text into the text field of the search bar, now you need to click on the search button next to it. find(@btn_search_submit_css).click let\u2019s find your object with the find method, and then click on it by appending the click method to the find method. now that your function for searching for an item is ready, it is time to implement it in the step definitions you have. create a class instance of homepage called home_page. call our search_item method through it in the body of the step definition. pass the arg to our function. \u00A0 note that the arg you passed is the word that you write in the feature file enclosed with double quotes. so the computer will be passed as an argument for this instance. with this configuration, you have made your step re-usable with different keywords. let\u2019s run your code again! as you see, it typed \u201Ccomputer\u201D and clicked on the search button. now comes the last step in your scenario! you need to verify that the results shown are related to the keyword that you searched. \u00A0 now you need to find the coloured keyword inside the double quotes and compare it with your search keyword. try to find its css selector, then check your result with the code below. class searchpage def initialize @lbl_search_result_css=\".a-color-state.a-text-bold\" end end this time, i used class notation. note that the lbl prefix indicates that this object is a label, so you can extract text from it. do you realise that i have written this object in a new class named searchpage? that\u2019s because the search result text belongs to the search page. so your project directory now looks like this: now create a verification method for search results: def verify_search_results(item) find(@lbl_search_result_css).text.should include(item) end here i defined a method called verify_search_results that takes the search keyword \u201Citem\u201D as an argument. what this method does: \u00A0 find the object with a specified selector by using find method. access its text by appending text method. use rspec\u2019s should keyword to make an assertion statement. if it is false, the step fails. what you assert is that the text of the lbl_search_result object includes your keyword itself. why did you assert it with include but not with == operator, you might ask. let\u2019s print find(@lbl_search_result).text to the console. you can see that the text is within double quotes. so simply using == would fail since your keyword passed with an arg is not within double quotes. for the verification to pass, you might try stripping this text from double quotes or enclosing your keyword with it. it is up to you. the below variation works as well: def verify_search_results(item) find(@lbl_search_result_css).text.should =='\"'+item+'\"' end now run your scenario one last time! and you are done! you have created your first scenario! let\u2019s meet in the comments section if you have any questions. next, we are going to level up your project to run it on remote browsers and to run multiple scenarios in parallel! so before going to part 3, please try to find and write at least 4 scenarios, so you can run your own scenarios in parallel!"
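For completeness, the step-definition files that wire the feature steps to the page classes are described above but not shown in full; here is a minimal sketch of how they could look, following the conventions used in this series (Cucumber auto-loads .rb files under the features folder, so the page classes in features/pages are available here).

# features/step_definitions/home_steps.rb (sketch)
home_page = HomePage.new

When(/^search "([^"]*)" on search bar on home page$/) do |item|
  home_page.search_item(item)
end

# features/step_definitions/search_steps.rb (sketch)
search_page = SearchPage.new

Then(/^verify "([^"]*)" search results on search page$/) do |item|
  search_page.verify_search_results(item)
end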
},
{
"title":"Creating End-to-End Web Test Automation Project from Scratch \u2014 Part 1",
"body":"In this series of blog posts, we will be creating an end-to-end Web Test Automation Project. The series consists of 8 posts in total: Let\u2019s Create and Configure Our Web Test Automation Project! Let\u2019s Write Our Test Scenarios! Bonus: Recording Failed Scenario Runs in Ruby Let\u2019s Configure Our Web Test Automation Project for Remote Browsers and Parallel Execution Let\u2019s Dockerize Our Web Tes...",
"post_url":"https://www.kloia.com/blog/creating-end-to-end-web-test-automation-project-from-scratch-part-1",
"author":"Muhammet Topcu",
"publish_date":"27-<span>Oct<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/end-to-end-web-test-automation-blog.png",
"topics":{ "test-automation":"Test Automation","docker":"Docker","kubernetes":"Kubernetes","jenkins":"Jenkins","bdd":"BDD","gherkin":"Gherkin","ruby":"Ruby","capybara":"Capybara","qa":"QA","ci-cd":"CI\/CD","qateam":"qateam","endtoend":"endtoend" },
"search":"26 <span>jan</span>, 2024creating end-to-end web test automation project from scratch \u2014 part 1 test automation,docker,kubernetes,jenkins,bdd,gherkin,ruby,capybara,qa,ci\/cd,qateam,endtoend muhammet topcu in this series of blog posts, we will be creating an end-to-end web test automation project. the series consists of 8 posts in total: let\u2019s create and configure our web test automation project! let\u2019s write our test scenarios! bonus: recording failed scenario runs in ruby let\u2019s configure our web test automation project for remote browsers and parallel execution let\u2019s dockerize our web test automation project bonus: recording scenario runs on docker with selenium video! let\u2019s integrate our dockerized web test automation project with ci\/cd pipeline! auto-scaling and kubernetes integration with keda we are going to create our test automation project with ruby which is a high-level programming language. capybara framework helps us automate browsers and cucumber framework allows us to write our scenarios in behavior driven development format. you don\u2019t know bdd approach? do not worry, i will cover it. to make our project executable on any machine without setting up environmental configurations and preventing the common programming issue of \u201Cbut it works on my machine!\u201D, we are going to dockerize our project using docker engine! and finally, we are going to integrate ci\/cd pipeline by using jenkins and make our project auto-scalable by using kubernetes. it will be a pretty cool project, and presumably a longer one, but bear with me, it will be worth your while. let\u2019s create and configure your web test automation project! to begin, you need a website to test. you are going to develop test suites and create automated tests for that website. i think a popular website would be the best option since it would give you the opportunity to work on real-life cases and lets us tackle possible challenges. so here it is: \u201Ci choose you amazon!\u201D now you can start installing what we are going to need. note: all installation tutorials will be for macos. package management systems or some commands may differ, but the process will be more or less the same. xcode installation you are going to use homebrew as a package management system. but most of its packages require a compiler. you can install the full xcode or just command line tools from the app store or using the command below. xcode-select --install homebrew installation a package management system makes installing packages much easier. note that you do not need to install homebrew to install the following languages or frameworks, but it is convenient. you can find more detailed homebrew installation instructions here. you can take a look at if you want. install homebrew with the following command: \/bin\/bash -c \"$(curl -fssl https:\/\/raw.githubusercontent.com\/homebrew\/install\/head\/install.sh)\" after the installation, enter this: brew --version you should see something similar to the one below if your installation is successful: ruby installation next, you will install rbenv with its ruby-build plugin, which allows you to install more than one ruby version on our machine and manage them easily. it is a light weighted and useful package. to install both of these: brew install rbenv ruby-build the command below gives you the instructions to load rbenv in your shell. rbenv init if you use bash as a shell, with the following command you can load rbenv in your shell. 
eval \"$(rbenv init -)\" >> ~\/.bash_profile the next command reloads your .bash_profile so the changes take effect. or you can simply close your terminal and open a new one. source ~\/.bash_profile now pick the preferred ruby version by listing and choosing the version you want to work with. note that capybara supports ruby version 3.0+ as of now. so make sure to install 3.0 or later version of it. # lists all available versions $ rbenv install -l # install a ruby version $ rbenv install -package- # check installed version $ ruby -v to make this ruby version the default for your machine, use the command below: rbenv global for the official and comprehensive installation guide, please refer to this documentation. bundler installation rubygems serves as a package manager for the ruby programming language. it establishes a uniform method for distributing ruby applications and libraries. the gems are simply open-source libraries that contain ruby code and are packaged with a little extra data. bundler is a dependency management gem that allows you to list all required gems in a file and automatically download them. it also gives you the flexibility to declare gem versions. install the bundler with the command below: gem install bundler rubymine installation i am going to use rubymine of jetbrains as my ide but you can choose your own favorite tool. you can download rubymine on this link, it is pretty straightforward. note that it gives a one-year license if you have an education e-mail. the community trial version gives you one month of trial license. rubymine configuration rubymine => preferences => plugins and search for gherkin. install it. extra: installing cucumber+ plugin makes it easy to execute scenario runs. you might find it useful as well. why do we use gherkin? it helps us to create code steps using english-like sentences that anyone without coding knowledge could understand. why do we need the plugin? it provides coding assistance for step definitions. for comprehensive information, please refer to our blog posts about gherkin syntax and cucumber framework and behavior driven development. now you are all set! it\u2019s time to create your project. creating the project use file => new => project\u2026 , name your project, and choose your ruby sdk. then click create. gemfile configuration next, you will create your gemfile. gemfile is the list of our dependencies. you are going to need the below gems for your project. # gemfile source 'https:\/\/rubygems.org' gem 'capybara' gem 'cucumber' gem 'selenium-webdriver' gem 'rspec' gem 'webdrivers', '~> 5.2.0' let me give you a brief explanation about what these gems do: capybara: this gem provides much more readable and easier automation methods than selenium\u2019s native ones. cucumber: cucumber is a framework for running automated tests written in plain language. selenium-webdriver: this gem provides ruby bindings for selenium. rspec: rspec is a unit test framework for the ruby programming language. webdrivers: this gem automatically installs and updates all supported drivers, which are essential for controlling web browsers through code. note: if you use selenium 4.6.0, you won\u2019t be needing this game and selenium would set up the required web drivers for you. after creating and saving the file with the gems listed, run `bundle install` command in your project\u2019s directory. congratulations, you\u2019ve installed everything you need! 
next, create cucumber\u2019s base folder structure: cucumber --init you will see that a feature folder is created along with some sub-folders. now you can start creating some important files and populate them! config folder and base_config.rb it is best to store the project\u2019s configurations explicitly in a file. to do that, create a folder named \u201Cconfig\u201D under the root of the project and then create a file named \u201Cbase_config.rb\u201D under it. what are you going to write in it? the project variables that do not change usually and environmental variables that you will manage dynamically. your driver and execution configurations are examples of this. # frozen_string_literal: true module baseconfig @wait_time = 20 # default wait time variable to be used on capybara functions. def self.wait_time @wait_time end @browser = env['browser'] || 'chrome' # available options # * chrome # * firefox def self.browser @browser end @headless = env['headless'] || 'false' # available options # * 'false' # * 'true' def self.headless @headless end end # frozen_string_literal: true` freezes every string literal in our file explicitly, meaning that you cannot change them in run time.. @wait_time defines a custom wait time before a driver function fails. with the `def self.wait_time` line, you can call it outside this file by simply calling baseconfig.wait_time @browser variable lets you define the default web driver to run tests on. env[\u2018browser\u2019] || \u2018chrome\u2019 means that if you provide an argument for the browser while running the cucumber in the command line, it uses that argument, otherwise uses \u2018chrome\u2019 as your default browser value. (e.g. parallel_cucumber -o \u2018browser=\u201Dfirefox\u201D\u2019) @headless defines whether your tests would be run on the browser in headless mode or not. env[\u2018headless\u2019] || \u2018false\u2019 is just the same as above. now let\u2019s create your driver configurations and make use of the above settings. utils folder and driver.rb create a folder named \u201Cutils\u201D under the root of the project and then create a file named \u201Cdriver.rb\u201D under it. you can store your custom-made general functions in this folder and call them wherever you need. you need to configure your drivers by populating driver.rb file with the code below! 
class driver def self.get_driver case baseconfig.browser when 'chrome' options = selenium::webdriver::chrome::options.new add_default_values(options) capybara.register_driver :selenium do |app| capybara::selenium::driver.new(app, browser: :chrome, options: options) end when 'firefox' options = selenium::webdriver::firefox::options.new add_default_values(options) capybara.register_driver :selenium do |app| capybara::selenium::driver.new(app, browser: :firefox, options: options) end end def self.add_default_values(options) options.add_argument('--disable-popup-blocking') options.add_argument('--ignore-certificate-errors') options.add_argument('--disable-notifications') add_headless_options(options) if baseconfig.headless == 'true' end def self.add_headless_options(options) options.add_argument('--no-sandbox') options.add_argument('--headless') options.add_argument('--window-size=1280,720') options.add_argument('--disable-dev-shm-usage') options.add_argument('--disable-gpu') options.add_argument('--test-type=browse') end end now i am going to break it down bit by bit: class driver def self.get_driver case baseconfig.browser when 'chrome' options = selenium::webdriver::chrome::options.new add_default_values(options) capybara.register_driver :selenium do |app| capybara::selenium::driver.new(app, browser: :chrome, options: options) end this part creates a class named driver. the class has a get_driver method with the self prefix, that will enable us to call it from outside by using driver.get_driver. the method has a switch-case block, customized with baseconfig.browser variable. you remember the @browser variable on our base_config.rb file? it will match with this and use the matching configurations! now let\u2019s inspect the browser-specific driver configurations: \u00A0 - options = selenium::webdriver::chrome::options.new, with this line, you create a chrome browser options object - add_default_values(options), with this method, you add default browser options to your web driver. now you implement this method with the code below: \u00A0 def self.add_default_values(options) options.add_argument('--disable-popup-blocking') options.add_argument('--ignore-certificate-errors') options.add_argument('--disable-notifications') add_headless_options(options) if baseconfig.headless == 'true' end you created a method named add_default_values that takes an `options object` as a parameter and adds arguments to it with the add_argument method. you have another method named add_headless_options(options) which only gets activated and adds another set of arguments to your options object if the @headless variable in our base_config.rb file is \u2018true\u2019. def self.add_headless_options(options) options.add_argument('--no-sandbox') options.add_argument('--headless') options.add_argument('--window-size=1280,720') options.add_argument('--disable-dev-shm-usage') options.add_argument('--disable-gpu') options.add_argument('--test-type=browse') end let\u2019s inspect these arguments: '--no-sandbox', lets you run the web driver outside of a sandbox environment, which is a very restricted one with few privileges. '--headless', opens the web driver without a window, in the background. '--window-size=1280,720', sets the window size of your web browser. '--disable-dev-shm-usage'is a chrome flag that is used to disable the use of \/dev\/shm , which is a shared memory space where chrome stores temporary files. '--disable-gpu' disables hardware acceleration. 
'--test-type=browse' allows the test code to run more smoothly with the benefits listed here note: you can find the other browser options here. finally, register your driver. capybara.register_driver :selenium do |app| capybara::selenium::driver.new(app, browser: :chrome, options: options) end capybara.register_driver lets you register your web driver type. you will use selenium web driver in the background. inside of the do block, you state your browser type and options, which you created earlier. when you look at case blocks for firefox and chrome, you will realize that the only thing that changes is the browser type. so, in this file, you used the @browser and @headless variables but you did not initialize the get_driver method anywhere yet. then let\u2019s get into it! support folder and env.rb in the previous section, you created a basic cucumber folder structure with cucumber --init command. before you get down to doing hooks.rb, create another important file just beside it. the env.rb! require 'capybara' require 'capybara\/dsl' require 'selenium-webdriver' require 'rspec' require 'webdrivers' require_relative '..\/..\/utils\/driver.rb' require_relative '..\/..\/config\/base_config.rb' include capybara::dsl include rspec::matchers capybara.configure do |config| config.default_driver = :selenium config.default_selector = :css config.app_host = 'https:\/\/www.amazon.com' config.default_max_wait_time = baseconfig.wait_time end \u00A0 require loads gems and executes them. with it, you import all class and method definitions in the file. require_relative does the same thing but by specifying the relative paths of the files. include brings in the methods of these two packages into your file. in the capybara.configure block, you configure capybara with attributes. configuring hooks.rb! and last but not least, let\u2019s configure your hooks.rb file! hooks are tools that cucumber provides to execute custom codes before and after a scenario is run. this is where you initiate your driver. before do driver.get_driver page.driver.browser.manage.window.maximize end after do |scenario| begin if scenario.failed? puts \"failed ==> #{scenario.name}\\n#{scenario.exception}:#{scenario.exception.message}\" else puts \"passed ==> #{scenario.name}\" end capybara.current_session.driver.quit rescue exception => exception puts \"failed ==> #{exception}\" capybara.current_session.driver.quit end end in the before block, you run the get_driver method which we defined in driver.rb. so before every scenario, a new driver instance gets created. the additional page.driver.browser.manage.window.maximize makes your web browsers\u2019 windows maximized. in the after block, you have a scenario object which cucumber provides. it has attributes such as name and exception. you use this information to increase readability by logging them or printing them to the console in the begin-rescue block. in the after block, taking screenshots of failed scenarios can be useful to pinpoint the reasons for failure. additional hook keywords can be found in the cucumber documentation. capybara.current_session.driver.quit terminates the current driver instance. so before every scenario, you start with a fresh driver instance, and after your scenario execution, you terminate it. with this, i conclude the first chapter of this blogpost series. in the next chapter, you are going to write your test scripts with page object model (pom) and cucumber! if you have any questions, don\u2019t hesitate to write a comment!"
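if you want to confirm the setup end to end before moving on to the next chapter, a tiny smoke scenario is enough. the sketch below reuses the configuration built in this post (app_host pointing at https://www.amazon.com, the before/after hooks, and the rspec matchers included in env.rb); the file names are just illustrative:

# features/smoke.feature
Feature: Environment smoke check
  Scenario: Home page opens
    Given I open the home page

# features/step_definitions/smoke_steps.rb
Given('I open the home page') do
  visit('/')                        # resolved against Capybara.app_host
  expect(page).to have_css('body')  # minimal sanity check via Capybara's RSpec matcher
end

running cucumber from the project root should start a browser through driver.get_driver and report the scenario as passed.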
},
{
"title":"Unleashing the Power of API Internship: My Journey at Kloia",
"body":"When it comes to shaping our careers and gaining valuable real-world experience, internships play a pivotal role. I have had the privilege of interning at Kloia for nearly three months, and I would like to share my experience with you in this article. A Deep Dive into the World of APIs My internship at Kloia began with a thorough exploration of the realm of Application Programming Interf...",
"post_url":"https://www.kloia.com/blog/unleashing-the-power-of-api-internship-my-journey-at-kloia",
"author":"Cem \u00D6zgeldi",
"publish_date":"17-<span>Oct<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/cem-özgeldi",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/unleashing-the-power-of-api-blog.png",
"topics":{ "kloia":"kloia","onboarding":"onboarding","hrmarketing":"HRMarketing","intern":"intern" },
"search":"17 <span>oct</span>, 2023unleashing the power of api internship: my journey at kloia kloia,onboarding,hrmarketing,intern cem \u00F6zgeldi when it comes to shaping our careers and gaining valuable real-world experience, internships play a pivotal role. i have had the privilege of interning at kloia for nearly three months, and i would like to share my experience with you in this article. a deep dive into the world of apis my internship at kloia began with a thorough exploration of the realm of application programming interfaces (apis). during my internship period, i had the opportunity to gain a deeper understanding of apis, their functions, and their significance in modern software development. this knowledge laid the foundation for my internship and set the stage for exciting challenges ahead. testing the waters: api testing trials one of the key highlights of my internship was the practical experience i gained in api testing. we conducted experiments to test apis, ensuring their functionality, and identifying areas with potential for improvement. this hands-on experience allowed me to apply my theoretical knowledge and develop valuable skills that are crucial in the software development landscape. collaboration beyond boundaries in the project i worked on, i had an opportunity to work closely with a backend team. this collaboration enabled me to progress concurrently, resulting in a seamless integration of front-end and back-end components. it was during this phase that i truly grasped the importance of cross-departmental collaboration. working with teams from different departments exposed me to the diversity of expertise and perspectives that contribute to the success of a project. the essence of teamwork the heart of my internship experience at kloia undoubtedly revolved around the significance of teamwork. i witnessed firsthand how crucial effective teamwork is in achieving project milestones and delivering high-quality software solutions. the synergy that arises when individuals from diverse backgrounds and skill sets come together is a powerful force that drives innovation and success. a culture of support and knowledge sharing one of the most significant takeaways from my internship was recognizing the value of mutual support and knowledge sharing. at kloia, there is a culture of helping one another, where experienced professionals are always willing to guide and mentor. this spirit of collaboration and knowledge exchange not only accelerates personal growth but also strengthens the overall team dynamic. here we are, onto new adventures my internship at kloia has been a transformative journey. it has equipped me with a deep understanding of apis, sharpened my testing skills, and underscored the critical role of teamwork in the software development process. moreover, it has instilled in me the value of collaboration and knowledge sharing as essential components of a successful career in technology. as i look back on these three months, i am grateful for the experiences, challenges, and friendships that i have gained. my time at kloia expanded my knowledge greatly and helped me become a a more capable and adaptable professional. i am excited to continue my journey in the world of technology, armed with the lessons and insights i have acquired during this enriching internship experience at kloia."
},
{
"title":"Using AWS Fargate for Efficient CI\/CD Pipelines",
"body":"In today's world, accelerating the application development process, reducing infrastructure management costs, and optimizing resource utilization are paramount. AWS Fargate, a service provided by Amazon Web Services (AWS), is an excellent serverless computing tool that addresses these needs. What is it? Fargate is a serverless service that simplifies running container-based applications ...",
"post_url":"https://www.kloia.com/blog/using-aws-fargate-for-efficient-ci/cd-pipelines",
"author":"Enes Cetinkaya",
"publish_date":"27-<span>Sep<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/enes-cetinkaya",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws-fargate-for-efficient-ci-cd-pipelines-blog.png",
"topics":{ "aws":"AWS","devops":"DevOps","eks":"EKS","amazon-ecs":"Amazon ECS","ci-cd":"CI\/CD","aws-fargate":"AWS Fargate" },
"search":"27 <span>sep</span>, 2023using aws fargate for efficient ci\/cd pipelines aws,devops,eks,amazon ecs,ci\/cd,aws fargate enes cetinkaya in today's world, accelerating the application development process, reducing infrastructure management costs, and optimizing resource utilization are paramount. aws fargate, a service provided by amazon web services (aws), is an excellent serverless computing tool that addresses these needs. what is it? fargate is a serverless service that simplifies running container-based applications by removing the need for infrastructure management. it integrates seamlessly with aws's elastic container service (ecs) and elastic kubernetes service (eks). why should we use it? conventional infrastructure management is time-consuming and intricate. with fargate, you can execute your applications directly in containers without the hassles of server management, capacity planning, or patch management. benefits of fargate automatic scaling: adjusts to the traffic inflow and outflow of your application automatically. maximum security: leveraging aws's robust security model, each container runs in isolation. cost optimization: the pay-as-you-go model means you only pay for the resources you use. fargate and ci\/cd integration aws's ci\/cd tools integrate easily with fargate. specifically, aws codebuild automates the tasks of compiling, testing, and containerizing your application. this integration makes the development process faster, more consistent, and secure. step-by-step ci\/cd setup: source codes and buildspec.yml: first, you set up the source codes and the buildspec.yml file in our github repo. codebuild integration: aws codebuild reads the buildspec.yml and manages the application's compilation, testing, and docker container image creation. uploading docker image to ecr: post compilation, the docker image is uploaded to the amazon elastic container registry (ecr). ecr safely stores docker images, granting access when required. fargate deployment: the final step involves running the container image on fargate. here, task definitions and fargate profiles specify how the application operates and its resource use. version: 0.2 env: variables: repo_uri: \"11111111111.dkr.ecr.eu-central-1.amazonaws.com\/example-reponame\" task_family: \"example-definition-name\" task_role_arn: \"arn:aws:iam::11111111111:role\/ecstaskexecutionrole\" cpu: \"1024\" memory: \"3072\" container_name: \"example-containername\" container_port: \"80\" log_group: \"\/example\/loggroup\" log_region: \"eu-central-1\" phases: install: commands: - apt-get update - apt-get install -y jq pre_build: commands: - echo cloning repository... - git clone https:\/\/@github.com\/username\/repo_name.git - cd repo_name - echo logging in to amazon ecr... - aws ecr get-login-password --region eu-central-1 | docker login --username aws --password-stdin 11111111111.dkr.ecr.eu-central-1.amazonaws.com - commit_hash=$(git log --format=\"%h\" -n 1) - image_tag=$(echo $commit_hash | cut -c 1-7) build: commands: - echo build started on `date` - echo building the docker image... - docker build -t $repo_uri:$image_tag . - echo pushing the docker image... - docker push $repo_uri:$image_tag post_build: commands: - echo registering ecs task definition... 
- | aws ecs register-task-definition \\ --family $task_family \\ --container-definitions \"[{ \\\"name\\\": \\\"$container_name\\\", \\\"image\\\": \\\"$repo_uri:$image_tag\\\", \\\"cpu\\\": $cpu, \\\"portmappings\\\": [{ \\\"containerport\\\": $container_port, \\\"hostport\\\": $container_port, \\\"protocol\\\": \\\"tcp\\\" }], \\\"essential\\\": true, \\\"logconfiguration\\\": { \\\"logdriver\\\": \\\"awslogs\\\", \\\"options\\\": { \\\"awslogs-group\\\": \\\"$log_group\\\", \\\"awslogs-region\\\": \\\"$log_region\\\", \\\"awslogs-stream-prefix\\\": \\\"ecs\\\" } } }]\" \\ --requires-compatibilities fargate \\ --network-mode awsvpc \\ --cpu $cpu \\ --memory $memory \\ --execution-role-arn $task_role_arn \\ --query 'taskdefinition.taskdefinitionarn' \\ --output text - echo update to fargate image version... - aws ecs update-service --cluster example-clustername --service example-service --task-definition example-definition-name --output text fargate and running concurrent tasks aws fargate offers the capability to run tasks concurrently, a feature particularly useful for applications that require parallel processing or need to handle multiple requests at the same time. for instance, if your application involves batch processing, data analytics, or real-time monitoring, you can distribute the workload across multiple tasks to accelerate the process. task definitions: create a task definition in ecs that outlines the docker container and the resources it will use. be sure to specify the cpu and memory requirements carefully to optimize resource utilization. service configuration: when creating or updating a service, you can specify the desired number of tasks to run concurrently. this is done by setting the \"number of tasks\" in the service definition. these features can significantly boost the efficiency and responsiveness of your application. however, the advantages of running tasks concurrently come with challenges such as potential data inconsistency and increased complexity in debugging and monitoring. therefore, it's crucial to implement proper synchronization and error-handling mechanisms. limitations of aws fargate resource limitations: while fargate does abstract away the underlying infrastructure, there are limits to the cpu and memory configurations you can assign to your tasks. tasks can use a maximum of 4 vcpus and 30gb of memory. cost considerations: although fargate might seem cost-effective initially, for workloads that run 24\/7, it might end up being costlier than running an ec2 instance, especially if not optimized properly. it's essential to monitor and adjust resource configurations to ensure cost-effectiveness. no persistent storage: fargate doesn't support persistent storage. so, if your containers need to store data permanently, you'll need to use an external storage solution like amazon s3 or a managed database service. slower startup time: fargate tasks might take a bit longer to start than tasks on ecs with ec2 as the launch type, especially if they need to pull large docker images. region availability: fargate might not be available in all aws regions. therefore, ensure it's available in your desired region before architecting a solution around it. less customizability: since fargate abstracts away the infrastructure layer, you get less control over the underlying resources. for some advanced use-cases, this lack of control might be restrictive. 
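to make the "number of tasks" setting from the concurrent-tasks section above concrete, here is a hedged aws cli sketch; the cluster and service names reuse the example placeholders from the buildspec, and the desired count of 4 is arbitrary:

# scale the Fargate service out to four concurrent tasks
aws ecs update-service \
  --cluster example-clustername \
  --service example-service \
  --desired-count 4

# confirm the desired vs. running task counts
aws ecs describe-services \
  --cluster example-clustername \
  --services example-service \
  --query 'services[0].{desired:desiredCount,running:runningCount}'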
conclusion and future trends aws fargate is a strong option for organizations that want to streamline modern application development in a simple, swift, and cost-effective way. by removing the complexities of infrastructure management, it lets developers focus on their primary task - coding the application. with the ongoing shift towards serverless and cloud-native solutions, services like aws fargate will only become more important, and this trend is set to reshape how applications are developed and deployed in the cloud."
},
{
"title":"Amazon GuardDuty EKS Protection",
"body":"Amazon GuardDuty is designed to protect AWS workloads from various security threats by analyzing events, data, and activity across AWS accounts. With a focus on providing real-time threat detection, GuardDuty helps users stay one step ahead of potential security risks. By leveraging machine learning and AWS's extensive threat intelligence, GuardDuty identifies suspicious behavior and ale...",
"post_url":"https://www.kloia.com/blog/aws-guard-duty-eks-protection",
"author":"Halil Bozan",
"publish_date":"26-<span>Sep<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/halil-bozan",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws-guard-duty-eks-protection-blog%20%282%29.png",
"topics":{ "aws":"AWS","devops":"DevOps","monitoring":"monitoring","eks":"EKS","guardduty":"GuardDuty" },
"search":"28 <span>sep</span>, 2023amazon guardduty eks protection aws,devops,monitoring,eks,guardduty halil bozan amazon guardduty is designed to protect aws workloads from various security threats by analyzing events, data, and activity across aws accounts. with a focus on providing real-time threat detection, guardduty helps users stay one step ahead of potential security risks. by leveraging machine learning and aws's extensive threat intelligence, guardduty identifies suspicious behavior and alerts users about potential security issues. amazon guardduty for eks brings the same level of protection to containerized environments. by continuously monitoring eks clusters, guardduty helps ensure that your container workloads are shielded from malicious activities and unauthorized access attempts. integrating amazon guardduty with eks offers several benefits for enhanced security, with a focus on both eks audit log monitoring and eks runtime monitoring. eks audit log monitoring amazon guardduty's eks audit log monitoring plays a critical role in identifying and mitigating security threats by analyzing and alerting administrators about potential risks related to eks cluster configurations and control plane activities. enhanced threat identification: amazon guardduty continuously monitors eks audit logs for any misconfigurations that might expose the cluster to vulnerabilities. this includes unauthorized access attempts, changes to cluster settings, and api calls that deviate from standard patterns. by identifying these anomalies, amazon guardduty helps administrators stay vigilant and quickly respond to potential security breaches. proactive security: with amazon guardduty, administrators receive real-time alerts when suspicious activities are detected in the audit logs. this proactive approach empowers teams to take immediate action and prevent security incidents. eks runtime monitoring: amazon guardduty's eks runtime monitoring focuses on the runtime environment of eks clusters, analyzing container and pod behavior to detect anomalies, malicious activities, and potential compromises. automated or manual agent setup: amazon guardduty simplifies the process of setting up agents for eks clusters. administrators can choose to deploy the guardduty agent automatically or manually, depending on their preference and requirements. automated agent setup ensures seamless integration, allowing administrators to focus on security rather than cumbersome deployment procedures. cpu and memory monitoring: amazon guardduty provides insights into cpu and memory usage patterns within eks clusters. unusual spikes in resource utilization can indicate unauthorized access or malicious activities. by monitoring these metrics, guardduty aids administrators in identifying potential threats and resource abuse. runtime event types: amazon guardduty comprehensively analyzes various runtime event types, including unauthorized pod creations, privilege escalations, and unexpected container executions. these insights enable administrators to understand the full scope of potential security incidents and take appropriate actions. by leveraging eks audit log monitoring and eks runtime monitoring, organizations can benefit from a holistic security approach, ensuring comprehensive coverage of their eks clusters and containerized workloads. 
amazon guardduty's real-time alerts, proactive threat detection, and deep visibility into eks environments enable administrators to maintain a secure and robust containerized infrastructure on aws. guardduty findings for eks amazon guardduty generates various types of findings specific to eks clusters, providing valuable insights into potential security risks. these findings fall into two main categories: runtime findings and audit logging findings. you can see the most important main finding types below. amazon guardduty eks audit logging findings eks audit logs provide detailed insights into the activities within your kubernetes cluster. these logs help you understand who did what and when, which is crucial for security and compliance. unauthorized access attempts: amazon guardduty can detect audit logging findings related to unauthorized access attempts, such as failed login attempts using invalid credentials or attempts to access resources that a user or application shouldn't have permissions for. sensitive resource access: eks audit logs can reveal if there are attempts to access sensitive resources or perform privileged operations. amazon guardduty can help identify patterns of behavior that might indicate unauthorized access to critical data or configurations. pod security policy violations: if your eks cluster enforces pod security policies (psps) to ensure containers adhere to security standards, audit logs can help identify instances where pods violate these policies. amazon guardduty can flag such violations, helping you maintain a secure container environment. cluster configuration changes: changes to your eks cluster's configuration can impact security. audit logs can help you keep track of modifications to settings, roles, and permissions. amazon guardduty can alert you for unusual or unauthorized configuration changes. anomalous user behavior: amazon guardduty can identify abnormal user behavior based on audit logs. for example, if a user suddenly starts accessing resources they've never accessed before or performing actions they don't typically perform, this could be an indicator of a compromised account. amazon guardduty eks runtime findings amazon guardduty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior within your aws environment, including amazon eks (elastic kubernetes service) clusters. it helps you identify security threats by analyzing events, flows, and other data sources. unusual process activity: amazon guardduty can detect runtime findings related to unusual or suspicious process activity within your eks nodes. this detection includes identifying processes attempting to establish unauthorized network connections, executing suspicious code, or performing actions outside of their typical behavior. privilege escalation: amazon guardduty monitors for attempts to escalate privileges within your eks environment. this might involve detecting unauthorized attempts to gain elevated permissions or access resources that an application or user shouldn't have access to. network connections: amazon guardduty can identify network connections that might be indicative of a security breach or unauthorized access attempt. for eks, this could involve detecting unexpected network connections between containers, pods, or nodes. you can see detailed findings for eks runtime finding types here. 
understanding these different types of findings allows administrators to prioritize security response efforts and take prompt action to mitigate potential threats. by addressing these findings and implementing security best practices, organizations can maintain a robust and secure eks environment, safeguarding their containerized applications and data from various security risks. integrating amazon guardduty with amazon eks for account and organization-level protection in this section, i will talk about how you can manage guardduty on one account and multiple organization accounts. enable amazon guardduty service: go to the aws management console. navigate to the amazon guardduty service. enable guardduty for the aws region where your eks cluster is located. configure guardduty settings: enable eks protection runtime monitoring enable eks protection audit log monitoring enable guardduty for eks: open the amazon eks console. select your eks cluster. in the navigation pane, choose \"add-ons.\" find the amazon guardduty add-on and choose \"enable.\" wait for guardduty agents enrolled. configure eks integration: follow the on-screen instructions to configure the integration between eks and guardduty. this may involve granting necessary permissions and specifying the guardduty detector you created earlier. review and monitor: check guardduty eks protection cluster coverage status. it should be healthy. after checking guardduty for your eks cluster, regularly review the findings and alerts generated by guardduty to address any security concerns. description of guardduty finding for eks protection there are some findings above and they all have a meaning; newlibraryloaded: created or recently modified library loading process. processincection.ptrace: ptrace system call was detected in a container or node. more findings and explanations can be found here. extending guardduty to all accounts in an aws organization: to enable guardduty for all accounts under an aws organization, you can follow these steps: enable guardduty on the organization master account: log in to the aws management console using the master account of the aws organization. enable guardduty for the organization master account following the steps mentioned earlier.[enable amazon guardduty service, configure guardduty settings] also when enabled guardduty in the organization master account, on the settings page, you need to add the guardduty delegated administrator account that you want for the organization. enable eks guardduty protection addons for each account eks cluster following the steps mentioned earlier. [enable guardduty for eks] invite member accounts: from the aws organizations console, invite the member accounts to enable guardduty. accept invitations in member accounts: log in to each invited member account. accept the guardduty invitation from the master account. enable guardduty in member accounts: in each member account, enable guardduty following the same steps outlined earlier. guardduty eks protection pricing (as of september 2023) the pricing structure for eks protection is divided into three tiers based on the volume of events generated per month as seen above table. 
eks audit logs first 100 million events \/ month $1.73 per one million events next 100 million events \/ month $0.87 per one million events over 200 million events \/ month $0.22 per one million events eks runtime monitoring first 500 vcpus \/ month (for monitored eks instances) $1.50 per vcpu next 4,500 vcpus \/ month (for monitored eks instances) $0.75 per vcpu over 5,000 vcpus \/ month (for monitored eks instances) $0.25 per vcpu *** vcpus per month for an instance = (total hours a supported provisioned active eks instance ) x number of vcpus on the instance \/ (number of hours in a month) conclusion in a rapidly evolving threat landscape, safeguarding your containerized workloads is paramount. amazon guardduty, a powerful security service, extends its capabilities to amazon eks (elastic kubernetes service) environments, enhancing protection against security risks and unauthorized access. by leveraging advanced machine learning algorithms and aws's comprehensive threat intelligence, guardduty for eks brings real-time threat detection to the world of containers. eks audit log monitoring empowers administrators to maintain a proactive stance against potential breaches by analyzing audit logs for unauthorized access attempts, sensitive resource access, and changes to cluster configurations. the insights gained enable swift response and mitigation strategies to counter security threats. eks runtime monitoring takes a comprehensive approach by delving into the runtime environment of eks clusters. it identifies unusual process activities, privilege escalations, and network connections that might indicate malicious intent. with detailed insights into cpu and memory usage patterns, administrators can ensure resource integrity and prevent potential attacks. the integration of guardduty with eks is not just about protection; it's about staying ahead of threats. by offering granular findings for different security aspects of your eks clusters, guardduty aids in informed decision-making and rapid response to potential incidents. for organizations looking to ensure a holistic security posture, guardduty's detailed findings provide actionable intelligence. whether it's unauthorized access attempts, abnormal process behavior, or sensitive resource access, guardduty arms administrators with the information needed to fortify their eks environments. by following the steps outlined to extend guardduty protection across your aws organization, you can ensure that security is not just a concern of a single account but a comprehensive strategy embraced throughout your organization's accounts. as you embark on your journey to secure containerized workloads with amazon guardduty and eks, remember that vigilance is the key to success. regularly monitoring guardduty findings, staying updated with evolving threat landscapes, and continuously refining security practices will collectively contribute to maintaining a robust and secure containerized infrastructure on aws."
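for teams that prefer the cli over the console walkthrough above, a minimal sketch could look like the following; it assumes a single account and region, and the region and cluster name are placeholders:

# enable GuardDuty in the region (returns a detector id)
aws guardduty create-detector --enable --region eu-central-1

# list the active detector(s)
aws guardduty list-detectors --region eu-central-1

# install the runtime monitoring agent on an EKS cluster as a managed add-on
aws eks create-addon \
  --cluster-name my-eks-cluster \
  --addon-name aws-guardduty-agent \
  --region eu-central-1
# cluster name above is a placeholder; EKS audit log and runtime monitoring
# themselves are toggled on the detector's feature settings (console or update-detector)

as a rough sense of scale for the pricing above, a node with 4 vcpus that is monitored for the whole month counts as 4 vcpus, so at the first-tier rate of $1.50 per vcpu its runtime monitoring would cost about $6 for that month (illustrative arithmetic only).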
},
{
"title":"Migrating to Java 21 with Spring Framework and Spring Boot: Technical Tips and Strategies",
"body":"Introduction: Tools like Spring Framework and Spring Boot are crucial for creating Java-based applications. The new features and performance enhancements offered by Java 21 can help your Spring-based projects. This article will examine the technical facets of upgrading to Java 21 using Spring Framework and Spring Boot, offering you crucial advice and guidance to help you effectively comp...",
"post_url":"https://www.kloia.com/blog/migrating-to-java-21-with-spring-framework-and-spring-boot-technical-tips-and-strategies",
"author":"Orhan Burak Bozan",
"publish_date":"26-<span>Sep<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/orhan-burak-bozan",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/java21-spring-framework-spring-booth-blog.png",
"topics":{ "java":"java","software":"Software","java21":"java21","spring-framework":"spring framework","spring-boot":"spring boot" },
"search":"30 <span>apr</span>, 2024migrating to java 21 with spring framework and spring boot: technical tips and strategies java,software,java21,spring framework,spring boot orhan burak bozan introduction: tools like spring framework and spring boot are crucial for creating java-based applications. the new features and performance enhancements offered by java 21 can help your spring-based projects. this article will examine the technical facets of upgrading to java 21 using spring framework and spring boot, offering you crucial advice and guidance to help you effectively complete this move. 1. analysis of compatibility: conduct a comprehensive compatibility audit of your spring-based project before switching to java 21. consider the following important steps: make sure you're utilizing the most recent versions of spring framework and spring boot by updating your versions of those two frameworks. make that all other project requirements are likewise compatible with java 21 by reviewing the dependencies. recognize any mismatches in your spring-based project that require correction to be compatible with java 21. 2. spring and a modular system: java has a modular framework as of version 9. investigate the possibilities for using this technology in your spring-based projects. create logical modules for your project: logical module organization of your spring components helps improve project organization. learn how to successfully integrate spring projects with the modular system by reading \"integrating spring with the module system.\" 3. making use of spring's new language features: new language features introduced in java 21 may be helpful for your spring-based projects. be aware of how to make the most of them. examine the effectiveness of using pattern matching to update your spring components. switch expressions: discover how switch expressions can increase the effectiveness of your spring components. 4. audits of performance and security performance gains and improved security features come with the switch to java 21. adaptively optimize your spring-based projects. performance testing: evaluate how well your spring application and components perform and make any necessary adjustments. security improvements: implement the security enhancements provided by java 21 to strengthen the security of your application. 5. garbage collector configuration: garbage collection has undergone new innovations and enhancements with java 21. performance and efficient resource use depend on selecting the appropriate trash collector and configuring it to your project's needs. explore the many garbage collector types offered by java 21 (such as g1, zgc, or shenandoah) and pick the one that best satisfies the requirements of your project. configuration: based on the specifications of your project, optimize the trash collector settings. garbage collector settings are extremely crucial, especially in large and high-load applications. monitoring and analysis: keep an eye on garbage collector performance, and when necessary, evaluate it. this guarantees the efficient operation of your application and aids in avoiding unneeded delays. 6. constant instruction and inspection: ecosystems like spring boot and the spring framework are always changing. try to keep your employees up to date on new advances and offer regular training. conclusion: your projects may become more contemporary, dependable, and effective if you upgrade to java 21 and use spring framework and spring boot. 
but it requires meticulous preparation, compatibility testing, and technical know-how. the tips and strategies above are meant to help you carry out the migration of your spring-based projects to java 21 successfully."
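to make the pattern matching and switch expression advice above concrete, here is a small, self-contained java 21 sketch; the domain types are invented purely for illustration:

// Java 21: switch expression with record patterns over a sealed hierarchy (illustrative types)
sealed interface Payment permits Card, Transfer {}
record Card(String network, double amount) implements Payment {}
record Transfer(String iban, double amount) implements Payment {}

class FeeCalculator {
    static double fee(Payment p) {
        // exhaustive over the sealed interface, so no default branch is needed
        return switch (p) {
            case Card(String network, double amount) -> amount * 0.02;
            case Transfer(String iban, double amount) -> 0.50;
        };
    }
}

the same style maps well onto spring components such as services that branch on event or message types.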
},
{
"title":"Kubernetes 1.28.0: A Comprehensive Look at New Features, Improvements, and Changes",
"body":"Kubernetes continues to evolve, offering new features and improvements with each release to enhance container orchestration. The 1.28.0 release is no exception, bringing a host of updates that touch on everything from security and performance to developer tools and API enhancements. Let's dive into what this new version has to offer. Security Enhancements Advanced Pod Security Pod Securi...",
"post_url":"https://www.kloia.com/blog/kubernetes-1.28.0-a-comprehensive-look-at-new-features-improvements-and-changes",
"author":"Enes Cetinkaya",
"publish_date":"14-<span>Sep<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/enes-cetinkaya",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/kubernetes-1-28-0-blog.png",
"topics":{ "kubernetes":"Kubernetes","security":"security","monitoring":"monitoring","api":"API","kubernestes1-28-0":"kubernestes1.28.0" },
"search":"17 <span>nov</span>, 2023kubernetes 1.28.0: a comprehensive look at new features, improvements, and changes kubernetes,security,monitoring,api,kubernestes1.28.0 enes cetinkaya kubernetes continues to evolve, offering new features and improvements with each release to enhance container orchestration. the 1.28.0 release is no exception, bringing a host of updates that touch on everything from security and performance to developer tools and api enhancements. let's dive into what this new version has to offer. security enhancements advanced pod security pod security gets a significant upgrade, allowing administrators to create more granular security rules. this makes it easier to enforce security best practices without compromising application functionality. kubelet certificates and tls the new kubelet tls bootstrap feature automates the creation of tls certificates, making it easier for kubelets to securely communicate with the control plane, thereby enhancing cluster security. recovery from non-graceful node shutdown this feature, now stable, allows for better handling of unexpected node shutdowns, enabling stateful workloads to restart on a different node successfully. performance and resource management kubernetes topology manager the topology manager feature has been improved for better resource allocation based on hardware topology, particularly beneficial for complex hardware setups like numa architectures. supported skew between control plane and node versions the supported version skew between node and control plane components has been expanded from n-2 to n-3. this change reduces the time lost to node maintenance, particularly beneficial for environments with long-running workloads. logging and monitoring dynamic logs dynamic auditing is now available, allowing for the instant creation of audit policies. this feature provides greater flexibility in adapting to changing security requirements and compliance standards. api and custom resource enhancements customresourcedefinition validation rules the introduction of the common expression language (cel) for validation rules allows for more complex validation without the need for webhooks. this addition simplifies the development and operability of custom resource definitions (crds). validatingadmissionpolicies this feature, now in beta, allows for in-process validation of requests to the kubernetes api server, offering an alternative to validating admission webhooks. match conditions for admission webhooks this feature, which has moved to beta, allows you to specify conditions for when kubernetes should make a remote http call at admission time. developer tools and flexibility kubectl debug the new `kubectl debug` tool simplifies the debugging process by allowing the creation of temporary debugging containers within existing pods. api awareness of sidecar containers this alpha feature introduces a \u201Drestartpolicy\u201D field for init containers, indicating when an init container is also a sidecar container. this feature enhances the startup sequence of containers within a pod. data backup and recovery snapshot and restorations new volume snapshot and restore capabilities have been introduced, making it easier to manage and recover data. other notable features support for cdi injection into containers this alpha feature provides a standardized way of injecting complex devices into containers. 
automatic assignment of default storageclass this feature, now stable, automatically sets a `storageclassname` for a persistentvolumeclaim if none is provided. pod replacement policy for jobs this alpha feature allows you to specify when new pods should be created as replacements for existing pods in jobs. job retry backoff limit, per index this extends the job api to support indexed jobs where the backoff limit is per index, allowing the job to continue execution despite some of its indexes failing. conclusion kubernetes 1.28.0 is packed with features and improvements that make the platform more secure, efficient, and developer-friendly. whether you're an administrator looking to enhance security measures or a developer aiming for more efficient resource management and debugging, this version has something to offer."
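as an illustration of the sidecar-related restartpolicy field described above, a minimal pod sketch might look like this; it assumes a 1.28 cluster with the sidecarcontainers feature gate enabled (the feature is alpha), and the image names are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper
      image: example/log-shipper:latest   # placeholder image
      restartPolicy: Always               # 1.28 alpha: marks this init container as a sidecar
  containers:
    - name: app
      image: example/app:latest           # placeholder image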
},
{
"title":"Going Global: Using AWS Global Accelerator for Improved Application Performance",
"body":"In today's interconnected world, businesses are no longer confined by geographical boundaries. With the rise of the internet and cloud computing, applications and services are accessed by users around the globe. However, delivering a seamless user experience across different regions can be a challenge due to latency, network congestion, and other performance issues. This is where AWS Glo...",
"post_url":"https://www.kloia.com/blog/going-global-using-aws-global-accelerator-for-improved-application-performance",
"author":"Ahmet Ayd\u0131n",
"publish_date":"22-<span>Aug<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/ahmet-aydın",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws-global-accelerator-for-improved-app-performance-blog.png",
"topics":{ "aws":"AWS","performance":"performance","aws-global-accelerator":"AWS Global Accelerator","going-global":"going global","accelerator":"accelerator" },
"search":"22 <span>aug</span>, 2023going global: using aws global accelerator for improved application performance aws,performance,aws global accelerator,going global,accelerator ahmet ayd\u0131n in today's interconnected world, businesses are no longer confined by geographical boundaries. with the rise of the internet and cloud computing, applications and services are accessed by users around the globe. however, delivering a seamless user experience across different regions can be a challenge due to latency, network congestion, and other performance issues. this is where aws global accelerator comes into play. in this blog post, i will explain how aws global accelerator can enhance application performance and share with you real-world use cases that demonstrate its benefits. understanding aws global accelerator aws global accelerator is a service that improves the availability and performance of applications by utilizing amazon's highly available and distributed network infrastructure. it directs traffic over the aws global network, allowing applications to achieve lower latency and enhanced availability for end users. the key features of aws global accelerator include: static ip addresses aws global accelerator provides you with static anycast ip addresses that can be easily associated with your application endpoints. these ip addresses remain the same across multiple aws regions, ensuring a consistent entry point for your users. traffic distribution the service intelligently routes traffic to the optimal endpoint based on health checks, geography, and routing policies that you define. this type of routing ensures that users are directed to the closest and healthiest endpoint, thus reducing latency and improving application performance. accelerated network backbone the service leverages the amazon global network, which is designed for low-latency and high-throughput traffic. this means that users can experience improved performance when accessing your application, even from distant locations. health checks and failover aws global accelerator continuously monitors the health of your application endpoints. if an endpoint becomes unhealthy due to application failures or other issues, traffic is automatically redirected to healthy endpoints, ensuring high availability. benefits of using aws global accelerator reduced latency one of the primary benefits of aws global accelerator is reduced latency. by directing traffic over the aws global network and routing it to the closest available endpoint, users experience faster response times and have a better overall user experience. high availability with the ability to perform health checks and automatically route traffic away from unhealthy endpoints, aws global accelerator improves application availability. this helps in maintaining uninterrupted service for users, even in the face of endpoint failures. global reach whether your users are in north america, europe, asia, or any other part of the world, aws global accelerator ensures that they are connected to an optimal endpoint. this global reach is crucial for businesses that have a diverse user base across the world. simplified architecture implementing global load balancing and failover mechanisms traditionally involves complex setups. aws global accelerator simplifies this process by providing a managed service that takes care of routing and failover logic. 
predictable performance with aws global accelerator's ability to route traffic based on policies and health checks, you can ensure that users are directed to endpoints that provide the best performance. this predictability is crucial for applications that demand consistent performance levels. real-world use cases e-commerce platform imagine a global e-commerce platform that serves customers from different regions. during peak shopping times, the platform experiences heavy traffic loads that can lead to performance degradation. by utilizing aws global accelerator, the platform can distribute traffic to multiple endpoints in different aws regions. this not only reduces the load on individual endpoints but also ensures that users are directed to the nearest endpoint, minimizing latency and providing a smooth shopping experience. additionally, if an endpoint experiences technical issues, aws global accelerator can automatically redirect traffic to healthy endpoints, preventing downtime and ensuring uninterrupted service. video streaming service a video streaming service that caters to a global audience faces the challenge of delivering high-quality video content with minimal buffering. aws global accelerator can play a crucial role in this scenario by optimizing the delivery of video streams. by directing users to the nearest content delivery endpoint, the service reduces buffering and improves the overall streaming experience. furthermore, aws global accelerator's ability to perform health checks ensures that users are directed to endpoints that can handle the load. if an endpoint becomes overloaded or experiences issues, traffic can be quickly redirected to healthier endpoints, maintaining a seamless streaming experience. online gaming platform online gaming platforms have to run with ultra-low latency to provide an immersive and enjoyable gaming experience. aws global accelerator can be used to optimize the delivery of gaming traffic. by directing players to the nearest game server endpoints, the service minimizes latency and ensures that in-game actions are transmitted quickly, reducing lag. if a game server experiences downtime or connectivity problems, aws global accelerator can automatically route players to alternative healthy servers, preventing disruptions and maintaining gameplay continuity. getting started with aws global accelerator implementing aws global accelerator for your application involves a few key steps, and i'll illustrate each step with examples: creating accelerators start by creating an accelerator and associating it an ip address. for instance, let's say you're setting up an accelerator for a global messaging app. you create an accelerator named \"kloiamsgappaccelerator\", by default aws provide ip address dynamically. configuring listeners configure listeners to define the protocols and ports that your application uses to receive traffic. continuing with the messaging app scenario, you might set up a listener for https traffic on port 443. creating endpoint groups create endpoint groups that consist of the resources serving traffic to users. these resources can be elastic ip addresses, network load balancers, or amazon ec2 instances. add endpoints add endpoints for each region and specify the resources that handle incoming traffic. conclusion in the era of global connectivity, ensuring optimal application performance for users around the world is a critical consideration. 
aws global accelerator offers a powerful solution to this challenge, leveraging amazon's vast network infrastructure to reduce latency, to enhance availability, and to provide a seamless user experience. from e-commerce platforms to video streaming services and online gaming platforms, aws global accelerator has proven its effectiveness across various use cases. by adopting aws global accelerator, businesses gain the ability to scale their applications globally without compromising on performance or availability. the benefits of reduced latency, high availability, and simplified architecture contribute to a better user experience which leads to increased customer satisfaction and loyalty. moreover, the predictability of performance ensures that businesses can meet the demands of their users even during peak traffic periods. as we move further into a digitally connected world, the role of aws global accelerator becomes increasingly crucial. it empowers businesses to transcend geographical boundaries and provide a consistent experience to users, regardless of their location. by strategically distributing traffic to the nearest and healthiest endpoints, aws global accelerator becomes a fundamental tool for delivering applications and services to a diverse global audience. to embark on your journey with aws global accelerator, it's essential to understand your application's requirements, design an optimal routing strategy, and continuously monitor and adjust your setup for optimal performance. by following best practices and utilizing the features provided by aws global accelerator, businesses can ensure that their applications perform at their best, delighting users and driving business success in the global marketplace."
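the getting-started steps above map fairly directly onto the aws cli; a hedged sketch, with every arn and the endpoint region treated as placeholders, could look like this:

# 1. create the accelerator (static anycast IPs are assigned automatically)
aws globalaccelerator create-accelerator --name KloiaMsgAppAccelerator

# 2. add a listener for HTTPS traffic on port 443 (client protocol is TCP)
aws globalaccelerator create-listener \
  --accelerator-arn <accelerator-arn> \
  --protocol TCP \
  --port-ranges FromPort=443,ToPort=443

# 3. create an endpoint group in a region and attach an endpoint such as an ALB
aws globalaccelerator create-endpoint-group \
  --listener-arn <listener-arn> \
  --endpoint-group-region eu-west-1 \
  --endpoint-configurations EndpointId=<alb-arn>,Weight=128

note that the global accelerator api itself is served out of us-west-2, so you may need to add --region us-west-2 to these calls if your default region differs.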
},
{
"title":"Appium 2.0 Released: Migrating to Appium 2.0",
"body":"Appium 2.0 is officially released! It is the most significant and highly anticipated release in the past 5 years. It\u2019s finally time to celebrate this release! The main purpose of this highly anticipated release is to make the test automation process more modular and flexible. This includes transitioning to a new architecture for extensions and drivers. One of the key objectives of the re...",
"post_url":"https://www.kloia.com/blog/appium-2.0-released-migrating-to-appium-2.0",
"author":"Sinem Korkmaz",
"publish_date":"09-<span>Aug<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/sinem-korkmaz",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/migration-to-appium-2-0-blog.png",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","appium":"Appium","unittesting":"unittesting","qa":"QA","test-driven-development":"Test Driven Development","performance-testing":"Performance Testing","manual-testing":"manual testing","appium-2-0":"Appium 2.0" },
"search":"09 <span>aug</span>, 2023appium 2.0 released: migrating to appium 2.0 test automation,software testing,appium,unittesting,qa,test driven development,performance testing,manual testing,appium 2.0 sinem korkmaz appium 2.0 is officially released! it is the most significant and highly anticipated release in the past 5 years. it\u2019s finally time to celebrate this release! the main purpose of this highly anticipated release is to make the test automation process more modular and flexible. this includes transitioning to a new architecture for extensions and drivers. one of the key objectives of the release is to further strengthen appium through the contributions of the community. i would like to share with you the major new features and the breaking changes that come with the new version major new features protocol changes the most significant and extensive change in appium 2.x occurred in its architecture. it will no longer support the old jsonwp (json wire protocol) and mjsonwp (mobile json wire protocol) protocols. these protocols were used in selenium and appium before the w3c webdriver protocol was accepted as a web standard. up to appium 2.0, it also supported the w3c webdriver protocol along with these old protocols, thereby enabling older clients to communicate with new appium servers. however, this changes with appium 2.0, and from now on, only the w3c webdriver protocol will be supported. this step directs appium users towards a future of working in a more consistent and standard way across various platforms and technologies. new installation process with appium 2.x, the steps for appium installation have changed. let's take a look at the new installation steps. first of all, you need to upgrade your appium version to 2.0, so let\u2019s start with upgrading. to install appium 2.0 globally, you need to run the npm command below. npm i --location=global appium to verify the installed appium version, run the following command: appium -v appium drivers no longer come as default with the installation of appium 2.0. although this change adds steps for installation, it also brings some benefits.the size of appium installation has significantly dropped, eliminating the need to include drivers you don't use. as drivers can now be updated independently, you can update them without updating the entire appium framework, or keep the drivers at a stable version while updating appium. additionally, you can create your own driver in compliance with the appium driver architecture or install drivers shared within the community. you can install appium drivers using the following commands: appium driver install uiautomator2 appium driver install xcuitest appium driver install flutter appium driver install espresso to make sure that all the drivers are installed, run the following command: appium driver list with appium 2.0, another installation step involves independently installing the plugins. first, you can retrieve the list of available plugins by running the following command: appium plugin list you can install any plugin by running its respective installation command: appium plugin install appium plugin install images appium driver list tips: you can also use the 'appium-installer', a helper tool designed to simplify the installation steps. this tool was developed by the appiumtestdistribution group, and it is officially recommended by appium. beyond appium's official installation steps, community-developed appium-installer npm library simplifies the installation process. 
with appium-installer, you can choose the drivers and plugins you want to install and complete the installation with a single command. you may notice that the installation steps of appium-installer are very similar to the installation steps of webdriver.io. let's look at the installation steps of appium 2.0 using appium-installer. you install appium-installer globally by running the command below. subsequently, you execute the appium-installer command, choose the drivers and plugins you wish to install, and thus complete your setup. install appium-installer npm install appium-installer -g install appium appium-installer install drivers and plugins from anywhere now, let's explore the features of appium's extension cli for managing drivers and plugins. let's take a look at how you can install other drivers developed by the community or not maintained by the appium team. the syntax for installing drivers and plugins is: appium <driver|plugin> install <extension-name> [--source=<source>] [--package=<package>] [--json] required arguments: <driver|plugin>: whether you are installing a driver or a plugin. <extension-name>: the name, location, and\/or version of the extension you want to install. optional arguments: --source: this tells appium where to find your extension. see the table below for details. --package: when <source> is git or github, --package is required. it should be the node.js package name of the extension. --json: return the result in json format source description example none without a '--source', appium automatically matches to official extension names and installs the latest version of any match via npm appium plugin install relaxed-caps npm install an extension via npm appium driver install --source=npm appium-tizen-tv-driver@0.8.1 github install an extension via a github org and repo appium driver install --source=github --package=appium-windows-driver appium\/appium-windows-driver git install an extension via a git url appium driver install --source=git --package=appium-windows-driver https:\/\/github.com\/appium\/appium-windows-driver.git local install an extension from a local path on the filesystem appium driver install --source=local --package=custom-driver \/usr\/local\/plugins\/custom-driver configuration files in addition to command-line arguments, appium has now extended its support to include configuration files. essentially, nearly all parameters that were once exclusively provided through the cli in appium 1.x can now be managed through a configuration file. let's consider the advantages of using a configuration file. first, it provides flexibility. you can easily switch between different configurations for various test scenarios. for instance, if you have different sets of devices or platforms, you can maintain separate configuration files for each set and switch between them as needed. second, instead of passing a series of command line arguments every time you start your appium server, you can manage your configuration from a file. finally, it is beneficial for all team members to work with the same settings: you can share configuration files within the team and operate with consistent appium server settings. let's look at how we can configure the config file. appium supports json, yaml, js, and cjs file types for configuration files. 
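as a quick illustration before the json example that follows, the same kind of settings can also live in a yaml file; the sketch below is hypothetical (the chosen port, drivers, and plugin are assumptions, and each key simply mirrors the matching cli flag):

# appiumrc.yaml - hypothetical sketch of an appium 2.0 configuration file
server:
  port: 4723          # port the appium server listens on
  use-drivers:        # activate only the installed drivers you need
    - uiautomator2
    - xcuitest
  use-plugins:        # activate an installed plugin
    - images
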
you can name the configuration files anything you want; however, files named appiumrc.json (as recommended) will be automatically loaded by appium. to specify a custom location for your configuration file, you can run the command 'appium --config \/path\/to\/config\/file'. let's examine an example of how you can configure plugins and drivers. { \"server\": { \"use-plugins\": [\"my-plugin\", \"some-other-plugin\"] }, \"driver\": { \"xcuitest\": { \"webkit-debug-proxy-port\": 5400 } } } breaking changes server base path the default server path \u201C\/wd\/hub\u201D used in appium 1.x has been discontinued. the base url was changed from \u201Chttp:\/\/localhost:4723\/wd\/hub\u201D to \u201Chttp:\/\/localhost:4723\/\u201D. however, if you wish to adhere to the old practice, you can launch the appium server with the following command or set 'base-path: \/wd\/hub' in the appium configuration file. appium --base-path=\/wd\/hub driver-specific command line options with appium 2.0, the cli parameters specific to drivers and platforms have been moved under their respective drivers. this change provides easier management and better organization of driver-specific parameters, thereby eliminating confusion about which parameter belongs to which driver. let's examine the old and new usage with the examples below. appium 1.x appium --webdriveragent-port 8100 appium 2.x appium --driver-xcuitest-webdriveragent-port=5000 capabilities with appium 2.0 supporting w3c standards, capabilities have changed as well. except for the w3c standard capabilities (like 'browsername' and 'platformname'), every capability now carries a vendor prefix. a vendor prefix is a string followed by a colon. for example: appium:app appium:noreset appium:devicename \"wd\" javascript client library no longer supported over the years, some of appium's creators have maintained the wd client library. however, with appium 2.x, this library has been deprecated and hasn't been updated for use with the w3c webdriver protocol. consequently, if you're using this library, you'll need to transition to a more modern one. at this juncture, both appium and the wd client library recommend webdriverio. webdriverio, with its updated features, fully supports the new protocol standards introduced by appium 2.0. appium 2.0 brings new possibilities i've walked through the new features of appium 2.0 and given some examples. these enhancements, which enable anyone to develop and share drivers and plugins, open the door to a world of development opportunities far beyond the ios and android platforms. i hope this blog post was useful in helping your transition to appium 2.0."
},
{
"title":"Main Reasons of the Failed Test Automation Projects",
"body":"It is a fact: Software Test Automation accelerates the software development lifecycle and testing process. Automation improves software quality, speeds up regression testing, and reduces costs. That's why companies or teams often want to incorporate the word \"automation\" into their projects when they hear it. However, some time after the automation project starts, team focus shifts to ot...",
"post_url":"https://www.kloia.com/blog/main-reasons-of-the-failed-test-automation-projects",
"author":"Betul Sorman",
"publish_date":"01-<span>Aug<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/betul-sorman",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/failed-test-automation-project.png",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","behavior-driven-development":"Behavior Driven Development","qa":"QA","test-driven-development":"Test Driven Development","performance-testing":"Performance Testing","data-driven-testing":"Data-Driven Testing","two-factor-authentication":"two-factor authentication","manual-testing":"manual testing" },
"search":"17 <span>nov</span>, 2023main reasons of the failed test automation projects test automation,software testing,behavior driven development,qa,test driven development,performance testing,data-driven testing,two-factor authentication,manual testing betul sorman it is a fact: software test automation accelerates the software development lifecycle and testing process. automation improves software quality, speeds up regression testing, and reduces costs. that's why companies or teams often want to incorporate the word \"automation\" into their projects when they hear it. however, some time after the automation project starts, team focus shifts to other tools, new project priorities or feature improvements. in these cases, automation fails to meet expectations, forcing teams to revert to manual testing. this turns the test automation effort into waste\u2026 as kloia, we have worked in various fields and identified many reasons that can cause a software test automation project to fail. in this blog post, i will point at five reasons for failure and discuss the measures that can be taken to address these issues. 1 - lack of knowledge from decision makers modern organizations are embracing automation to streamline processes and enhance efficiency. however, many organizations suffer from lack of adequate support from decision makers. this is caused by the lack of knowledge about automation. moreover, some decision makers do not see this disconnect as a problem, and they show little interest in familiarizing themselves with the intricate workings of this pivotal aspect of their business. for instance, the decision maker may have a misconception that test automation will solely provide cost savings. this may cause them to allocate inadequate budget for the project. when this happens, the project team does not have sufficient resources to meet the requirements. therefore, the decision maker\u2019s misconception leads to an inevitable failure. decision makers need to understand the potential of test automation and the basic prerequisites for successful test automation projects. these informed decision makers can make the right decisions, providing the necessary resources and support for projects. this, in turn, enables the successful implementation of test automation projects and helps organizations enhance software quality under a systematic, sustainable leadership. 2 - unrealistic expectations there is no way to put this softly: test automation adds incredible strength to your development process but it takes time and it won\u2019t solve all of your problems. many teams assume that switching to automation is an easy, one-off task, but it is an ongoing effort. once you start automating with the simplest scenarios, you will then need to move to more complex scenarios to reap the benefits of test automation. however, it's impossible to automate every function. even if all tests were to be automated (they won't be), testers are needed to manage these automated tests. your application will be constantly changing and growing. as a result, tests will need to be updated and new tests will be added as the application grows. none of these happen magically, they all need effort and time. teams that treat test automation as a one-off task underestimate the true effort needed for automation and end up creating unrealistic timelines for themselves. 
they often stem from misconceptions about the capabilities and limitations of test automation tools, as well as the time and effort required for successful implementation. decision makers and stakeholders may have exaggerated expectations regarding the speed, coverage, and effectiveness of automated tests. these unrealistic time expectations can arise at different stages of a test automation and they negatively impact project goals, quality, and resource allocation. some teams expect automation to fix everything. these teams mistakenly believe that once they implement automation, all their testing problems will be solved, including eliminating human errors, detecting all bugs, and achieving 100% test coverage. this expectation can lead to disappointment when automation falls short of meeting these lofty, impossible goals. companies who expect everything from test automation overlook the fact that test automation has its limitations and cannot replace human intuition, creativity, and domain expertise. as a result, critical issues go undetected, and the project fails to deliver the desired outcomes. automation may not solve everything if the project is struggling to keep up with the quality concern and releases faster than usual. can it help? certainly. it helps shorten some cycles in manual testing, it can help the team to get to the market faster. therefore, test automation should be viewed as an important ingredient, but not a magic wand that can fix all issues. it is essential to establish realistic goals and timelines, taking into account the inherent challenges and limitations of test automation. by setting achievable objectives, allocating sufficient resources, and understanding the iterative, ongoing nature of test automation, organizations can improve the chances of success in their test automation endeavors. managing expectations realistically is crucial for the long-term viability and effectiveness of test automation projects. emphasizing the collaborative effort between automation and human testers can lead to a more effective and sustainable approach to software testing. finally, educating decision makers and stakeholders about the benefits and limitations of test automation can help manage expectations effectively. 3 - lacking a clear roadmap having a plan is critical if you are looking for sustainable success across your projects. it is not possible to know whether you are successful when you are not completely sure of what you want to achieve. it is easy to fail if you don\u2019t know what you are aiming for. this applies to automation projects too, especially if this is your first one. let me walk you through a common example. an excited team wants to start test automation. they pick the first feature in their spec document - login with two-factor authentication (2fa). this feature, unfortunately, is quite complex to automate reliably. with each release, automation gets more complex and potentially less secure due to the additional layer it introduces. the team ends up spending considerable effort in developing and maintaining the test automation for 2fa, but gets very low returns from the investment into test automation. to avoid problems like this, it is very important to plan automation projects at the beginning. you can start planning by asking the following key questions about test automation: which test cases are suitable for automation, which ones will be automated? which test scenarios will still require human intervention? 
who will be responsible for maintaining cases and scenarios? which technology stacks and applications will be affected? which test processes will take priority? which tests will run on certain pipelines? what is the review procedure for test results? who is responsible for fixing failed scenarios? how will new scenarios be added to the existing jobs? do we need any pocs to establish clarity in any areas, before we start working on them? what is the desired level of stability before we scale solutions? answering these fundamental questions before starting the project helps to gain the maximum benefit in a short time and increases the chances that your efforts will not go to waste after the initial round of automation. 4 - insufficient team skills and experience a successful automation project requires specific skills to take off and keep going. from analysts to automation specialists, the team must have all the skills needed to run end-to-end automation. however, it can be difficult to find people with enough technical knowledge to add these skills to the team. for example, small companies and startups may not be able to afford to hire dedicated automation engineers, as finding successful, experienced, or interested individuals can be difficult and time-consuming. thus, the team's lack of expertise and knowledge to complete all their tasks and activities will result in automated processes that are not up to expectations. it is important to create a balanced team for an effective test automation project. if the team does not have these skills, it is important to invest in training to gain these skills or to hire the right talent with the right skills. teams can accelerate their learning pace by inviting everyone to take part in automation projects. test automation can be introduced to new hires as a part of the onboarding process. 5 - automating a broken test process automation, when done right, speeds up everything at a fraction of the cost. that is why it is important to choose a healthy testing process to automate. when a broken process is automated, all that is achieved is a faster broken process. you can maximize automation output by first evaluating the existing processes and obtaining answers to the questions below: do we need improvements before we automate? are there bottlenecks or inefficiencies in the existing process? are there similar processes that can be merged before automation? what are the clear steps in the lifecycle of a story? what are the responsibilities of each team member at each step? what is the process for iterative improvement? there is no perfect process for performing automation tasks, but improvements are always possible. some basic improvements are removing tasks that do not add value, such as rework, unnecessary approvals. once you remove these bottlenecks and implement the improvements, you can build a test automation process that speeds up the new, lean process. conclusion software test automation projects can fail for various reasons. failure is often associated with a lack of or incorrect approach, planning, implementation, or management process. the main reasons for project failure include inadequate resources, ineffective test strategies, faulty test scenarios, incorrect prioritization, poor communication, lack of collaboration, technical issues, and misunderstanding of requirements. additionally, factors such as the wrong selection of test automation tools, inexperience or lack of training in tool usage can also negatively impact project success. 
to ensure a successful software test automation project, it is important to have the right strategy and planning, select appropriate tools, execute a good training and communication process, create effective test scenarios, and adopt a continuous improvement approach."
},
{
"title":"Seamless Updates with Canary Deployment on AWS EKS: Leveraging Istio, Argo CD, and Argo Workflows",
"body":"In the dynamic world of cloud-native applications, deploying new features or updates to production without causing disruptions to users is a challenging task. Canary deployment, allows organizations to roll out changes gradually, reducing risks and gathering valuable feedback from a subset of users before a full release. In this comprehensive guide, we will explore how to implement canar...",
"post_url":"https://www.kloia.com/blog/seamless-updates-with-canary-deployment-on-aws-eks-leveraging-istio-argo-cd-and-argo-workflows",
"author":"Ahmet Ayd\u0131n",
"publish_date":"27-<span>Jul<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/ahmet-aydın",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/seamless-updates-canary-deployment-aws-eks-blog.png",
"topics":{ "aws":"AWS","devops":"DevOps","eks":"EKS","argo-cd":"argo cd","istio":"istio" },
"search":"03 <span>aug</span>, 2023seamless updates with canary deployment on aws eks: leveraging istio, argo cd, and argo workflows aws,devops,eks,argo cd,istio ahmet ayd\u0131n in the dynamic world of cloud-native applications, deploying new features or updates to production without causing disruptions to users is a challenging task. canary deployment, allows organizations to roll out changes gradually, reducing risks and gathering valuable feedback from a subset of users before a full release. in this comprehensive guide, we will explore how to implement canary deployment on aws elastic kubernetes service (eks) using powerful tools like istio, argo cd, and argo workflows. by combining these technologies, development teams can innovate rapidly while maintaining the reliability and stability of their cloud-native applications. in this guide, we will delve into the step-by-step process of setting up an aws eks cluster, configuring istio for advanced service mesh capabilities, and simplifying application deployment with argo cd. furthermore, we will showcase how argo workflows can orchestrate the canary deployment process, providing a seamless and controlled transition to the new version. let's dive into the world of canary deployment on aws eks, where reliability meets innovation, and continuous improvement becomes a reality. setting the stage with aws eks aws elastic kubernetes service (eks) is a managed kubernetes service that simplifies the deployment, management, and scaling of containerized applications using kubernetes. it provides a reliable and scalable platform for running microservices and applications. # create an eks cluster using the aws management console or aws cli aws eks create-cluster --name my-eks-cluster --role-arn arn:aws:iam::123456789012:role\/eks-cluster-role --resources-vpc-config subnetids=subnet-1a,subnet-1b,subnet-1c,securitygroupids=sg-1234567890 # configure the kubeconfig file to access the eks cluster aws eks update-kubeconfig --name my-eks-cluster # verify the cluster status kubectl get nodes example: setting up an aws eks cluster introducing istio for service mesh capabilities istio is an open-source service mesh platform that offers advanced networking and security features for kubernetes applications. it provides powerful traffic management, observability, and fault tolerance capabilities. # install istio using helm helm repo add istio.io istio helm repo update helm install my-istio istio.io\/istio # enable istio automatic sidecar injection kubectl label namespace default istio-injection=enabled # create istio virtualservice and destinationrule for canary deployment kubectl apply -f - < example: installing istio and enabling traffic management simplifying application deployment with argo cd argo cd is a declarative, gitops continuous delivery tool for kubernetes. it helps automate application deployments, reduces human errors, and provides a consistent way to manage applications' configuration. installing argo cd and argo workflows to use argo cd and argo workflows for your canary deployment, you need to install these tools in your kubernetes cluster. 
# create a namespace for argo cd kubectl create namespace argocd # install argo cd using helm helm repo add argo https:\/\/argoproj.github.io\/argo-helm helm repo update helm install argocd argo\/argo-cd -n argocd # expose the argo cd ui kubectl patch svc argocd-server -n argocd -p '{\"spec\": {\"type\": \"loadbalancer\"}}' example: installing argo cd # argo-cd-application.yaml apiversion: argoproj.io\/v1alpha1 kind: application metadata: name: my-application spec: destination: namespace: default server: project: default source: repourl: path: kubernetes targetrevision: main syncpolicy: automated: {} example: deploying applications with argo cd orchestrating the canary deployment with argo workflows argo workflows is an excellent tool for orchestrating complex workflows in kubernetes. with argo workflows, you can define the steps and logic required to execute the canary deployment process seamlessly. # install argo workflows using helm helm install argoworkflows argo\/argo-workflows example: installing argo workflows canary deployment workflow i'll provide a comprehensive overview of the canary deployment workflow using istio, argo cd, and argo workflows. i'll walk readers through the entire process, from pushing updates to the canary environment to monitoring and gathering feedback during the canary deployment. # canary-deployment-workflow.yaml apiversion: argoproj.io\/v1alpha1 kind: workflow metadata: name: canary-deployment spec: entrypoint: canary-deploy templates: - name: canary-deploy steps: - - name: image-promotion template: image-promotion - - name: traffic-shifting template: traffic-shifting - - name: validation-checks template: validation-checks onexit: exit-handler - name: image-promotion container: image: myorg\/image-promotion:latest command: [\"sh\", \"-c\", \"echo 'image promotion step'\"] - name: traffic-shifting container: image: myorg\/traffic-shifting:latest command: [\"sh\", \"-c\", \"echo 'traffic shifting step'\"] - name: validation-checks container: image: myorg\/validation-checks:latest command: [\"sh\", \"-c\", \"echo 'validation checks step'\"] - name: exit-handler container: image: myorg\/exit-handler:latest command: [\"sh\", \"-c\", \"echo 'canary deployment completed'\"] example: creating a canary deployment workflow to illustrate the canary deployment workflow, let's consider a real-world example. the software company deploys a new version of its e-commerce application, introducing a more efficient product recommendation algorithm. the canary deployment process begins by pushing the new version to the canary environment. argo cd detects the change and triggers the canary deployment workflow orchestrated by argo workflows. during the canary deployment, istio directs only 5% of the user traffic to the canary version, while the remaining 95% is directed to the stable version. observing the behavior of the canary version under real-world traffic conditions helps identify potential issues and gauge its performance. to ensure the canary version's stability, argo workflows runs a series of automated tests against the canary deployment, including api endpoint tests and performance tests. these tests validate the application's functionality and performance against predefined criteria. additionally, the software company monitors application metrics and gathers user feedback to assess the canary version's performance. prometheus and grafana are used to monitor key performance indicators (kpis) such as response times and error rates. 
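to make the traffic split described above concrete, here is a minimal, purely hypothetical sketch of the istio destinationrule and virtualservice it could use (the my-app service name and the version labels are assumptions, not taken from the original setup):

# hypothetical manifests: route 95% of traffic to the stable subset and 5% to the canary
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app
  subsets:
    - name: stable      # pods labeled version: v1
      labels:
        version: v1
    - name: canary      # pods labeled version: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    - route:
        - destination:
            host: my-app
            subset: stable
          weight: 95    # stable version keeps 95% of requests
        - destination:
            host: my-app
            subset: canary
          weight: 5     # canary version receives 5% of requests

promoting the canary is then a matter of adjusting the two weights, which is exactly the kind of change the traffic-shifting step in the workflow above can automate.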
benefits of canary deployment canary deployment offers several significant benefits for the scenario of deploying cloud-native applications on aws eks with istio, argo cd, and argo workflows: risk mitigation canary deployment minimizes the risk associated with introducing new features or updates to the production environment. by initially releasing the changes to a small subset of users (the canary group), any potential issues or bugs can be identified early without impacting the majority of users. improved reliability canary deployments help ensure the reliability of applications. by testing the new version under real-world conditions with a limited user base, developers can gather valuable feedback and address any performance or stability issues before a full release. smooth rollback if the canary version shows unexpected behavior or performance degradation, a rollback is straightforward. only a small percentage of users are affected, and the impact of the rollback is minimal. early feedback loop canary deployments provide an early feedback loop from real users. this allows development teams to understand how users interact with the new features, gather feedback, and make necessary improvements before scaling the changes to the entire user base. optimized resource utilization canary deployments allow for optimal resource utilization. by directing only a fraction of the traffic to the canary version, computing resources are not fully utilized, reducing any potential overload on the system in case of a production problem. seamless user experience users in the canary group experience a seamless transition to the new version. with istio's traffic management capabilities, the transition can be smooth and controlled, ensuring minimal disruption to users. fast iteration and continuous improvement canary deployments enable development teams to iterate quickly and continuously improve their applications. this rapid iteration leads to faster innovation and the ability to respond promptly to market demands. easy monitoring and observability canary deployments come with comprehensive monitoring and observability features. tools like prometheus and grafana provide real-time visibility into the canary version's performance, making it easier to detect any anomalies or issues. validation of hypotheses canary deployments can be used to validate hypotheses about user behavior or performance improvements. for example, developers can test whether a specific algorithm change leads to better user engagement. increased developer confidence with the safety net of canary deployments, developers gain confidence in deploying changes to production. it encourages a culture of continuous integration and continuous deployment (ci\/cd). conclusion canary deployment on aws eks with istio, argo cd, and argo workflows offers a powerful strategy for seamless application updates. by gradually rolling out changes to a subset of users, organizations can mitigate risks and gather early feedback, ensuring improved reliability and a smooth user experience. the combination of istio's traffic management, argo cd's gitops approach, and argo workflows' orchestration capabilities streamlines the canary deployment process. this combination empowers development teams to iterate quickly, respond to user feedback, and maintain a competitive edge. canary deployment fosters a culture of continuous improvement, encouraging data-driven decisions based on real user interactions. 
it optimizes resource utilization and instills confidence in deploying changes to production. with its many benefits, canary deployment remains a key strategy for delivering high-quality cloud-native applications and staying ahead in the ever-evolving technology landscape."
},
{
"title":"Authentication and Authorization with Keycloak on AWS EKS and Aurora",
"body":"In today's technology landscape, managing user authentication and authorization is a critical aspect of application development. Fortunately, with the help of modern tools and cloud services, this process has become much more streamlined and efficient. In this comprehensive guide, we will explore how to leverage Keycloak, an open-source identity and access management solution, in conjunc...",
"post_url":"https://www.kloia.com/blog/simplifying-authentication-and-authorization-with-keycloak-on-aws-eks-and-aurora",
"author":"Ahmet Ayd\u0131n",
"publish_date":"26-<span>Jun<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/ahmet-aydın",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/keycloak-aws-eks-aurora.png",
"topics":{ "aws":"AWS","devops":"DevOps","eks":"EKS","keycloak":"Keycloak","aurora":"Aurora" },
"search":"01 <span>aug</span>, 2024authentication and authorization with keycloak on aws eks and aurora aws,devops,eks,keycloak,aurora ahmet ayd\u0131n in today's technology landscape, managing user authentication and authorization is a critical aspect of application development. fortunately, with the help of modern tools and cloud services, this process has become much more streamlined and efficient. in this comprehensive guide, we will explore how to leverage keycloak, an open-source identity and access management solution, in conjunction with aws eks (elastic kubernetes service) and aws aurora for postgresql to simplify the management of user identities and secure access to your applications. understanding keycloak user authentication and authorization are essential components of modern applications. keycloak is an open-source identity and access management solution that offers a wide range of features to simplify this process. it provides comprehensive support for single sign-on (sso), social login integration, user federation, and fine-grained access control policies. key features and benefits keycloak offers several key features that make it a powerful choice for managing user authentication and authorization. these features include sso capabilities, support for various identity providers, user federation for centralized user management, and flexible access control policies. keycloak also provides integration options with other tools and frameworks, making it highly customizable and extensible. keycloak architecture overview keycloak follows a modular architecture that consists several components working together to provide authentication and authorization services. the keycloak server is the central component responsible for handling user authentication, authorization, and identity management. it can be integrated with various user stores, such as databases or ldap servers. keycloak also supports identity brokering, which allows integration with external identity providers like google or facebook. use cases for keycloak keycloak is a versatile solution that caters to various use cases. it can be used for securing web applications, mobile applications, or microservice architectures. keycloak's sso capabilities enable users to log in once and access multiple applications seamlessly. its fine-grained access control policies allow organizations to enforce authorization rules across their applications, ensuring that users only have access to the resources they are authorized for. setting up aws eks aws eks (elastic kubernetes service) is a managed kubernetes service provided by amazon web services. it simplifies the process of deploying, managing, and scaling containerized applications using kubernetes. provisioning an eks cluster setting up an eks cluster involves several steps, starting with creating an amazon virtual private cloud (vpc) to host the cluster. you will need to configure the vpc networking, subnets, and security groups. once the vpc is set up, you can create the eks cluster using the aws management console, cli, or sdks. leveraging aws aurora for postgresql aws aurora for postgresql is a fully managed, highly scalable, and durable relational database service provided by aws. it offers the performance and reliability of commercial databases with the cost-effectiveness and ease of use of open-source databases. provisioning aurora for postgresql cluster to leverage aurora for keycloak's backend database, you need to provision an aurora cluster. 
this involves selecting the appropriate instance types, storage options, and configuring the cluster settings to meet your application's requirements. benefits of using aurora for keycloak's backend database utilizing aurora for keycloak's backend database brings several benefits. aurora provides high availability, automatic scaling, and automated backups, ensuring the resilience and durability of keycloak's data. it also offers improved performance with the ability to handle high read and write loads efficiently. deploying keycloak on aws eks deployment options: kubernetes manifests vs. helm charts there are multiple deployment options for deploying keycloak on eks. one option is to use kubernetes manifests, which define the desired state of the keycloak deployment. another option is to utilize helm charts, which provide a more streamlined and automated approach for deploying keycloak. creating keycloak deployment with helm when deploying keycloak using helm chart, you need to define the necessary inputs such as chart name, version, services type, and ingress rules. to ensure data persistence and scalability, it is recommended to use an external database storage solution for keycloak. aws aurora for postgresql is a managed database service that can be used as the backend for keycloak. export pg_password= helm install keycloak bitnami\/keycloak --set postgresql.enabled=false \\ --set externaldatabase.host= \\ --set externaldatabase.user=postgres --set externaldatabase.port=5432 \\ --set externaldatabase.password=$pg_password --set externaldatabase.database=postgres \\ --set service.type=nodeport proper management of keycloak's lifecycle is crucial for maintaining the availability and reliability of the authentication and authorization services. this includes scaling keycloak pods based on the workload, performing rolling updates to apply patches and updates, and monitoring keycloak's performance and health. keycloak\u2019s management console is accessible on the node port. default admin username is user and password is in secret named keycloak. securing access to applications keycloak follows a standardized authentication and authorization flow. understanding this flow is crucial for implementing secure access to applications integrated with keycloak. implementing single sign-on (sso) with keycloak keycloak provides robust support for implementing sso across multiple applications. by configuring keycloak as an identity provider, users can log in once and access multiple applications without having to provide their credentials again. keycloak offers integration with popular social identity providers such as google, facebook, and twitter. this enables users to log in to keycloak using their social media accounts, simplifying the registration and login process. implementing fine-grained access control policies keycloak's fine-grained access control policies allow organizations to define granular authorization rules. by assigning roles and permissions, access to resources within applications can be controlled with precision. conclusion by combining the power of keycloak, aws eks, and aws aurora for postgresql, you can simplify the management of user authentication and authorization in your applications. keycloak's rich feature set, combined with the scalability and reliability of aws services, provide a robust foundation for securing access to your applications. 
with detailed explanations and step-by-step instructions provided in this guide, you will have the knowledge and tools necessary to successfully deploy and configure keycloak on aws eks and integrate it with aws aurora for postgresql. embrace the power of these technologies to enhance the security and user experience of your applications."
},
{
"title":"AWS CloudWatch integration for Humio",
"body":"Humio is a time-series log management solution for logging, on-premises or in the Cloud. You integrate many cloud log services to Humio. Today, I will give information about humio\u2019s AWS CloudWatch integration. Humio has a few Lambda functions for CloudWatch integration. Humio offers these functions quickly as a CloudFormation template. You can find them on their GitHub page. If you want ...",
"post_url":"https://www.kloia.com/blog/aws-cloudwatch-integration-for-humio",
"author":"Halil Bozan",
"publish_date":"12-<span>Jun<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/halil-bozan",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/Humio%20AWS.jpg",
"topics":{ "aws":"AWS","devops":"DevOps","cloudwatch":"CloudWatch","cloudtrail":"CloudTrail","cloudformation":"Cloudformation","logging":"logging" },
"search":"13 <span>jun</span>, 2023aws cloudwatch integration for humio aws,devops,cloudwatch,cloudtrail,cloudformation,logging halil bozan humio is a time-series log management solution for logging, on-premises or in the cloud. you integrate many cloud log services to humio. today, i will give information about humio\u2019s aws cloudwatch integration. humio has a few lambda functions for cloudwatch integration. humio offers these functions quickly as a cloudformation template. you can find them on their github page. if you want to use a terraform script for this integration, you find it on kloia\u2019s github. humio\u2019s cloudwatch integration creates three lambda functions; cloudwatchingester: this lambda function sends logs to humio. autosubscriber: this lambda function auto-subscribes to cloudwatchingester when a new log group is created. cloudwatchbackfiller:this lambda runs if you set humiosubscriptionbackfiller to true. this lambda function provides to check existing log groups for the subscription to cloudwatchingester before humio integration. you can use the humio cloudwatch cloudformation template to integrate aws cloudwatch log groups. now, example time... in this example, i will show how to send a cloudtrail event to humio. first, you create a cloudwatch logs group or use an existing logs group for cloudtrail events. then you create a role for cloudtrail that enables it to send events to the cloudwatch logs group. aws cloudtrail update-trail\u200A\u2014\u200Aname trail_name\u200A\u2014\u200Acloud-watch-logs-log-group-arn log_group_arn\u200A\u2014\u200Acloud-watch-logs-role-arn role_arn after that, you can send logs successfully to the log group by updating your trail like an above. finally, we subscribe to the lambda function(cloudwatchingester) to log groups and we can now see cloudtail events in humio. sending events to humio is very easy you can easily send your events to humio, as my example shows. humio makes it easy to manage all your events. you can create alerts and send notifications to your communication channels, such as slack or email. you can search easily by creating queries and create customized dashboards for your events, which keep you on top of what is happening on your application.."
},
{
"title":"AWS Resource Access with IAM Role from Kubernetes",
"body":"Today, we will talk about AWS resources access methods from Kubernetes. We will cover two ways to this. These methods are kube2iam and kiam. They provide access to AWS resources without extra AWS\u2019s credentials on your cluster, and they perform with IAM access to nodes of your cluster. Why do we need Kube2iam or Kiam ? Many products are developed on AWS and Kubernetes today. They need to ...",
"post_url":"https://www.kloia.com/blog/aws-resource-access-with-iam-role-from-kubernetes",
"author":"Halil Bozan",
"publish_date":"12-<span>Jun<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/halil-bozan",
"featured_image":"https://cdn2.hubspot.net/hubfs/4602321/Kubernetes-2.jpg",
"topics":{ "aws":"AWS","devops":"DevOps","cloud":"Cloud","kubernetes":"Kubernetes" },
"search":"01 <span>aug</span>, 2024aws resource access with iam role from kubernetes aws,devops,cloud,kubernetes halil bozan today, we will talk about aws resources access methods from kubernetes. we will cover two ways to this. these methods are kube2iam and kiam. they provide access to aws resources without extra aws\u2019s credentials on your cluster, and they perform with iam access to nodes of your cluster. why do we need kube2iam or kiam ? many products are developed on aws and kubernetes today. they need to access aws\u2019s resources from the applications running inside a kubernetes cluster on aws. however, this isn\u2019t the right way to give access directly to credentials. so we shouldn't use credentials directly on the cluster. thanks to kube2iam and kiam, kubernetes applications can gain access to aws resources through the aws iam role. kube2iam kube2iam was the first project that was created for accessing aws resources on kubernetes clusters with the iam role. first, let\u2019s explain how it works. kube2iam runs as a daemonset on your cluster. it runs on each worker with hostnetwork: true (in the host network). kube2iam daemon and iptables rules need to run before all other pods will require access to aws resources. iptables prevent pods from directly accessing the ec2 metadata api and unwanted requests for access to aws resources. so the traffic to the iam service of aws must be proxied for pods. we assume an existing role and create a new role for pods annotations. in this way, our pods access aws resources by assuming the role. apiversion: v1 kind: pod metadata: name: my-app labels: name: my-app annotations: iam.amazonaws.com\/role: role-arn kube2iam supports namespace restrictions. so if we have a specific namespace on our cluster and by using the flag --namespace-restrictions to provide to assume a role for pods, we can enable this mode by an annotation on the pod\u2019s namespace. let\u2019s do an example; we will use kops for creating a k8s cluster and use the python boto3 framework for the aws s3 buckets list with kube2iam. first, we install kube2iam with helm chart; helm install stable\/kube2iam namespace kube-system \\ --name kube2iam \\ --set=extraargs.base-role-arn=arn:aws:iam::role\/ \\ --set extraargs.default-role=kube2iam-default \\ --set host.iptables=true \\ --set host.interface= \\ --set rbac.create=true after that, we create the aws iam policy to allow nodes to assume different roles; { \"version\": \"2012-10-17\", \"statement\": [ { \"effect\": \"allow\", \"action\": [ \"sts:assumerole\" ], \"resource\": [ \"arn:aws:iam::xxxxxxxxxxxx:role\/k8s\/*\" ] } ] } we create the roles that the pods can assume. for example, thanks to the role and amazons3fullaccess policy we created above, we got access to listing s3 bucket. { \"version\": \"2012-10-17\", \"statement\": [ { \"sid\": \"\", \"effect\": \"allow\", \"principal\": { \"service\": \"ec2.amazonaws.com\" }, \"action\": \"sts:assumerole\" }, { \"sid\": \"\", \"effect\": \"allow\", \"principal\": { \"aws\": \"arn:aws:iam::xxxxxxxxxxxx:role\/nodes.k8s.example.com\" }, \"action\": \"sts:assumerole\" } ] } finally, we create a python script with the boto3 framework to list s3 buckets on aws. import boto3 client = boto3.client('s3') response = client.list_buckets() for i in response['buckets']: print([i['name']]) in this script, we take the buckets in the response variable with boto3. then we create a deployment for k8s. 
apiversion: extensions\/v1beta1 kind: deployment metadata: name: kiam-demo spec: template: metadata: labels: app: kiam-demo annotations: iam.amazonaws.com\/role: arn:aws:iam::xxxxxxxxxxxx:role\/k8s\/kube2iam-demo spec: containers: - name: kiam-demo image: halil9\/kube2iam-demo kiam kiam is a project inspired by kube2iam. like kube2iam, it provides access to aws resources with iam roles on our cluster, and it was developed to resolve the shortcomings of kube2iam. it has two components (a server and an agent); these components run as daemonsets on our cluster. the kiam agent doesn\u2019t communicate with iam directly; it communicates with iam through the server component. the server component prefetches credentials from aws, an optimization that reduces response times for sdk clients. as with kube2iam, we create a role for the pods and reference it in the pod annotations. apiversion: v1 kind: pod metadata: name: my-app labels: name: my-app annotations: iam.amazonaws.com\/role: role-arn kiam has a server and agents as described above. the server communicates with iam using the k8s master role on behalf of the apps running behind the agent. kiam deployment is more complicated than kube2iam. we must define iam resources for communication between the kiam components on our cluster and aws; we can provision these iam resources with the terraform modules in the kiam github repository. before we install kiam, we must also define a certificate for communication between the kiam components (server and agent); the helm chart generates a self-signed tls certificate by default. if you want to create and install your own, you can create tls certificates and private keys as described here. if we want to install kiam, we can use the helm chart; helm install stable\/kiam --namespace kube-system --name kiam \\ --set=extraargs.base-role-arn=arn:aws:iam:::role\/ \\ --set extraargs.default-role=kube2iam-default \\ --set host.iptables=true,host.interface= \\ --set rbac.create=true and then we can follow the same steps i outlined above for the s3 bucket listing example. conclusion here is a comparison of the two solutions when we evaluate them in similar situations: installation: kube2iam can be installed easily with the helm chart, but the kiam agent can be tricky to install. security: kiam is better than kube2iam because only the kiam server communicates with aws; the nodes can\u2019t communicate with aws directly. contribution: the kiam github repository receives contributions more frequently than kube2iam's. performance: kiam is better than kube2iam with respect to technical performance. overall, the choice depends on your priorities: if security or up-to-dateness is most important for your case, you should use kiam; if easy installation is most important, you should use kube2iam. resources: https:\/\/github.com\/jtblin\/kube2iam https:\/\/github.com\/uswitch\/kiam"
},
{
"title":"My first year at kloia",
"body":"I started working at kloia about a year ago. You might remember my blog post about my first month. Many things have changed since then. First of all, I learned a new language. I don\u2019t mean a programming language. No, I learned the language of a programmer. Now I know the difference between dockers and docker for instance! :D I moved to Software Test Automation Engineering from Energy Sys...",
"post_url":"https://www.kloia.com/blog/my-first-year-at-kloia",
"author":"Muhammet Topcu",
"publish_date":"15-<span>May<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/My-first-year-at-kloia-blog.png",
"topics":{ "kloia":"kloia","onboarding":"onboarding","working-remote":"working remote","qateam":"qateam","hrmarketing":"HRMarketing" },
"search":"15 <span>may</span>, 2023my first year at kloia kloia,onboarding,working remote,qateam,hrmarketing muhammet topcu i started working at kloia about a year ago. you might remember my blog post about my first month. many things have changed since then. first of all, i learned a new language. i don\u2019t mean a programming language. no, i learned the language of a programmer. now i know the difference between dockers and docker for instance! :d i moved to software test automation engineering from energy system engineering. this move was full of mysteries for someone like me who does not have a programming background. all i saw was a wall of mist on my first days, and i was trying to find my way in it. my mentors were my beacons that shed light on my path. with their help, i filled the gaps between what i think i know and what i actually know. the knowledge that i obtained along helped me dissipate the fog. but the more i learn, the more i realize my ignorance. and oh my, it never ceases to amaze me! now, let\u2019s take a look at what i have been doing throughout the last year. but beware: when you gaze into the code, the code also gazes into you. step one: learn how to use computer i am not kidding. i have been using computers for a long time, but i needed to learn the basics again. why? because i used macos for the first time in my life! if you are 25 years old and you have been using windows for 17 years, life is pretty hard. are you just like me? here is a test of mine creatively called, \u201Care you a windows user?\u201D do you force quit applications relentlessly when trying to type @ sign? do you try to find \u201Ccreate new file\u201D option in the right click menu? do you expect clicking the red x button would close an application? if you said yes to at least two questions, or you don\u2019t know what this is about, then we are probably the same. my first client, a new horizon kloia is a consultancy company. my department, quality assurance, gives consultancy regarding application test procedures such as manual testing, api testing, performance testing, and integration of test automation infrastructure. after the onboarding period, i was eager to be a part of a project. until kloia placed me on a client project, i was working with my mentors one-on-one with small tasks, acting as tutorials and keeping me ready for the project to come. aaand i got assigned to a big test automation project along with five teammates! the istegelsin project was my first project, and i was very eager to learn anything i can. that might be the reason that i got involved in 4 different sub-projects at once including web, mobile, api test automation and data management projects. it was an intense project, but i got a chance to get hands-on experience on test creation, test automation, documentation, agile development, api testing and more.. towards the end of my involvement in the project, i was able to onboard new team members and to contribute new ideas. gain it, no matter what it is if you are working at kloia, you do not gain just knowledge or income, you gain weight as well. almost everyone loves eating at kloia and social meetings usually revolve around a meal or magically ends up with it. we also have a google sheet named \u201Ckiloia\u201D, where the entire company logs in their weight every six months. i still do not know what the goal of this sheet is: gaining weight or losing it? :d one thing is certain though, everyone in kloia is worth their weight in gold. 
that\u2019s how the company makes us feel. be curious. ask everything, but not twice. not everyone has a memory like an elephant. for someone like me who could only compete with a goldfish in terms of memory, it was difficult to memorize every work-related term that i don\u2019t know. at some point, my brain got to the point that it would burst out with all the unknown words. that is when i realized that i needed a note-taking app to remember all these terms. by using versatile note-taking apps such as notion or obsidian, i started creating files for every keyword related to programming, business or science that i have no idea about. in my spare time, i populated each file by searching through the internet. i did this usually after work and it is a tiresome business. pushing myself to learn after work required all the glucose in my brain cells, and it is not sustainable doing this every day and night. yet, i tried it. i exhausted my brain by trying to learn and do everything at once, and as you would guess, i burned out\u2026 it took four days of break to cool myself out. since then, i have developed a more sane personal development routine. now i try to stay below my mental exhaustion limits. if you are coming from a different industry like me, to catch up with your peers, a higher pace is needed. but you shouldn\u2019t rush things as i did, or you burn your circuits as well. :d finding out your limits and proceeding with safe speeds below those limits is key to creating a fruitful and sustainable career. learning by yourself is also a crucial skill. but if figuring out something by yourself takes too much time, you should consider taking advice from others. there is a fine line between self-study and wasting time. here is how i balance my inquisitive efforts: i do not hesitate asking any questions, but i don\u2019t ask the same question twice. a new achievement unlocked! another thing kloia provides is a space for self-improvement. many companies regard their employees as discardable or replaceable parts, and they do not want to allocate any self-development resources. but in kloia, i can assure you that if you need anything, they would do anything to satisfy your needs. kloia encourages its employees to get professional certificates. and these certificates do not need to be related to your field at all, so long as it is related to the software industry! i got my istqb ctfl certificate recently and everything was covered by the company - even tough i failed the first test. now i am preparing for aws developer associate certification, which is encouraged by the company as well. any material you need to be prepared for certification is also provided, should it be online courses or written materials. i needed a book to prepare aws-da certificate, and it was delivered to me in three days! if you need to improve your english, kloia offers support as well. the company guarantees to cover your expenses on english learning platforms such as cambly after you advance to a certain level. what i feel lucky about working at kloia having great mentors: i can ask anything, whether it is related to my department or not. utilizing my skills: i can utilize my diverse skills such as story writing, english, and translation to come up with creative marketing outputs and to contribute to kloia in different ways. ever-growing learning curve: to be able to learn new things in every working hour is just a paradise for me. this is amplified with internal informative sessions. 
internal events: especially food related ones. period. feeling at home: every kloian is someone that i can trust and someone who is ready to help. literally working at home helps as well. ^^ with this, i conclude my first year at kloia. it was a great year with tons of information and important milestones for my career. see you next year\u2026 in a blog post or in our company! :)"
},
{
"title":"Building a Scalable and Secure Network with AWS VPC Lattice",
"body":"The Amazon Virtual Private Cloud (VPC) is a service that allows users to create their own isolated networks in the AWS cloud. The VPC Lattice is a VPC architecture that provides a way to build a scalable and highly available network topologies in AWS. In this blog post, I will explore the AWS VPC Lattice architecture, talk about its benefits, and share examples of how it can be used. Wha...",
"post_url":"https://www.kloia.com/blog/building-a-scalable-and-secure-network-with-aws-vpc-lattice",
"author":"Ahmet Ayd\u0131n",
"publish_date":"18-<span>Apr<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/ahmet-aydın",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/secure-network-aws-vpc-lattice.png",
"topics":{ "aws":"AWS","devops":"DevOps","cloud":"Cloud","security":"security","vpc":"vpc","virtual-private-cloud":"Virtual Private Cloud","aws-vpc-lattice":"AWS VPC Lattice","lattice":"lattice" },
"search":"02 <span>oct</span>, 2023building a scalable and secure network with aws vpc lattice aws,devops,cloud,security,vpc,virtual private cloud,aws vpc lattice,lattice ahmet ayd\u0131n the amazon virtual private cloud (vpc) is a service that allows users to create their own isolated networks in the aws cloud. the vpc lattice is a vpc architecture that provides a way to build a scalable and highly available network topologies in aws. in this blog post, i will explore the aws vpc lattice architecture, talk about its benefits, and share examples of how it can be used. what is aws vpc lattice? the aws vpc lattice is a vpc architecture that allows users to create a scalable and highly available network topology in aws. the aws vpc lattice is based on a hub-and-spoke model that uses transit vpcs to interconnect multiple vpcs together. this architecture allows users to isolate different workloads, create custom routing policies, and enforce security policies across multiple vpcs. the vpc lattice also provides a centralized management plane for managing network traffic, routing, and security policies. benefits of aws vpc lattice the aws vpc lattice provides several benefits, including: scalability: the vpc lattice architecture allows users to scale their network topology as their requirements grow. users can easily add or remove vpcs from the lattice without affecting the overall network performance. isolation: the vpc lattice architecture provides isolation between different vpcs. this separation allows users to isolate different workloads, create custom routing policies, and enforce security policies across multiple vpcs. custom routing policies: the vpc lattice architecture allows users to create custom routing policies that enable them to route traffic between different vpcs based on their requirements. centralized management plane: the vpc lattice architecture provides a centralized management plane for managing network traffic, routing, and security policies. this allows users to easily manage and monitor their network topology. high availability: the vpc lattice architecture is designed to be highly available. it uses transit vpcs to interconnect multiple vpcs, providing redundancy and ensuring that the network remains available even in the event of a failure. example of aws vpc lattice let us consider an example of how the aws vpc lattice can be used in a real-world scenario. imagine a company that has multiple departments, such as sales, marketing, and finance. this company wants to isolate its workloads to enhance security and compliance. the company can use the vpc lattice architecture to create a separate vpc for each department and interconnect them using transit vpcs. each department vpc can have its own security policies and routing policies, allowing them to manage their workloads independently. the transit vpcs can be used to route traffic between the department vpcs based on their requirements. for example, the marketing department vpc may require access to a database hosted in the finance department vpc. the transit vpc can be used to route traffic between the two vpcs, allowing the marketing department to access the database securely. another example of using the vpc lattice is to create a network topology that spans across multiple regions. users can create a transit vpc in each region and use the vpc lattice to interconnect them. 
this provides a highly available and scalable network topology that can span across multiple regions, allowing users to deploy their applications closer to their customers and reduce latency. conclusion the aws vpc lattice architecture provides a scalable, highly available, and secure network topology that allows users to interconnect multiple vpcs together. it provides isolation between different workloads, enabling users to create custom routing policies and enforce security policies across multiple vpcs. the vpc lattice architecture also provides a centralized management plane for managing network traffic, routing, and security policies. it is a powerful architecture that can be used to design and operate complex"
},
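Following up on the VPC Lattice post above, here is a minimal AWS CLI sketch of creating a service network and associating VPCs and a service with it. This is an illustrative addition rather than an excerpt from the post: it assumes the vpc-lattice command namespace available in recent AWS CLI versions, and every name and identifier below is a hypothetical placeholder.

# Create a service network, the Lattice construct that interconnects VPCs and services
aws vpc-lattice create-service-network --name department-network

# Associate two department VPCs with the service network (identifiers are placeholders)
aws vpc-lattice create-service-network-vpc-association \
  --service-network-identifier sn-0123456789abcdef0 \
  --vpc-identifier vpc-0aaaabbbbcccc1111

aws vpc-lattice create-service-network-vpc-association \
  --service-network-identifier sn-0123456789abcdef0 \
  --vpc-identifier vpc-0ddddeeeeffff2222

# Register a service (for example, the finance database API) and attach it to the network
aws vpc-lattice create-service --name finance-db-api
aws vpc-lattice create-service-network-service-association \
  --service-network-identifier sn-0123456789abcdef0 \
  --service-identifier svc-0123456789abcdef0

Once the associations are in place, clients in either associated VPC can reach the registered service through its Lattice-provided DNS name, subject to whatever auth and routing policies are configured on the service network.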
{
"title":"AWS Application Composer: Simplifying Application Development",
"body":"As the world becomes more digital, organizations rely increasingly on software applications to streamline their operations, serve their customers better, and stay ahead of the competition. However, developing and deploying applications can be a complex, time-consuming, and error-prone process, especially when dealing with cloud-native architectures, microservices, and containers. Fortuna...",
"post_url":"https://www.kloia.com/blog/aws-application-composer-simplifying-application-development",
"author":"Ahmet Ayd\u0131n",
"publish_date":"07-<span>Apr<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/ahmet-aydın",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws-app-composer-blog.png",
"topics":{ "aws":"AWS","devops":"DevOps","cloud":"Cloud","software":"Software","awsapplicationcomposet":"awsapplicationcomposet" },
"search":"07 <span>apr</span>, 2023aws application composer: simplifying application development aws,devops,cloud,software,awsapplicationcomposet ahmet ayd\u0131n as the world becomes more digital, organizations rely increasingly on software applications to streamline their operations, serve their customers better, and stay ahead of the competition. however, developing and deploying applications can be a complex, time-consuming, and error-prone process, especially when dealing with cloud-native architectures, microservices, and containers. fortunately, amazon web services (aws) has introduced a powerful tool that simplifies and accelerates application development, called aws application composer. aws application composer is a visual development tool that helps you design, build, and deploy cloud-native applications without writing code manually. it provides a drag-and-drop interface for creating application components, connecting them with each other, and defining their behavior and relationships. aws application composer also generates the necessary code, configuration files, and deployment artifacts automatically, based on your design, and deploys them to aws services such as aws elastic beanstalk, aws lambda, and amazon api gateway. aws application composer is built on top of aws cloudformation, which is a service that lets you define and deploy aws infrastructure as code. this means that you can use aws application composer to not only create applications but also provision the underlying infrastructure they require, such as amazon rds databases, amazon s3 buckets, and amazon ec2 instances. aws application composer uses aws cloudformation templates to describe the resources and dependencies of your application, which can be versioned, tested and managed like any other code artifact. how does aws application composer work? aws application composer consists of three main components: 1. the application composer designer: this is the visual interface where you can create, edit, and manage your application components, such as apis, data sources, business logic, and integrations. the application composer designer provides a drag-and-drop canvas, where you can choose from a library of pre-built components, customize their properties and configurations, and connect them with each other using inputs and outputs. the application composer designer also provides a preview mode, where you can simulate your application's behavior and test its functionality before deploying it. 2. the application composer runtime: this is the execution environment where your application runs, processes requests, and interacts with external services and data sources. the application composer runtime is a serverless architecture, which means that it automatically scales up or down based on the demand and charges you only for the resources you consume. the application composer runtime uses aws lambda functions to execute your application's logic, aws api gateway to expose its endpoints, and aws dynamodb to store its data. 3. the application composer cli: this is the command-line interface where you can interact with aws application composer from your local machine or from a ci\/cd pipeline. the application composer cli provides a set of commands for creating, updating, and deleting your applications, as well as for managing their dependencies, versions, and deployments. 
the application composer cli also integrates with your favorite development tools, such as visual studio code and git, and supports yaml and json formats for your configuration files. benefits of aws application composer aws application composer offers a number of benefits for developers and organizations, including: faster time to market with aws application composer, you can create and deploy cloud-native applications in a fraction of the time it would take to write code manually. this means you can deliver new features and functionality to your customers more quickly and stay ahead of the competition. increased agility aws application composer allows you to iterate and experiment with your application design and architecture more easily and quickly than traditional development methods. this means you can respond to changes in customer demand, market conditions, and technology trends more effectively and stay ahead of the curve. reduced costs aws application composer eliminates the need for manual coding, which can be expensive and error-prone. it also automates many of the tasks involved in deploying and managing applications, such as provisioning infrastructure, monitoring performance, and scaling resources. this means you can save time and money on development and operations and focus on delivering value to your customers. improved scalability and resilience aws application composer is built on top of aws cloudformation and aws lambda, which are both highly scalable and resilient services. this means your applications can scale up or down automatically based on demand, and can recover quickly from failures or errors without disrupting your business. examples of aws application composer in action now that we have seen what aws application composer is and how it works, let's look at some examples of how you can use it in practice. example 1: building a serverless api suppose you want to create a restful api that exposes a set of endpoints for managing an online store's products, orders, and customers. you can do this easily with aws application composer by following these steps: open the aws application composer designer and create a new application. drag a \"lambda function\" component from the library and name it \"product crud\". customize the \"product crud\" component by specifying its runtime environment, code, and dependencies. drag an \"api gateway\" component from the library and name it \"store api\". connect the \"product crud\" component to the \"store api\" component using the \"lambda integration\" input. define the endpoints of the \"store api\" component by specifying their methods, paths, and responses. save and preview your application in the aws application composer designer. deploy your application to aws elastic beanstalk or amazon api gateway. now you have a serverless api that can handle http requests and responses, process them using aws lambda, and store the data in aws dynamodb. example 2: building a cloud-native web application suppose you want to create a web application that allows users to upload, view, and share images and videos. you can do this easily with aws application composer by following these steps: open the aws application composer designer and create a new application. drag an \"s3 bucket\" component from the library and name it \"media storage\". drag a \"lambda function\" component from the library and name it \"media processing\". customize the \"media processing\" component by specifying its runtime environment, code, and dependencies. 
drag a \"dynamodb table\" component from the library and name it \"media metadata\". connect the \"media processing\" component to the \"media storage\" and \"media metadata\" components using the appropriate inputs and outputs. drag an \"api gateway\" component from the library and name it \"media api\". connect the \"media api\" component to the \"media processing\" component using the \"lambda integration\" input. create a web application frontend using your favorite web development framework, such as react or angular. use the aws sdk or api gateway sdk to interact with your application's backend, such as uploading and downloading media files or querying metadata. save and preview your application in the aws application composer designer. deploy your application to aws elastic beanstalk or amazon s3. now you have a cloud-native web application that can store, process, and serve media files, scale automatically with demand, and integrate with other aws services. conclusion in this blog post, i walked you through aws application composer, a powerful tool that simplifies and accelerates application development on aws. i covered how aws application composer works, its main components, and how you can use it to build complex, scalable, and resilient applications quickly and efficiently. iprovided some examples of aws application composer in action, such as building a serverless api, a cloud-native web application, and a data pipeline. aws application composer is a valuable addition to the aws ecosystem, and it can help developers, architects, and organizations save time, reduce costs, and increase agility. aws application composer offers a range of benefits that can help organizations streamline their operations, reduce costs, and stay ahead of the competition. by providing a visual development interface, automated code generation, and integration with other aws services, aws application composer enables developers to focus on delivering value to their customers and business, rather than worrying about infrastructure and operations management. if you are interested in learning more about aws application composer, aws offers a range of resources, documentation, and training materials to help you get started. whether you are a seasoned developer or new to cloud-native application development, aws application composer can help you accelerate your time to market, increase your agility, and reduce your costs."
},
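As a practical follow-up to the Application Composer post above: the canvas design is typically exported as an AWS SAM/CloudFormation template, which can then be deployed from a terminal or a CI/CD pipeline. The sketch below is an assumed workflow, not part of the original post; it presumes the exported template is saved as template.yaml in the current directory and that the AWS SAM CLI is installed.

# Validate the template exported from the Application Composer canvas
sam validate

# Build the Lambda functions and other artifacts referenced by the template
sam build

# Deploy the stack; --guided prompts for stack name, region, and IAM capabilities
sam deploy --guided

After the first guided deployment, the chosen settings are stored in samconfig.toml, so subsequent deployments can run with a plain sam deploy.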
{
"title":"Application Security: Against DDoS attacks with AWS Shield",
"body":"Public websites are open to DDoS (Distributed Denial-of-Service) attacks which are usually generated by Botnets with distributed traffic targeting a particular website or service. Conventional security firewalls may not be able to make a successful defense because of the following reasons: Source IP: The bots which generate the traffic do not have the same IPs and are usually not even in...",
"post_url":"https://www.kloia.com/blog/application-security-against-ddos-attacks-with-aws-shield",
"author":"Halil Bozan",
"publish_date":"05-<span>Apr<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/halil-bozan",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/kpi-in-blog%20%285%29.png",
"topics":{ "aws":"AWS","security":"security","awsshield":"awsshield","ddos":"ddos" },
"search":"05 <span>apr</span>, 2023application security: against ddos attacks with aws shield aws,security,awsshield,ddos halil bozan public websites are open to ddos (distributed denial-of-service) attacks which are usually generated by botnets with distributed traffic targeting a particular website or service. conventional security firewalls may not be able to make a successful defense because of the following reasons: source ip: the bots which generate the traffic do not have the same ips and are usually not even in the same ip block. firewall source-ip-based traffic block rules would not be enough for such a distributed attack. dynamic patterns: smart ddos attacks do not usually have the static request header and data, which makes rule-based firewalls vulnerable to such types of attacks. unicast traffic: internet traffic is usually unicast which may possess the defender firewall under heavy traffic. inbound network capacity may be saturated during a ddos with unicast traffic. in this blog post, i will explain how to protect your public service with aws shield. what is aws shield? aws shield is a managed threat protection management service that protects application traffic coming outside of the aws network. aws shield protects against ddos attacks for aws resources at the network and transport layers (layers 3 and 4) and the application layer (layer 7). aws shield has two tiers standard and advanced. aws shield standard aws shield standard is a free service offered by amazon web services (aws) that protects applications running on aws against distributed denial of service (ddos) attacks. aws shield standard provides always-on detection and automatic inline mitigations that protect against common ddos attack methods, such as syn\/ack floods, udp floods, and reflective attacks. the service is automatically enabled for all aws customers, and it is designed to be simple to use, with no additional setup or configuration required. if an attack is detected, aws shield standard automatically mitigates it by filtering out the malicious traffic and allowing only legitimate traffic to reach the application. while aws shield standard provides basic protection against ddos attacks, customers who require more advanced protection, such as protection against larger and more sophisticated attacks, can upgrade to aws shield advanced, which provides additional features and support for custom rules and mitigations. aws shield advanced aws shield advanced is a premium ddos protection service offered by amazon web services (aws) that provides enhanced protection against ddos attacks for aws customers. in addition to the features provided by aws shield standard, aws shield advanced includes access to 24\/7 support from aws ddos response team, as well as advanced mitigation techniques and custom mitigation controls that can be tailored to specific applications and workloads. aws shield advanced provides greater visibility into attacks and their mitigations, with access to real-time metrics and automated attack reports. customers also have the ability to integrate aws shield advanced with other aws services, such as amazon cloudfront, aws global accelerator, and elastic load balancing, to provide a comprehensive ddos protection solution for their applications. aws shield advanced is available as a paid service, and pricing is based on a monthly subscription fee, as well as additional fees for data transfer and mitigation. 
the cost varies based on the level of protection required and the size and complexity of the customer's infrastructure. (as of april 2023) service aws shield standard aws shield advanced subscription no 1 year monthly fee no fee $3000 data transfer fee no fee as in the table below note: if your organization has multiple accounts, you only need to pay the monthly fee once for all accounts. additional costs will apply for any data transfer that originates from shield-protected services. you can view the additional fees for data transfer of each service below table. shield advanced data transfer out usage fees (per gb, as of april 2023) service up to 100 tb next 400 tb next 500 tb next 4 pb above 5 pb cloudfront $0.025 $0.02 $0.015 $0.01 aws support elb $0.05 $0.04 $0.03 aws support aws support elastic ip $0.05 $0.04 $0.03 aws support aws support global accelerator $0.025 $0.02 $0.015 $0.01 aws support route 53 no fee no fee no fee no fee no fee if the blue box in the picture below is examined, it represents the protection offered by aws shield standard. by subscribing to aws shield advanced and adding resources under its protection, you can start to baseline your traffic and gain an understanding of the throughput capacity of these protected resources. based on this information, you can adjust the limits accordingly and provide mitigations against attacks targeting those resources much faster. aws shield advanced has these additional protection features: cloudwatch event notification and ddos threat dashboards you can use cloudwatch event notification to create rules that detect ddos attacks and trigger automated responses. for example, you can create a rule that detects an increase in traffic to your ec2 instances or an increase in requests to your api gateway endpoints. when the rule is triggered, you can use cloudwatch to send a notification to your security team, trigger an aws lambda function that mitigates the attack, or initiate an automated response using aws waf (web application firewall). aws shield also provides a ddos threat dashboard that you can use to monitor your aws resources and detect potential ddos attacks. the dashboard provides a real-time view of your aws environment and displays metrics such as the number of requests, the size of requests, and the source of the traffic. you can use this information to identify patterns and anomalies in your traffic and take action to mitigate any potential ddos attacks. both aws shield standard and advanced provide ddos threat dashboards and also can be integrated with cloudwatch event notification, but the level of detail and insights provided in the dashboard is greater for aws shield advanced customers. shield response team the aws shield response team is a specialized team within aws that is responsible for responding to ddos attacks against aws customers. the aws shield response team is staffed by security experts who have extensive experience in ddos mitigation and network security. when an aws customer experiences a ddos attack, they can contact the aws shield response team for assistance. the team works with the customer to identify and mitigate the attack, using a combination of automated and manual techniques. in addition to responding to ddos attacks, the aws shield response team also provides proactive support to customers, helping them to configure their aws environments for maximum security and resilience against ddos attacks. 
it's worth noting that the aws shield response team is only available to aws customers who have subscribed to the aws shield service. if you are an aws customer and you need assistance with ddos protection, you can contact the aws shield response team through the aws support center. l7 anomaly detection via waf aws shield adaptive protection is a security feature that helps protect customers against distributed denial of service (ddos) attacks. this service is designed to provide automatic and scalable protection against ddos attacks and can be used by any customer who is using amazon elastic compute cloud (ec2), elastic load balancing (elb), aws global accelerator, or amazon cloudfront. aws shield adaptive protection: automatic ddos mitigation: aws shield adaptive protection automatically detects and mitigates ddos attacks, without the need for customer intervention. proactive protection: the service provides proactive protection against ddos attacks by monitoring traffic patterns and looking for anomalies that could indicate an attack. scalability: aws shield adaptive protection is designed to scale automatically to handle high-volume ddos attacks. integration: the service is integrated with other aws services like amazon cloudfront, amazon route 53, aws global accelerator, and aws elastic load balancing. real-time monitoring and reporting: aws shield adaptive protection provides real-time monitoring and reporting on ddos attack activity and mitigation, allowing customers to stay informed about the status of their infrastructure. advanced protection: aws shield advanced provides additional features such as 24\/7 access to aws ddos response team, the ability to customize rules, and enhanced protection for elastic ip addresses. aws shield adaptive protection is a powerful security feature that provides automatic and scalable protection against ddos attacks, allowing customers to focus on running their applications without worrying about ddos attacks. l7 anomaly detection via waf aws shield l7 anomaly detection via waf (web application firewall) is designed to protect web applications from layer 7 ddos attacks, which are attacks that target the application layer of the osi model. these attacks can be difficult to detect and mitigate because they can mimic legitimate traffic, making it challenging to differentiate between malicious and non-malicious traffic. the waf component of aws shield l7 anomaly detection provides a set of rules that can be used to identify and block suspicious traffic, such as traffic from known malicious ip addresses, traffic that contains sql injection or cross-site scripting (xss) attacks, and traffic that contains unusual url patterns. the waf rules can also be customized to meet application-specific requirements. when a layer 7 ddos attack is detected, aws shield l7 anomaly detection via waf can automatically create and apply mitigation rules to block malicious traffic. the system can also send notifications to the aws console and to the customer via amazon sns (simple notification service). aws shield l7 anomaly detection via waf can be used with any web application running on aws, including those hosted on amazon ec2 instances, amazon elastic load balancing (elb), and amazon cloudfront. it can be enabled and configured through the aws management console, aws cli, or aws sdks. there are no upfront costs to use aws shield l7 anomaly detection via waf, and customers are only charged based on the volume of traffic protected. 
health based detection health-based detection uses a combination of machine learning algorithms and heuristics to monitor the health of an application and identify abnormal traffic patterns that may indicate a ddos attack. the system analyzes a wide range of metrics, such as network traffic, application performance, and server resource utilization, to determine the normal behavior of an application under normal operating conditions. once the normal behavior of an application has been established, health-based detection can monitor the application for any deviations from the expected behavior. if abnormal traffic patterns are detected, such as a sudden increase in traffic or a spike in server resource utilization, health-based detection will automatically generate an alert in the aws management console and send a notification to the email addresses specified by the customer. the alert will include information about the type of attack, the affected aws resources, and the recommended next steps for mitigating the attack. customers can also configure health-based detection to automatically initiate mitigations, such as blocking traffic from specific ip addresses or redirecting traffic to other aws resources. health-based detection is a feature that is available with both aws shield standard and aws shield advanced, and it is automatically enabled for all aws customers. there are no additional fees for using health-based detection, and customers only pay for the traffic that is protected by aws shield. proactive event response proactive event response is a feature that is available with aws shield advanced, which is a paid tier of aws shield that provides more advanced ddos protection features than the basic aws shield offering. with proactive event response, aws shield advanced can detect potential ddos attacks in real-time and automatically notify customers of the attack. cost protection when aws shield advanced protection is enabled for your aws resources, aws waf can be associated with your resources at no additional cost, except for cases where additional costs may apply, such as adding partner rules or using the bot control manage rule group. the baseline rules in the firewall manager can also be configured without incurring any additional costs. aws shield advanced uses automatic scaling tomitigate the effects of the attack during an attack. when an attack is detected, aws shield advanced automatically scales up the resources that are under attack, which can help absorb the traffic and reduce the impact on your application. aws shield advanced can also automatically notify aws drt(ddos response team), which can work with you to mitigate the attack and provide guidance on how to prevent similar attacks in the future. also, can be applied for reimbursement for those extra scaled resources during the mitigation of the ddos attacks. aws shield protection conclusion a high ddos resiliency can be provided for your applications with aws shield. below are some effective reasons to consider implementing aws shield as part of your overall security strategy; protection against ddos attacks: ddos attacks are a growing threat to businesses of all sizes, and they can cause significant damage to your brand, reputation, and revenue. aws shield provides comprehensive protection against ddos attacks, helping to keep your online applications and services up and running. 
automated protection: aws shield provides automated protection against ddos attacks, which means that you don't have to spend time monitoring and responding to attacks. instead, aws shield takes care of this for you, freeing up your it resources to focus on other critical tasks. minimal latency impact: aws shield is designed to minimize latency impact, so your online applications and services continue to run smoothly even during an attack. this means that your customers can continue to access your services, which helps to maintain customer satisfaction and loyalty. integration with other aws services: aws shield integrates seamlessly with other aws services, such as amazon cloudfront, amazon route 53, and elastic load balancing, to provide a comprehensive security solution for your business. this makes it easy to implement and manage a security strategy that works for your unique needs. access to 24\/7 support: aws shield advanced provides access to 24\/7 support from aws security experts, who can help you to optimize your security strategy and respond to any security incidents that may occur. this provides an additional layer of protection and peace of mind for your business. implementing aws shield as part of your overall security strategy protects your business against ddos attacks, minimizes downtime and latency impact, and provides a comprehensive security solution that integrates seamlessly with other aws services."
},
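To complement the AWS Shield post above, here is a brief AWS CLI sketch of enabling Shield Advanced and protecting a resource; it is an illustrative addition rather than content from the post. The protection name and resource ARN are placeholders, and note that running create-subscription starts the paid one-year Shield Advanced subscription described in the pricing table.

# Subscribe the account to AWS Shield Advanced (begins the paid 1-year subscription)
aws shield create-subscription

# Protect a specific resource, for example a CloudFront distribution (ARN is a placeholder)
aws shield create-protection \
  --name cloudfront-protection \
  --resource-arn arn:aws:cloudfront::123456789012:distribution/EXAMPLEID

# Review the subscription and the resources currently under protection
aws shield describe-subscription
aws shield list-protections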
{
"title":"AWS App Mesh on EKS: Simplify Microservices Communication",
"body":"As organizations continue to adopt microservices architecture, they face the challenge of managing the communication between these services. AWS App Mesh is a service mesh that makes it easy to monitor and control microservices communication. In this blog post, I will explore how to set up AWS App Mesh on Amazon Elastic Kubernetes Service (EKS) and how it simplifies the communication bet...",
"post_url":"https://www.kloia.com/blog/aws-app-mesh-on-eks-simplify-microservices-communication",
"author":"Ahmet Ayd\u0131n",
"publish_date":"24-<span>Mar<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/ahmet-aydın",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws-app-mesh-on-eks-blog%20%282%29.png",
"topics":{ "aws":"AWS","microservices":"microservices","eks":"EKS","appmesh":"appmesh" },
"search":"27 <span>mar</span>, 2023aws app mesh on eks: simplify microservices communication aws,microservices,eks,appmesh ahmet ayd\u0131n as organizations continue to adopt microservices architecture, they face the challenge of managing the communication between these services. aws app mesh is a service mesh that makes it easy to monitor and control microservices communication. in this blog post, i will explore how to set up aws app mesh on amazon elastic kubernetes service (eks) and how it simplifies the communication between microservices. what is aws app mesh? aws app mesh is a service mesh that provides a way to control and monitor the communication between microservices. it is designed to work with any containerized application running on aws, making it easy to integrate into any existing microservices architecture. with app mesh, you can define and manage the traffic between your microservices, and visualize the communication between them. setting up aws app mesh on eks to set up app mesh on eks, you need to follow these steps: 1. create an amazon elastic kubernetes service (eks) cluster: if you do not have an eks cluster, you can create one using the aws management console, the aws cli, eksctl, or infrastructure-as-code (iac) tools like terraform. in this scenario, i will use eksctl. to install eksctl, follow these steps: curl --silent --location \"(uname -s)_amd64.tar.gz\" | tar xz -c \/tmp sudo mv \/tmp\/eksctl \/usr\/local\/bin after installation, you can create an eks cluster like with the following command. the default kubeconfig location is the kubeconfig environment path or ~\/.kube\/config. eksctl create cluster -n appmesh-poc 2. to begin, install the app mesh controller. the app mesh controller, which deploys and manages app mesh resources, is a kubernetes controller. to install it, use the appmesh-controller helm chart. # add aws's eks chart helm repo helm repo add eks # create app mesh crds kubectl apply -k \"github.com\/aws\/eks-charts\/stable\/appmesh-controller\/\/crds?ref=master\" # create namespace kubectl create ns appmesh-system # set necessary environments export cluster_name= export aws_region= export aws_account_id= # create iam open id connect provider for cluster eksctl utils associate-iam-oidc-provider --region=$aws_region \\\\ --cluster=$cluster_name \\\\ --approve # get app mesh controller iam policy from github repo curl -o controller-iam-policy.json # create iam policy aws iam create-policy \\\\ --policy-name awsappmeshk8scontrolleriampolicy \\\\ --policy-document file:\/\/controller-iam-policy.json # create service account, attach policy and create cloud formation stack for app mesh eksctl create iamserviceaccount --cluster $cluster_name \\\\ --namespace appmesh-system \\\\ --name appmesh-controller \\\\ --attach-policy-arn arn:aws:iam::$aws_account_id:policy\/awsappmeshk8scontrolleriampolicy \\\\ --override-existing-serviceaccounts \\\\ --approve # install app mesh with helm helm upgrade -i appmesh-controller eks\/appmesh-controller \\\\ --namespace appmesh-system \\\\ --set region=$aws_region \\\\ --set serviceaccount.create=false \\\\ --set serviceaccount.name=appmesh-controller 3. creating a mesh involves defining a logical boundary for your microservices. this boundary determines the traffic routing rules, policies, and observability of your services. to create a mesh, use the samples available on the aws app mesh inject github repository. 
# for deployment have to install awscli, jq, and, kubectl packages # set necessary environments export aws_account_id= export aws_default_region= export vpc_id= # get aws's app mesh examples repo git clone # deploy example http2 application cd aws-app-mesh-examples\/walkthroughs\/howto-k8s-http2\/ .\/deploy.sh 4. the aws app mesh dashboard allows you to view the available resources. virtual gateways enable resources outside of your mesh to communicate with the resources inside your mesh. virtual services are an abstraction of a real service provided by a virtual node, either directly or indirectly through a virtual router. virtual routers handle traffic for one or more virtual services within your mesh. virtual nodes serve as a logical pointer to a spesific task group, such as a kubernetes deployment. benefits of using aws app mesh on eks by using app mesh on eks, you can simplify the communication between microservices in the following ways: service discovery app mesh provides service discovery, which makes it easy for microservices to discover and communicate with each other. you can define virtual services and virtual nodes to represent your microservices, and use them to route traffic between your services. this makes it easy to add or remove microservices without disrupting the communication between them. traffic management app mesh provides traffic management, which makes it easy to control the flow of traffic between microservices. you can define routes to specify how traffic flows between virtual services and virtual nodes. this makes it easy to implement a\/b testing, canary releases, and blue\/green deployments. observability app mesh provides observability, which makes it easy to monitor the communication between microservices. you can use cloudwatch logs and metrics to monitor the traffic between your microservices, and use x-ray to trace the path of requests as they travel through your microservices. conclusion aws app mesh on eks simplifies the communication between microservices. by using app mesh, you can define and manage the traffic between your microservices, and visualize the communication between them. this makes it easy to add or remove microservices, implement a\/b testing or canary releases, and monitor the communication between your microservices. if you're building microservices on eks, consider using app mesh to simplify your communication."
},
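Building on the App Mesh walkthrough above, the following verification sketch can help confirm that the controller and the example mesh resources actually landed in the cluster. It is an added illustration rather than part of the original article, and it assumes the appmesh-controller Helm release was installed into the appmesh-system namespace as shown in the post.

# Confirm the App Mesh controller deployment is running
kubectl -n appmesh-system get deployment appmesh-controller

# List the App Mesh custom resources created by the example application
kubectl get meshes
kubectl get virtualnodes,virtualservices,virtualrouters,virtualgateways --all-namespaces

# Cross-check that the mesh also exists on the AWS side
aws appmesh list-meshes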
{
"title":"Beyond bug counts: Using KPIs for product quality and team morale",
"body":"Companies strive to improve themselves to deliver high-quality products. To do this, they set goals for the team, periodically check against these goals, and evaluate the next steps. One of these tools, Key Performance Indicators (KPIs) are used to analyze many aspects of successful product practices, including software QA. Using KPIs helps track progress, measure results, and drive impr...",
"post_url":"https://www.kloia.com/blog/beyond-bug-counts-using-kpis-for-product-quality-and-team-morale",
"author":"Acelya Gul",
"publish_date":"24-<span>Mar<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/acelya-gul",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/kpi-in-blog%20%282%29.png",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","qa":"QA","qateam":"qateam","kpi":"KPI","key-performance-indicator":"Key Performance Indicator" },
"search":"30 <span>may</span>, 2023beyond bug counts: using kpis for product quality and team morale test automation,software testing,qa,qateam,kpi,key performance indicator acelya gul companies strive to improve themselves to deliver high-quality products. to do this, they set goals for the team, periodically check against these goals, and evaluate the next steps. one of these tools, key performance indicators (kpis) are used to analyze many aspects of successful product practices, including software qa. using kpis helps track progress, measure results, and drive improvement to ensure consistently outstanding test results. in this blog post, we'll look at kpis that test teams can use to improve performance\/productivity and monitor their success. what is a key performance indicator (kpi)? kpi stands for key performance indicator. a kpi is a metric that helps businesses measure their progress against predetermined goals and objectives over time. keeping track of these indicators helps companies set goals and measure progress towards them by watching how they perform relative to their competitors and close to prior performance. they help organizations answer critical questions such as \"where do we stand right now?\" and \"what have we achieved?\". they also inform strategic investments by helping identify areas where improvements can be made. in other words, they help organizations understand how they're doing compared to where they want to be. a well-defined list of kpis will help inform businesses on how they need to develop strategies to capitalize on areas of improvement while also avoiding making costly mistakes. let's walk through a simple kpi example. say you own an e-commerce business that sells women's clothing. your main goal is to increase sales and revenue. you can use a kpi like \"monthly sales revenue\" for this. if you see that your sales revenue drops for several months in a row, you can investigate the reasons and adjust your marketing strategy. in addition, if you notice an increase in revenue after launching a new product, you can release some versions of this product and expand the product line further. in this way, it may be possible to obtain a longer-term income. in summary, by following some kpis, you can understand how your company is performing and make data-based decisions to be more successful. what are the kpis of software testing and qa? quality assurance (qa) teams are critical partners in the software development process. evaluating the performance and competencies of qa teams directly affects both testing processes and software development processes. moreover, qa-specific kpis provide a quantitative way to measure the impact of a qa team\u2019s work and enable them to effectively manage the development process. managers use kpis to measure the effectiveness of testing processes and to provide visibility into the qa teams' impact on product quality. by monitoring key performance indicators, test managers can identify potential issues before they become problems. this allows them to address issues quickly and prevent costly mistakes. additionally, kpis assist with decision-making around resource allocation and forecasting future project requirements. companies who value testing follow many qa-specific kpis to increase the success of their quality teams. they include measurements such as speed rates, success and failure rates, and efficiency rates. 
these indicators allow organizations to track the progress, to make necessary improvements to improve its processes and to maximize the roi (return on investment) from its software qa initiatives. the benefits of using kpis for qa teams and managers: there are many advantages to using kpis for qa: - better measurement: having clear metrics allows measuring the team's performance in a precise, objective way. for example, comparing the number of bug reports reported in certain periods and the number of bugs fixed in the same periods can provide information about team performance. - continuous improvement: by constantly monitoring kpis, teams can identify weak areas and improve themselves to ensure the best possible results. for example, if there is an increase in \u201Cdefect leakage rate\u201D, this increase may imply that there are some errors in the tests performed in the test environment. monitoring kpis makes such deviations visible, so that the team can find improvements. - focus: kpis allow you to focus on areas that can impact the business the most. if you are working in an e-commerce site, product search, product listing and cart modules are the most used features. tracking the number of bugs found in these features helps prioritize focusing on the areas that matter most to customers. - efficiency: with the targeted kpis, more efficient performance of the team can be achieved. kpis should already be created to improve performance. if the determined kpis are followed and then improvements are made according to these kpi results, high efficiency can be obtained from the team. - better decision-making: by tracking kpis, managers can make data-driven decisions on key issues such as process improvements and resource allocation. for example, if a feature is of high importance and has a high bug density, the team needs to spend more resources to test this feature. how are kpis set and tracked for your qa testing team? maximizing the performance of your qa test team starts with setting and monitoring the right kpis. but how exactly do you set and monitor kpis for your qa test team?here are some basic kpis you might consider at the beginning: test cycle time: time spent designing, executing, and reporting on a test. with this metric, it is possible to measure process efficiency and effectiveness. test case design time: time is taken to design the scenarios of the created tests. test case pass rate: ratio of the number of tests passed without defects over the total number of tests. it is a fundamental success criterion and if low, needs improvement. test coverage: a critical metric for software quality assurance that quantitatively measures the effectiveness and thoroughness of software testing by evaluating the percentage of code or functionalities covered by tests. it helps identify areas of the application that require additional testing and provides a measure of the quality of testing. once you are confident with the basics, you can monitor more complex kpis to analyze your team's performance in more detail. the following kpis can provide valuable results for your testing processes: defect leakage rate: detects efficiency in the testing process and fix bugs before they reach the end user. the data is obtained by proportioning the bugs caught in the test environment and the bugs detected in the production environment. 
defect discovery rate: ratio of the bugs detected during the testing process over the bugs detected during the entire product process to determine the efficiency of the testing process. defect severity distribution: distribution of bugs by severity. it shows the effect of the test team on the software. reduction in defect density: ratio of the detected bugs in a period of time compared to the previous period. test case efficiency: number of test cases a team can run in a period of time. this kpi provides insight into the speed of the test team. test case effectiveness: ratio of detected bugs to the number of test cases run in a given time period. it is one of the most critical kpis of testing processes. test efficiency: the time taken to test a product is calculated by proportioning the time spent developing it. with this measurement, it is possible to analyze the optimization of the test team and how effectively the resources are used. after new kpis are determined, they need to be monitored regularly and the data obtained should be analyzed. after the analysis, necessary actions should be taken to improve the processes where necessary. common challenges in implementing kpis in software testing while implementing and maintaining kpis brings success, there are things to watch out for as you roll out your qa-specific kpis.. one of the biggest challenges is setting actionable kpis for your qa team. it is only possible to access accurate data with meaningful kpis. incorrect data, on the other hand, allows you to make completely erroneous inferences, and an unsuccessful scenario will occur. collecting and monitoring the data specified by kpis can be a long and arduous process. as the test processes progress, it should be ensured that the data remain meaningful with kpis. that's why the team needs to participate and create data for the kpi. this may cause a different problem. if the team understands the importance of kpis, it will be easier to implement them successfully. working with too many kpis instead of creating enough ones makes it challenging to prioritize and focus on the most effective ones. monitoring kpis requires extra time, resources, special effort and cost. qa teams are often hard-working teams and have specific responsibilities. therefore, kpi tracking often seems like an additional burden. however, this burden ceases to be a problem when the processes are considered on a long-term basis. because with kpis monitored, improvements will be made in the qa teams and the quality of the developed product or service will increase. in addition, workflows can be optimized and costs reduced. therefore, working with kpis is not a sunk cost burden - it is an investment. conclusion: improve quality and team morale with intentional measurements kpis are essential tools for maintaining the performance of qa test teams. utilizing kpis can lead to more efficient and effective testing, resulting in higher quality products and greater customer satisfaction. this way, the companies can continue their product development processes without incurring many material losses. a team that uses kpis knows the impact of their work, and can identify where they need to improve. well-selected kpis provide insight into how a team is performing. with this data, managers can identify the strengths and weaknesses of the team and develop team development plans. by the way, kpis are one of the most basic solutions for a company to understand where it stands, and there is more. 
with its expertise in test automation and quality assurance, kloia helps companies close their gaps and optimize their processes. we create more efficient and effective testing environments by making data-based decisions. you can contact us if you want to have a quick chat."
},
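Because several of the KPIs in the post above are simple ratios, a small worked example may help make the arithmetic concrete. The Ruby sketch below uses made-up numbers purely for illustration and follows one common formulation of the definitions given in the post; it is not data from any real project.

# Hypothetical counts for one release cycle
bugs_found_in_test = 45      # defects caught in the test environment
bugs_found_in_prod = 5       # defects that leaked to production
test_cases_executed = 300    # test cases run in the same period

# Defect leakage rate: share of all detected defects that escaped to production
defect_leakage_rate = bugs_found_in_prod.to_f / (bugs_found_in_test + bugs_found_in_prod)

# Test case effectiveness: defects detected per executed test case
test_case_effectiveness = bugs_found_in_test.to_f / test_cases_executed

puts format("Defect leakage rate: %.1f%%", defect_leakage_rate * 100)         # => 10.0%
puts format("Test case effectiveness: %.1f%%", test_case_effectiveness * 100) # => 15.0%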
{
"title":"Slice 1: Gherkin Keywords and Cucumber Expression",
"body":"Cucumber is a framework that uses Gherkin Syntax to drive Behavior Driven Development in your test infrastructure. (Need a quick primer on BDD? Check out this blogpost) In this blog post, I am going to take a glance Gherkin keywords and demonstrate the power of Cucumber Expression \u2014 the alternative to regex step definitions. Gherkin Keywords Gherkin keywords have specific functionalities...",
"post_url":"https://www.kloia.com/blog/mastering-cucumber-framework-slice-1-gherkin-keywords-and-cucumber-expression",
"author":"Muhammet Topcu",
"publish_date":"12-<span>Jan<\/span>-2023",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/mastering-cucumber-framework-5-slices1.png",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","cucumber":"Cucumber","gherkin":"Gherkin","test-driven-development":"Test Driven Development" },
"search":"02 <span>apr</span>, 2024slice 1: gherkin keywords and cucumber expression test automation,software testing,cucumber,gherkin,test driven development muhammet topcu cucumber is a framework that uses gherkin syntax to drive behavior driven development in your test infrastructure. (need a quick primer on bdd? check out this blogpost) in this blog post, i am going to take a glance gherkin keywords and demonstrate the power of cucumber expression \u2014 the alternative to regex step definitions. gherkin keywords gherkin keywords have specific functionalities, such as defining features. every feature consists of scenarios having a certain scope, and every scenario consists of steps defined by regex or cucumber expression in step definitions. feature\/ability\/business need name of the .feature file. there can be only one feature keyword per .feature file. it can get description words below itself. feature: user features application should be tested against possible behaviors of registered users. scenario: logging in with registered user note: the description line shown above can also be used with background, rule, scenario, or scenario outline (and with their aliases) unless a keyword is used beforehand. also note that some keywords such as feature get semicolons as a suffix and do not work without it. rule a keyword level between feature and scenario. it can be used to group certain scenarios having the same rule. each rule can have its own background block. note: rule keyword is pretty new, and certain test case management tool integrations may support it. also, cucumber extensions on some ides do not group scenarios by rule keyword, so they can not be executed separately. background: given go to home page rule: user must be logged in background: given sign in with 'username' and 'password' scenario: logging out when click logout button on home page then verify user is logged out on home page rule: user must be non-logged in scenario: verify non-logged in on refreshed page when go to home page then verify user is not logged in scenario\/example - name of the test case. - it contains given, when, then, and, and but keywords. given - first scenario step. - it is usually used to describe the initial context. when - it is used to declare events or actions specific to the aforementioned scenario. then - this keyword is used to declare an expected result or outcome after the when step. and and but - the purpose of these two keywords is to replace given, when, or then keywords when they are used more than once to increase readability. scenario: signing in given home page is open when click on sign in button and fill areas with 'username' and 'password' then 'username' is displayed on top right corner but sign in button shouldn't be displayed * (asterix) - all cucumber steps can be written with * keyword. - it is best suited for listing similar steps. scenario: making halva given i am hungry * i have oil * i have flour * i have sugar when i make halva and eat it then i am not hungry scenario outline\/scenario template - name of the test case. - this keyword is used to run a scenario with different values consecutively. thus, enables us to do data-driven testing. - it contains given, when, then, and, and but keywords. note: the placeholders are written inside of <> characters. \u00A0 examples\/scenarios - this keyword is used along with scenario outline to specify the values to be replaced with placeholders. 
scenario outline: shop cart item removal given there are items in the cart when i remove items from the cart then verify items left in the cart examples: | start | remove | left | | 12 | 5 | 7 | | 20 | 5 | 15 | background - background is used for the general steps that should be executed for every scenario. - it can be used with the rule keyword, affecting only the scenarios nested inside that specific rule. background: given go to home page and sign in with 'username' and 'password' scenario: logging out when click logout button on home page then verify user is logged out on home page scenario: verify login on refreshed page when go to home page then verify user is logged in cucumber expressions to understand cucumber expressions, we need to understand what a step definition is. scenario steps are high-level statements, and they actually don\u2019t have any meaning as far as a programming language is concerned. to give them a purpose, we need to define what they do in the background. that is called the \u201Cstep definition. ides like rubymine usually create step definitions with regex. cucumber expression provides us with a different way to create a step definition. let\u2019s compare them with each other. #regex step definition and(\/^create a user named \"([^\"]*)\"$\/) do |arg| pending end #cucumber expression step definition and(\"create a user named {string}\") do |arg| pending end as you can see from the example above, cucumber expression does not directly use a regex inside the step definition. it handles it differently. parameter types the text between curly braces in the example above is called parameters. there are some built-in parameters that cucumber provides us. these are: built-in parameter description {int} matches an integer. e.g. 18 or -59. {float} matches a float. e.g. 9.2 or -8.3. {word} matches a word without spaces. e.g. kloia. {string} matches a string with a double or single quote. e.g. \u201Ckloia company\u201D {} matches anything (\/.*\/). {bigdecimal} matches the same as {float}, converts it to bigdecimal if supported. {double} matches the same as {float}, converts it to 64 bit float if supported. {biginteger} matches the same as {int}, converts it to biginteger if supported. {byte} matches the same as {int}, converts it to 8-bit signed integer if supported. {short} matches the same as {int}, converts it to 16-bit signed integer if supported. {long} matches the same as {int}, converts it to 64-bit signed integer if supported. if the parameters above do not meet your needs and you want to create a custom parameter using your own regex, that is also possible. there are four main fields to create a custom parameter. argument description name this is the name of your parameter. this is what is used between curly brackets. for the example above, it is {name}. regexp regular expression to capture the contents of the argument. type the return type of the transformer. transformer needs to have at least arity 1 if regex does not have any capture groups. otherwise, argument number must be equal to the regex\u2019s capture group count. note: transformer should be a function. let\u2019s say that we want to give a person\u2019s name in the step and create a person object with it. 
here is the parametertype configuration and step definition: parametertype( name: 'name', regexp: \/\"([^\"]*)\"\/, type: person, transformer: -> (arg) {person.new(name: arg)} ) and(\"create a user named {name}\") do |person| puts person puts \"id: \" + person.id puts \"name: \" + person.name puts \"age: \" + person.age end the person class used inside the transformer: class person attr_accessor :id, :name, :age def initialize(options = {}) self.id = options[:id] || \"unknown\" self.name = options[:name] || \"unknown\" self.age = options[:age] || \"unknown\" end end the feature file: feature: step definition examples scenario: creating step definitions and create a user named \"kloia\" output of the code: # id: unknown name: kloia age: unknown you might realize that the example given above may not be the best way to handle objects, but it is a good example to demonstrate the power of cucumber expressions. you may want to refer to the cucumber expressions page on github. as a finishing line, let me share an idiom i love: \u201Cto run with salt to anyone who says i have a cucumber.\u201D - turkish idiom meaning: trying to help everyone without thinking it through and ending up in a bad situation. but hey, we are going to run test cases with cucumber. there is a difference!\uD83D\uDE00
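The Ruby ParameterType above boils down to two pieces: a regexp that captures the quoted name and a transformer that turns the captured text into a Person object before the step body runs. For readers coming from another stack, here is a minimal, library-agnostic Python sketch of that same mechanic; the Person dataclass and the toy run_step helper are illustrative stand-ins, not part of any Cucumber library API.

```python
import re
from dataclasses import dataclass

# Hypothetical stand-in for the Person class used in the Ruby example above.
@dataclass
class Person:
    id: str = "unknown"
    name: str = "unknown"
    age: str = "unknown"

# A custom {name} parameter type reduces to: a regexp that captures the argument
# and a transformer that turns the captured text into a richer object
# before the step body ever sees it.
NAME_PARAMETER = {
    "name": "name",
    "regexp": re.compile(r'"([^"]*)"'),
    "transformer": lambda captured: Person(name=captured),
}

def run_step(step_text: str) -> None:
    """Toy step runner: resolve {name} and hand the Person to the step body."""
    match = NAME_PARAMETER["regexp"].search(step_text)
    if not match:
        raise ValueError("step does not match the {name} parameter")
    person = NAME_PARAMETER["transformer"](match.group(1))
    # Equivalent of the Ruby step definition body:
    print(person)
    print("id: " + person.id)
    print("name: " + person.name)
    print("age: " + person.age)

run_step('create a user named "kloia"')
```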
},
{
"title":"Underrated Announcements from re:Invent 2022",
"body":"re:Invent is the major annual AWS public conference. This year it took place between 28 Nov - 02 Dec, as usual in Las Vegas.. What happened in re:Invent, is public, so it does not stay in Vegas! There are so many updates and announcements - you get bombarded with social media shares, recaps, webinars, podcasts not only during the event, but also in the weeks after! There are many good re...",
"post_url":"https://www.kloia.com/blog/underrated-announcements-from-reinvent-2022",
"author":"Derya (Dorian) Sezen",
"publish_date":"14-<span>Dec<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/reinvent-2022-underrated-announcements.jpeg",
"topics":{ "aws":"AWS","reinvent2022":"reinvent2022" },
"search":"14 <span>dec</span>, 2022underrated announcements from re:invent 2022 aws,reinvent2022 derya (dorian) sezen re:invent is the major annual aws public conference. this year it took place between 28 nov - 02 dec, as usual in las vegas.. what happened in re:invent, is public, so it does not stay in vegas! there are so many updates and announcements - you get bombarded with social media shares, recaps, webinars, podcasts not only during the event, but also in the weeks after! there are many good re:invent recaps and blog posts covering the mainstream announcements. in contrast, i will focus on the less-mentioned and underrated announcements on application modernization. async vs sync aws cto werner vogel\u2019s keynote is the most expected session at re:invent every year. this year the keynote had a major emphasis on asynchronous vs synchrony and event-driven architectures. werner vogel\u2019s keynote: \u201Cthe world is asynchronous.\u201D werner\u2019s focus on synchrony reflects the reality in the field. most software architectures are synchronous, and the reason for being synchronous is not always a business requirement - it\u2019s because they are legacy. this prevents most aws customers from benefiting from the services appropriate for distributed design and decoupled architectures, such as eventbridge, sqs, lambda serverless, and step functions, among others. while the keynote emphasized event-driven architectures strongly, the reality for established businesses in the industry is usually synchronous: monolith, 2-tier architectures, and if you are lucky, 3-tier :) soa with smaller services using the same rdbms, and called \u201Cmicroservices\u201D :) (which is not true!) desktop applications mainframes\/cobol werner said it right: \u201Csynchrony leads to tightly coupled systems.\u201D to support werner\u2019s vision of asynchrony, we need to modernize the workloads to decoupled architectures wherever applicable. modernization for an asynchronous world here are the underrated sessions and announcements to support such modernization: 1- .net microservice extractor: this tool was in its early stages of development when it was announced during the last re:invent. but now, it is much more capable of extracting functionality that you define automatically from a monolithic application, and it works great! there is a major effort on this tool, which is a step forward for splitting monolithic applications. this is how the tool looks like, you can also watch a demo. .net microservice extractor 2- porting assistant for .net: a considerable portion of the legacy applications are .net, which is the version from pre-.netcore era. this version of .net has to work in windows operating system. even if you containerize it, it still needs to run inside a windows container which is not optimal. this tools helps you to convert .net to .netcore which brings an opportunity for furtner modernization and opens the door for containers and serverless. 3- bluage mainframe\/cobol modernization: bluage was acquired by aws in 2021. this is a specialized company capable of converting cobol to java. i have personally attended to a workshop and witnessed that it is successful in doing that. here is a relevant screenshot from an aws blog: aws blu age mainframe modernization to benefit from the mainstream announcements like lambda snapstart application composer eventbridge, you need to modernize your legacy! 
this re:invent was the first one with a major emphasis on software architecture, including werner\u2019s keynote. for the first time at re:invent, i noticed an event storming session, which luca mezzalira delivered: api310 domain-driven design and event storming. during the previous years, there were many topics around lift&shift migrations, but this is not mentioned anymore, which means migrations without modernization do not bring the ultimate value. i am finishing my post with another phrase from werner: \u201Csynchronous is an illusion\u201D
},
{
"title":"A Kloia QA Summit Story: How to Meet Offline (Fast)",
"body":"To be or not to be... working remotely! That's the question. It has been almost a year since I started working in kloia, but I couldn't find the chance to meet most of my teammates in person. That's the handicap of working remotely. We work from every corner of the globe. So it's hard to arrange a common time or place for us to have a chit-chat over a cup of coffee. . Kloia QA Sapanca wa...",
"post_url":"https://www.kloia.com/blog/a-kloia-qa-summit-story-how-to-meet-offline",
"author":"Muhammet Topcu",
"publish_date":"12-<span>Dec<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/do-remote-programmers-dream-of-hand-shakes.jpg",
"topics":{ "remote":"Remote","workanywhere":"workanywhere","qa":"QA","working-remote":"working remote","teammates":"Teammates","qateam":"qateam" },
"search":"12 <span>dec</span>, 2022a kloia qa summit story: how to meet offline (fast) remote,workanywhere,qa,working remote,teammates,qateam muhammet topcu to be or not to be... working remotely! that's the question. it has been almost a year since i started working in kloia, but i couldn't find the chance to meet most of my teammates in person. that's the handicap of working remotely. we work from every corner of the globe. so it's hard to arrange a common time or place for us to have a chit-chat over a cup of coffee. . kloia qa sapanca was a rare opportunity for us to meet with every member of our team. we found the chance to meet face to face and confirmed that we are not some complex ai or android by greetings followed by firm handshakes. that was\u2026 a relief. we had a brainstorming session to improve the qa team and to take it a step further. there were many brilliant ideas overall. it wasn\u2019t a shock, though, considering the brilliance of the people who came together. it was a great 3-days event filled with eating, activities, eating, and\u2026 eating. we had great breakfasts\u2026 and, of course, a bbq is a must in geek tech teams :d and since these calories needed to be burned, we went trekking. if you are good, one day you can see the smurfs. but hey, that's us you see in the photo below. one of the best activities we did during our short trip was canoeing. many of us hadn\u2019t tried it before, so it was very exciting. and then, something unexpected happened. a storm hit. we knew gargamel was up to something. i mean, he usually brews something, but we didn't know brewing up a storm was a thing! the main cons of working remotely are a lack of social interaction and communication difficulties. we felt the lack of social interaction but not the communication difficulties. we thought that the harmony between us was pretty good before this event, but when we came together, we realized that were an orchestra in fact! and you may be asking what are the pros of working remotely, then? well, that\u2019s the topic of another blog post! overall, this was a memorable experience that i will never forget in my whole career and i wanted to share it with you too! catch you later! end of the post. don\u2019t mind me, just resting my eyes\u2026"
},
{
"title":"Implementing Datasync with Debezium by Leveraging Outbox Pattern",
"body":"What is CDC (Change Data Capture)? While working on a project, there may be multiple problems such as synchronizing different data sources, duplicating data and maintaining synchronization between microservices. The CDC can help us solve such problems. Change Data Capture is a software design model used to capture and monitor change data in Databases. Thus, it can perform operations with...",
"post_url":"https://www.kloia.com/blog/implementing-datasync-with-debezium-by-leveraging-outbox-pattern",
"author":"Hikmet Semiz",
"publish_date":"13-<span>Oct<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/hikmet-semiz",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/data-sync-with-debezium-01.jpeg",
"topics":{ "debezium":"Debezium","datasync":"Datasync","cdc":"CDC","change-data-capture":"Change Data Capture" },
"search":"13 <span>oct</span>, 2022implementing datasync with debezium by leveraging outbox pattern debezium,datasync,cdc,change data capture hikmet semiz what is cdc (change data capture)? while working on a project, there may be multiple problems such as synchronizing different data sources, duplicating data and maintaining synchronization between microservices. the cdc can help us solve such problems. change data capture is a software design model used to capture and monitor change data in databases. thus, it can perform operations with the changed data. these operations take place in real-time. debezium debezium is an open-source platform for cdc built on top of apache kafka. debezium reads the transaction logs in the database, when there is a change, these events can be carried by stream services such as kafka and consumed by another system. it reads the transaction logs and sends them to apache kafka. in case of any stop, restart or downtime, it consumes all missed events again. also, debezium supports multiple datastores and debezium can be used in systems as embedded or distributed. debezium architecture most commonly, you deploy debezium by means of apache kafka connect. the architecture simply consists of two connectors; sink and source. the source connector reads the data from the transaction logs and sends the incoming record to kafka. sink connectors, on the other hand, are connected to another system, receive the event from the kafka and operate through this event in the system it is connected to. source connectors are officially available in debezium.(connectors) why use debezium? when synchronous updating of a database and another system is desired, it must be consistent in both systems. if the operation fails in the system, it is expected to fail in the other system. failure of the operation on the system causes the system to be inconsistent. as another solution, the cost will increase when a scheduler structure does set up. in addition, it is necessary to run the scheduler at short intervals. here, too, there may be a consistency problem. by using debezium such problems can be minimized. let's think of another sample. there is a service, it has no api and has to be used somehow. for this, by using the debezium, the records from the other source service can be sent to the message queue and processed in this service. outbox pattern with debezium when there is asynchronous communication between microservices, it is important to ensure that the sent messages are transmitted, prevent data loss and ensure consistency between data. the outbox pattern is an approach for executing these transactions safely and consistently. simply, when a crud operation arrives and these events need to be dispatched, it does so within the same transaction. write these events into an outbox table. a relay reads these events and forwards them to other services via the message broker. this architecture can be implemented in different ways. when we do this with debezium, debezium will do the relay duty here. it will receive events written to the outbox table with kafka connect and send them to apache kafka. other services will consume these events via apache kafka. embedded debezium fault tolerance and reliability may not be desired in some applications. instead, those applications may want to place debezium connectors directly within the application area. it can be requested to write directly to the other system instead of staying permanently in kafka. 
in such cases, debezium connectors can be configured very easily using the debezium engine and the provided api can be used. in the example below, data will be transferred from postgres to redis. first, debezium dependencies are added in pom.xml. io.debezium debezium-api 1.4.2.final io.debezium debezium-embedded 1.4.2.final io.debezium debezium-connector-postgres 1.4.2.final basically, you create the embeddedengine with a configuration file that defines the environment for both the engine and the connector. @bean public io.debezium.config.configuration authorconnector() { return io.debezium.config.configuration.create() .with(\"name\", \"author-connector\") .with(\"connector.class\", \"io.debezium.connector.postgresql.postgresconnector\") .with(\"offset.storage\", \"org.apache.kafka.connect.storage.fileoffsetbackingstore\") .with(\"offset.storage.file.filename\", \"\/tmp\/offsets.dat\") .with(\"offset.flush.interval.ms\", \"60000\") .with(\"database.hostname\", host) .with(\"database.port\", port) .with(\"database.user\", username) .with(\"database.password\", password) .with(\"database.dbname\", database) .with(\"database.include.list\", database) .with(\"include.schema.changes\", \"false\") .with(\"database.server.name\", \"author-server\") .with(\"database.history\", \"io.debezium.relational.history.filedatabasehistory\") .with(\"database.history.file.filename\", \"\/tmp\/dbhistory.dat\") .build(); } the kafka connector class to be extended is defined in the connector.class field. here is the usage for postgres. when the kafka connect connector runs, it reads information from the source and periodically records \"offsets\" that define how much of that information it processes. if the connector is restarted, it uses the last recorded offset to know where it should continue reading in the source information. fileoffsetbackingstore specifies the class to be used to store offsets. \/tmp\/offsets.dat path to save offsets. after the configuration is complete, we create the engine. public debeziumlistener(configuration authorconnectorconfiguration, authorservice authorservice) { this.debeziumengine = debeziumengine.create(changeeventformat.of(connect.class)) .using(authorconnectorconfiguration.asproperties()) .notifying(this::handlechangeevent) .build(); this.authorservice = authorservice; } the embeddedengine is designed to be executed asynchronously by an executor or executorservice. private final executor executor = executors.newsinglethreadexecutor(); @postconstruct private void start() { this.executor.execute(debeziumengine); } @predestroy private void stop() throws ioexception { if (this.debeziumengine != null) { this.debeziumengine.close(); } } engine created via configuration file sends all data change to handlechangeevent(recordchangeevent sourcerecord>) method. in this method, the incoming record is modified as desired to use and data is sent for processing in redis. private void handlechangeevent(recordchangeevent sourcerecordchangeevent) { sourcerecord sourcerecord = sourcerecordchangeevent.record(); struct sourcerecordchangevalue = (struct) sourcerecord.value(); if (sourcerecordchangevalue != null) { operation operation = operation.forcode((string) sourcerecordchangevalue.get(operation)); if (operation != operation.read) { string record = operation == operation.delete ? 
before : after; struct struct = (struct) sourcerecordchangevalue.get(record); map payload = struct.schema().fields().stream() .map(field::name) .filter(fieldname -> struct.get(fieldname) != null) .map(fieldname -> pair.of(fieldname, string.valueof(struct.get(fieldname)))) .collect(tomap(pair::getkey, pair::getvalue)); this.authorservice.replicatedata(payload, operation); } } } note: if any error occurs, the incoming record cannot be saved while other data keeps queuing for processing; in this case, consistency with the database cannot be guaranteed. summary: this blog post gave a general overview of cdc and its relationship with debezium, described the debezium architecture and the situations in which it should be used, and showed how to use embedded debezium. debezium's distributed usage will be explained in the next blog post. https:\/\/github.com\/kloia\/debezium-embedded references: https:\/\/debezium.io\/documentation\/reference\/stable\/development\/engine.html https:\/\/debezium.io\/documentation\/reference\/stable\/architecture.html
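To make the producer side of the outbox pattern described in this post concrete, here is a minimal Python sketch using the standard-library sqlite3 module so it runs anywhere. The author and outbox tables and the AuthorCreated event name are illustrative assumptions, not Debezium requirements; in the real setup Debezium (or any other relay) tails the outbox table and publishes its rows to Kafka.

```python
import json
import sqlite3
import uuid

# Sketch of the producer half of the outbox pattern:
# the business row and its event are written in one database transaction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE author (id TEXT PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE outbox (
    id TEXT PRIMARY KEY,
    aggregate_type TEXT,
    aggregate_id TEXT,
    event_type TEXT,
    payload TEXT)""")

def create_author(name: str) -> str:
    author_id = str(uuid.uuid4())
    with conn:  # one transaction: either both rows are written or neither is
        conn.execute("INSERT INTO author (id, name) VALUES (?, ?)", (author_id, name))
        conn.execute(
            "INSERT INTO outbox (id, aggregate_type, aggregate_id, event_type, payload) "
            "VALUES (?, ?, ?, ?, ?)",
            (str(uuid.uuid4()), "Author", author_id, "AuthorCreated",
             json.dumps({"id": author_id, "name": name})),
        )
    return author_id

create_author("kloia")
# The relay (Debezium in this post) would read rows like these and forward them to Kafka.
print(conn.execute("SELECT event_type, payload FROM outbox").fetchall())
```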
},
{
"title":"Kubernetes secret management using the External Secrets Operator-EKS",
"body":"In this blog post, I am going to examine the External Secrets Operator and demonstrate how to store your secrets externally on the AWS Secrets Manager. Kubernetes includes native capabilities for managing secrets in the form of Kubernetes Secrets to satisfy the requirement of safely delivering secrets to running applications. To improve security, administration, and the ability to track ...",
"post_url":"https://www.kloia.com/blog/kubernetes-secret-management-using-the-external-secrets-operator-eks",
"author":"Cem Altuner",
"publish_date":"28-<span>Sep<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/cem-altuner",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/externalsecrets-operator.jpeg",
"topics":{ "aws":"AWS","kubernetes":"Kubernetes","k8s":"k8s","eks":"EKS","external-secrets-operator":"external secrets operator" },
"search":"28 <span>sep</span>, 2022kubernetes secret management using the external secrets operator-eks aws,kubernetes,k8s,eks,external secrets operator cem altuner in this blog post, i am going to examine the external secrets operator and demonstrate how to store your secrets externally on the aws secrets manager. kubernetes includes native capabilities for managing secrets in the form of kubernetes secrets to satisfy the requirement of safely delivering secrets to running applications. to improve security, administration, and the ability to track how secrets are used, centralized secret management can be carried out outside of kubernetes clusters using an external secret store such as hashicorp vault, aws secrets manager, etc. one way to store secrets outside the k8s cluster is with the help of the external secrets operator open-source project. what is the function of the external secrets operator? the external secrets operator's objective is to synchronize secrets from external apis with kubernetes. the eso manages secrets via custom resource definitions. externalsecret, secretstore, and clustersecretstore are user-friendly wrappers around the external api that store and manage secrets on your behalf. your secrets are managed by the \"externalsecret\" crd and the controller uses externalsecret\u2019s data to create secrets. when you use external secrets to read secrets from an external secret store, the data is stored in the kubernetes control plane as native kubernetes secrets. secretstore and externalsecret crds are important to understand this demo and how eso works. secretstore crd is used for the authentication and access management of the external secret store. externalsecret crd is used for the specific secret that is going to be pulled. the controller uses externalsecret\u2019s data to create secrets and sync them at a time interval you choose. let's practice a little bit this tutorial will show you how to sync a secret from the aws secrets manager to your eks cluster using the external secrets operator. to properly follow this tutorial, make sure you have installed the following tools: helm version 3 kubectl eksctl aws cli creating an eks cluster if you already have an eks cluster, you can skip this step. i am going to deploy my own eks cluster using eksctl, which is a simple cli tool for creating and managing eks clusters. the following commands will create my eks cluster. eksctl create cluster -f cluster-config.yaml the code example below provides an overview of the cluster-config.yaml file. apiversion: eksctl.io\/v1alpha5 kind: clusterconfig metadata: name: eso-test-cluster region: eu-central-1 nodegroups: - name: ng-1 instancetype: t3.medium desiredcapacity: 2 volumesize: 20 ssh: allow: true use the following command to get kube-config. aws eks update-kubeconfig --name=eso-test-cluster --region=eu-central-1 after these steps are complete, run the following command to verify. kubectl get nodes name status roles age version ip-192-168-15-101.eu-central-1.compute.internal ready 3m31s v1.22.12-eks-ba74326 ip-192-168-79-64.eu-central-1.compute.internal ready 3m31s v1.22.12-eks-ba74326 as there are no errors, the eks cluster is ready for usage. installing the external secrets operator the external secrets operator offers helm charts for deployment convenience, and i am going to use helm for deploying the external secrets operator. the following commands will deploy an external secrets operator to my eks cluster. 
helm repo add external-secrets https:\/\/charts.external-secrets.io helm install external-secrets \\ external-secrets\/external-secrets \\ -n external-secrets \\ --create-namespace \\ --set installcrds=true \\ --set webhook.port=9443 after the `external-secrets has been deployed successfully!` message, run the following command to verify external secret operator resources. kubectl get pods -n external-secrets expected output: name ready status restarts age external-secrets-7886b578b4-k27m4 1\/1 running 0 2m32s external-secrets-cert-controller-5578bf565f-nxhs7 1\/1 running 0 2m32s external-secrets-webhook-84c7d457f4-r6cp6 1\/1 running 0 2m32s iam roles for service accounts (irsa) for the external-secrets operator to be able to get secrets from the aws secrets manager, i have to set a few configurations. you can manage the credentials for your applications with eks's feature irsa. this is similar to how amazon ec2 instance profiles give credentials for amazon ec2 instances. instead of distributing aws credentials or using the amazon ec2 role, you can map an iam role to a kubernetes service account and set up your pods to use this service account. to use irsa, it is mandatory to create an iam oidc provider for the eks cluster. the following command will create an oidc provider and associate it with my cluster. $ eksctl utils associate-iam-oidc-provider --cluster=eso-test-cluster --approve --region eu-central-1 i am going to use eksctl to create a service account. the following command 'eksctl create iamserviceaccount' takes an iam policy arn as an argument, creates an iam role associated with the given policy, and maps a service account to that role. eksctl create iamserviceaccount \\ --name esoblogsa \\ --namespace default \\ --cluster \\ --role-name \"esoblogrole\" \\ --attach-policy-arn \\ --approve \\ --override-existing-serviceaccounts i have already created my secret in the aws secrets manager and an iam policy that lets it be retrieved and decrypted. the following image shows a secret with the name test\/eso\/testsecret. test\/eso\/testsecret esoblogsecretreadpolicy { \"version\": \"2012-10-17\", \"statement\": [ { \"sid\": \"visualeditor0\", \"effect\": \"allow\", \"action\": [ \"secretsmanager:getsecretvalue\", \"secretsmanager:describesecret\" ], \"resource\":\u2264secret_arn\u2265 } ] } after these steps are completed, run the following command to verify the service account. $ kubectl describe sa esoblogsa name: esoblogsa namespace: default labels: app.kubernetes.io\/managed-by=eksctl annotations: eks.amazonaws.com\/role-arn: arn:aws:iam:::role\/esoblogrole image pull secrets: mountable secrets: esoblogsa-token-9l2zj tokens: esoblogsa-token-9l2zj events: sync external secrets to the aws secrets manager secret i am going to create a secretstore that references aws secrets manager with a service account that i have already created, 'esoblogsa'. run the following command to create the secretstore. cat <"
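Before wiring the secret into a SecretStore and ExternalSecret, it can be useful to confirm that the permissions in the policy above (secretsmanager:GetSecretValue and secretsmanager:DescribeSecret) are enough to read test/eso/testsecret. A small boto3 sketch, assuming the region and secret name used in this post and whatever AWS credentials you run it with:

```python
import boto3

# Sanity check: the same two Secrets Manager calls that the policy above allows.
# Secret name and region follow the values used in this post.
client = boto3.client("secretsmanager", region_name="eu-central-1")

description = client.describe_secret(SecretId="test/eso/testsecret")
print("ARN:", description["ARN"])

value = client.get_secret_value(SecretId="test/eso/testsecret")
# SecretString holds the plaintext payload; ESO writes the same payload
# into a native Kubernetes Secret in the cluster.
print("payload:", value["SecretString"])
```

If these two calls succeed with the role mapped to the service account, the SecretStore and ExternalSecret created below should be able to sync the secret as well.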
},
{
"title":"Alternative Strangler Fig approaches on AWS",
"body":"In this blog post, I will explain the pattern called Strangler Fig, which became popular for splitting the Monolith. I will try to address how AWS services can facilitate this during the AWS MAP (Migration Acceleration Program) Modernization phase. Before going deeper, let's remember the phases of AWS MAP: Assess Mobilize Migrate&Modernize In this article, I will be focusing on the \u201C...",
"post_url":"https://www.kloia.com/blog/alternative-strangler-fig-approaches-on-aws",
"author":"Derya (Dorian) Sezen",
"publish_date":"25-<span>Sep<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/modernizing-your-monolith-with-aws-map.jpg",
"topics":{ "aws":"AWS","cloud":"Cloud","migrationtocloud":"migrationtocloud","map":"map","debezium":"Debezium","monolith":"Monolith","strangler-fig":"Strangler Fig" },
"search":"24 <span>oct</span>, 2022alternative strangler fig approaches on aws aws,cloud,migrationtocloud,map,debezium,monolith,strangler fig derya (dorian) sezen in this blog post, i will explain the pattern called strangler fig, which became popular for splitting the monolith. i will try to address how aws services can facilitate this during the aws map (migration acceleration program) modernization phase. before going deeper, let's remember the phases of aws map: assess mobilize migrate&modernize in this article, i will be focusing on the \u201Cmodernize\u201D part of the last phase. the modernization phase is usually suggested after migrating to aws. considering the \u201Cdo one thing at once\u201D principle, it\u2019s acceptable first to concentrate on \u201Cjust\u201D migrating, which is also referred to as \u201Clift&shift\u201D. although this is the desired path, we have experienced that certain companies are unwilling to progress with \u201Clift&shift\u201D on their legacy on-premises infrastructure. there may be several reasons for this: on-premises infrastructure may possess several operational risks related to missing know-how over the years, which makes it risky to migrate. insufficient or missing documentation the current on-premises licenses are not allowed to run on the cloud or are not supported by aws byol. financially, the on-premises investment amortization period still continues. those companies are still willing to adapt to cloud services, but only for their modernized workloads. map assess and mobilize phases are positioned to decide on a migration strategy and address solutions for all technology-wise or cost-wise concerns. so far, we have discussed two types of companies with different approaches and strategies for the map: migrating to aws and then modernize migrating only the modernized workloads. let\u2019s assume the customer is convinced with the map offering, and the migration phase is finalized. now we are focusing on modernization. although there may be several aspects of modernization, in this article, as i mentioned, we will focus on \u201Cstrangler fig\u201D, which is used for \u201Csplitting the monolith\u201D monolithic software can be defined as referring to the applications using a single central relational db(database), where a bounded-context or domain can directly access the data of the other domain. as most monolithic applications have an existing rdbms database, and there are existing relations between the tables. here is a real-world example of the relations between the tables of a real-life project: (each dot represents a table, and each line represents a relation) comment below which table is easier to split?\uD83D\uDE00 our complexity is not only the relations but also the different pieces (bounded contexts) of the software that are currently accessing the data of another piece directly by executing sql queries or sps(stored procedures). our mission is to split a bounded context from that monolith. you may already have heard about the strangler fig pattern, where there are several articles around it. strangler fig defines a pattern where you split the pieces of software one by one and redirect the traffic to the split pieces. the splitting continues until all pieces are done, and the monolith fades away. 
the common articles around the strangler fig pattern are usually explaining the standard definition of the pattern but do not suggest to you a solution regarding the data dependency i have defined above regarding the table relations and domains accessing the data of the other domain. aws migration hub refactor spaces also helps you to apply the strangler fig pattern by orchestrating aws transit gateway, vpcs, aws resource access manager, and aws api gateway but does not address a solution regarding the dependency on the split data of the existing legacy. let me try to explain this problem in more detail: consider you have split a piece from the monolith to an external independently deployable microservice, and you have redirected the requests coming to that piece on the api gateway with uri path-based routing. we have been in parallel with the strangler fig, but in reality, this will not work most of the time. but why?? consider the bounded context you have splitted is generating data(this data is now on the new database of the splitted microservice) which the other bounded contexts should also consume. and naturally, the other bounded contexts may be expecting this data on the existing monolith database tables. you may argue that we need to refactor the existing bounded contexts so that they query to the api rather than querying the database. in such a situation, it is usually advised to refactor the affected bounded contexts, but you may experience a high cost doing that. so you are your own to find a solution! i will be suggesting some alternative solutions to address this data dependency. before joining in deeper, event-driven or\/and event sourcing approaches are not in the scope of this article. the reason is those probably will need relatively major refactoring on the existing software architecture. let\u2019s assume there is no \u201Cevent-driven\u201D architecture in place. there may be several solutions to address the data dependency problem of the legacy software on the legacy database. first, i will discuss the approach called \"parallel run\", which means calling the legacy and the new split api microservice simultaneously. we can achieve this with the following techniques: 1- aws vpc traffic mirroring: the purpose of this feature is not tailored for strangler fig but potentially can be used for that purpose. https:\/\/aws.amazon.com\/blogs\/networking-and-content-delivery\/using-vpc-traffic-mirroring-to-monitor-and-secure-your-aws-infrastructure\/ here is how to create the mirror traffic on aws: 2- api-gateway level traffic mirroring: current aws api gateway does not support mirroring, but alternative api gateways can be used for mirroring 3- service mesh level traffic mirroring: istio service mesh has mirroring capability. ref: https:\/\/istiobyexample.dev\/traffic-mirroring\/ as you can see, mirrored traffic responses are ignored. guess what type of risks this architecture has? (comment please if you have found any) what may be the downside of this approach? i can feel you are saying \u201Crollbacks\u201D. in case the http response from the legacy and the new microservice is not the same (one is http 200 and the other is http 5xx), in such cases, we may need to roll back the one with the http 200 response. if you see risks related to \u201Cparallel run\u201D, it is possible to begin your splitting journey by \u201Cdark launching\u201D your splitted microservice, mirroring the traffic but keeping that domain in a passive state until you get satisfied with the results. 
let me also suggest another alternative solution from a data consistency perspective. each application running on the database may not have the same data consistency requirement. the nature of some applications may be more tolerant of data inconsistency between the bounded contexts. let me elaborate on this with some examples: an order of the customer does not necessarily be on the backoffice\/warehouse application in real-time. until the data of the order is synchronized with the warehouse applications database, there is data inconsistency, and the business accepts this delay. the splitted microservice is responsible for the sms operations and keeping the status(sent\/failed\/error) of the sms sent to the customers. the status of the sms is also used by another bounded context(domain), but this domain is expecting the status of the sms on the existing monolith database. this domain may be tolerant to seeing the status of the sms in delay. if the application is tolerant to such inconsistencies, there may also be another solution to keep the monolith database and the new microservice database in sync. using aws dms, you are able to define source and target databases, together with what type of changes to synchronize. here are some screenshots from aws dms: as you can see, dms is limited to rds2rds or mongo2documentdb. this may not be what we are looking for in a real-world scenario. if the splitted microservice would have a documentdb, in such case, dms would not work. as an alternative, debezium can be positioned in such a situation: as i initially mentioned in this article, i have not referred to an event-driven approach architecture to solve the problem. this is not because i am against it, but it is a separate topic and needs to be in a separate blogpost. maybe next time:) in conclusion, if you decide to split your monolith, referring to the strangler fig pattern, consider how to solve the data consistency and dependency in case refactoring the monolith has a high cost. i have provided some options without implementing an event-driven model."
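As a rough illustration of the "parallel run" and "dark launch" ideas discussed in this post, the sketch below sends the same request to both the legacy monolith and the newly split microservice, serves the legacy response, and only logs mismatches. The endpoints and the use of the requests library are hypothetical placeholders rather than a drop-in implementation; in practice the mirroring would more likely live in the API gateway, VPC traffic mirroring, or the service mesh, as described above.

```python
import logging
import requests

logging.basicConfig(level=logging.INFO)

# Hypothetical endpoints: the legacy monolith and the newly split microservice.
LEGACY_URL = "http://legacy.internal"
MICROSERVICE_URL = "http://orders.internal"

def parallel_run(path: str, payload: dict) -> requests.Response:
    """Serve the legacy response; mirror the call to the new service and
    only log mismatches, as in the dark-launch variant described above."""
    legacy = requests.post(LEGACY_URL + path, json=payload, timeout=5)
    try:
        mirrored = requests.post(MICROSERVICE_URL + path, json=payload, timeout=5)
        if mirrored.status_code != legacy.status_code:
            # e.g. legacy returns 200 while the split service returns 5xx:
            # a signal that the new bounded context is not ready to take traffic.
            logging.warning("parallel-run mismatch on %s: legacy=%s new=%s",
                            path, legacy.status_code, mirrored.status_code)
    except requests.RequestException as exc:
        logging.warning("mirrored call failed for %s: %s", path, exc)
    return legacy  # the mirrored response is ignored, so no rollback is needed here

response = parallel_run("/orders/checkout", {"order_id": 42})
print(response.status_code)
```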
},
{
"title":"AWS MAP Mobilize phase containerization for Windows workloads",
"body":"AWS MAP(Migration Acceleration Program) has three phases, and the second phase is called Mobilize. During this phase, we are focusing on several topics, including A detailed migration plan All development activities required for the next migration phase, including CI\/CD pipelines, infrastructure-as-code together with PoCs of the technologies planned to be used, A common approach for the ...",
"post_url":"https://www.kloia.com/blog/aws-map-mobilize-phase-containerization-for-windows-workloads",
"author":"Derya (Dorian) Sezen",
"publish_date":"23-<span>Sep<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws-map-mobilize.jpg",
"topics":{ "aws":"AWS","migration-acceleration-program":"Migration Acceleration Program" },
"search":"02 <span>oct</span>, 2022aws map mobilize phase containerization for windows workloads aws,migration acceleration program derya (dorian) sezen aws map(migration acceleration program) has three phases, and the second phase is called mobilize. during this phase, we are focusing on several topics, including a detailed migration plan all development activities required for the next migration phase, including ci\/cd pipelines, infrastructure-as-code together with pocs of the technologies planned to be used, a common approach for the migration of the windows workloads is lift&shift which means migrating the windows workloads as-is. although this may seem more straightforward, it requires you to find solutions for the ci\/cd and for the infrastructure management. customers who are in their aws map journey may be unwilling to continue with their existing on-premises practices for ci\/cd and infrastructure management, which means lift&shift is not something they are looking for as they have the desire to benefit from native aws services and industry practices. increasing the automation levels may seem like modernization (which is the last phase of the map). in kloia, our approach for the modernization phase focuses on software refactoring or rearchitecting. here are some of the challenges of the windows-based migrations in our experience: - complexity in ci\/cd: to integrate with the \"code family\" with windows, you may need to deep-dive into powershell. besides, in some cases, not only powershell, but you may need to develop custom solutions for ci\/cd with serverless lambda or with custom development. - complexity in os provisioning: there will also be several windows os or iis-related configuration and provisioning requirements. in summary, all of those i have mentioned above may relatively make the mobilize phase more challenging for windows workloads. one could argue that those challenges may also exist for linux-based workloads. my answer would be \u201Cit depends\u201D, but very limited and not as complex as the windows workloads. besides, containerization for linux workloads is pretty common and straightforward compared to windows. kloia is suggesting approaching windows workloads also from the containerization perspective to bring more value to the migration. without containers, you have to deal with windows-specific ci\/cd and os provisioning challenges. by leveraging containers, your approach to managing windows workloads will be similar to linux workloads: - infrastructure-as-code: you will be able to use kubernetes for both. eks now also supports windows workloads. - ci\/cd: you will be approaching ci\/cd similar to other linux-based container workloads. for instance, if you have existing helm charts, you will also have helm charts for your windows workloads. to be able to containerize the windows workloads, you first need to put your application into a windows container, and now the container image becomes the artifact of your ci. you need to enhance your ci(continuous integration) pipeline accordingly. our initial suggestion would not be leveraging windows containers. the decision regarding windows containers depends on the application requirements. if the cost of porting or refactoring your application to be linux container compatible is feasible, you may not need to continue with windows anymore. we refer to this process as \u201Cdewindowsification\u201D. as we are in the mobilize phase, one of our targets is to agree on a migration plan. 
if windows dependency is in place, those workloads need to be dockerized. here is an example of a dockerfile that we used in one of our projects: here is the runbuildcommand output building and pushing the image to ecr: here are the completed steps: next would be to define your ci steps. here is a sample screenshot from codepipeline: here is the list from ecr: as you noticed above, the main downside of windows containers is the size. but the benefits would be worth giving it a try! in conclusion, approaching your workloads the same way will bring operational efficiency. in our case, having containers for all your applications, including windows, means you may have a standard approach for scaling provisioning, and infrastructure-as-code deploying and releasing deploy-time configuration secret management ha(high availability) and dr(disaster recovery)."
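Since the main downside called out above is the size of Windows container images, one quick way to keep an eye on it is to list image sizes per tag in ECR. A hedged boto3 sketch, with a made-up repository name standing in for the one shown in the screenshots:

```python
import boto3

# List image tags and (compressed) sizes in an ECR repository so Windows and
# Linux image footprints can be compared. The repository name is illustrative.
ecr = boto3.client("ecr", region_name="eu-central-1")

paginator = ecr.get_paginator("describe_images")
for page in paginator.paginate(repositoryName="windows-workload"):
    for image in page["imageDetails"]:
        tags = ",".join(image.get("imageTags", ["<untagged>"]))
        size_mb = image["imageSizeInBytes"] / (1024 * 1024)
        print(f"{tags}: {size_mb:.0f} MiB")
```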
},
{
"title":"A quick introduction to the Docker desktop alternative: Rancher desktop",
"body":"Rancher Desktop is the compatible solution for creating container images and running a Kubernetes cluster on a local computer. When it comes to hosting a Kubernetes cluster locally, there are a handful of options, such as Docker Desktop and Minikube. However, why do you utilize Rancher Desktop rather than one of the other options? Rancher Desktop is a desktop-based container development ...",
"post_url":"https://www.kloia.com/blog/a-quick-introduction-to-the-docker-desktop-alternative-rancher-desktop",
"author":"Cem Altuner",
"publish_date":"23-<span>Sep<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/cem-altuner",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/rancher-desktop.jpeg",
"topics":{ "docker":"Docker","kubernetes":"Kubernetes","rancher":"rancher","linux":"Linux" },
"search":"23 <span>sep</span>, 2022a quick introduction to the docker desktop alternative: rancher desktop docker,kubernetes,rancher,linux cem altuner rancher desktop is the compatible solution for creating container images and running a kubernetes cluster on a local computer. when it comes to hosting a kubernetes cluster locally, there are a handful of options, such as docker desktop and minikube. however, why do you utilize rancher desktop rather than one of the other options? rancher desktop is a desktop-based container development environment for windows, macos, and linux that is currently at version 1.5.1. it is a kubernetes-based solution that utilizes a virtual machine to host a minimal k3s cluster. the container runtimes for docker and containerd are also supported. rancher desktop is an electron-based program that encompasses other tools while providing a simple user interface. the vm is hosted by qemu on macos and linux, and the windows subsystem for linux v2 is used for the windows support. rancher desktop comes with the following tools: helm kubectl nerdctl moby docker compose nerdctl nerdctl is a docker-compatible command-line interface (cli) for containerd. if you are working with kubernetes, nerdctl is a better option than docker or kim because it is from the containerd organization, which developed the kubernetes containerization standard. additionally, nerdctl has a number of advantages over other containerization platforms; supports rootless mode. it allows nerdctl to do operations without root user privileges. supports lazy-pulling. it is an approach to running containers without waiting to pull images, and it improves speed. supports encrypted images. it allows running a container from encrypted images using ocicrypt. installing to install the rancher desktop, go to the official releases page. select a compatible install file for your system and install the rancher desktop. welcome to rancher desktop welcome to rancher desktop page comes when launching rancher desktop for the first time. users have the choice of the kubernetes version. additionally, users have the choice of using containerd, which offers namespaces for containers and supports nerdctl, or dockerd (moby), which supports the docker cli and api. at any given moment, only one container runtime will be active. after the initialization of the cluster is finished, you can check your cluster with the following command. $ kubectl get nodes my cluster is ready to use. features preferences the kubernetes tab allows you to set the kubernetes version and port number for the kube-apiserver. the advantage of this tab is how simple it is to alter the cluster's kubernetes version via the dropdown menu. the virtual machine settings tab allows you to set memory size and cpu speed. rancher desktop menu by default, the images tab contains a few images, such as coredns, metrics-server, and so on. additionally, you can upload images. you can also scan your images for vulnerabilities and configuration issues. rancher desktop uses trivy to scan your images. just click \u22EE > scan to image which you want to scan. the port forwarding tab enables you to forward kubernetes service ports with one click. the troubleshooting tab enables you to view log files and perform a factory reset. you can also switch to the kubernetes context you want by simply selecting the kubernetes context. just click the rancher icon on the menu bar at the top of the screen. 
comparison the approach used by rancher desktop is comparable to that of docker desktop, but rancher desktop is free and open-source. the figure below provides a comparison between docker desktop and rancher desktop. solution docker desktop rancher desktop open source \u2718 \u2714 gui \u2714 \u2714 docker cli & dockerd \u2714 \u2714 version selection for k8s \u2718 \u2714 nerdctl & containerd \u2718 \u2714 port forwarding using gui \u2718 \u2714 conclusion rancher desktop could be the compatible solution if you are searching for a solution to build container images and run the kubernetes cluster locally, but still, it is a new product and i believe the future of this product is going to be more valuable."
},
{
"title":"Kloia joins to AWS Public Sector Program",
"body":"AWS Public Sector Partner Program (PSP)is an initiative to support the project on public sectors including government, non-profits, public education and space. PSP also validates AWS Partners who have Public-sector experience. Kloia has been accepted under this program by our existing footprint on Public Sector companies. In this article, I will not repeat the program details but rather ...",
"post_url":"https://www.kloia.com/blog/kloia-joins-to-aws-public-sector-program",
"author":"Derya (Dorian) Sezen",
"publish_date":"14-<span>Sep<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws-public-sector-partner.jpeg",
"topics":{ "aws":"AWS","aws-partner":"AWS Partner","psp":"PSP" },
"search":"14 <span>sep</span>, 2022kloia joins to aws public sector program aws,aws partner,psp derya (dorian) sezen aws public sector partner program (psp)is an initiative to support the project on public sectors including government, non-profits, public education and space. psp also validates aws partners who have public-sector experience. kloia has been accepted under this program by our existing footprint on public sector companies. in this article, i will not repeat the program details but rather will emphasize the challenges and requirements for any partner who is considering to be entitled under this program. the following statements are all my personal experience and may not be relevant for you or for your organization. pre-requirements to work with public sector: trustpublic sector is more sensitive to trust. imagine how sensitive data and information is present in those projects. for that reason, the public sector usually requires additional security clearance compared to other sectors. for instance in uk, government projects need security clearance which requires you to be a resident for some certain period in the uk with proven experience. referencethis comes to a chicken-egg problem but having an existing reference on a public sector customer is always an advantage for succeeding in the tenders. onsite > remotethe reality of certain public sectors requires you to be more onsite rather than working remotely. some government authorities or agencies may not even give you remote access and may ask you to work onsite, which means your team should be ready for commuting! challenges working with public sector: tendersan invitation to tender is a formal procedure for generating competing offers from the different potential aws partners looking to obtain an award of business activity in works, supply, or service contracts. while this process is not unique to the public sector, the duration, approvals, paperwork and procedures are quite more detailed and procedural compared to other sectors. imagine a tender which takes ages to finalize and at the time the final decision is given, the technologies and solutions offered are already becoming legacy. budgetingtraditional approach to tenders is usually requesting a turnkey project quotation. the reality in software development projects is already converging to a time&material approach rather than scope oriented approach. there are still tenders that are expecting a turnkey project proposal which challenges the bidders to make estimations for the quoting. legacypublic sector companies or agencies are in service for a long period, unlike a regular company who usually has a limited history. being in service for such a long time also may mean that there are more legacy systems compared to a company which was founded within the last decade. this brings the challenge to work with legacy systems which usually comes with its manual processes. even automating or modernizing those legacy systems will bring the challenge to be confident of that particular technology stack. all of the pre-requirements and the challenges that i have mentioned above may be also a part of your current project, which is not a public sector. i tried to emphasize that in the public sector those exist more than the other sectors upon my experience. 
although those pre-requirements and challenges may be conceived as a negative side of the public sector, there are also interesting sides to being in a public sector project: load: the customers or consumers of a public sector company or agency are usually the masses. sometimes you may be targeting all citizens of a country. this also brings the option to be a part of an architecture which needs to scale. this experience is not always possible in all businesses. innovation: if your project is part of a university project, there is a high possibility that this is a research project, which will probably include experimental techniques and approaches. we consulted on the aws infrastructure as part of a covid research project at oxford university and we learned a lot regarding the research methods and the academic environment during our oxford visits. intrinsic motivation: public sector projects are usually done for a good purpose: for the public, for health, or for free services for citizens. we usually felt the responsibility of being a part of such a project because we know it will be used, rather than a commercial project which we do not know whether it will survive in the long term for commercial reasons. at kloia, we feel the responsibility of being a part of public sector projects and we are fully committed to giving our maximum effort to deliver high-quality outputs.
},
{
"title":"Lacework vs AWS Security Services",
"body":"With the prevalence of cloud computing, we are able to build our apps that scale quicker all over the world, more redundant, and more cost-effective. To understand the predominance of cloud services, take a quick glance at Gartner's 2022 report: Worldwide end-user spending on public cloud services is forecast to grow 20.4% in 2022 to total $494.7 billion, up from $410.9 billion in 2021, ...",
"post_url":"https://www.kloia.com/blog/lacework-vs-aws-security-services",
"author":"Emre Oztoprak",
"publish_date":"29-<span>Aug<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/emre-oztoprak",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/lacework-vs-aws-security-services.jpeg",
"topics":{ "aws":"AWS","security":"security","cloudtrail":"CloudTrail","software":"Software","lacework":"Lacework","services":"Services","guardduty":"GuardDuty" },
"search":"29 <span>aug</span>, 2022lacework vs aws security services aws,security,cloudtrail,software,lacework,services,guardduty emre oztoprak with the prevalence of cloud computing, we are able to build our apps that scale quicker all over the world, more redundant, and more cost-effective. to understand the predominance of cloud services, take a quick glance at gartner's 2022 report: worldwide end-user spending on public cloud services is forecast to grow 20.4% in 2022 to total $494.7 billion, up from $410.9 billion in 2021, according to the latest forecast from gartner, inc. in 2023, end-user spending is expected to reach nearly $600 billion. security is one of the critical topics that is being overlooked as cloud computing becomes more popular. while businesses transition to the cloud, they may believe that security concerns are solely the responsibility of cloud providers. to avoid making this error, we should study the shared responsibility model. this diagram is published by aws and can be seen here. we can identify which security-related concerns are under our control by looking at this model. if we list the most typical cloud security mistakes; notleastprivilige means fullaccess :d hard-coded credentials using long live credentials public endpoints like public s3 buckets publicly open ports unencrypted data lack of compliance lack of monitoring bad network design not patching vulnerable packages and os and the list continues\u2026 but how can we meet all of these security requirements? how can we be alerted when someone violates our rules? how can we centrally monitor all environments, rules, systems, and regions from a single location? how can we identify when there is an anomaly with the system? this is where lacework comes to our rescue. lacework collects and evaluates cloud security metrics and data. this allows you to monitor your cloud security status from a single location. you can view these in the ui and create different alarms and reports. in this article, i will compare the differences and similarities between lacework and aws security services. then i'll discuss which one could be used in different scenarios. i\u2019ll compare lacework with guardduty, inspector, config, and security hub. lacework vs amazon guardduty amazon guardduty is a threat detection service that continuously monitors data sources, such as aws cloudtrail data events for amazon s3 logs, cloudtrail management event logs, dns logs, amazon ebs volume data, amazon eks audit logs, and amazon vpc flow logs for malicious activity and unauthorized behavior to protect your amazon web services accounts, workloads. lacework amazon guardduty predefined rules rules optional anomalous detections (detect unknown unknowns + known bads) aws developed ruleset against threats common to all aws customers using known attack tactics region builds a model for every aws account for workloads in all regions region-specific service. needs to be enabled in all 21 regions of an account. findings are isolated to region investigation investigation built into the alerting uses aws detective for further investigation agent application\/container\/k8s visibility = agent application\/container\/host visibility requires additional tools and a different interface lacework vs amazon inspector amazon inspector is a vulnerability management service that continuously scans your aws workloads for vulnerabilities. 
amazon inspector automatically discovers and scans amazon ec2 instances and container images residing in amazon elastic container registry (amazon ecr) for software vulnerabilities and unintended network exposure. lacework amazon inspector run time protection to detect compromised instances for container-based applications easy setup in the aws console (a few clicks) data aggregation across instances in all regions continuous scanning for vulnerabilities on hosts and in container images in ecr os package manager installed software plus non-os language libraries scanned (i.e. packages installed with pip, npm, etc.) for containerized workloads at launch, doesn\u2019t support windows os for ec2 or distroless container images vulnerabilities in container registries are correlated with containers running in your environment no correlation between vulns in ecr and running containers lacework vs aws config aws config is a cloud auditing service that offers an inventory of current resources as well as tracking aws resources to examine compliance and security levels. lacework aws config learns user and entity behavior (ueba) for each account and alerts when anomalous behavior occurs rules-based approach to discovering when resource configuration has changed config and compliance are out of the box features provides information about whether your resources are compliant with configuration rules you specify after account integration, all activity is available in all regions regional service that needs to be enabled in every account and every region discovery and detection made easy with high context events for troubleshooting and remediation easy tracking for change management lacework vs aws security hub aws security hub collects and compares security data from aws services and other vendor security services to best practices and compliance standards. lacework aws security hub platform approach to security with aggregated data for all your aws accounts single place that aggregates, organizes, and prioritizes your security alerts or findings from multiple aws services and vendor services provides customer-centric evaluation of your security posture with baseline behavioral analytics trying to bring \u201Csingle-pane-of-glass\u201D consolidation of scattered security tools multiple accounts supported after integrations are set up with multi-region resource assessment region-based service, which must be set up in every individual region you may have workloads in conclusion lacework may be handy if you are working with more than one cloud provider and have many environments. from a centralized place, you can monitor the security status of all environments and cloud providers. however, if you simply use aws, aws native services will function. you can receive data from other accounts if you define an account as a security account and make it a delegated administrator."
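One of the comparison points above is that GuardDuty is region-scoped, so its findings stay in the region where the detector runs. The boto3 sketch below walks the available regions and counts findings per detector, to show the kind of aggregation you otherwise get from Security Hub, a delegated administrator account, or a platform like Lacework; it assumes your credentials are allowed to call GuardDuty in each region it tries.

```python
import boto3
from botocore.exceptions import BotoCoreError, ClientError

# GuardDuty findings are isolated per region, so a global view means
# iterating over every region that has a detector enabled.
session = boto3.session.Session()

for region in session.get_available_regions("guardduty"):
    guardduty = session.client("guardduty", region_name=region)
    try:
        detector_ids = guardduty.list_detectors()["DetectorIds"]
    except (ClientError, BotoCoreError) as exc:
        # e.g. a region that is not enabled for this account
        print(f"{region}: skipped ({exc})")
        continue
    if not detector_ids:
        continue  # GuardDuty is not enabled in this region
    for detector_id in detector_ids:
        finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
        print(f"{region}: detector {detector_id} has {len(finding_ids)} findings")
```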
},
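The conclusion above suggests designating a security account as a delegated administrator and notes that GuardDuty is a regional service that must be enabled in every region. A minimal sketch of what that could look like with boto3 is below, assuming credentials with the necessary permissions and that the delegation call runs from the AWS Organizations management account; the account ID is a placeholder.

```python
# Sketch: enable GuardDuty in every region of an account and delegate a
# security account as the GuardDuty administrator. Assumes boto3 credentials
# with sufficient permissions; the account ID below is a placeholder.
import boto3

SECURITY_ACCOUNT_ID = "111122223333"  # hypothetical delegated-admin account

session = boto3.session.Session()

for region in session.get_available_regions("guardduty"):
    gd = session.client("guardduty", region_name=region)
    try:
        if not gd.list_detectors()["DetectorIds"]:
            detector_id = gd.create_detector(Enable=True)["DetectorId"]
            print(f"{region}: created detector {detector_id}")
        # Delegate GuardDuty administration to the security account
        # (per region, from the Organizations management account).
        gd.enable_organization_admin_account(AdminAccountId=SECURITY_ACCOUNT_ID)
    except Exception as exc:  # some regions may be disabled for the account
        print(f"{region}: skipped ({exc})")
```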
{
"title":"Getting the best value from migrating to AWS: Application Modernization",
"body":"AWS MAP (Migration Acceleration Program) has been around for several years. Before the program was announced, the most common terminology used for migrations was \u201CLift&Shift\u201Dwhich means migrating \u201Cas-is\u201D. The following other approaches are usually considered as \u201CReplatforming\u201D and \u201CRefactoring\u201D. The following diagram is showing different path alternatives for the migrations: Ref: htt...",
"post_url":"https://www.kloia.com/blog/getting-the-best-value-from-migrating-to-aws-application-modernization",
"author":"Derya (Dorian) Sezen",
"publish_date":"22-<span>Aug<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/migration-acceleration-program.jpeg",
"topics":{ "aws":"AWS","cloud":"Cloud","kloia":"kloia","migrationtocloud":"migrationtocloud","migration-acceleration-program":"Migration Acceleration Program","map":"map" },
"search":"24 <span>oct</span>, 2022getting the best value from migrating to aws: application modernization aws,cloud,kloia,migrationtocloud,migration acceleration program,map derya (dorian) sezen aws map (migration acceleration program) has been around for several years. before the program was announced, the most common terminology used for migrations was \u201Clift&shift\u201Dwhich means migrating \u201Cas-is\u201D. the following other approaches are usually considered as \u201Creplatforming\u201D and \u201Crefactoring\u201D. the following diagram is showing different path alternatives for the migrations: ref: https:\/\/aws.amazon.com\/blogs\/enterprise-strategy\/cloud-native-or-lift-and-shift\/ as kloia, we have been focusing on application modernization projects by 2015 which including code refactoring or rearchitecting. together with a such software modernization, the opportunity for replatforming (hosting the software on container\/kubernetes\/serverless) becomes also possible. the reality is, without refactoring, there is low value coming with \u201Clift&shift\u201D of the legacy applications. by 2018, aws included application modernization as a program under \u201Cmicrosoft workloads competency\u201D. the following is a reference from an aws blogpost regarding .net modernization pathways: ref: https:\/\/aws.amazon.com\/blogs\/modernizing-with-aws\/why-you-should-modernize-legacy-net-applications-on-aws-and-how-we-can-help\/ as one of the first four \u201Capplication modernization\u201D partners of aws around the world, we have completed several modernization projects. three of them are published under the aws blog: kloia approach for .net application modernization is not limited only with .netcore transformation, but also includes several solutions to generic software architectural problems. our typical approach for modernization begins with understanding the business domain and preparing a modernization roadmap that supports the business. the following describes the steps of our approach: event storming: an event storming workshop helps us and the customer to understand the domain model, bounded contexts, aggregate roots, actors and flows. risk-value(complexity-value) graph: this graph guides us about which bounded context to begin splitting from. here is an example: applying strangler-fig: based on the decision given in the previous section, we decide on how to split the context. splitting strategy may include parallel-run, data synchronization or an event-driven approach. validation: for acceptance tests, we are enforcing to increase the coverage of the functional tests to validate the functionality of the splitted context. after the validation, we repeat the steps from the 3th step and begin splitting for the next context. during modernisation projects, kloia leverages an iteration based approach. and all projects have a dedicated cross functional team (including analysts, developers, devops engineers and qa engineers ) for instance, a transition from on-premise desktop client to web application has short term and long term solutions. for short-term, there are several techniques for webrun, which do not need code change. for a long term solution, we first set an anti-corruption layer(acl) to let the incremental changes for further iterations. and using strangler fig pattern, we move the functionality step step to the web based app. based on the requirements, we usually suggest react based ui with reactive backend wherever applicable. 
regarding mitigating potential issues, we always propose several alternativesand choose the appropriate one together considering the risk, budget and strategy. this type of modernisation involves replatforming and refactoring , which affects the infrastructure by including new aws services. this usually results in transition from iaas to more paas services, including managed dbs, managed container services, even including serverless lambda services. in our experience, resource utilization of ec2 for non-container environments is low, usually 5% to 15% on average. after transition to containers, resource utilization increases, and depending on your policies, it can go up. on the other side, together with the new .netwhich works on container\/linux os, dewindowsification also results in cost reduction. there will also be some cost increases, together with new webui, as desktop applications does not require a server, but with new ui technology, we need to introduce new servers as well. besides, based on the data model, we may be using new database alternatives including documentdb or qldb which may bring additional aws services. having such new technologies will bring flexibility and robustness to the infrastructure. it will also be possible to implement disaster recovery for the new infrastructure much easier than with the existing legacy set up. referring to our previous experience of hybrid-cloud migrations and environments where the workload is spread between on-prem and aws, companies usually tend to benefit their existing invested technologies and tools, including: backup\/restore firewall\/security file server storage database ha\/dr procedures hypervisor this list is not exhaustive, and may grow depending on the existing tools. our recommendation is that a transition should not be limited with the functionality of the existing tools\/technologies. besides, based on our previous track record on database modernisation , both on rdbms and non-relational alternatives, including - graph db imdg (in-memory data grid) qldb (quantum-ledger database) key-value stores column-based (like cassandra) message queues (like kafka) to store events time series dbs database modernization depends on the data model and data requirements. in our experience, no business should exist only with an rdbms! introducing qldb to track records which helped them for compliance or modernizing from mssql to serverless aurora and documentdb are some examples we have been through. aws map structure the current aws map has restructured the migration process in to 3 phases: ref: https:\/\/aws.amazon.com\/migration-acceleration-program\/ here are what is expecting in each step: assess: rapid discovery tco report migration readiness assessment briefings & workshops immersion day mobilize discovery & planning migration plan business case \/ migration evaluator migration&modernization experience skills\/center of excellence landing zone operating model security & compliance migrate & modernize migrate operate & optimize modernize here is a visualized version of map steps: as i have described in the beginning of this blogpost, our previous experience regarding modernize (within the last section in map) helps us to find an optimal experience for the aws migration projects. there is no single modernization project where we didn\u2019t introduce a new database or a new approach for cross-cutting concerns. 
in conclusion, our approach to modernization has been evolving with the collective experience we have gained through these projects. we can observe in the market that still: ~99% of applications are acid (atomic, consistent, isolated, durable), ~95% of applications are monolithic, and ~80% of applications in enterprise companies more than 10 years old are legacy. these are applications whose frameworks are no longer supported, or for which major framework updates have not yet been applied. these numbers show that cloud adoption also needs a major focus on software refactoring to get the maximum benefit from the cloud."
},
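The strangler-fig step described above comes down to routing: traffic for a bounded context that has already been split out goes to the new service, while everything else still reaches the legacy monolith. Below is a minimal sketch of such a routing facade in Python (Flask plus requests); the hostnames and the extracted path prefix are hypothetical, and a production setup would more likely use an ALB rule, API gateway, or reverse proxy instead of application code.

```python
# Sketch of a strangler-fig routing facade: requests for an already-extracted
# bounded context go to the new service, everything else still hits the legacy
# monolith. Hostnames and the extracted path prefix are hypothetical.
from flask import Flask, Response, request
import requests

app = Flask(__name__)

LEGACY_URL = "http://legacy-monolith.internal"       # assumption
NEW_SERVICE_URL = "http://orders-service.internal"   # extracted bounded context

EXTRACTED_PREFIXES = ("/orders",)  # contexts already split out


@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    # Route by path prefix: extracted contexts go to the new service.
    target = NEW_SERVICE_URL if request.path.startswith(EXTRACTED_PREFIXES) else LEGACY_URL
    upstream = requests.request(
        method=request.method,
        url=f"{target}{request.full_path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)
```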
{
"title":"Cloud Governance with AWS Control Tower",
"body":"AWS Control Tower simplifies the process of setting up a new baseline for multi-account AWS environments that is secure, well-architected, and ready to use with a few clicks. This includes the configuration of AWS Organizations, centralized logging, federated access, mandatory guardrails, and networking. Control Tower is one of the best ways to start with AWS, it helps to start with buil...",
"post_url":"https://www.kloia.com/blog/cloud-governance-with-aws-control-tower",
"author":"Mehmet Bozyel",
"publish_date":"22-<span>Aug<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/mehmet-bozyel",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws-governance-with-aws_control-tower.jpeg",
"topics":{ "aws":"AWS","cloud":"Cloud","log":"log","aws-control-tower":"AWS Control Tower","sso":"SSO" },
"search":"22 <span>aug</span>, 2022cloud governance with aws control tower aws,cloud,log,aws control tower,sso mehmet bozyel aws control tower simplifies the process of setting up a new baseline for multi-account aws environments that is secure, well-architected, and ready to use with a few clicks. this includes the configuration of aws organizations, centralized logging, federated access, mandatory guardrails, and networking. control tower is one of the best ways to start with aws, it helps to start with built-in governance and best practices. source: aws control tower aws control tower is based on a number of aws services, such as aws organizations, aws identity and access management (iam) (including service control policies), aws sso, aws config, aws cloudtrail, and aws service catalog. aws control tower structure shared accounts aws control tower creates accounts that provide separated environments for specialized roles in your organization as a best practice for a well-architected multi-account environment. these accounts are for management, log archival, and security auditing. management used for billing for all accounts in an organization, creating new accounts, and managing access to all accounts log archive used as a repository of logs of api activities and resource configurations from all accounts. audit a restricted account for your security and compliance teams to gain read and write access to all accounts. aws single sign-on (sso) aws control tower sets up aws single sign-on (sso) to make it easy to centrally manage access to multiple aws accounts. additionally, it gives users single sign-on access to all of the assigned accounts from a single location. guardrails guardrails are rules that provide ongoing governance for your overall aws environment. each guardrail enforces a single rule and is expressed in plain language. guardrails have two behaviors as preventive and detective guardrails. preventive guardrails maintain your accounts' compliance by explicitly denying permission to disable or make any change to critical policy, configuration settings or resources. this is implemented by using service control policies in your aws organizations. detective guardrails detect non-compliance of resources within your accounts, such as policy violations, and provide alerts through the dashboard. these are implemented using aws config rules aligned with aws lambda functions. there are three types of guardrails. mandatory guardrails are always enforced. strongly recommended guardrails are designed to enforce some common best practices for well-architected, multi-account environments. elective guardrails enable you to track or lock down actions that are commonly restricted in an aws enterprise environment. guardrail examples account factory account factory is essentially an aws service catalog product which helps to automate and standardize the secure provisioning of new accounts according to defined security principles such as region selection and network configuration. in the create account section, account and aws sso details can be set separately. also, terraform can be used to provision and customize your accounts with \"aws control tower account factory for terraform\" (aft). summary aws control tower is a great way to start aws and govern multi-account aws environments. building and maintaining a long-term multi-account structure is simpler with aws control tower. it builds a landing zone with accounts and services needed to manage aws environments securely and easily. 
aws control tower helps to start with built-in governance and best practices on the cloud journey to aws."
},
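Preventive guardrails are described above as Service Control Policies. The sketch below shows a hand-rolled equivalent with boto3: an SCP that denies disabling CloudTrail, attached to an OU. This only illustrates the mechanism; Control Tower provisions and manages its own guardrail SCPs, and the OU ID here is a placeholder.

```python
# Illustrative sketch of how a preventive guardrail is expressed under the
# hood: a Service Control Policy that denies disabling CloudTrail, attached to
# an OU. Control Tower manages its own guardrail SCPs; this is only a
# hand-rolled equivalent with placeholder IDs.
import json
import boto3

org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDisablingCloudTrail",
        "Effect": "Deny",
        "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-cloudtrail-changes",
    Description="Preventive guardrail example",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # placeholder OU ID
)
```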
{
"title":"Longhorns\u2019 Actual Size Grows Much Larger Than Given PVC Size",
"body":"Once upon a time, CSI wasn\u2019t an option for us. So we went with another storage solution and something happened. Its huge horns were rising beyond the horizon\u2026 It was Longhorn and its \u201CActual size\u201D\u2026 Just joking\u2026 As kloia SRE team we want to talk about an incident that occured with Longhorn on one of our projects. TLDR; Longhorn volumes' actual size was getting larger and larger and we sol...",
"post_url":"https://www.kloia.com/blog/longhorns-actual-size-grows-much-larger-than-given-pvc-size",
"author":"Tunahan Dursun",
"publish_date":"30-<span>Jun<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/tunahan-dursun",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/longhorn-actual-size-grows-much-larger-than-given-pvc-size_copy.jpeg",
"topics":{ "sre":"sre","longhorn":"Longhorn","postmortem":"Postmortem","outage-management":"Outage Management","pvc":"PVC" },
"search":"01 <span>aug</span>, 2024longhorns\u2019 actual size grows much larger than given pvc size sre,longhorn,postmortem,outage management,pvc tunahan dursun once upon a time, csi wasn\u2019t an option for us. so we went with another storage solution and something happened. its huge horns were rising beyond the horizon\u2026 it was longhorn and its \u201Cactual size\u201D\u2026 just joking\u2026 as kloia sre team we want to talk about an incident that occured with longhorn on one of our projects. tldr; longhorn volumes' actual size was getting larger and larger and we solved it by enabling recurrent snapshots. environment we had an rke2 cluster with 3 worker nodes. we were using longhorn as our storage provider for workloads. each node had several physical disks attached to them and an lvm configuration was used for scalability. normally we would use csi for storage provisioning but this wasn\u2019t an option because we didn\u2019t have access to vsphere api. so we chose longhorn for this job. environment schema incident as prometheus wrote new monitoring data at every scraping duration longhorn volumes\u2019 actual size was getting larger. even though prometheus was removing old wal files (prometheus retention policy was in place) longhorn volumes\u2019 actual size wasn\u2019t shrinking. at one point it exceeded the given pvc size and started to fill up nodes' disk space critically. nodes were about to be unavailable because of the disk pressure. how did we hear about this? zabbix was actively monitoring worker nodes\u2019 disk usage and it alerted our sre team at defined thresholds (80% - 95%). why was the longhorn actual size getting larger? prometheus was writing all over the place randomly, it was using different blocks in our block devices. so the block device couldn\u2019t know which parts were not used anymore. also there weren't any features available for space reclamation like fstrim. prometheus pvc size (100gb) was large (actually this was the expected pvc size). so longhorn filled up our nodes\u2019 disk spaces pretty quickly. what went well? we haven\u2019t experienced any downtime. prometheus' data wasn\u2019t critical. our node disk sizes weren\u2019t homogeneous and longhorn replica rebalancing feature works well. lvm configuration was in place for scalability, so adding more disks was an option if necessary. solution first, giving less pvc size could be better to slow down disk usage if you can. but our expected pvc size for prometheus was already 100gb minimum. we enabled recurrent snapshots for prometheus longhorn volume. we set concurrent snapshot count to \u201C1\u201D. so everyday at 00:00 a.m longhorn took a new snapshot and removed the old one. everytime longhorn did that recurrent snapshot, it merged the old snapshot and the new one together, reducing the actual size of longhorn volume like the linux fstrim command."
},
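For reference, the recurring-snapshot fix described above can also be applied declaratively rather than through the Longhorn UI. The sketch below creates a Longhorn RecurringJob (daily snapshot at 00:00, retain 1, concurrency 1) with the Kubernetes Python client; the group/version and spec field names follow the Longhorn RecurringJob CRD as documented for recent releases and should be checked against the Longhorn version you actually run.

```python
# Hedged sketch: creating a Longhorn RecurringJob (daily snapshot, keep one,
# concurrency 1) through the Kubernetes Python client. CRD group/version and
# spec field names are assumptions based on recent Longhorn releases; verify
# them against your cluster's installed CRDs.
from kubernetes import client, config

config.load_kube_config()

recurring_job = {
    "apiVersion": "longhorn.io/v1beta1",
    "kind": "RecurringJob",
    "metadata": {"name": "daily-snapshot", "namespace": "longhorn-system"},
    "spec": {
        "cron": "0 0 * * *",    # every day at 00:00
        "task": "snapshot",
        "groups": ["default"],  # applies to volumes in the default group
        "retain": 1,            # keep only the latest snapshot
        "concurrency": 1,
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="longhorn.io",
    version="v1beta1",
    namespace="longhorn-system",
    plural="recurringjobs",
    body=recurring_job,
)
```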
{
"title":"Kafka Connect Elasticsearch Sink Connector",
"body":"If you have events\/messages that you want to store in elasticsearch, Kafka Connect is the way to go. It allows you to store the Kafka messages in elasticsearch with the help of elasticsearch sink connector using custom configurations. There is not much documentation available online but don\u2019t worry, I will walk you through how you can publish messages to a specific kafka topic and have t...",
"post_url":"https://www.kloia.com/blog/kafka-connect-elasticsearch-sink-connector",
"author":"Baran Gayretli",
"publish_date":"10-<span>Jun<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/barangayretli",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/kafka-connect-elasticsearch-sink-connector.jpeg",
"topics":{ "docker":"Docker","grafana":"grafana","kafka":"kafka","elasticsearch":"Elasticsearch","connect":"Connect","software":"Software","kafkaconnect":"kafkaconnect" },
"search":"01 <span>aug</span>, 2024kafka connect elasticsearch sink connector docker,grafana,kafka,elasticsearch,connect,software,kafkaconnect baran gayretli if you have events\/messages that you want to store in elasticsearch, kafka connect is the way to go. it allows you to store the kafka messages in elasticsearch with the help of elasticsearch sink connector using custom configurations. there is not much documentation available online but don\u2019t worry, i will walk you through how you can publish messages to a specific kafka topic and have them stored in elasticsearch easily. requirements you need to have the following installed: docker docker-compose i will run kafka, zookeeper, kafka-connect and elasticsearch using docker. if you haven\u2019t changed your docker configuration before, i would recommend you to increase your memory to 6-8gb just to be safe. get started change \/etc\/hosts file for kafka since in docker-compose.yml file kafka_advertised_host_name is set to \u201Ckafka\u201D, i need to do a small change in \/etc\/hosts file version: '2' services: zookeeper: container_name: zookeeper image: wurstmeister\/zookeeper ports: - 2181:2181 - 2888:2888 - 3888:3888 kafka: image: wurstmeister\/kafka:2.12-2.5.1 container_name: kafka depends_on: - zookeeper ports: - \"9092:9092\" environment: kafka_zookeeper_connect: zookeeper:2181 kafka_broker_id: \"42\" kafka_advertised_host_name: \"kafka\" kafka_advertised_listeners: plaintext:\/\/kafka:9092 elasticsearch: container_name: elastic image: docker.elastic.co\/elasticsearch\/elasticsearch:7.10.2 ports: - \"9200:9200\" - \"9300:9300\" environment: - xpack.security.enabled=false - discovery.type=single-node - bootstrap.memory_lock=true - \"es_java_opts=-xms512m -xmx512m\" - cluster.routing.allocation.disk.threshold_enabled=false ulimits: memlock: soft: -1 hard: -1 connect: container_name: kafka-connect image: confluentinc\/cp-kafka-connect:3.3.1 ports: - \"8083:8083\" depends_on: - zookeeper - kafka volumes: - $pwd\/connect-plugins:\/connect-plugins environment: connect_bootstrap_servers: kafka:9092 connect_rest_port: 8083 connect_group_id: \"connect\" connect_config_storage_topic: connect-config connect_offset_storage_topic: connect-offsets connect_status_storage_topic: connect-status connect_replication_factor: 1 connect_config_storage_replication_factor: 1 connect_offset_storage_replication_factor: 1 connect_status_storage_replication_factor: 1 connect_key_converter: \"org.apache.kafka.connect.json.jsonconverter\" connect_key_converter_schemas_enable: \"false\" connect_value_converter: \"org.apache.kafka.connect.json.jsonconverter\" connect_value_converter_schemas_enable: \"false\" connect_internal_key_converter: \"org.apache.kafka.connect.json.jsonconverter\" connect_internal_value_converter: \"org.apache.kafka.connect.json.jsonconverter\" connect_producer_interceptor_classes: \"io.confluent.monitoring.clients.interceptor.monitoringproducerinterceptor\" connect_consumer_interceptor_classes: \"io.confluent.monitoring.clients.interceptor.monitoringconsumerinterceptor\" connect_rest_advertised_host_name: \"connect\" connect_zookeeper_connect: zookeeper:2181 connect_plugin_path: \/connect-plugins connect_log4j_root_loglevel: info connect_log4j_loggers: org.reflections=error classpath: \/usr\/share\/java\/monitoring-interceptors\/monitoring-interceptors-3.3.0.jar after i got my docker-compose ready, docker-compose up -d docker ps you should see that containers are up and running: your urls are: http:\/\/localhost:9200\/ - elasticsearch 
http:\/\/localhost:8083\/ - kafka connect configure kafka connect after you have given it some time to stabilize and waited a little, you should send the following request: post http:\/\/localhost:8083\/connectors content-type: application\/json { \"name\": \"elasticsearch-sink\", \"config\": { \"connector.class\": \"io.confluent.connect.elasticsearch.elasticsearchsinkconnector\", \"tasks.max\": \"1\", \"topics\": \"example-topic\", \"key.ignore\": \"true\", \"schema.ignore\": \"true\", \"connection.url\": \"http:\/\/localhost:9200\", \"type.name\": \"_doc\", \"name\": \"elasticsearch-sink\", \"key.converter\": \"org.apache.kafka.connect.json.jsonconverter\", \"key.converter.schemas.enable\": \"false\", \"value.converter\": \"org.apache.kafka.connect.json.jsonconverter\", \"value.converter.schemas.enable\": \"false\", \"transforms\": \"insertts,formatts\", \"transforms.insertts.type\": \"org.apache.kafka.connect.transforms.insertfield\\$value\", \"transforms.insertts.timestamp.field\": \"messagets\", \"transforms.formatts.type\": \"org.apache.kafka.connect.transforms.timestampconverter\\$value\", \"transforms.formatts.format\": \"yyyy-mm-dd't'hh:mm:ss\", \"transforms.formatts.field\": \"messagets\", \"transforms.formatts.target.type\": \"string\" } } go to http:\/\/localhost:8083\/connectors to make sure your connector is created. things to note: \"topics\": \"example-topic\" \u2192 your index name for elasticsearch \"connection.url\": \"http:\/\/localhost:9200\" \u2192 your elasticsearch url \"value.converter\": \"org.apache.kafka.connect.json.jsonconverter\" \u2192 the type of the value \"transforms.formatts.field\": \"messagets\" \u2192 this is the formatted timestamp. grafana requires \"yyyy-mm-dd't'hh:mm:ss\" format. all the fields that start with \u201Ctransforms\u201D are there to convert the timestamp. messages stored in elasticsearch after you have sent you post request, simply run: docker exec -i kafka bash -c \"echo '{\\\"request\\\": {\\\"userid\\\" : \\\"23768432478278\\\"}}' | \/opt\/kafka\/bin\/kafka-console-producer.sh --broker-list kafka:9092 --topic example-topic\" this will publish a dummy message to kafka. i should be able to see this message on elasticsearch with the help of kafka connect. let\u2019s check if we successfully sent a kafka message and stored it in elasticsearch. go to http:\/\/localhost:9200\/example-topic\/_search?pretty you should see the following return: { \"took\" : 256, \"timed_out\" : false, \"_shards\" : { \"total\" : 1, \"successful\" : 1, \"skipped\" : 0, \"failed\" : 0 }, \"hits\" : { \"total\" : { \"value\" : 3, \"relation\" : \"eq\" }, \"max_score\" : 1.0, \"hits\" : [ { \"_index\" : \"example-topic\", \"_type\" : \"_doc\", \"_id\" : \"example-topic+0+0\", \"_score\" : 1.0, \"_source\" : { \"request\" : { \"userid\" : \"23768432478278\" }, \"messagets\" : \"2022-04-13t20:42:05\" } }, { \"_index\" : \"example-topic\", \"_type\" : \"_doc\", \"_id\" : \"example-topic+0+1\", \"_score\" : 1.0, \"_source\" : { \"request\" : { \"userid\" : \"23768432432453\" }, \"messagets\" : \"2022-04-13t20:42:14\" } }, { \"_index\" : \"example-topic\", \"_type\" : \"_doc\", \"_id\" : \"example-topic+0+2\", \"_score\" : 1.0, \"_source\" : { \"request\" : { \"userid\" : \"23768432432237\" }, \"messagets\" : \"2022-04-13t20:42:23\" } } ] } } bash script to perform all the operations mentioned above you can also use my docker-compose file and simply run startup.sh to avoid all the blood and tears. 
git clone https:\/\/github.com\/barangayretli\/kafka-connect-sink-connector.git \/bin\/bash startup.sh hooray, that\u2019s it! go to my github repository to check out the source code! possible errors note: if you ever face the flush timeout error while you are trying to process a massive amount of data, just increase flush.timeout.ms field. it is 5 seconds by default. [2022-05-13 21:38:04,987] error workersinktask{id=log-platform-elastic-0} commit of offsets threw an unexpected exception for sequence number 14: null (org.apache.kafka.connect.runtime.workersinktask:233) org.apache.kafka.connect.errors.connectexception: flush timeout expired with unflushed records: 15805 e.g \u201Cflush.timeout.ms\u201D: 100000 this will allow kafka connect enough time to send the data to elasticsearch without having timeout errors. bonus this part is optional. if you have completed the steps above, now you are ready to visualize the messages with grafana by adding elasticsearch as the datasource! go to configuration\u2192data sources\u2192add data source\u2192select elasticsearch and fill out the settings as the following. you need to keep in mind that your kafka topic name corresponds to the index name in elasticsearch. after filling the required fields, now you are ready to see your elasticsearch logs on grafana!"
},
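The same workflow (register the Elasticsearch sink connector, produce a test message, then query the index) can also be scripted. The sketch below uses the requests and kafka-python packages and assumes the docker-compose stack from the post is running and the "kafka" hostname resolves as described there.

```python
# Sketch of the post's workflow from Python instead of curl and the console
# producer: register the sink connector, send a JSON test message, then query
# Elasticsearch. Assumes the docker-compose stack is up and the "kafka"
# hostname is mapped in /etc/hosts as described in the post.
import json
import requests
from kafka import KafkaProducer

connector_config = {
    "name": "elasticsearch-sink",
    "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "tasks.max": "1",
        "topics": "example-topic",
        "key.ignore": "true",
        "schema.ignore": "true",
        "connection.url": "http://localhost:9200",
        "type.name": "_doc",
    },
}

# 1. Register the connector with the Kafka Connect REST API.
requests.post("http://localhost:8083/connectors", json=connector_config, timeout=30)

# 2. Produce a JSON test message to the topic the connector watches.
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("example-topic", {"request": {"userId": "23768432478278"}})
producer.flush()

# 3. Verify the document landed in Elasticsearch.
print(requests.get("http://localhost:9200/example-topic/_search?pretty", timeout=30).text)
```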
{
"title":"Capybara vs Karate UI Comparison",
"body":"Introduction Over a decade, the throne of UI Testing has been occupied by Selenium WebDriver. Many solutions strengthened Selenium\u2019s hold on the throne by making it more accessible, and easier to use but some dared to challenge Selenium, to dethrone it and to take over its stead. In this blog post, we will tell the story of these courageous contenders. The first challenger is Capybara. W...",
"post_url":"https://www.kloia.com/blog/capybara-vs-karate-ui-comparison",
"author":"Muhammet Topcu",
"publish_date":"07-<span>Jun<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/capybara-vs-karate.jpeg",
"topics":{ "test-automation":"Test Automation","software-testing":"Software Testing","bdd":"BDD","cucumber":"Cucumber","selenium":"Selenium","capybara":"Capybara","unittesting":"unittesting","karate":"Karate","qa":"QA","json":"JSON" },
"search":"26 <span>jan</span>, 2024capybara vs karate ui comparison test automation,software testing,bdd,cucumber,selenium,capybara,unittesting,karate,qa,json muhammet topcu introduction over a decade, the throne of ui testing has been occupied by selenium webdriver. many solutions strengthened selenium\u2019s hold on the throne by making it more accessible, and easier to use but some dared to challenge selenium, to dethrone it and to take over its stead. in this blog post, we will tell the story of these courageous contenders. the first challenger is capybara. with its powerful dsl and useful utilities, usually used with cucumber to strengthen readability and usability, capybara is a solid candidate for the throne. the second challenger is karate. it defies the page object model and wants to rewrite the formulaic ways of selenium and create a brand new framework containing everything required for a complete testing environment without depending on any other framework. now, before we get into the arena and see the clash of these two champions, let us take a quick glance at the fundamentals. what is ui testing? to sum up in simple terms, ui testing is about ensuring two things: whether the user actions performed by mouse, keyboard, and other input devices are handled properly. whether the visual elements (buttons, images, links, etc.) function and are displayed as intended. and here, ui testing gets divided into two parts: manual testing and automated testing. manual testing is basically the foundation of automation testing, since the latter follows the scenarios created by the former. but not everything can be automated and not everything might be feasible to automate. thus, manual testing is still kicking and it is an indispensable part of the testing process. it is not possible to talk about ui testing without mentioning selenium. selenium is an open-source automated testing framework, which is used to validate web applications across different platforms and browsers. it is compatible with different programming languages such as java, c#, php, ruby, perl, and python. the framework has an undeniable authority in the ui testing - it is so fundamental that you get to know it even before you choose your starter pok\u00E9mon. with the fact that it\u2019s supported across many programming languages and different web browsers, it wouldn\u2019t be wrong to say that selenium is the most popular web automation tool at the moment. sure, ui testing is much more than these, and we could write hundreds of pages about it, but we don\u2019t want to agitate the spectators more than we already did, do we? what is capybara? capybara is a ruby framework that helps to create automation scenarios for user stories in behavior-driven development. in the projects that follow behavior driven development (bdd), we use capybara and cucumber. by combining cucumber and capybara, it\u2019s possible to create infinite scenarios for your test suites with the help of predefined steps. some highlights about capybara: tons of cool helpers such as click_button, click_link, fill_in, enabling you to locate elements and make actions without writing css or xpath. it has been around for a fair amount of time and if you use it with selenium webdriver, then pretty much every question has been asked already. it supports different kinds of web drivers. capybara automatically waits for the content to appear on the page, eliminating the need for manual waits. follows page object model. 
speaking of which\u2026 have you seen our bdd helper gem which enables you to develop test suits swiftly yet? what is karate ui? karate is an open-source test automation framework that provides both api and ui testing tools. in kloia, we use karate for mainly api testing purposes, since it covers pretty much everything we need for api testing. it uses the gherkin syntax, making writing code as easy as writing plain text. however, since its ui side takes slow but firm steps forward, it provides a solid alternative for other frameworks such as selenium, and capybara. this blogpost written by peter thomas, the creator of the karate framework may give you better insight about what it actually tries to accomplish. some highlights about karate: it\u2019s a pretty new framework, thus there are not enough resources to look upon. it uses the gherkin syntax, making it easy to write and understand the code. provides different kinds of waiting options to suit your needs. it uses chrome devtools protocol for automation. it makes it possible to mix api and ui tests within the same script. debug option is available only for visual studio code users at the moment. no need to create step definitions. it does not follow the page object model. direct comparison it might give a better insight to see how the implementation of a same scenario is handled by different frameworks. example scenario: clicking a random product on a website and verifying its category name. karate (.feature file): @kloiacase feature: browser automation background: * configure driver = { type: 'chrome', showbrowserlog: false, showprocesslog: false, showdriverlog: false} * def mainpage = read('classpath:tests\/data\/mainpage.json') * def searchpage = read('classpath:tests\/data\/searchpage.json') * def productpage = read('classpath:tests\/data\/productpage.json') given driver 'https:\/\/n11.com' and optional(mainpage.surveydeny).click() and input('#searchdata', 'deodorant') when submit().click(\".searchbtn\") * def first = searchpage.searched_items_by_position_css * replace first.number = \"1\" then click(first) #waitfor is enough normally. it may be used as assertion since it fails when can not find any element * waitfor(productpage.add_to_cart_button) * if (!exists(productpage.add_to_cart_button)) karate.fail(\"add to cart button does not exist.\") capybara (.feature file): feature: browser automation background: given visit homepage and decline survey scenario: click random product and hover random category on categories menu when select random sub category on sub categories and click random product on category detail page then verify category name note that it takes less time to create scenarios on karate at first, since implementing step definitions is not needed. however, after initial struggle and reaching certain milestones of step definitions, capybara might seem more swift and easy to read. but hey, you know what they say, it\u2019s different strokes for different folks. capybara (webelements) since we follow the page object model, the web elements and function definitions for each page are stored in separate files. 
here is an example of how it looks: class mainpage def initialize @survey_decline_css=\".dn-slide-buttons.horizontal > .dn-slide-deny-btn\" @search_box_id=\"searchdata\" @search_button_css=\".searchbtn\" @cat_menu_items_css=\".catmenuitem>[title='%s']\" @cat_sub_menu_items_css=\".subcatmenuitem>[title='%s']\" @cat_menu_items_list_css=\"li.catmenuitem\" @cat_sub_menu_items_list_css=\"li.catmenuitem.active li.subcatmenuitem a[title]\" end def decline_survey find(@survey_decline_css).click if page.has_css?(@survey_decline_css) click_link end def search_item(arg) fill_in(@search_box_id, with: arg) end def click_search_button find(@search_button_class).click end def hover_on_category_by_name arg find(@cat_menu_items_css % arg).hover end end karate (webelements) in karate, it is possible to define webelements as key-value pairs in a json format and declare them as variables by calling them in the feature file with read function. see the feature file example above. { \"surveydeny_css\": \".dn-slide-buttons.horizontal > .dn-slide-deny-btn\", \"submit_id_css\": \"#submitbutton\", \"search_box_id_css\": \"#searchdata\", \"search_button_css\": \".searchbtn\", \"cat_menu_items_css\": \".catmenuitem>[title='']\", \"cat_sub_menu_items_css\": \".subcatmenuitem>[title='']\", \"cat_menu_items_list_css\": \"li.catmenuitem\", \"cat_sub_menu_items_list_css\": \"li.catmenuitem.active li.subcatmenuitem a[title]\" } changing generic webelement locators to use specific names or the item order seems simpler in capybara, only using % operator to send the text to be replaced. since using new lines for different actions is encouraged in karate, using the replace() method makes the scenario body longer. when it comes to other features such as wait functions, karate introduces a variety of options. although most of them are useful, it\u2019s worth mentioning that some of them do not work as intended. for example submit() function might let you down at an unexpected time since it is unreliable most of the time. however with the expanding community and increasing usage of the framework, these kind of functions can be corrected or removed entirely to ensure more consistent function in the future updates. the main disadvantage of karate against capybara is the limited amount of helpers which allows you to find elements by using identifiers other than css and xpath such as name, id, value, title, alt text of image, etc. summary here is our quick comparison table: capybara karate do you want to use pom? \u2713 \uD800\uDD02 do you need an all-in-one framework (api, ui, benchmark, etc) and to be able to use them in the same script? \uD800\uDD02 \u2713 do you need a webdriver other than selenium? \u2713 \u2713 do you need to engage with something fresh, and contribute to its growth? \uD800\uDD02 \u2713 do you want to locate webelements by attributes other than css and xpath? \u2713 \uD800\uDD02 do you want to use a bdd approach? \u2713 \u2713 in the end, these two champions have their own strong suits and some of these strengths may not apply to your case. ultimately it falls to your preferences and specific needs. to ensure efficiency, specific tools must be chosen for every task. a baseball bat can drive a nail as well as a hammer but you may hurt the hammer\u2019s feelings along the way. so pick wisely, hammer watches..."
},
{
"title":"Kloia Becomes a Partner of Tigera",
"body":"Tigera is an active Cloud-Native Application Platform provider with observability for containers and Kubernetes. Tigera is also creator and maintainer of Project Calico - an open source networking and security solution for containerized workloads and Kubernetes environments. Tigera offers Calico in three packages: Calico Open Source Calico Cloud Calico Enterprise Calico Open Source You c...",
"post_url":"https://www.kloia.com/blog/kloia-becomes-a-partner-of-tigera",
"author":"Emin Alemdar",
"publish_date":"12-<span>May<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/emin-alemdar",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/kloia-becomes-a-partner-of-tigera.jpeg",
"topics":{ "kubernetes":"Kubernetes","cncf":"CNCF","partner":"partner","tigera":"tigera","calico":"calico" },
"search":"12 <span>may</span>, 2022kloia becomes a partner of tigera kubernetes,cncf,partner,tigera,calico emin alemdar tigera is an active cloud-native application platform provider with observability for containers and kubernetes. tigera is also creator and maintainer of project calico - an open source networking and security solution for containerized workloads and kubernetes environments. tigera offers calico in three packages: calico open source calico cloud calico enterprise calico open source you can run calico open source and have standard and core features of calico for networking and security. you can also use ebpf dataplane and windows dataplane with calico open source. the support is community-driven in calico open source. calico cloud calico cloud is a saas offering of calico enterprise. it provides a pay-as-you-go offering for an active security platform for cloud-native applications running on kubernetes. calico cloud has all the features of calico open source and offers additional features like siem integration, image assurance, malware protection and flow visualizer. the best part of calico cloud is that you don\u2019t have to manage and maintain the platform. you can get it up and running in minutes and it provides easy installation and configuration for users. calico enterprise calico enterprise is a self managed platform offering for calico with all the features of calico open source and most of the features of calico cloud. with calico enterprise you can install and manage the platform whichever environment you use in your infrastructure. this option brings some operational burden but you have full control over the platform and infrastructure. you can find the architecture of calico enterprise below. this diagram is from the tigera website. you can use calico products on different kubernetes platforms both on-premise and cloud. for example, you can integrate rancher, amazon eks, openshift and many more kubernetes implementations. with the power of ebpf, you can have both performance and security benefits of calico everywhere. being able to have all these security features in your kubernetes environments is becoming more crucial every day. with tigera and calico products you can secure your cloud-native applications on multi-cluster, multi-cloud and hybrid cloud platforms. as kloia, we are proudly announcing our new partnership with tigera. with this partnership, we aim to help our clients everywhere to implement and support calico's cloud-native application security platform for containers, kubernetes, and cloud!"
},
{
"title":"Meet with kloia @ KubeCon + CloudNativeCon in Valencia!",
"body":"The kloia team is excited to be part of KubeCon + CloudNativeCon Valencia next month and we hope to see you there! If you aren\u2019t currently registered, we\u2019d love to hook you with a 20% off badge code. We have a lot of exciting things happening in our booth SU20 - Pavilion 2 in person and online: See the demo of our latest solutions Chat with our DevOps experts and solutions architects Get...",
"post_url":"https://www.kloia.com/blog/meet-with-kloia-kubecon-cloudnativecon-in-valencia",
"author":"Derya (Dorian) Sezen",
"publish_date":"28-<span>Apr<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/kloia-kubecon-cloudnative.jpeg",
"topics":{ "cncf":"CNCF","valencia":"valencia","kubecon":"kubecon","longhorn":"Longhorn","keda":"keda" },
"search":"12 <span>may</span>, 2022meet with kloia @ kubecon + cloudnativecon in valencia! cncf,valencia,kubecon,longhorn,keda derya (dorian) sezen the kloia team is excited to be part of kubecon + cloudnativecon valencia next month and we hope to see you there! if you aren\u2019t currently registered, we\u2019d love to hook you with a 20% off badge code. we have a lot of exciting things happening in our booth su20 - pavilion 2 in person and online: see the demo of our latest solutions chat with our devops experts and solutions architects get a chance to win an meta quest 2 and amazon alexa! grab some kloia swag - in-person only the platform team in kloia has been investing in the cncf ecosystem for several years, and we have been a cncf silver member since 2020. being a member has provided us with several benefits like: monthly cncf cadence calls networking within cncf participating in board elections (even being a candidate for the board!) pre-acknowledge of early-stage cncf candidates our team focuses on broadly applicable solutions. those solutions are usually modules or libraries built with reuse in mind. this approach gives us speed and snowballs our learnings into components that are tried-and-tested in many industries and companies of various sizes. community building activities kloia consultants are one of the founders of cncf istanbul and have organized a series of cncf meetups since 2020. cncf projects contributions kloia is active on cncf projects such as longhorn and keda. we have been using several cncf projects actively, such as opentelemetry, litmus, grpc, knative, crossplane, falco, cilium, argocd, harbor, apo, helm, envoy, jaeger, and rook. visit our booth at kubecon! you\u2019re welcome to visit us and meet our team at booth su20. here is a spoiler: we are planning to announce two new solutions and two new tools at kubecon."
},
{
"title":"Tyk Gateway vs Amazon API Gateway",
"body":"An API gateway is in its simplest form, a bridge between the clients and your APIs. API Gateways act as a reverse proxy to client requests by routing them to appropriate services and returning an appropriate response. There are many different benefits of using API gateways, but the most common are : Enhancing security by disabling direct access to API\u2019s, being able to white and black lis...",
"post_url":"https://www.kloia.com/blog/tyk-gateway-vs-amazon-api-gateway",
"author":"Veysel Pehlivan",
"publish_date":"21-<span>Apr<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/veysel-pehlivan",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/aws-api-gateway-vs-tyk.jpeg",
"topics":{ "aws":"AWS","lambda":"lambda","amazon":"Amazon","api":"API","gateway":"gateway","tyk":"Tyk" },
"search":"21 <span>apr</span>, 2022tyk gateway vs amazon api gateway aws,lambda,amazon,api,gateway,tyk veysel pehlivan an api gateway is in its simplest form, a bridge between the clients and your apis. api gateways act as a reverse proxy to client requests by routing them to appropriate services and returning an appropriate response. there are many different benefits of using api gateways, but the most common are : enhancing security by disabling direct access to api\u2019s, being able to white and black list ips simplifying the client by moving logic for calling multiple services from the client to api gateway reducing latency by handling possible round trips to many services as a single point of entry translating from web-friendly api protocols to many other protocols used internally handling authentication to control which data is transmitted to your apis. ensuring the api does not get flooded with requests by setting up cache, rate limiting & quotas for this post, we decided to compare tyk gateway to amazon api gateway, because of the wide use of aws as one of the biggest cloud providers globally and tyk api gateway, as it's the open source gateway that powers the highest rated api management platform according to gartner peer insights, with a market leading 4.7 out of 5. production-ready authentication authentication enables companies to keep secure their public apis and networks by allowing them to be used by only authenticated users. the user or computer has to prove its identity to the requested resource. in client-server architectures, this mechanism is best practice and almost inevitable. tyk gateway tyk gateway oss supports lots of industry-standard authentication and authorization options to lock-down apis and handles the traffic securely. tyk gateway supports basic authentication, bearer tokens, json web tokens(jwt), multiple auth, oauth 2.0, openid connect, custom plugins, written in many languages for complex authentication and authorization scenarios, including legacy auth servers. tyk gateway is seamlessly integrated with almost every access control solution. tyk gateway works with redis, and it is a highly available and consistent key-value store. redis stores in-memory tyk session objects. the session objects can include metadata, access rights, policies, tags, and many other things. you can use stored in-memory session objects in custom tyk gateway custom plugins and middlewares, and it is convenient to process this data for any requirement. there is no alternative that can be used as open source software that supports all of these features, which makes tyk stand out compared to its competitors. aws gateway amazon api gateway supports aws cognito or lambda authorizers for access management. aws cognito is a managed service and it supports sign-in with social identity providers and enterprise identity providers via saml 2.0 and openid connect. lambda authorizers are aws lambda functions constructed to control access to the api gateway with user-defined logic. users are required to create and maintain their lambda authorizers. this lambda function can programmatically fine-tune your authorization process, or it can connect to already existing authentication mechanisms running on your, let\u2019s say, on-premises. only disadvantage about using custom authorizer lambda functions is that users are required to code and maintain their custom lambda function. 
tyk gateway aws gateway jwt and bearer tokens \u2705 supports through aws lambda authorizers or aws cognito oauth 2.0 \u2705 supports through aws cognito openid connect \u2705 supports through aws cognito custom authentication \u2705 supports through aws lambda authorizers policies api gateways policies give granular authorization control after authentication mechanism. these controls are very important in terms of controlling, limiting and invoicing customers. tyk gateway policies are json-based documents that give you granular control of apis for rate limiting, access rights, and quotas, and are applied as soon as with hot reload support. they can be used with trial keys as a temporary policy with a fixed expiration date. we can apply different policies for each environment. also, tyk gateway policies have a granular path and method based control feature which allows you to define policies for each api\u2019s version. aws gateway on the aws api gateway, you can allow and restrict user-based access with iam user and group policies. however, for this, these access rights must be defined for each iam user. if we want to control rate and quota limits separately, we can also use aws gateway resource policy and usage plan services. aws api gateway allows you to build api versions with a gateway like tyk. there\u2019s a feature on amazon api gateway called stage variables. stage variables act like environment variables and can be used to change the behavior of your api gateway methods for each deployment stage; for example, making it possible to reach a different back end depending on which stage the api is running on. keep in mind that applying policies on aws has some caveats, you may define your policies at the api level, usage plan level, iam level and\/or resource policies level. tyk gateway aws gateway access rights \u2705 iam user and group policies ip level rate limiting \u2705 supports through usage plans key level rate limiting \u2705 supports through resource policies versions \u2705 supports through aws api gateway stage variables white-black list for url paths for api versioning, enabling or disabling access to paths or http methods is important. white-black listing allows you to block the access instead of vanishing the paths. tyk gateway tyk gateway has black and white list features that allow or block specified paths and methods (post, get). adding a path or method to a blacklist will force it to be blocked. by using the tyk blacklisting feature, you can depreciate your resources easily. this feature allows you to block access to paths or methods. in this way, tyk gateway makes api versioning easier for you. adding a path to a whitelist will cause other paths to become blacklisted. this means you can open specific endpoints and close others by just adding paths that will open to the whitelist. moreover, there is regex support to define white and black lists. aws gateway api gateway resource policies allow users to define access policies at the api and method levels. resource policies can be used to white\/black list the access to the entire api or selected methods. however, resource policies does not support regex, users would be required to define each ip address or cidr blocks. tyk gateway aws gateway white-black list for url paths \u2705 supports through resource policies white-black list regex support \u2705 \u274C rate limit and quota for consumer api key rate limiting is one of the fundamental aspects of the api gateway. 
it is used to control the rate of requests to the servers. it protects services from being overloaded. quota is similar to rate limiting. however, it is not used to protect services from overwhelming api resources. it is used to regulate the usage of api resources. tyk gateway tyk gateway has key level and api level rate limiting. key level rate limiting is focused on controlling traffic from individual sources. as a use case, let's say there is an api service that has different pricing plan such as silver and gold and platinum. this way users who need to consume more of the service faster, can pay more for the service in the form of a plan. when they try to use more than a limit, the tyk gateway key level rate-limiting feature stops them. api level rate limiting is used to defend our services from dos and ddos attacks. tyk gateway has a quota feature. let's say, you want to offer 5,000 requests to the api per month. you can implement that by just adding quota to tyk gateway. tyk gateway handles resetting and managing. aws gateway api gateway allows users to configure usage plans to allow customers to access selected apis, and begin throttling requests to those apis based on defined limits and quotas. api gateway also allows users to block ip addresses using resource policy definitions. tyk gateway aws gateway ip level rate limiting \u2705 supports through aws resource policies key level rate limiting \u2705 supports through usage plans api level rate limiting \u2705 \u2705 request manipulation (middleware) \/ transform tyk gateway tyk gateway has a powerful middleware scripting custom plugin. in the middleware, you can intercept and manipulate requests pre and post execution chains by javascript functions. request contexts, session, and specs objects are exposed in the middleware pipeline. request context manipulation is not limited to the header section, it can apply to body, url, and all other contextual attributes. by changing the body, url, query string of the request, you are able to change upstream endpoints and fulfill business requirements. post-middleware have access to session objects (metadata, quota, policies), so after execution, you are able to do anything with these objects. in addition to middleware scripts, the transform feature can cover many cases, it can change every attribute of the request, including method type. you can also convert soap services to rest endpoints so you can keep using your legacy services while modernizing them. moreover, if you compile tyk gateway yourself, you can use an external cli tool like jq, a popular and powerful json processor aws gateway amazon api gateway does not have the middleware scripting support tyk gateway has. in middleware, you can intercept and process pre- and post-execution requests, along with lambda functions. for input manipulation, after integrating the api gateway with a lambda function, by default, the request is delivered as-is. if you want to intervene and manipulate the input, you need to change the configuration of the integration request and not use \u201C lambda proxy integration\u201D. also if you need manipulating with api gateway, you would create a mapping template after integrating lambda and amazon api gateway. custom middleware lambda has some disadvantages such as users having to code and maintain lambda functions. on top of that, for each api gateway request will invoke the middleware function and the actual lambda function, doubling the cost. 
tyk gateway aws gateway request manipulation \u2705 supports through lambda functions custom jsvm middleware tyk gateway tyk gateway includes a javascript virtualization environment called jsvm to execute javascript code without a browser, which can also be used with the middlewares. this enables tyk gateway to have a unique feature called virtual endpoints that allows you to define javascript functions as api endpoints. virtual endpoints are added on top of the existing api methods, and can be used to aggregate data from different resources, or to produce a dynamic response object that converts or computes data from upstream services. aws gateway aws api gateway does not include a javascript virtualization environment, because aws lambda provides execution runtime for most of the programming languages. so, api gateway does not support virtual endpoints as tyk gateway does. every api endpoint should be defined explicitly with a target lambda function that can use different programming languages, including javascript. tyk gateway aws gateway virtual endpoints \u2705 supports through lambda functions what else? so far, we\u2019ve only compared tyk gateway component with amazon\u2019s api gateway, but there are lots of other feature sets available with tyk, including their different deployment options (on prem, hybrid and cloud\/multi-cloud) as well as the open and closed source components. for example pump, sync, and identity broker are open source while dashboard and dev portal are closed source components. let\u2019s dive into these components. pump tyk pump provides observability which is really important in order to take full control of apis and identify bottlenecks, including security and operational issues. it has rich integrations with external analytic storage such as elasticsearch, kafka, prometheus, and traffic analysis can be done with these tools. also, tyk pump supports sharded analytics, so every api or organization can use its own analytics tool. sync tyk-sync is a command-line tool and library to manage and synchronize with version control systems (vcs). tyk-sync used to dump all the api\u2019s and policies to vcs as well as publish it back to another environment. it has also support for using swagger\/openapi json files to publish apis. graphql support one of the great features of tyk gateway is universal data graph. it offers us a graphql service to aggregate data from multiple services. you do not need to write code. you just need to create your schemas and configure data sources amazon api gateway does not support graphql because aws offers its graphql service separately, called aws appsync. both apigateway and appsync are services that help users create apis, so users would be required to use both the appsync and the apigateway to attain graphql features. dashboard the tyk gateway dashboard is a gui and visual analytics platform. it provides an easy alternative for developers to set up resources in the tyk gateway. moreover, the dashboard provides a customizable developer portal for api documentation, developer auto-enrolment, and usage tracking. in the tyk gateway dashboard, the developer portal is exposed as a separate component of the application. so, it is up to you to deploy it as an internet-facing application or admin application. developer portal tyk developer portal is used to expose a facade of your apis. it lets third-party developers register and consume your apis. it has swagger support. 
by just adding swagger content into the code editor, or via a link to a public swagger hosted url, you can expose swagger ui to third-party developers. the final verdict based on the different feature sets compared above, it\u2019s fair to say that if we compare tyk gateway with amazon\u2019s api gateway alone, tyk gateway leads by a huge margin. this is mostly because tyk gateway oss is a standalone api gateway tool with a wide set of features, while amazon api gateway is a lightweight service that has to integrate with other aws services to solve the same issues as tyk gateway, but without much flexibility. in this article alone, we have mentioned the use of 8 different aws services to solve the same issues that tyk gateway solves alone. below are various other reasons to choose tyk: the number of aws services has grown massively over the years. the learning curve required to solve the same issues on aws as tyk gateway does is steep. tyk has great documentation and is just easy to work with. tyk oss is free! aws gets costly very quickly as most of our clients also have to spin up other services. tyk has great support, and the power of open source! for any problem you come across, you can find answers either in the tyk docs or by contacting tyk support for assistance. if that just doesn\u2019t cut it, you can always ask the community for guidance. you have the tyk source code at hand. you can fork the latest version of tyk to develop tailor-made solutions for your use case. kloia\u2019s opinion, as an advanced aws consulting partner and a tyk partner, is that even though both of these tools serve the same purpose, they have different use cases; as always in software engineering, there is no silver bullet, and you should choose a solution based on your needs."
},
{
"title":"Production-Ready EKS Cluster With Crossplane",
"body":"In this blog post, I am going to examine Crossplane and demonstrate how to set up a production-ready EKS cluster using Crossplane. Crossplane allows you to manage cloud resources with the Kubernetes API using the kubectl. It helps developers claim cloud resources with just \".yaml\" files, similar to other Kubernetes resource definitions. To match developers' claims, Crossplane lets you de...",
"post_url":"https://www.kloia.com/blog/production-ready-eks-cluster-with-crossplane",
"author":"Cem Altuner",
"publish_date":"07-<span>Mar<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/cem-altuner",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/production-ready-eks-cluster-with-crossplane-1.jpeg",
"topics":{ "devops":"DevOps","kubernetes":"Kubernetes","cncf":"CNCF","github":"Github","eks":"EKS","api":"API","crossplane":"Crossplane","terraform":"terraform","deploy":"deploy" },
"search":"08 <span>mar</span>, 2022production-ready eks cluster with crossplane devops,kubernetes,cncf,github,eks,api,crossplane,terraform,deploy cem altuner in this blog post, i am going to examine crossplane and demonstrate how to set up a production-ready eks cluster using crossplane. crossplane allows you to manage cloud resources with the kubernetes api using the kubectl. it helps developers claim cloud resources with just \".yaml\" files, similar to other kubernetes resource definitions. to match developers' claims, crossplane lets you define infrastructure declaratively without writing any code and without revealing the underlying infrastructure of the specific vendor. there are two reasons why this tool is significant: the crossplane highlights the underlying kubernetes control plane's powerful and flexible execution environment. there is no practical limit to the number of custom resources that may be provided. crossplane offers an alternative to terraform, cdk, and pulumi. crossplane has a predetermined list of providers for major cloud services that cover the most typically deployed services. it is not attempting to be a general-purpose infrastructure-as-code (iac) solution, but rather a companion to kubernetes workloads. it's possible to construct new custom resources using the crds provided by crossplane by extending a kubernetes cluster with ready-to-use crds and then connecting them into your ci\/cd or gitops processes. crossplane is a free and open-source software project. it was initiated by upbound and was subsequently accepted as a sandbox project by the cncf and a couple of months ago the project has been approved to move to the next phase as a cncf incubating project. granular assets may be combined into higher-level abstractions, which are then managed, distributed, and consumed through various means. this blog post includes a crossplane demo, and code samples can be found in this github repository. concepts of crossplane crossplane introduces the managed resource (mr) idea, which is a kubernetes custom resource definition (crd) that defines an infrastructure resource made available by a cloud provider. the figure below shows a few of the mr that crossplane utilizes from the provider-aws api. and also, you can check the complete list of managed resources provided by provider-aws. while managed resources are useful for managing cloud resources, managing a large number of them on a regular basis may quickly become overwhelming. for example, an application developer may not care about the specifics of how an eks cluster is built and maintained, such as vpc and node group settings. kube-config might be the only thing they are interested in. on the other hand, an infrastructure operator may not want to provide developer access to all of the eks options, but just to the ones that are required. crossplane provides techniques for composing managed resources, which enables platform teams to design a new kind of custom resource known as a composite resource (xr). an xr consists of one or more managed resources. crossplane defines and configures this new custom resource using two special resources: like a crd, a compositeresourcedefinition (xrd) specifies the schema for an xr. xrds are cluster scoped. to facilitate the creation of a namespaced xr, the corresponding xrd may optionally include a composite resource claim (xrc). a composition that defines the managed resources that will be included in an xr and their configuration. how does crossplane work? 
crossplane works as a kubernetes operator. it has reconciliation built-in, so it always makes sure that the infrastructure is in the right state. you can't make manual changes to the infrastructure at this time. this method eliminates the possibility of configuration drift. the figure below describes how crossplane works. if you're familiar with terraform, an xrd is similar to the variable blocks of a terraform module, but the composition is the rest of the module's hcl code that defines how to utilize those variables to produce the slew of resources. in this comparison, the xr or claim is similar to a `tfvars` file that provides inputs to the module. platform teams can use rbac to give their development teams access to \"a postgresql database,\" instead of having to deal with access to things like rds instances and subnet groups. a platform team can easily support many teams of application developers in a single control plane because crossplane is built on the kubernetes rbac system. in crossplane, self-service goes even further because each xr can offer different kinds of service. compositions are used to describe how an xr works. this is the primary crossplane api type that controls how crossplane composes resources into a higher level \u201Ccomposite resource\u201D. a composition directs crossplane to build resources y and z when someone creates composite resource x. a new xr can be created either directly in crossplane or via a claim. a platform or sre team is usually the only one with the authority to directly construct xrs. everyone else uses a resource called a \"composite resource claim\" to handle xrs or claims. after creating your own xrds, xrs, and compositions, it is possible to create and push them to dockerhub as packages. packages provide additional features to crossplane, such as support for new types of composite resources and claims, or new types of managed resources. configurations and providers are the two categories of crossplane packages. check the link for further information about how to create and use packages. provisioning a production-ready amazon eks cluster using crossplane let's look at how to provision a production-ready eks cluster using crossplane. users who want to utilize crossplane for the first time have two options. the first option is to utilize a hosted crossplane solution such as upbound cloud. the second is for users who want more freedom and may also install crossplane on their own kubernetes cluster. in this blog post, i will use a minikube running with version v1.23.2. also, kind or existing eks clusters can both be used to provision the management cluster. prerequisites: please install the following tools on your machine before moving on. minikube, kind, or eks cluster kubectl the figure below provides an overview of the configuration of the demo. the figure below provides an overview of the structure of the repository. 
\u251C\u2500\u2500 assets \u2502 \u2514\u2500\u2500 conf.png \u251C\u2500\u2500 aws-creds.conf \u251C\u2500\u2500 aws-eks.yaml \u251C\u2500\u2500 crossplane-config \u2502 \u251C\u2500\u2500 config-k8s.yaml \u2502 \u251C\u2500\u2500 provider-aws.yaml \u2502 \u251C\u2500\u2500 provider-config-aws.yaml \u2502 \u251C\u2500\u2500 provider-helm.yaml \u2502 \u2514\u2500\u2500 provider-kubernetes.yaml \u251C\u2500\u2500 packages \u2502 \u2514\u2500\u2500 k8s \u2502 \u251C\u2500\u2500 crossplane.yaml \u2502 \u251C\u2500\u2500 definition.yaml \u2502 \u251C\u2500\u2500 eks.yaml \u2502 \u2514\u2500\u2500 readme.md \u2514\u2500\u2500 readme.md i am going to create a namespace for crossplane components with the following command: kubectl create namespace crossplane-system after creating the \"crossplane-system\" namespace, i will create a secret with my aws credentials to integrate with aws. export aws_access_key_id=$your_aws_access_key_id$ export aws_secret_access_key=$your_aws_secret_access_key$ echo \"[default] aws_access_key_id = $aws_access_key_id aws_secret_access_key = $aws_secret_access_key \" >aws-creds.conf kubectl -n crossplane-system \\ create secret generic aws-creds \\ --from-file creds=.\/aws-creds.conf after these steps are completed, i am going to install the crossplane on my cluster via helm chart with the following command: helm upgrade --install \\ crossplane crossplane-stable\/crossplane \\ --namespace crossplane-system \\ --create-namespace \\ --wait there are no errors, so the crossplane is ready for usage. after these steps, i am going to install the provider configuration files. provider-aws.yaml apiversion: pkg.crossplane.io\/v1 kind: provider metadata: name: crossplane-provider-aws spec: package: crossplane\/provider-aws:v0.22.0 kubectl apply \\ --filename crossplane-config\/provider-aws.yaml provider-config-aws.yaml apiversion: aws.crossplane.io\/v1beta1 kind: providerconfig metadata: name: default spec: credentials: source: secret secretref: namespace: crossplane-system name: aws-creds key: creds kubectl apply \\ --filename crossplane-config\/provider-config-aws.yaml if the output is \"unable to recognize,\" wait a couple of seconds and re-run the previous command. provider-helm.yaml apiversion: pkg.crossplane.io\/v1 kind: provider metadata: name: crossplane-provider-helm spec: package: crossplane\/provider-helm:v0.9.0 kubectl apply \\ --filename crossplane-config\/provider-helm.yaml provider-kubernetes.yaml apiversion: pkg.crossplane.io\/v1 kind: provider metadata: name: crossplane-provider-kubernetes spec: package: crossplane\/provider-kubernetes:main kubectl apply \\ --filename crossplane-config\/provider-kubernetes.yaml i have already created my composite resource definition, composition, and configuration for a production-ready kubernetes cluster and published it to docker hub as an oci image. config-k8s.yaml apiversion: pkg.crossplane.io\/v1 kind: configuration metadata: name: crossplane-k8s spec: package: cemaltuner\/crossplane-k8s:v0.2.14 kubectl apply \\ --filename crossplane-config\/config-k8s.yaml the code example below provides an overview of custom resource definition, composition and configuration of my package. you can get the code at this github repository. 
definition.yaml apiversion: apiextensions.crossplane.io\/v1 kind: compositeresourcedefinition metadata: name: compositeclusters.prodready.cluster spec: connectionsecretkeys: - kubeconfig defaultcompositionref: name: cluster-aws group: prodready.cluster names: kind: compositecluster plural: compositeclusters claimnames: kind: clusterclaim plural: clusterclaims versions: - name: v1alpha1 . .. ... eks.yaml apiversion: apiextensions.crossplane.io\/v1 kind: composition metadata: name: cluster-aws labels: provider: aws cluster: eks spec: compositetyperef: apiversion: prodready.cluster\/v1alpha1 kind: compositecluster writeconnectionsecretstonamespace: crossplane-system patchsets: - name: metadata patches: - fromfieldpath: metadata.labels resources: - name: ekscluster . .. ... crossplane.yaml apiversion: meta.pkg.crossplane.io\/v1 kind: configuration metadata: name: k8s spec: crossplane: version: \">=v1.6\" dependson: - provider: crossplane\/provider-aws version: v0.22.0 - provider: crossplane\/provider-helm version: v0.9.0 run the following command and wait until all packages are ready. kubectl get pkgrev there are no errors, so the configuration of the providers and the package is ready. i am going to create a namespace \"team-a\" for team-a. kubectl create namespace team-a now it is time to provision our production-ready eks cluster. i am going to use the \"aws-eks.yaml\" file to provision the eks cluster. apiversion: prodready.cluster\/v1alpha1 kind: clusterclaim metadata: name: team-a-eks labels: cluster-owner: cem spec: id: team-a-eks compositionselector: matchlabels: provider: aws cluster: eks parameters: nodesize: small minnodecount: 3 run the following command to provision our production-ready eks cluster. kubectl -n team-a apply -f aws-eks.yaml after these steps are complete, run the following command to check the status of our resources and wait until all resources are ready. our cluster is ready to use. the last step is to get the \u201Ckubeconfig\u201D file to integrate with the eks cluster. run the following commands to get and set the \u201Ckubeconfig\u201D file. kubectl --namespace crossplane-system \\ get secret team-a-eks-cluster \\ --output jsonpath=\"{.data.kubeconfig}\" \\ | base64 -d >kubeconfig.yaml export kubeconfig=$pwd\/kubeconfig.yaml kubectl get ns destroy after all of these steps, don\u2019t forget to destroy your resources. run the following command to destroy your resources. unset kubeconfig kubectl --namespace team-a delete \\ --filename examples\/aws-eks.yaml conclusion crossplane works as a kubernetes operator and enables you to use the kubernetes api to deploy, create, and consume infrastructure for any cloud service provider. it allows you to define infrastructure declaratively without having to write any code. the number of custom resources that may be offered has no practical limit. because of this, it gets a thumbs up from me."
},
{
"title":"My First Month at kloia",
"body":"Have you ever wondered what it's like to work at kloia? If you are used to working in corporate systems like me, oh, you are in for a treat surely. Joining kloia may be as exciting as opening the wardrobe into Narnia, though you may find yourself facing not Aslan but Python this time. Come join me in my journey and let\u2019s figure out what happens behind the closed doors of kloia together. ...",
"post_url":"https://www.kloia.com/blog/my-first-month-at-kloia",
"author":"Muhammet Topcu",
"publish_date":"03-<span>Mar<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/muhammet-topcu",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/my-first-month-at-kloia-1.jpeg",
"topics":{ "remote":"Remote","kloia":"kloia","bootcamp":"bootcamp","summit":"summit","brownbags":"brownbags","bullshits":"bullshits","onboarding":"onboarding" },
"search":"08 <span>mar</span>, 2022my first month at kloia remote,kloia,bootcamp,summit,brownbags,bullshits,onboarding muhammet topcu have you ever wondered what it's like to work at kloia? if you are used to working in corporate systems like me, oh, you are in for a treat surely. joining kloia may be as exciting as opening the wardrobe into narnia, though you may find yourself facing not aslan but python this time. come join me in my journey and let\u2019s figure out what happens behind the closed doors of kloia together. just kidding, there are no doors at kloia! who am i and why programming? honestly, i have never been into programming until recent years. i mean, i love computers like everyone else, but i didn\u2019t have anything to do with programming. have i played skyrim over +300 hours throughout my high school years? yes. have i gotten into the wormhole of youtube countless times and found myself watching unfathomable videos like the one about grown-up teletubbies sun baby? definitely. do i start screaming and screeching when i need to be away from the keyboard for the tiniest amount of time? sure, who doesn\u2019t, right\u2026? right!? jokes aside, i have been spending half of my days on computers for approximately eighteen years now and i haven\u2019t seen any addiction symptoms yet\u2026 the thing is, i despise repetitive, time consuming tasks. no, i don\u2019t want to copy and paste multiple excel files into one. no, i don\u2019t want to spend hours of my day doing certain routines which do not deviate at all! nope, i love my keyboard and do not want my \u201Cc\u201D and \u201Cv\u201D keys to fade away due to excessive copy & paste. i do not want my fingers to get cramped over nothing! so you need to believe me when i tell you that i started learning programming for the sake of my keyboard. i mean, everybody does things for the ones they cherish, right? that\u2019s how i learned python, that\u2019s how i got to know selenium\u2026 for the light of my life\u2026 which is rgb by the way. damn, what a lovely keyboard! the bootcamp my journey started not in a village, but in a bootcamp. i wish i could tell you that a grey wizard came to me and told me that i got what it takes to be the savior of the programming world and that i am the one. still, i found out about kloia bootcamp via a newsletter that stated that i have the matching skill set for the qa automation engineer vacancy. the background of the webpage was grey at least; not sure if it means anything special, though. after the registration period, i completed a qualifying test consisting of some java and python questions assessing automation and selenium knowledge and some multiple-choice questions about general programming. even though i couldn\u2019t answer the java questions, i passed the test and got the chance to join the bootcamp. i can say that the bootcamp was a kind of tutorial, in which you can learn the inner workings of things, what to be expected from qa engineers and what kind of mobs\u2026 i mean, bugs you may face while performing certain tasks. for every topic, we had a brief explanation and various examples called \u201Ckata,\u201D with which we tested our understanding of the topic and polished our skills. the bootcamp lasted for only one weekend, but i can say it was as concentrated as any two week-long course you can find on the internet. 
don\u2019t get me wrong, it was a bit exhausting but i got to know things that i didn't even know existed and got the chance to find answers to any questions that popped up during the courses. i assure you that this was the best bootcamp i have ever attended without hesitation, and it has nothing to do with the fact that it was my first and only one! after the bootcamp courses, i was given a case study with a certain deadline to demonstrate the knowledge i gained during bootcamp. it consisted of various tasks under the two main titles called \u201Cui testing\u201D and \u201Capi testing\u201D. i finished the case study and went through the hiring process, which was completely remote, by the way. the hiring process consisted of four meetings: an hr meeting, a technical interview regarding the case study, personality inventory, and job offer. all of them were carried out in a friendly manner and i didn\u2019t stress out even a bit! so this is how i got to be a kloian. now, let\u2019s find out what is to be kloian and what differentiates kloia from the other companies i have worked at, shall we? first meeting and the summit if you have ever worked remotely, you know that the biggest handicap is sometimes it feels like you work just by yourself. i mean i have worked in my previous company remotely for about one year and there were only a bunch of colleagues i met face to face. sometimes i even suspected that i might be dealing with some npcs while communicating with customers. even the emails were generic! i don\u2019t know if some of them were bots, since they only exist as emails for me, but some of them had some poor ai, unfortunately. luckily, just after my hiring process, 2021\u2019s last summit was scheduled and i was invited to go and meet with the team at sapanca. it was indeed a great opportunity to get to know people i would be working with beforehand. i was a little bit anxious honestly, i thought i would be the odd one and it would be hard to become acquainted with the rest of them. but soon after i learned that the company expanded significantly last year and the number of the employees tripled at least and the summit was also the first time for them to meet the rest of the team. relief: obtained. the hotel was great, it was not crowded but it was not shining-empty either. we had an intense schedule ahead, so i made sure to make use of breakfast. we had orientation meetings to learn the company's inner workings, financial briefings about the company\u2019s profit of the year, which might seem strange to you for financial topics to be spoken out loud with the employees. it was unexpected for me, but it is not taboo in kloia and i\u2019ll discuss this in detail below. the first day flew by as we had presentations after presentations and spent the rest of the evening getting to know each other. the next day we had a hackathon. a hackathon is a friendly competition where different teams try to accomplish a certain task in a limited amount of time. there were four teams in our hackathon and i joined the event as a spectator, which was appropriate as the terms they used were something between elven and gibberish for someone like me. the event lasted until the evening and each member of the winning team was rewarded with oculus quest 2, which i envied a little bit. i mean, more than a little... the evening continued with the music in the disco in accompaniment of dj helluri, whom you might have heard. no? then believe me, you should. 
after the disco closed, we carried the night on to the outside of the hotel. the weather was cold, and the temperature was below zero celsius. partly because we were dancing still and partly because alcohol was running in our veins, but surely because of the warmth of people, despite the freezing cold, i felt the warmth inside. my impressions about kloia i just finished my first month at kloia and i would like to share my first impressions with you. i haven\u2019t grasped the whole dynamics and inner workings yet but it will give you the idea. the culture if you\u2019ve ever visited kloia\u2019s website, you might have noticed the motto \u201Cno corporate-bullshit\u201D on the career page. i realized one thing at the summit and just after my first week at work. they mean it. there isn\u2019t any concrete hierarchy structure, there isn\u2019t any superiority complex, there aren't any bullshit procedures you need to go through to do the simplest of tasks. no one assumes, everyone asks. there aren\u2019t any stupid questions. i mean, in every company they claim the same thing, \u201Cshoot any questions,\u201D they say, \u201Cdo not be afraid to ask.\u201D, but they may shoot judgemental looks on your way in return. no one looks at you judgingly at kloia. and it has nothing to do with the fact that everyone works remotely! working remotely at the beginning of my post, i told you that there aren\u2019t any doors at kloia. it was actually a double-barrelled sentence. firstly, there being no doors means you don\u2019t need to go through certain doors to be at work. being kloian means being able to work anywhere you want. you don\u2019t have to spend your valuable time commuting back and forth. you want to work from your home but do not have an adequate setup for an appropriate working environment? no worries, kloia provides a remote budget for everyone to create a comfortable home office. even still, you prefer working somewhere other than your home? at that point workingtons come to rescue. ps. excessive time spent working while lying down might cause neck and back pain. proven by experience\u2026 not of mine but of a friend of mine, of course. open sessions, brown bags the second meaning of there being no doors is transparency. as far as i have seen, you can talk about anything without hesitation. there are even events called \u201Copen sessions\u201D in which you can write down your questions or opinions anonymously. and they are answered and addressed without beating around the bush. no dodging questions, no talking in circles. as i mentioned earlier, financial topics are not taboo either and they are spoken freely. you know the financial status of the company, its yearly profit. wage information is not confidential either, you know what you ought to earn at your current scale. no matter what your current assignment or seniority level is, your opinions are treated equally. what i believe is that where transparency is ensured, trust is materialized. kloia also provides room for their employees to improve themselves. it provides language courses, helps you to get suitable certificates for your work line, let\u2019s you manage your own schedule. with the weekly sessions called \u201Cbrown bag\u201D, one of us makes presentations about specific topics, varying from writing techniques to new software frameworks to spread knowledge. now, to sum up my impressions, please see the bullshits table below for a quick comparison. 
bullshits any other company (most of) kloia do i need permission from my supervisor to take time off? \u2713 \uD800\uDD02 do i need to spend my precious time to file everything i do? \u2713 \uD800\uDD02 do i need to follow strictly created working plans? \u2713 \uD800\uDD02 do i need to ask permission to manage my daily routines? \u2713 \uD800\uDD02 is transparency just a word? \u2713 \uD800\uDD02 is trust just a fairytale? \u2713 \uD800\uDD02 employee contributes and employer does nothing in return? \u2713 \uD800\uDD02 is everything just money in the end? \u2713 \uD800\uDD02 are you having withdrawal symptoms and do you need your daily dose of bullshit? sorry hon, you need to find somewhere other than kloia, for it is bullshit-free! my onboarding process my onboarding process is still continuing at the moment and i\u2019m beginning to see my roadmap with the help of my mentors. as i am originally an energy systems engineer, i have many gaps in my theoretical knowledge regarding software engineering. my mentors show me where i fall short, advise me on how to correct my mistakes, and advise me on which aspects of my mentality i should work on. we schedule two daily meetings, one of which is in the morning and the other one is in the evening. at the start of the day, we discuss what i am going to do during the day and i get advice or i\u2018m given some tasks to be completed about specific topics i\u2019ll need in the future. again, i get a say on my schedule to prioritize tasks and organize it to fit my day. in the evenings, i get answers to my questions and we solve problems together if i encounter any. my goal i can say that main and side quests have already started to become clear and i am starting to figure out which of my stats i should focus on. even though there is a long way to go, thanks to my mentors and teammates, i am sure the road won\u2019t be as steep as it could have been. now i aim to contribute to my company as soon as possible and strengthen my theoretical knowledge as much as i can in the meantime. i know i will get distracted with the side quests and try to learn irrelevant topics, but the flexibility of kloia makes it possible to use these in many different ways. i just hope that i won\u2019t end up with a stealth archer build\u2026 again. and we have come to the end of my journey so far. i hope i didn\u2019t bore you to death and helped you to get familiar with kloia. i don\u2019t know if we will cross paths in this lifetime, but if so, do not hesitate to send a party invite!"
},
{
"title":"OpenID Connect: Authentication between AWS and Bitbucket",
"body":"OpenID Connect: Authentication between AWS and Bitbucket Not using CI\/CD practices is impossible in today\u2019s software development world. Automating all testing, build, and deployment processes ensure that the products we develop are produced faster with better quality and fewer errors. We use various tools to automate these manual processes. But when you automate everything, you should no...",
"post_url":"https://www.kloia.com/blog/openid-connect-authentication-between-aws-and-bitbucket",
"author":"Emre Oztoprak",
"publish_date":"24-<span>Feb<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/emre-oztoprak",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/openid-connect.jpeg",
"topics":{ "aws":"AWS","security":"security","bitbucket":"bitbucket","authentication":"Authentication","openid":"OpenID","authorization":"Authorization" },
"search":"01 <span>aug</span>, 2024openid connect: authentication between aws and bitbucket aws,security,bitbucket,authentication,openid,authorization emre oztoprak openid connect: authentication between aws and bitbucket not using ci\/cd practices is impossible in today\u2019s software development world. automating all testing, build, and deployment processes ensure that the products we develop are produced faster with better quality and fewer errors. we use various tools to automate these manual processes. but when you automate everything, you should not forget one thing: security! we define access to these tools so that they can perform the necessary operations, and we grant these permissions through access and secret keys. although their access is limited, these keys grant access to many services and environments and we don\u2019t want them to be exposed! we rotate the keys at certain intervals and keep them in safe places to prevent this. these operations may be easily done in small environments, but you have a big problem if your environment is growing and you have hundreds of keys to manage. here openid connect or oidc for short comes to our rescue. authentication vs authorization oidc works by adding an extra layer on the oauth 2.0 protocol. oauth 2.0 is an authorization protocol, and oidc is an authentication protocol. we need to understand the distinction between these two. authentication is the stage when you log into a system, server or website. you use your username, password, 2fa code, or ssh key file to log in. the system either accepts or rejects you by making a validation here. authorization is what you can do after logging in. in other words, the permissions granted to your user. for example, you can view only certain pages on a portal or access only a specific directory in a server. oauth 2.0 handles authorization process, on the other hand oidc handles authentication process between applications. of course, oidc and oauth 2.0 don\u2019t use user credentials in any way. they handle this process with the tokens it produces. bitbucket pipelines now that we have understood the oidc protocol let\u2019s see how to use it. i use bitbucket pipelines for deployment, and bitbucket has oidc support. i will make this deployment to aws with oidc. in my bitbucket repo, i select the repository settings and openid connect at the bottom. this is the information needed to establish trust between my aws account and my bitbucket repo. i take note of the provider url and the audience variable on that screen. since this information is private to you, you should not share it. after that, i log in to my aws account and navigate to identity providers > add provider in iam. i choose openid connect as the type and paste the identity provider url, audience that i just got from the repository settings. i perform the verification process by clicking get thumbprint and completing the process by clicking add provider. i created trust between my aws account and my bitbucket repo by adding the identity provider. now i need to assign a role to this identity provider for the necessary permissions. after clicking on the identity provider i created, i select the assign role. i switch to the role creation screen by clicking create a new role. web identity will be selected as the select type of trusted entity, and the identity provider i just created will be selected. after selecting audience, i move on to adding permissions. you need to give the necessary permissions for which services your application uses. 
s3, elasticbeanstalk, ecs, lambda, etc. since i will demo on s3, i choose amazons3fullaccess and continue. click create role and complete. i have completed the things to do on the aws side. now let\u2019s go to bitbucket and set the pipelines.yml file in my repo. bitbucket has many integrations for the pipeline. you don\u2019t need to create a deployment image from scratch. for example, aws elastic beanstalk deployment; - step: oidc: true script: - pipe: atlassian\/aws-elasticbeanstalk-deploy:1.0.2 variables: aws_default_region: $aws_default_region aws_oidc_role_arn: 'arn:aws:iam::xxxxxxxxxx:role\/rolename' application_name: 'my-app-name' environment_name: 'production' zip_file: 'application.zip' or s3 deployment; - step: oidc: true script: - pipe: atlassian\/aws-s3-deploy:1.1.0 variables: aws_default_region: $aws_default_region aws_oidc_role_arn: 'arn:aws:iam::xxxxxxxxxx:role\/rolename' s3_bucket: 'my-bucket-name' local_path: 'build' both examples have oidc support. just write your role arn. you can see other pipeline integrations and examples here. bitbucket pipes integrations | bitbucket learn how to automate your ci\/cd development workflow with pipes. plug and play with over 50 integrations for hosting\u2026bitbucket.org as an example, i created a repo with a single index.html file. i create an artifact and send that artifact to the bucket. i defined role arn, region, and bucket as variables. now i can run pipeline. image: atlassian\/default-image:2 pipelines: default: - step: name: build artifact script: - mkdir artifact - cp index.html artifact\/ artifacts: - artifact\/* - step: name: deploy to s3 deployment: production oidc: true script: - pipe: atlassian\/aws-s3-deploy:1.1.0 variables: aws_oidc_role_arn: $oidc_role_arn aws_default_region: $aws_default_region s3_bucket: $s3_bucket local_path: 'artifact' deployment completed successfully. i completed the deployment without an access-secret key. but to increase security, i can restrict access to the role a little more. for example, i can allow only bitbucket pipelines ip addresses and certain repo uuids. in this way, other repos can not access that role, except the repos i have specified. i select the role and edit the trust relationships. i edit this policy by adding my repo uuid. no repository other than this repository can access this role. as an extra, only bitbucket pipeline ips will be allowed in this policy. i\u2019ve made it one layer more secure. conclusion in this blog post, i talked about authentication, authorization, oidc, and oauth 2.0. deployment with oidc is more secure and easier to manage. even if your role arn leaks, no one can access your aws resources. but if your access keys leak, they can be used anywhere. if your environment and ci\/cd platform support oidc, you can deploy aws resources with this method."
},
{
"title":"The New Era in Performance Testing: k6",
"body":"The k6 load testing tool, supported and developed by Grafana Labs, a very common monitoring software, has taken these performance test reports to a different dimension. Before starting k6 I recommend you read this blogpost to understand what performance testing is and its importance. What is k6? k6 is a modern open-source load testing tool from Grafana Labs. It can be integrated with man...",
"post_url":"https://www.kloia.com/blog/the-new-era-in-performance-testing-k6",
"author":"\u00DCmit \u00D6zdemir",
"publish_date":"08-<span>Feb<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/ümit-özdemir",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/new-era-in-qa-k6.jpeg",
"topics":{ "test-automation":"Test Automation","graphql":"graphQL","grafana":"grafana","k6":"k6","qa":"QA","reporting":"reporting","open-source-framework":"Open Source Framework","performance-testing":"Performance Testing","ci-cd":"CI\/CD","load-testing":"Load Testing","rest":"REST" },
"search":"22 <span>may</span>, 2024the new era in performance testing: k6 test automation,graphql,grafana,k6,qa,reporting,open source framework,performance testing,ci\/cd,load testing,rest \u00FCmit \u00F6zdemir the k6 load testing tool, supported and developed by grafana labs, a very common monitoring software, has taken these performance test reports to a different dimension. before starting k6 i recommend you read this blogpost to understand what performance testing is and its importance. what is k6? k6 is a modern open-source load testing tool from grafana labs. it can be integrated with many ci\/cd tools as well as protocols like rest, graphql, and grpc. k6 helps you create performance tests such as stress tests and load tests. how does it work? (here k6 must be installed on your system, click for installation) the biggest gain is time! actually, k6 is javascript-based. with a simple syntax built using javascript, you create your performance test script and run it in the cli. since it is based on javascript, it offers a flexible, manageable, and uncomplicated structure. here is a simple example. import http from 'k6\/http'; import { sleep } from 'k6'; export default function () { http.get('https:\/\/httpbin.org\/'); sleep(1); } you can use the cli to run this javascript-based k6 script: k6 run script.js then you will see that the tests are running. here is a report that k6 provides by default. it just specifies the http response times and properties of the load test script you create. in the following sections of our article, i will explain how you can customize the reports. virtual user & duration one of the purposes of load tests is to measure how the system responds in different usage scenarios. in this case, you can add or change parameters such as the number of incoming users and the duration of the test. if you run it as follows, you will make the test more meaningful. k6 run --vus 10 --duration 30s script.js you were able to run your test by changing the configuration of your test using the cli without changing your test script. as the output shows, k6 ran your tests for 10 users and 30 seconds. you can also choose to manage through the script you created; simply use: import http from 'k6\/http'; import { sleep } from 'k6'; export let options = { vus: 10, \/\/ duration: '30s', 10 user looping for 30 seconds }; export default function () { http.get('https:\/\/httpbin.org\/'); sleep(1); } after specifying the parameters, you can run it again with the first command you used in the terminal; k6 run script.js you were able to run the same parameters by changing your test script. as the output shows, k6 ran your tests for 10 users and 30 seconds. in addition, even if you have added options in your script, you can edit the configurations with the cli. k6 run --vus 5 --duration 40s script.js also, in some cases, you can ask your tests to behave differently for a certain time period. to do this, you can use the stages block to change behaviors at different periods during the test. import http from 'k6\/http'; import { sleep } from 'k6'; export let options = { stages: [ { duration: '3s', target: 15 }, \/\/ ramp up to 15 users { duration: '5s', target: 25 }, \/\/ ramp up to 25 users { duration: '3s', target: 5 }, \/\/ ramp up to 5 users ], }; export default function () { http.get('https:\/\/httpbin.org\/'); sleep(1); } here i showed 3 different behaviors with a single test script. it can be a feature that you can use frequently when doing stress testing. 
execution k6 offers three different execution models: local, cloud and clustered. local model runs on any server or on your own device, cloud model runs your tests using the k6 cloud infrastructure on the cloud. the clustered way of running has not been developed yet. we have explained the local part in the previous parts of our article. you don't need to add any configuration or code to run cloud tests. after registering app.k6.io\/, connect to the k6 cloud by running the token there in your own cli. change the execute command after your connection is successful. you can also use this token in cli by providing tokens via k6 cloud. k6 login cloud -t 6c898ce723bdbf0ae1c4ea01f53add13a6c720a7465a3e83324c9c3454edbb0f k6 cloud script.js when you run your tests, you will see that your test report is generated on your app.k6.io panel. you can run your tests again, change the configurations and make them work as planned through the cloud panel provided by k6. reporting k6 can display the outputs of a load test in three different ways: as a summary on the terminal at the end of tests, as reports on the cloud, or as reports on other tools such as grafana and influxdb. you can access a more detailed performance test report in the k6 cloud interface, where various metrics and request\/response times are displayed visually. you can use the --out parameter to get the report output in various ways. here, we can produce outputs such as json, csv, as well as metric tools such as grafana, influxdb. k6 run script.js \u2014out influxdb=http:\/\/kloia.com:8186 standard output; the load test includes the number of users, how long it took, detailed http reports. also to output data types such as json, csv as standard output it can be executed as --out json=test.json or --out json=test.csv k6 run script.js \u2014out json=test.json. tools you can use for reporting; amazon cloud watch apache kafka grafana cloud \/ prometheus influxdb + grafana statsd datadog conclusion k6 can be the right solution for you as it has many features such as detailed analysis, report grouping, recording and playback, cloud maintenance. in addition, also be easily integrated into monitoring and ci\/cd processes. check out k6 on https:\/\/k6.io\/ to much more than what we've covered here."
},
{
"title":"Bottlerocket: Operating System to Run Containers",
"body":"Containers are now widely used to package and scale applications. Using general-purpose operating systems as host operating systems brings some difficulties with security, overhead, and issues during updates. AWS designed a free, open-source, and Linux-based operating system called Bottlerocket to solve these problems. This blog post will cover the benefits of Bottlerocket, what problems...",
"post_url":"https://www.kloia.com/blog/bottlerocket-operating-system-to-run-containers",
"author":"Mehmet Bozyel",
"publish_date":"17-<span>Jan<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/mehmet-bozyel",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/bottlerocket-operating-system-to-run-containers.jpeg",
"topics":{ "aws":"AWS","eks":"EKS","bottlerocket":"Bottlerocket","containers":"Containers" },
"search":"08 <span>mar</span>, 2022bottlerocket: operating system to run containers aws,eks,bottlerocket,containers mehmet bozyel containers are now widely used to package and scale applications. using general-purpose operating systems as host operating systems brings some difficulties with security, overhead, and issues during updates. aws designed a free, open-source, and linux-based operating system called bottlerocket to solve these problems. this blog post will cover the benefits of bottlerocket, what problems it solves, how it is used with eks and the update process. bottlerocket is a stripped-down linux distribution with only essential software to run containers. os updates are applied all at once and can be rolled back. bottlerocket improves performance as it is lighter, increases security as it reduces the attack surface, improves uptime with update strategy and reduces management overhead and operational costs. aws-provided builds come with three years of support and aws support plans cover these builds. there is also community support on the bottlerocket github page. security bottlerocket improves security by reducing the attack surface with a minimal package set. in addition to this, it uses a read-only file system and it is checked at boot time with dm-verity. bottlerocket image has no ssh server and a shell to improve security, but there are options to use it as a typical linux system. bottlerocket has a control container and an administrative container that run outside of bottlerocket\u2019s container orchestrator in a separate instance of containerd. control container enabled by default and runs aws ssm agent to run commands or start shell sessions on bottlerocket instances on ec2. admin container is disabled by default. it has an ssh server to log in as ec2-user using the ec2-registered ssh key. it is useful when shell access to the underlying host is needed for development or troubleshooting scenarios because bottlerocket doesn\u2019t have a shell built into the os for security reasons. updates in general-purpose operating systems, update failures are common because of unrecoverable during package-by-package updates. unlike general-purpose linux distributions that include a package manager allowing you to update and install individual pieces of software, bottlerocket downloads a full filesystem image and reboots into it. it can automatically rollback if boot failures occur, and workload failures can trigger manual rollbacks. this update method simplifies the update processes and makes it easier, faster, and safer to perform updates through automation. using a bottlerocket ami with amazon eks in this part, i will set up an amazon eks cluster with bottlerocket and update the os. this procedure requires eksctl version 0.74.0 or later. the version can be checked with the following command: $ eksctl version first, i will create a key pair. $ aws ec2 create-key-pair --key-name bottlerocket --query 'bottlerocket' --output text > bottlerocket.pem i will create a file named bottlerocket.yaml with the following content. 
--- apiversion: eksctl.io\/v1alpha5 kind: clusterconfig metadata: name: bottlerocket region: eu-central-1 version: '1.21' iam: withoidc: true nodegroups: - name: ng-bottlerocket instancetype: m5.large desiredcapacity: 3 amifamily: bottlerocket ami: auto-ssm iam: attachpolicyarns: - arn:aws:iam::aws:policy\/amazoneksworkernodepolicy - arn:aws:iam::aws:policy\/amazonec2containerregistryreadonly - arn:aws:iam::aws:policy\/amazonssmmanagedinstancecore ssh: allow: true publickeyname: bottlerocket bottlerocket: settings: motd: \"hello from eksctl!\" enableadmincontainer: true i set the value for the amifamily field to bottlerocket and the ami field to auto-ssm so that eksctl automatically searches for the correct bottlerocket ami for the different regions. next, i will deploy the cluster with the following command. $ eksctl create cluster --config-file=bottlerocket.yaml it takes about 15 minutes to create the stacks in cloudformation. you can check the progress of the stacks in the cloudformation console: https:\/\/console.aws.amazon.com\/cloudformation\/home \u00A0 when the above command is completed, i can check the cluster with the following command. $ eksctl get cluster i will use the following command to check the os image. $ kubectl get nodes -o=custom-columns=name:.metadata.name,os:.status.nodeinfo.osimage updating bottlerocket first, i will need to connect to the admin container of my bottlerocket node via ssh by using the following command: $ ssh -i .pem ec2-user@ replace \u201C\u201D with the name of your keypair and the \u201C\u201D with the ip of your bottlerocket instance. i will use sheltie to drop into a root shell on your bottlerocket node. [ec2-user@ip-192-168-33-55 ~]$ sudo sheltie now, i will check for updates. bash-5.0# updog check-update an update is available, i can initiate the update. bash-5.0# updog update this will download the new update image and update the boot flags so that when you reboot it will attempt to boot to the new version. when that\u2019s complete, i need to reboot. bash-5.0# reboot and that\u2019s it. the node is now running the latest version of bottlerocket os. using the following commands i will check the version. as you can see we updated one of our nodes. bash-5.0# cat \/etc\/os-release $ kubectl get nodes -o=custom-columns=name:.metadata.name,os:.status.nodeinfo.osimage deleting resources run the following command to delete resources. $ eksctl delete cluster bottlerocket conclusion in this blog post, you've seen what bottlerocket is and its advantages over other linux distributions. bottlerocket is stripped down to decrease operational cost, reduce management complexity and improve security. updates are applied as a single unit and can be rolled back as needed. so you don't have the risk of \"botched\" updates which can make the system unusable. also, aws gives support for aws-provided builds."
},
{
"title":"LocalStack: AWS On Your Laptop",
"body":"LocalStack is a cloud application development tool that provides an easy-to-use test\/mock system. It creates a testing environment on your local computer that emulates the AWS cloud environment in terms of functionality and APIs. LocalStack's primary objective is to assist you in speeding up various procedures, simplifying testing, and saving money on development testing. For example, co...",
"post_url":"https://www.kloia.com/blog/localstack-aws-on-your-laptop",
"author":"Cem Altuner",
"publish_date":"13-<span>Jan<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/cem-altuner",
"featured_image":"https://4602321.fs1.hubspotusercontent-na1.net/hubfs/4602321/locallstack-aws-on-your-laptop-1.jpeg",
"topics":{ "aws":"AWS","cloud":"Cloud","github":"Github","api":"API","localstack":"localstack" },
"search":"01 <span>aug</span>, 2024localstack: aws on your laptop aws,cloud,github,api,localstack cem altuner localstack is a cloud application development tool that provides an easy-to-use test\/mock system. it creates a testing environment on your local computer that emulates the aws cloud environment in terms of functionality and apis. localstack's primary objective is to assist you in speeding up various procedures, simplifying testing, and saving money on development testing. for example, consider the time it takes to spin up an ec2 instance on amazon web services (aws). it can take a few minutes, but it can take 15 minutes or more to spin up an elastic kubernetes service (eks). that is acceptable for production environments, but this amount of time may be excessive when it comes to testing. also, while you are just getting started and testing things out, these procedures can lead to high costs. it is possible to run lambda functions, create dynamodb tables, ecs containers, and more, such as putting your application behind an api gateway with localstack. all of these functionalities are powered by your local machine without ever talking to the cloud. this blog post includes a serverless demo, and code samples can be found in this github repository. localstack has three different customer usage tier options: community edition, pro, and enterprise. each of the usage tiers has its available services and features. pro and enterprise options require payment, but the community edition option is free for personal usage. i am going to use the community edition option for this blog post. the diagram below depicts the various usage tiers (open source, pro, and enterprise) and the capabilities available in each. and also, you can check the complete list of features from this link here. integration with localstack localstack is compatible with a wide range of cloud development tools. there are numerous aspects to cloud development, and there is a large ecosystem of tools to handle them all. localstack allows you to execute your process entirely on your local workstation, whether you're using infrastructure-as-code (iac) to manage your aws infrastructure or developing apps with aws sdks like boto. localstack is compatible with a wide range of cloud development tools. i am going to use the serverless framework for this blog post. get localstack up and running in this section of this blog post, you will see how a simple serverless application can be deployed using localstack. there are a couple of ways you can install localstack. you can install it with a package manager, you can run it on docker with docker-compose, and it is also possible to run localstack in a kubernetes cluster. i am going to install localstack using docker-compose. you can create different configurations based on your requirements with the docker-compose configuration provided in the localstack repository. prerequisites: please make sure to install the following tools on your machine before moving on. 
docker docker-compose (version 1.9.0+) npm docker-compose yaml file template version: \"3.8\" services: localstack: container_name: \"localstack-test\" image: localstack\/localstack:latest network_mode: bridge privileged: true ports: - '4566:4566' environment: - services=${services- } - data_dir=\/tmp\/localstack - lambda_executor=docker-reuse - docker_host=unix:\/\/\/var\/run\/docker.sock - aws_execution_env=true volumes: - \".volume\/tmp\/localstack:\/tmp\/localstack\" - \"\/var\/run\/docker.sock:\/var\/run\/docker.sock\" after creating the `docker-compose.yml` file, get it up and running with the following command. $ docker-compose up localstack-test | localstack version: 0.13.0 localstack-test | localstack build date: 2021-11-23 localstack-test | localstack build git hash: b6046487 localstack-test | localstack-test | 2021-11-24 07:38:12,906 info success: infra entered running state, process has stayed up for > than 1 seconds (startsecs) localstack-test | starting edge router (https port 4566)... localstack-test | ready. as there are no errors, localstack services are ready for usage. serverless framework is a good option to integrate and use with localstack. let\u2019s install it with npm. $ npm install -g serverless $ npm install --save-dev serverless-localstack setting up the app i am going to create a simple backend that creates, deletes, lists, updates, and retrieves customers that can be used through a rest http api using api gateway, lambda, and dynamodb services. the figure below provides an overview of the structure of the repository. the code example below provides an overview of the `getcustomer.py` file. you can get the code at this github repository. import os import json import boto3 if 'localstack_hostname' in os.environ: dynamodb_endpoint = 'http:\/\/%s:4566'% os.environ['localstack_hostname'] dynamodb = boto3.resource('dynamodb',endpoint_url=dynamodb_endpoint) else: dynamodb = boto3.resource('dynamodb') def getcustomer(event, context): table = dynamodb.table(os.environ['dynamodb_table']) # fetch customer from the database result = table.get_item( key={ 'id': event['pathparameters']['id'] } ) # create a response response = { \"statuscode\": 200, \"body\": json.dumps(result['item']) } return response the code example below provides an overview of the `serverless.yml` file. service: serverless-rest-api-with-dynamodb frameworkversion: \">=1.1.0 <=2.70.0\" provider: name: aws runtime: python3.8 environment: dynamodb_table: ${self:service}-${opt:stage, self:provider.stage} ... functions: create: handler: customers\/createcustomer.createcustomer events: - http: path: customers method: post cors: true ... resources: resources: customersdynamodbtable: type: 'aws::dynamodb::table' deletionpolicy: retain ... after these steps are complete, the following command will deploy my application to localstack. $ serverless deploy --stage local the result should be similar to: i will use the given endpoint to create a customer with the following command. $ curl -x post http:\/\/localhost:4566\/restapis\/cfev9cngmk\/local\/_user_request_\/customers --data '{\"firstname\":\"cem\",\"lastname\":\"altuner\"}' the expected output: {\"id\": \"4b19151b-51ad-11ec-8e88-fbb757305d9e\", \"firstname\": \"cem\", \"lastname\": \"altuner\", \"createdat\": \"1638256527.26767\", \"updatedat\": \"1638256527.26767\"} serverless framework is a good option to create and test serverless applications on your local computer with localstack. 
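outside the serverless framework, any aws sdk client can be pointed at the same localstack endpoint; below is a minimal sketch with the aws sdk for javascript v3 (it assumes localstack is listening on localhost:4566, and the region and dummy credentials are placeholders, since localstack accepts any values).
import { DynamoDBClient, ListTablesCommand } from '@aws-sdk\/client-dynamodb';

\/\/ point the client at localstack's edge port instead of the real aws endpoint
const dynamodb = new DynamoDBClient({
  region: 'us-east-1',
  endpoint: 'http:\/\/localhost:4566',
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' }, \/\/ any value works for localstack
});

const listTables = async () => {
  const { TableNames } = await dynamodb.send(new ListTablesCommand({}));
  console.log(TableNames); \/\/ should include the table created by the serverless deploy
};

listTables();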
but if you want to test your whole infrastructure, an infrastructure-as-code approach like terraform would be a better option. conclusion localstack makes it easier to test, save money, and get things done faster when developing your infrastructure. if you want to test your infrastructure on your local computer, localstack can be the right solution for you."
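Since every LocalStack service is reachable through the single edge endpoint on port 4566, you can point any AWS SDK client at it to inspect the resources the serverless deploy just created. Below is a minimal sketch, not taken from the original post: it assumes the community edition is running via the docker-compose file above, that LocalStack accepts dummy credentials (it does not validate them), and that the DynamoDB table name follows the serverless.yml pattern ${self:service}-${stage}, i.e. serverless-rest-api-with-dynamodb-local for the local stage; adjust the name to your own setup.

# A minimal sketch (not from the original post): pointing boto3 at LocalStack's
# edge endpoint on port 4566 to inspect the resources the serverless deploy created.
# The table name below is assumed from the serverless.yml pattern
# "${self:service}-${stage}" with stage "local"; adjust it to your own setup.
import boto3

LOCALSTACK_ENDPOINT = "http://localhost:4566"

session = boto3.Session(
    aws_access_key_id="test",        # LocalStack accepts dummy credentials
    aws_secret_access_key="test",
    region_name="us-east-1",
)

dynamodb = session.resource("dynamodb", endpoint_url=LOCALSTACK_ENDPOINT)
print([t.name for t in dynamodb.tables.all()])   # list tables created by the deploy

table = dynamodb.Table("serverless-rest-api-with-dynamodb-local")  # assumed name
for item in table.scan().get("Items", []):
    print(item)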
},
{
"title":"Kloia Observability Platform Partnerships Named Top 3 Awards",
"body":"Kloia focuses on new technologies on hybrid cloud platforms, DevOps, and the development area. In these areas, parallel with the development, observability platform needs are increasing. The Cloud-Native universe\u2019s choice brings complexity and adoption to those technologies is important for optimizing the developer productivity while providing the DevOps cycle with the required governanc...",
"post_url":"https://www.kloia.com/blog/kloia-observability-platform-partnerships-named-top-3-awards",
"author":"Yetiskan Eliacik",
"publish_date":"11-<span>Jan<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/yetiskan-eliacik",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/ema-top3-awards.jpeg",
"topics":{ "instana":"instana","observability":"observability","humio":"humio","award":"award","partnership":"partnership" },
"search":"11 <span>jan</span>, 2022kloia observability platform partnerships named top 3 awards instana,observability,humio,award,partnership yetiskan eliacik kloia focuses on new technologies on hybrid cloud platforms, devops, and the development area. in these areas, parallel with the development, observability platform needs are increasing. the cloud-native universe\u2019s choice brings complexity and adoption to those technologies is important for optimizing the developer productivity while providing the devops cycle with the required governance and control to ensure cost-efficiency and continuous compliance. observability, availability, and security are the three primary devops challenges today. sre teams are not only focusing on the problem itself; they need to focus on the customer experience, system performance, and scalability of the system. based on those metrics, our engineering team has been investigating to find new products on the observability platform and support our customers. we are long-time partners with instana and humio and they are named for top 3 observability platforms by ema. why ibm observability by instana received the ema top 3 award instana received the ema top 3 award for the platform\u2019s ability to automatically discover and monitor cloud-native and traditional application stacks within the context of their orchestration platform (typically kubernetes), and the underlying data center or cloud infrastructure. instana\u2019s reinforcement learning models continuously learn to watch out for issues similar to the ones that were detected within comparable contexts in the past. instana automatically discovers new applications simply by adding a standard configuration code to your git repository to automatically place, configure, and manage the required agents in order to ensure comprehensive observability.. market segment: automatic end-to-end observability changes to the application stack, code, and release pipeline are the three key reasons for performance degradation and downtime in cloud-native apps as well as traditional enterprise applications. ema top 3 award-winning applications in the \u201Cautomatic end-to-end observability\u201D segment capture these changes in near-real time, at full resolution, and without requiring manual instrumentation, in order to provide: business-driven production insights targeted alerts with problem context automatic root cause analysis monitoring across application environments these components lead to better alignment between it and business through the ability of tuning optimization and resolution actions to optimize specific sets of business kpis. 
business impact: enhance developer and sre productivity decrease mttr lower operational risk by continuously optimizing the application stack resolve issues proactively and optimize applications offer complete observability for traditional and cloud-native apps across data center and cloud empower developers and operations staff to handle complex cloud-native applications independent from where they run why humio received the ema top 3 award humio\u2019s ability to ingest any data, structured or unstructured, from almost any source, including traditional storage, network, and compute (as well as data from business transactions, security sources, gps devices, smartphones, and iot devices), points toward the platform\u2019s ambition to \u201Canswer anything.\u201D humio\u2019s index-free data architecture enables fast real-time querying and alerting for massive amounts of data across numerous sources by intelligently narrowing down the amount of data to query based on the relevant query timeframe, situational data from the query context, and additional knowledge about the character and intent of the query. this dramatically lowers the threshold for developers, operators, and business staff to run queries that can uncover important correlations that were out of reach before. this opens the door to reveal and monitor previously hidden relationships between business kpis, user experience, application performance, code changes, infrastructure configuration, newly adopted technologies, and many other factors. when you log everything, you can basically ask any question. this is exciting. market segment: log management and observability ema top 3 award-winning products make it easy for enterprise customers to ingest, process, store, and analyze end-to-end operations data in full resolution, real time, and in a consolidated manner that simplifies troubleshooting, proactive planning, and application optimization. the \u201Clog everything\u201D paradigm underlines the importance of capturing all aspects of the application stack within context and without exception. this requires the product\u2019s pricing model, data architecture, analytics and machine learning engine, and management architecture to accommodate traditional enterprise apps and complex distributed applications alike. simple tools for developers, operators, and sres to quickly surface the required data are critical. the ema top 3 award-winning products in this category excel in all of these areas to help enterprises close any observability blind spots to enhance staff productivity and minimize operational risk. business impact index-free logging to enhance developer productivity compression for faster reading, writing, storing, and moving of data instant help for users in real-time business-driven optimization of it, devops, and secops machine learning-driven identification of important events built-in cloud-native log management (e.g., for kubernetes) continuous auditing to manage compliance accelerated software development our vision, technology knowledge, and customer need understanding are proved by this award and we are serving our customers with those visionary products."
},
{
"title":"New Way of HCI - Harvester",
"body":"In this blog post, I\u2019m going to dive into the on-premises world and talk about an innovative solution for managing different workloads and platforms from a single pane. HCI (Hyperconverged Infrastructure) is a software-defined infrastructure solution that virtualizes and combines compute, storage and networking layers together. It allows you to manage your traditional data center compone...",
"post_url":"https://www.kloia.com/blog/new-way-of-hci-harvester",
"author":"Emin Alemdar",
"publish_date":"06-<span>Jan<\/span>-2022",
"author_url":"https://www.kloia.com/blog/author/emin-alemdar",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/harvester-hci.jpeg",
"topics":{ "kubernetes":"Kubernetes","harvester":"harvester" },
"search":"08 <span>mar</span>, 2022new way of hci - harvester kubernetes,harvester emin alemdar in this blog post, i\u2019m going to dive into the on-premises world and talk about an innovative solution for managing different workloads and platforms from a single pane. hci (hyperconverged infrastructure) is a software-defined infrastructure solution that virtualizes and combines compute, storage and networking layers together. it allows you to manage your traditional data center components in a single hardware device. before hci, you would have to purchase and manage the storage layer separately. there are many traditional and proprietary hci options available from different vendors like vmware, hpe, nutanix and dell emc. these solutions add some new features to their current virtualization platform. specifically, these solutions use datacenter server hardware with locally attached storage devices and deliver the capacity to the upstream workloads with a software-defined approach. this structure eliminates the need for a separate san device and reduces the operational burden of the storage system. while these abstractions are helpful, these traditional solutions do not embody the modern it approaches in their architecture and their codebases. there are two main problems with these solutions. the first problem is the architecture. these solutions are meant to manage only virtual machines (vm) and traditional it components. but we are now using containers and kubernetes in our environments with all of the other cloud-native technologies for our application workloads. also, adding a software layer on top of the existing platform is not very innovative. it is like the automobile companies building ev models on the same platform with the company\u2019s old combustion engine model cars. it works but it is not completely modern. second problem with these traditional hci solutions is licensing options and pricing. these solutions are extremely expensive. you would have to pay for the hardware, virtualization software, hci software, centralized management software, support subscription separately and then you have to pay extra for some features in these solutions. furthermore, licensing options are complex with these solutions. for example, some of these solutions are quoted based on both the physical cpu count of the nodes and cpu core count of the cpus, separately. in 2020, suse announced a new solution called harvester. in december 2021, they announced the v1.0 ga release of harvester and it is now production-ready. so, what is harvester? harvester is an open source, simple, %100 free-to-use modern hci solution built for running vms and container workloads together. it\u2019s built on open source, cloud native, enterprise-grade technologies such as kubernetes, kubevirt and longhorn. you don\u2019t have to have prior knowledge for these technologies with harvester because it is designed to be easy to understand, operate and benefit from those cloud native technologies. this simplification opens up interesting opportunities with other technologies that can integrate with kubernetes. with harvester\u2019s small footprint, you can install and operate your workloads even at the edge. you can also use the official terraform provider for harvester to manage your virtual machine management platform with iac. as you can see from the architecture diagram above, harvester contains three main components. longhorn, kubevirt and opensuse leap 15.3 os. 
longhorn is a lightweight distributed block storage solution for kubernetes. kubevirt is a vm management toolkit for kubernetes environments. finally, opensuse leap is a linux distro optimized for running kubernetes clusters. harvester supports iso and pxe boot installation methods. you can download and install harvester on your bare metal servers easily. it also supports nested virtualization, so that you can try harvester on top of your existing virtualization platform. the iso image contains all the necessary packages for air-gapped installations. last but not least, i should mention the rancher integration. rancher is an open source kubernetes multi-cluster management platform. rancher integration is one of the most exciting features of harvester. with this integration you can now manage both virtual machine workloads and kubernetes workloads from a single platform. prior to this integration, you would have to manage different environments individually. now you can import harvester clusters to rancher\u2019s virtualization management page and benefit from rancher\u2019s authentication, authorization and rbac featureset for multi-tenant environments. also, you can now deploy rke and rke2 kubernetes clusters on harvester clusters. built-in harvester node driver support is added to rancher in v2.6.3. furthermore, you can get the load balancer and persistent storage support automatically with clusters provisioned on harvester. using these two solutions together will bring a lot of efficiencies with the consolidation of management and operation burden. now let\u2019s move on to the demonstration. in this demonstration, i will explain how to install a harvester cluster, how to join a node to the cluster, how to configure it and manage resources inside the cluster. finally, i will integrate the harvester cluster with rancher and show some features of that integration. i\u2019ve also created a github repository with example terraform codes for managing harvester resources like vms, networks and images that you can use but i will show you how to operate the environment from the ui with screenshots. let\u2019s start with the installation. i will use the iso installation method for this and i will setup a three node cluster. installation is actually pretty simple and straightforward. when the first node boots up, you have two options as you can see from the screenshot. i am choosing the first one with the first node. then i move on with network configuration, choosing the physical nic for the management and hostname. after configuring the network, i configure the dns as well. after configuring the dns, i am creating a cluster token for adding the additional nodes to the cluster. after reviewing the configuration, i select yes and the installation starts. after completion of the installation, the node\u2019s current status is ready. i can now login to the ui using the management url. after logging in to the system for the first time, i see this dashboard. the dashboard consists of general information about the hosts, vms, capacity information and cluster\/vm metrics. i will start configuring the cluster options starting with the host network. i will choose the physical nic and configure the required sections. then i move on to adding vm network configurations. i have created one vm network and chosen the vlan id of 110, but of course you can change it and add other networks. i will now add images to the cluster. there are two options to choose from: url and iso file upload. 
after creating the images, harvester downloads the first image from the global url and uploads the local iso file. now let\u2019s move on to adding the other nodes to the cluster. i will add the screenshots of the second node only but i will have a total of three nodes finally. i am choosing the join option and starting the process by configuring network options. after network configurations, i am going to configure the cluster details like management host and cluster token options and after reviewing all the options i will start the installation. after adding the nodes, i can now see the details of them in the ui as well. now let me create a vm from one of the images i\u2019ve created before. as you can see, i\u2019m creating an ubuntu vm with 2 cpus and 2 gb of memory. i will also add some cloud-config parameters to the vm from the advanced options section. i\u2019ve added some configurations to both user data and network data sections. you can also create templates for these configurations and use them repeatedly. after creating the vm it boots up and i can now connect to it. there are two options to connect the vm from the ui. the first one is the vnc console and the second one is the serial console. also, you can add ssh key to your vms to access them from the network but i haven\u2019t configured it in this demonstration. as you can see, it took an ip address from the dhcp and the vm is automatically placed on node 02. let me connect to the vm and test the internet connection. it works. perfect! as i\u2019ve mentioned before, harvester comes with a preinstalled monitoring stack that includes prometheus and grafana. there are even preconfigured dashboards in grafana and you can explore these from both harvester\u2019s and grafana\u2019s ui. you can also manage these dashboards and add some custom ones according to your needs. now it\u2019s time for the rancher integration. i have installed a single node rancher server for this demonstration. the first step of this integration is importing clusters. in the rancher ui, there is a section called virtualization management that you can import the harvester clusters. i am going to import the cluster now. after creating the cluster from the rancher ui, it pops up a registration url and shows me the steps i need to take to complete the process. from the harvester ui, i will add the url to the required section. now it\u2019s time to create a kubernetes cluster on the imported harvester. i will use the built-in harvester node driver to create a node template and create the cluster with these templates. as you can see, it\u2019s now provisioning the rke cluster from scratch on the harvester cluster. with this integration, managing both vm and kubernetes environments from a single platform is now possible and very straight-forward. conclusion we are living in the modernization era right now and modernizing our infrastructure is at the core of this transformation. having a cloud-native hci solution like harvester will really help with this. harvester has huge potential and has benefits like being open source and %100 free. you can stop using the old, proprietary, expensive, traditional hci solutions and start the innovation at the core of your platform with harvester."
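Because Harvester builds on Kubernetes and KubeVirt, the VMs created above are ordinary VirtualMachine custom resources, so they can also be inspected from a script rather than the UI or Terraform. The sketch below is an illustration, not part of the original walkthrough: it assumes the kubernetes pip package, a kubeconfig for the Harvester cluster available locally, and the upstream KubeVirt API group/version kubevirt.io/v1 for VirtualMachine objects.

# A minimal sketch (not from the original post): listing the VirtualMachine custom
# resources in a Harvester cluster with the Python Kubernetes client. Assumes the
# "kubernetes" pip package, a kubeconfig for the cluster available locally, and the
# upstream KubeVirt API group/version kubevirt.io/v1 for VirtualMachine objects.
from kubernetes import client, config

def main():
    config.load_kube_config()              # load the locally available kubeconfig
    custom = client.CustomObjectsApi()
    vms = custom.list_cluster_custom_object(
        group="kubevirt.io", version="v1", plural="virtualmachines"
    )
    for vm in vms.get("items", []):
        meta = vm["metadata"]
        # print namespace, name and the desired running state of each VM
        print(meta["namespace"], meta["name"], vm.get("spec", {}).get("running"))

if __name__ == "__main__":
    main()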
},
{
"title":"How to Use Git Hooks for Shift Left on Continuous Integration",
"body":"Githooks serve a cross-cutting role, where any operation can be performed in Git, which is one of the most used version control systems. With these hooks, developers can perform the stages of Continuous Integration Pipelines on their machines. In this way, they can catch errors that may be encountered during any step as early as possible. Hence, they can apply the Shift Left Principle wh...",
"post_url":"https://www.kloia.com/blog/how-to-use-git-hooks-for-shift-left-on-continuous-integration",
"author":"Muhammed Said Kaya",
"publish_date":"26-<span>Dec<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/muhammed-said-kaya",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/How-to-Use-Git_Hooks-for-shift-Left-on-Continuous-Integration.jpeg",
"topics":{ "continuous-integration":"Continuous Integration","git":"Git","hooks":"hooks" },
"search":"06 <span>jan</span>, 2022how to use git hooks for shift left on continuous integration continuous integration,git,hooks muhammed said kaya githooks serve a cross-cutting role, where any operation can be performed in git, which is one of the most used version control systems. with these hooks, developers can perform the stages of continuous integration pipelines on their machines. in this way, they can catch errors that may be encountered during any step as early as possible. hence, they can apply the shift left principle which is to take a task that's traditionally done at a later stage of the process and perform that task at earlier stages. for example, i manage the ci process with gitlab and ensure that the tests i have written are running. running tests can be done before committing or pushing the changes to the remote repository automatically. as a result, git hooks are an ideal solution to implement the shift left principle in the ci process. git offers client-side hooks such as pre-commit, prepare-commit-msg, pre-push, post-checkout, pre-rebase, pre-applypatch, post-applypatch, post-rewrite, post-commit, post-merge . with these hooks, you can build your code, run your tests, and analyze code with sonarqube before pushing any changes to your git repo or committing to your local environment. in fact, decisions can be made about the commit message format like a policy, and the commit message can be automatically edited with the prepare-commit-msg hook. in this way, developers will be able to take faster action regarding the content of the commits and the issue in which the change was made. hooks can work on the developer's machine, or they can work on the machine where your remote repo is located. these are called server-side hooks. there are hooks such as pre-receive, update, post-receive. server hooks are used to enforce that commits and changes conform to the project\u2019s policies. hence, any unwanted code will not make it to your remote repo. to give an example, if the branch that the developer has just opened in her local does not comply with the project\u2019s branch name policy, you can prevent this branch from being pushed to the remote repo with the update hook. installing hooks hooks are located in the hooks sub-folder under the .git folder in the main directory of the repository. these files are created automatically when the git repo is initiated. if you make these files executable and delete the .sample extension, they will start working. while the hook samples are shell scripts, they can be written with any language. let\u2019s see git hooks in action! in this example, i have developed a client-side hook on the kloia_exporter pip package repository, which enables us to easily target the prometheus monitoring toolkit and create a client that yields our metrics, with the prometheus monitoring toolkit i developed under kloia. i will ensure that unit tests run through hooks. this is the final structure of my directory. requirements pre-commit(optional). you can use the pip package if you want to write your pre-commit hooks or to create your own executable files without using the package. in the following steps, both methods are described. with flake8, i will deal with formatting issues of python codes. pip3 install flake8 pre-commit i have two hook samples. the first one is prepare-commit-msg. 
i will ensure that the name of the branch is automatically included in the commit messages and that the commit message is automatically formatted, such as what the change is (feature\/fix\/refactor). in my pre-commit hook, i will expect whether the python code i have developed complies with the flake8 rules and the unit tests written will run successfully. prepare-commit-msg suppose my branch names contain the type of enhancement to be made and the issue number. (like fix\/sre-150). my goal is to make the commit message \"[sre-150] : fix - change collector name\" together with the message entered by the developer when i commit the changes. git will automatically give the commit-msg as a parameter to the hook i wrote. in the script below, i split the branch name according to the \u201C\/\u201D delimiter and place them in the commit message and update our file. thus, my commit message will be formatted according to the structure i have determined. #!\/bin\/bash commit_msg_file=$1 branch_name=$(git symbolic-ref --short head) hint=$(cat \"$commit_msg_file\") delimiter=\"\/\" s=$branch_name$delimiter array=(); while [[ $s ]]; do array+=( \"${s%%\"$delimiter\"*}\" ); s=${s#*\"$delimiter\"}; done; if [ -z ${array[0]} ] || [ -z ${array[1]} ] then echo \"${hint}\" > \"$commit_msg_file\" else echo \"[${array[1]}]: ${array[0]} - ${hint}\" > \"$commit_msg_file\" fi pre-commit i will deal with the unit tests and formatting of the python code i have written in the pre-commit part. there are two different methods here. the first one is to create your own executable files and give them as a hook. the second one is to create a yaml config file with pre-commit, which is a pip package, and to run my hook. let's go step by step. way 1 - writing your executable files i run our tests written with the unit test library, which comes as a built-in package on python3, by searching the python files that start with \"test_\" recursively in the directory. then i run flake8. if there is an error in the output of these two functions, i write \"exit 1\" and prevent the commit process from continuing. if there is no problem, if the tests are running properly and there is no problem with formatting, my hook will return \"exit 0\" and will continue the commit process. #!\/bin\/bash function test_code { python3 -m unittest discover -s . -v -f -p \"test_*.py\" } function check_format_of_code { flake8 } test_code && check_format_of_code result=$? if [[ $result -ne 0 ]]; then exit 1 fi first of all, i make the scripts (hooks) i have written executable. then i copy them under the \u201C.git\/hooks\u201D folder, which is a hidden file in my repository. i checkout a branch from the master according to the branch name i have determined. i name it \u201Cfix\/sre-150\u201D. then i check my git stage\/cache. i create a config file named .flake8 in the main directory and let flake8 ignore my comments and exclude the init file of the kloia_exporter module. [flake8] ignore = e501, e123 exclude = __init__.py i am making some changes to our files. next, i examine the stage and my logs. it's time for the commit part. i enter my commit message as \u201Ctest\u201D. then i see that there are 24 unit tests written and they all executed properly with the pre-commit hook. there were no formatting issues. (flake8 did not give any output). then, with the prepare-commit-msg hook, i can see that my commit message is formatted as \u201C[sre-150]: fix - test\u201D. when i view my logs, i see that i could format my commit properly. 
way 2 - using pre-commit pip package another method of using a pre-commit hook is to take the configurations from the yaml file with pre-commit, which is already a pip package, and to run my hook according to entry points. first, i create a yaml file named .pre-commit-config in the main directory of my repo and give my repo list to it. in the yaml file, i can use the already existing repos on github, as well as run scripts or processes that i wrote in my local environment. in my example, i will use flake8\u2019s own github repository. also, i will define an entry for unit tests to be able to run my shell command. repos: - repo: https:\/\/github.com\/pycqa\/flake8 rev: \u20184.0.1\u2019 hooks: - id: flake8 - repo: hooks: - id: unit-test name: unit-test-hook entry: python3 -m unittest discover -s . -v -f -p \u201Ctest_*.py\u201D always_run: true pass_filenames: false then, i give my configuration with the install command of the pip package i have downloaded. this command will run my repos in the yaml file one by one. pre-commit install first of all, i examine my stage for changes. i have some changed files. when i commit them, i see that my flake8 and unit tests are running and outputting, just like pipeline stages, step by step. finally, when i examine my git logs, i see that my changes have been successfully committed. conclusion you can avoid sending unformatted and untested code by using git hooks. just create your pre-commit, pre-push, prepare-commit-msg files and put them under the .git\/hooks folder of your repository. thus, on the more left pipeline of ci, you will be able to face the problems. you will apply the shift left principle. you can give git hooks a try."
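As noted above, the hook samples ship as shell scripts but hooks can be written in any language. As a rough illustration of that point (not from the original post), the same pre-commit checks, unittest discovery followed by flake8, can be expressed as a small Python script saved as .git/hooks/pre-commit and made executable; it assumes flake8 is installed and that tests follow the test_*.py naming convention.

#!/usr/bin/env python3
# A rough Python equivalent of the bash pre-commit hook above (hooks may be written
# in any language). Save it as .git/hooks/pre-commit and make it executable.
# Assumes flake8 is installed and unit tests follow the test_*.py naming convention.
import subprocess
import sys

def run(cmd):
    """Run a command and return its exit code, streaming output to the terminal."""
    return subprocess.call(cmd)

def main():
    # 1. Run the unit tests (same discovery pattern as the bash version).
    if run([sys.executable, "-m", "unittest", "discover",
            "-s", ".", "-v", "-f", "-p", "test_*.py"]) != 0:
        sys.exit(1)  # a non-zero exit aborts the commit
    # 2. Check formatting with flake8.
    if run(["flake8"]) != 0:
        sys.exit(1)

if __name__ == "__main__":
    main()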
},
{
"title":"Kloia Has Acquired Holizon: Now Stronger In The Observability Domain",
"body":"Founded in 2015 with the assistance of Teknogiri\u015Fim Capital, Holizon is a company that specializes in Application Performance Monitoring and Automation (APM). It has made a name for itself in the business world as a forward-thinking technology company that places a high priority on research and development and strives to provide comprehensive solutions to its customers. For example, it h...",
"post_url":"https://www.kloia.com/blog/kloia-has-acquired-holizon-now-stronger-in-the-observability-domain",
"author":"Yetiskan Eliacik",
"publish_date":"26-<span>Dec<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/yetiskan-eliacik",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/Holizon-kloia-merge-launch.jpeg",
"topics":{ "observability":"observability","kloia":"kloia","holizon":"holizon","domain":"domain","humio":"humio","grafana":"grafana" },
"search":"26 <span>dec</span>, 2021kloia has acquired holizon: now stronger in the observability domain observability,kloia,holizon,domain,humio,grafana yetiskan eliacik founded in 2015 with the assistance of teknogiri\u015Fim capital, holizon is a company that specializes in application performance monitoring and automation (apm). it has made a name for itself in the business world as a forward-thinking technology company that places a high priority on research and development and strives to provide comprehensive solutions to its customers. for example, it has completed numerous projects using infrastructure and application performance monitoring products, starting with energy monitoring and progressing through r&d projects. vismon.io informatics valley also completed the development of the service-based and monitoring product in turkey and was successful in importing this product into other countries. it was through the recommendation of a former holizon customer that kloia and holizon's paths crossed again. as a result of their previous monitoring experience, two of the company's founding partners decided to collaborate and join forces under the kloia umbrella in order for the company to concentrate more on this field and improve the quality of tracking for its customers. the initial goal of kloia was to take a \"full potential\" approach to identify all potential areas of improvement and implement those changes. in contrast to the traditional approach of simply integrating holizon to capture the most obvious synergies, a full-potential approach generates improvements for holizon while simultaneously capturing all synergies and capitalizing on the opportunity to upgrade kloia. the distributed system design, microservice architecture, and devops processes used by kloia, as well as traceability, are critical to ensuring transformational continuity in the organization. we were able to take advantage of kloia's expertise in this area by establishing cooperation in this area to increase the site's end-to-end traceability from the customer infrastructure to the service provided. this meant strengthening together and creating a perfect customer experience, which we utilized. it was the goal of this collaboration to improve the process not only with the products developed in collaboration with kloia, but also with the products that were already installed by performing the necessary monitoring and infrastructure analysis in accordance with the needs of the clients. is there anything we can expect from this merger in the near future? as a result of this union of forces, the most experienced teams in turkey and uk and around the world in products such as the instana observability platform, the humio log management product, and grafana have been established. we will continue to provide our customers with world-class service. in addition, we conduct in-depth customer research in a variety of areas and continue to invest in new products and technological advancements. we were able to realize the first benefits of this merger by taking on the renewal of the instana apm license as well as a local maintenance and consulting service project for one of turkey's leading financial institutions."
},
{
"title":"Creating Prometheus Custom Exporters with kloia_exporter Pip Package",
"body":"Creating Prometheus Custom Exporters is easy. If you are using Prometheus, which is one of the most popular and most used monitoring and alerting toolkits, you don\u2019t need to worry about creating Prometheus Client when you want to yield metrics. kloia_exporter, which is the pip package that can be used to create REST API and yield metrics to Prometheus so that you can just focus on your a...",
"post_url":"https://www.kloia.com/blog/creating-prometheus-custom-exporters-with-kloia_exporter-pip-package",
"author":"Muhammed Said Kaya",
"publish_date":"26-<span>Dec<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/muhammed-said-kaya",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/Prometheus-Custom-Exporter-with-kloia_exporter.jpeg",
"topics":{ "prometheus":"Prometheus","customexporter":"CustomExporter" },
"search":"29 <span>dec</span>, 2021creating prometheus custom exporters with kloia_exporter pip package prometheus,customexporter muhammed said kaya creating prometheus custom exporters is easy. if you are using prometheus, which is one of the most popular and most used monitoring and alerting toolkits, you don\u2019t need to worry about creating prometheus client when you want to yield metrics. kloia_exporter, which is the pip package that can be used to create rest api and yield metrics to prometheus so that you can just focus on your applications. you can import kloia_exporter and easily create http api as a target to prometheus, and whatever you want to yield, you can. for example, metrics might be request times for a server or a number of total connected users. let\u2019s see custom exporter in action! example - custom exporter for couchbase metrics i will develop a custom exporter which gets metrics from the couchbase server by using python sdk and yields them as a target for prometheus. there are some requirements to be able to develop a custom exporter. firstly, i will install kloia_exporter from the github kloia repository and couchbase python sdk. pip3 install git+https:\/\/github.com\/kloia\/prometheus-custom-exporter pip3 install couchbase ansible will be used for installing our custom exporter. i will create these files step by step. after that, i am going to run just the playbook. this playbook will create a systemd service which serves on a defined port as a target for prometheus. thus, i will scrape the metrics. this is the final structure of my directory. step 1 - create exporter.py here is the script to help you develop a custom exporter. from kloia_exporter import api, config from data_layer import datalayer couchbase_config = config.get_config_info(\"service_check.ini\", \"couchbase\") dao = datalayer(couchbase_config) metric_inputs = [ { \"metricname\": \"totalusers\", \"helptext\": \"total users\", \"labels\": [\"labelkey\"], \"collect\": lambda metricfamily: metricfamily.add_metric( [\"labelvalue\"], dao.get(\u201Cselect count(*) from kloia\u201D)[0][\u201C$1\u201D] ) } ] api(int(couchbase_config[\"port_number\"]), metric_inputs=metric_inputs).listen() let\u2019s go over these step by step: from kloia_exporter import api, config from data_layer import datalayer i will import the api and config classes from kloia_exporter to create rest api, which yields our metrics and reads our config files. i will also create data_layer.py in the following steps to be able to connect to the couchbase server. couchbase_config = config.get_config_info(\"service_check.ini\", \"couchbase\") i will get the couchbase section by reading the configuration file which is named service_check.ini. it includes some credentials and ports. dao = datalayer(couchbase_config) i will connect to the couchbase server. metric_inputs = [ { \"metricname\": \"totalusers\", \"helptext\": \"total users\", \"labels\": [\"labelkey\"], \"collect\": lambda metricfamily: metricfamily.add_metric( [\"labelvalue\"], dao.get(\u201Cselect count(*) from kloia\u201D)[0][\u201C$1\u201D] ) } ] i will define a list that includes objects. these objects must contain some keys metricname, helptext, labels, and the collect lambda function. by defining a collect function, prometheus client calls it and yields the metric from the port. in our case, i will get the count of total users on the system by querying couchbase server. this number will be represented as my metric. 
api(int(couchbase_config[\"port_number\"]), metric_inputs=metric_inputs).listen() i will give the list to the api classes, which are imported from kloia_exporter package, as a parameter. it will create the prometheus client for me. step 2 - create data_layer.py here is the script to connect couchbase server by using couchbase python sdk. i will give the credentials, that is on service_check.ini, as a parameter to the datalayer object on the exporter.py. it will create a connection. this will allow me to get my metrics by writing n1ql queries. from couchbase.cluster import cluster from couchbase.auth import passwordauthenticator import logging class datalayer(): def __init__(self, args): self.args = args try: self.cluster = self.__connect_db() self.bucket = self.cluster.bucket(\"kloia\") self.collection = self.bucket.default_collection() except exception as exp: logging.error(exp) def __get_authenticator(self): if self.args[\"user_name\"] and self.args[\"password\"]: return passwordauthenticator(self.args[\"user_name\"], self.args[\"password\"]) return none def __get_conn_str(self): if self.args[\"cluster\"]: return \"couchbase:\/\/\" + self.args[\"cluster\"] return none def __connect_db(self): try: authenticator = self.__get_authenticator() conn_str = self.__get_conn_str() return cluster(conn_str, authenticator=authenticator) except exception as exp: logging.error(exp) return none def get(self, queryprep): try: res = self.cluster.query(queryprep) return res.rows() except exception as exp: logging.error(exp) return [] the custom exporter is ready. i need to install it by using ansible as a systemd service. step 3 - create service_check.ini the config file is as follows. it includes the prometheus client\u2019s port number and some credentials for connecting to the couchbase server. [couchbase] port_number= cluster= user_name= password= step 4 - give variables from default vars and group vars. i need to define couchbase credentials to connect properly. these variables must be updated before running the playbook. also, from default vars, i am declaring prometheus client\u2019s port number. ansible\/group_vars\/all.yaml ( we need to update ) couchbase_user_name: couchbase_user_name couchbase_password: couchbase_password ansible\/couchbase_exporter\/defaults\/main.yaml couchbase_exporter_path: \/opt\/couchbase-exporter couchbase_exporter_port_number: 9900 couchbase_exporter_cluster: localhost step 5 - create systemd service file ansible\/couchbase_exporter\/templates\/couchbase-exporter.service [unit] description=metric exporter service [install] wantedby=multi-user.target [service] user=monitoring group=monitoring workingdirectory= execstart=python3 \"\/exporter.py\" restart=always step 6 - create ansible handlers after tasks are done about the systemd files, i need to reload the daemon. so, i need a handler for notifying ansible. this will allow me to restart the systemd service. ansible\/couchbase_exporter\/handlers\/main.yaml --- - name: \"restart couchbase-exporter\" systemd: name: couchbase-exporter daemon_reload: true state: restarted step 7 - create ansible tasks here are the tasks to install a custom exporter on a target as a systemd service. ansible\/couchbase_exporter\/tasks\/main.yaml monitoring user and group are created. - name: create monitoring user user: name: monitoring - name: create monitoring group group: name: monitoring exporter path with correct permissions and the correct user group is created. 
- name: create exporter directory file: state: directory owner: monitoring group: monitoring path: \"\/\" mode: 0750 custom exporter python files are uploaded. - name: upload exporter lib files copy: src: lib\/ dest: \"\/\" mode: 0644 directory_mode: \"0755\" owner: monitoring group: monitoring configuration file is uploaded. - name: upload service_check.ini file template: src: service_check.ini dest: \"\/\" mode: u+rw,g-wx,o-rwx owner: monitoring group: monitoring systemd service file is uploaded. - name: upload exporter systemd files template: src: couchbase-exporter.service dest: \/usr\/lib\/systemd\/system\/couchbase-exporter.service mode: 0644 owner: monitoring group: monitoring notify: \"restart couchbase-exporter\" systemd service enabled. - name: enable exporter systemd systemd: name: couchbase-exporter daemon_reload: true state: started enabled: true service status is checked. - name: flush handlers meta: flush_handlers - name: get services status ansible.builtin.service_facts: - name: check if couchbase-exporter is running ansible.builtin.assert: quiet: true that: ansible_facts.services['couchbase-exporter.service']['state'] == 'running' fail_msg: couchbase-exporter.service is not running step 8 - create a playbook and host.ini i will create a playbook that is named application_exporters.yaml and update host.ini ansible\/application_exporters.yaml - hosts: couchbase_exporter become: true roles: - couchbase_exporter ansible\/host.ini [couchbase_exporter] localhost step 9 - run ansible playbook just run the playbook, then the custom exporter will be ready. it will yield metrics on the \u201C9900\u201D port. ansible-playbook -i hosts.ini application_exporters.yaml step 10 (optional) - add exporter as a target to prometheus you can add exporter\u2019s 9900 port to prometheus as a target with the following snippet: scrape_configs: - job_name: 'couchbase_exporter' metrics_path: \/metrics scheme: http static_configs: - targets: { % for host in groups['couchbase_exporter'] % } - \":9900\" { % endfor % } relabel_configs: - source_labels: [__address__] regex: \"([^:]+):.+\" target_label: \"instance\" replacement: \"$1\" after that, prometheus will scrape this exporter\u2019s metrics. conclusion it is important to present the metrics and live with the data, make inferences from the data, and then make decisions accordingly. it is very easy to create a prometheus target that collects our data with the kloia_exporter pip package. give kloia_exporter a try."
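For readers who want to see roughly what the api class is doing under the hood, here is a minimal sketch of the same collect-callback idea written directly against the official prometheus_client library instead of kloia_exporter. It is an illustration, not the package's actual implementation: the metric name, help text, label, and port mirror the example above, and a constant stands in for the Couchbase query result.

# A minimal sketch of the same collect-callback idea using the official
# prometheus_client library instead of kloia_exporter (illustration only).
# Metric name, help text, label and port mirror the example above.
import time
from prometheus_client import start_http_server
from prometheus_client.core import GaugeMetricFamily, REGISTRY

class TotalUsersCollector:
    def collect(self):
        gauge = GaugeMetricFamily("totalusers", "total users", labels=["labelkey"])
        # In the real exporter this value would come from the Couchbase query
        # "select count(*) from kloia"; a constant stands in for it here.
        gauge.add_metric(["labelvalue"], 42)
        yield gauge

if __name__ == "__main__":
    REGISTRY.register(TotalUsersCollector())
    start_http_server(9900)          # serve /metrics on the port used in the post
    while True:
        time.sleep(60)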
},
{
"title":"Karpenter Cluster Autoscaler",
"body":"There are three main options for autoscaling in Kubernetes clusters. HPA (Horizontal Pod Autoscaling), VPA (Vertical Pod Autoscaling) and Cluster Autoscaling. I will talk about Cluster Autoscaling in this blog post. As you already know, many Kubernetes operators have been using the Official Cluster Autoscaler for years. Cluster Autoscaler works perfectly for most environments. AWS has de...",
"post_url":"https://www.kloia.com/blog/karpenter-cluster-autoscaler",
"author":"Emin Alemdar",
"publish_date":"24-<span>Dec<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/emin-alemdar",
"featured_image":"https://lh4.googleusercontent.com/E2f_YXx2Hy2F_dYkAQ1meSTKsCkoMIu9zLsGm7cDYSvIdWA7jxuhNuQdyJwiYvo9SBFFzQdfFTL_r6dm25t2tZafcU9yWhOQMX4GHPDYBuilEEQ_BmNSE92ajHjHMmaLqV0JerFR",
"topics":{ "cluster":"cluster","karpenter":"karpenter" },
"search":"01 <span>aug</span>, 2024karpenter cluster autoscaler cluster,karpenter emin alemdar there are three main options for autoscaling in kubernetes clusters. hpa (horizontal pod autoscaling), vpa (vertical pod autoscaling) and cluster autoscaling. i will talk about cluster autoscaling in this blog post. as you already know, many kubernetes operators have been using the official cluster autoscaler for years. cluster autoscaler works perfectly for most environments. aws has developed and published an open source cluster autoscaling tool called karpenter and released the ga version at re:invent 2021. right now, karpenter only supports aws as the underlying cloud provider but i believe this will extend in the future with contributions from the community. the tool aims to simplify the autoscaling configurations. with karpenter, we don\u2019t have to worry about configuring node pools beforehand, we don\u2019t have to worry about right-sizing the compute resources beforehand. that improves the application availability and minimizes the operation overhead. also, this helps with cost optimization. karpenter watches the events in the kubernetes cluster and resource requests for workloads. after detecting unschedulable pods karpenter makes the decisions about creating and terminating the nodes. it doesn\u2019t use node groups, instead it uses launch templates for nodes and you can configure custom launch templates for your needs. with karpenter, as operators we only need to configure provisioner crds. we can configure provider options, annotations, taints and most importantly ttl second values for node termination. with the ttl options karpenter cordons the nodes, drains all the pods and finally safely deletes the node. we can also add multiple provisioners with different configurations to a single cluster and separate our workloads within these provisioners. so, the question is how karpenter differentiates from cluster autoscaler. mostly these two tools are doing the same thing, that is managing nodes in your cluster on your behalf. but there are three main differences between the two. let me explain those. karpenter allows you to use all the flexibility of the cloud. that means you use all of the ec2 instance types aws has to offer. also, you can choose the purchase options like on-demand and spot, availability zone options. karpenter does not use node groups. karpenter manages each instance directly without configuring any other orchestration mechanism. in cluster autoscaler, you need to configure node groups for each instance type, purchasing options. that brings operational overhead. with karpenter you don\u2019t need to rely on the kube-scheduler. when cluster autoscaler launches a node, it doesn\u2019t bind the pods to those nodes. kube-scheduler makes that decision. but karpenter uses a scheduler plugin for these operations. this plugin creates a v1\/binding object and injects the node\u2019s information into the pod without waiting for the node to become ready. by doing that, when a node becomes ready, pod schedules on that node immediately. this approach saves some time off of latency. let\u2019s see karpenter in action! i\u2019ve prepared a github repository for this demonstration. you can follow the instructions in this blog post and codes from the repository to try karpenter. in the repo, there are some terraform codes to deploy an eks cluster, necessary iam roles for karpenter and of course karpenter controller itself. 
also, there are some yaml files for example provisioner and application deployments as well. i\u2019ve deployed the eks cluster and karpenter controller already. i only have one node group in the cluster and one node inside the node group. i\u2019ll only use this node for karpenter resources. as you can see from the screenshot, the karpenter helm chart creates a namespace, two deployments, and some configmaps. the kubectl patch configmap config-logging -n karpenter --patch '{\"data\":{\"loglevel.controller\":\"debug\"}} command changes the log level to debug. karpenter also creates some role, rolebinding, clusterrole and clusterrolebinding resources. after deploying karpenter, you can inspect those resources too. by the way, i\u2019m using an alias for kubectl as k . next, i am going to create provisioners for different node types. provisioner resources are just crds and easily configurable as kubernetes yaml definition files. you can find the examples in the github repository as well. first one is for amd64 type architecture. as you can see, in the spec.requirements section i\u2019ve defined three parameters. first one is purchasing options, the second one is for aws availability zones and the final one is for node architecture. i\u2019ve also added the instance profile to the nodes that karpenter will create. i haven\u2019t added any instance types for this provisioner but you can add them to the requirements section if you want specific instance types for your workloads. with this configuration, karpenter will choose the right instance type and size for my application. i will create a deployment for this amd64 provisioner. i will create the deployment with 0 replicas and scale it afterward. in the screenshot above, you can see i\u2019ve added a nodeselector that has the necessary parameter to match the amd64 provisioner. now, let\u2019s scale the deployment and see what happens. it\u2019s time to see the logs of the karpenter controller. i use the kubectl logs -f -n karpenter $(kubectl get pods -n karpenter -l karpenter=controller -o name) command for this. here is the output. let me explain what is happening in this output. first, karpenter detects there are some unschedulable pods in the cluster and triggers the node provisioning action. it reads some information from aws like ec2 instance types with amd64 architecture in the specified region, subnets, and security groups with the tag kubernetes.io\/cluster\/karpenter-cluster. next, karpenter excludes some instance types because it detects that my workload will not fit into those instances. it decides to launch 1 instance for 10 pods from the listed instance type options. that means i won\u2019t under-provision or over-provision anything in my aws account. that's one of the golden rules in cost optimization when using public cloud providers and karpenter does it for me. then karpenter creates a launch template for the right instance and launches it. finally, karpenter schedules the pods into the newly launched ec2 instance and starts waiting for unschedulable pods. as you can see, the node is added to the cluster. it is not in a node group and i didn\u2019t have to configure any node groups before deploying the application for it. also, i didn\u2019t have to configure the instance type or anything related to instance configuration. that saves a lot of time. perfect! now i\u2019m moving on to another provisioner. this time i\u2019ve created a provisioner for arm64 architecture. let me deploy a new application for this provisioner. 
i\u2019m going to scale this application from 0 to 10. let\u2019s see the logs again. starting from line 3, karpenter detects the scaling of the application and starts provisioning a node with arm processor. it again excludes the tiny instance types, creates another launch template for this node, launches the instance, and schedules the pods into that node. again, i didn\u2019t have to configure anything prior to deploying the application except creating the provisioner. like i\u2019ve mentioned before, karpenter can also provision spot instances. i\u2019m going to follow the same procedures as before. create a provisioner for spot instances, deploy an application with the right node selector configuration, and scale the deployment from 0 to 5. as you can see, karpenter does its job and launches the spot instance. you can see the node architecture details and instance type details with this command: kubectl get no -l node.kubernetes.io\/instance-type,kubernetes.io\/arch,karpenter.sh\/capacity-type when i look at the instance details, i can see that the last created node is actually a spot instance. finally, let me delete the deployments and see how karpenter handles that. as you\u2019ve already seen, my provisioner configurations have the ttlsecondsafterempty parameter set to 30. with this configuration parameter, karpenter adds the ttl (time to live) to nodes, drain the nodes, and after ttl period safely removes the nodes. when i run the k get nodes command, i can see the nodes are in scheduling disabled status. the process can be seen from the logs output screenshot above. conclusion the karpenter project is in the early stages right now. but if you are running your kubernetes clusters on aws and tired of configuring different node groups for every workload that needs something different or managing the configuration of cluster autoscaler, you can give karpenter a try. it is simple, efficient and promising."
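The kubectl command above can also be reproduced with the Python Kubernetes client if you prefer to script the check. The snippet below is a small sketch, not from the original post: it assumes the kubernetes pip package and a working kubeconfig for the cluster, and prints the instance type, architecture, and Karpenter capacity-type label for every node.

# A small sketch (assuming the "kubernetes" pip package and a working kubeconfig)
# that prints the same node details as the kubectl command in the post:
# instance type, architecture and Karpenter capacity type for every node.
from kubernetes import client, config

def main():
    config.load_kube_config()        # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        labels = node.metadata.labels or {}
        print(
            node.metadata.name,
            labels.get("node.kubernetes.io/instance-type", "-"),
            labels.get("kubernetes.io/arch", "-"),
            labels.get("karpenter.sh/capacity-type", "-"),   # "spot" or "on-demand"
        )

if __name__ == "__main__":
    main()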
},
{
"title":"Kloia Is Named APN Social Impact Partner Of The Year",
"body":"Kloia Software and Consulting Ltd. (Kloia) is recognized as the 2021 Amazon Web Services, Inc. (AWS) Social Impact Partner of the Year in EMEA for their support of communities that help to improve the AWS know-how, DevOps approach, and Cloud-Native culture. The APN Partner Awards recognize members of the Amazon Web Services (AWS) Partner Network (APN) who are leaders in the channel and p...",
"post_url":"https://www.kloia.com/blog/kloia-is-named-apn-social-impact-partner-of-the-year",
"author":"Serkan Bing\u00F6l",
"publish_date":"21-<span>Dec<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/serkan-bingol",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/AWS-Social-Impact-Partner-of-the-Year-1.jpeg",
"topics":{ "aws":"AWS","partner":"partner","apn":"APN" },
"search":"09 <span>mar</span>, 2022kloia is named apn social impact partner of the year aws,partner,apn serkan bing\u00F6l kloia software and consulting ltd. (kloia) is recognized as the 2021 amazon web services, inc. (aws) social impact partner of the year in emea for their support of communities that help to improve the aws know-how, devops approach, and cloud-native culture. the apn partner awards recognize members of the amazon web services (aws) partner network (apn) who are leaders in the channel and play a key role in helping customers to drive innovation and build solutions on aws. the apn social impact award is given to the partner that has created an innovative solution that positively impacts society, uses aws best practices, and demonstrates customer obsession. as kloians, our specific focus is on containerization and orchestration including kubernetes, ecs and eks, continuous integration & continuous delivery, observability, performance and cost optimization, infrastructure as code and devsecops. kloia have two aws community builders in the containers category within our team. also, kloia have an aws apn ambassador and an aws migration ambassador. our team members are always sharing their knowledge and experiences with the community by organizing events and webinars, publishing blog posts, attending online and face-to-face events for representing aws services and cloud-native technologies. kloia are proud of our leadership role in guiding communities toward innovation on cloud, and kloia will continue to support communities sharing the changes in technology, new trends, innovative solutions and help them successfully understand and implement aws cloud services to improve their business values on their cloud journey."
},
{
"title":"AWS re:Invent 2021: AN APN AMBASSADOR VIEW",
"body":"Many blogs have covered what was announced at AWS re:Invent this year. My favorites are from Resmo, Donkersgood and AWS Official Blog. I will l not repeat what has been already said in those posts; I will talk more about the untold or less mentioned things at the re:Invent from an APN Ambassador\u2019s perspective. Are the new Serverless Services really serverless? There were several serverle...",
"post_url":"https://www.kloia.com/blog/aws-reinvent-2021-an-apn-ambassador-view",
"author":"Derya (Dorian) Sezen",
"publish_date":"16-<span>Dec<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/derya-dorian-sezen",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/aws-reinvent-2021-an-apn-ambassador-view.jpeg",
"topics":{ "aws":"AWS","serverless":"serverless","reinvent2021":"Re:Invent2021","ambassador":"Ambassador" },
"search":"20 <span>dec</span>, 2021aws re:invent 2021: an apn ambassador view aws,serverless,re:invent2021,ambassador derya (dorian) sezen many blogs have covered what was announced at aws re:invent this year. my favorites are from resmo, donkersgood and aws official blog. i will l not repeat what has been already said in those posts; i will talk more about the untold or less mentioned things at the re:invent from an apn ambassador\u2019s perspective. are the new serverless services really serverless? there were several serverless services announced during re:invent, which is pretty good. on the other hand, we started seeing per-hour pricing, on services like managed kafka: https:\/\/aws.amazon.com\/msk\/pricing\/ this change gives an impression that due to the provisioning nature of clusters, newly announced serverless services do not run like lambdas. considering the provisioning and deprovisioning of kafka\/emr clusters, aws seems to be introducing hourly pricing for serverless. this is a major step forward. especially, for services like emr, where we run batch tasks, serverless makes a lot of sense. most popular question @apn booth to me: how can i find the right partner? i was appointed to stay in the aws partner network (apn) booth, where i had the opportunity to meet with the attendees and answer their questions. during those conversations, i noticed that finding the right partner is one of the most common problems that aws customers face. based on what i heard, i felt that the partner search page needed to be smarter, not only with partner skills but also with price and engagement model matching. engagement model differs from partner to partner. some partners prefer to provide managed services that include project management, but some partners provide dedicated teams to the customer, and the customer manages the project. there are other variations, and it seems that customers have difficulty identifying the engagement model that meets their needs. i believe those engagement models should be structured and categorized, which will let the customers find the relevant partner. price model also differs from partner to partner. although we can see that time & material (t&m) model is the preferred one, partners are still asked to provide time and budget estimates to the customer or to aws. partners also have a person\/day pricing which may be based on the engagement duration or dedication. all those can be modeled in a way that gives the customer an idea of how each partner does pricing. i have listened to multiple customers who were approached by the \u201Cbig four\u201D, yet they were not a good match.. there were a several cases where a new-age aws partner would have been a better fit. it seems that there is work to be done on the partner portal and partner search functions which will definitely help aws customers to find the right matches. outposts are still very limited! outposts of new 1u and 2u servers are really promising. i had a chance to discuss those directly with the product team, which was awesome! initially, i thought that those new units can be installed by the partners but it seems that only aws engineers could to that. i can see two use-cases for outposts: latency data protection laws i have not seen any customer who dropped or avoided aws because of latency, but there could be scenarios where low latency is critical. however, i have seen customers that failed to meet local data protection laws because there were no aws regions in the country. 
those customers seem to be interested in outposts. although outposts are announced, they are still not supported in several countries. for example, our customers in turkey are eager for the service, yet it seems that they would have to wait. emphasis on modernization you may have only noticed the mainframe modernization in the main announcements but i can say that aws\u2019s focus on modernization is huge! i have been invited to the enterprise workloads partner executive briefing luncheon, where certain numbers from market research have been presented under nda, which showed the potential for modernization. expo hunt! this is worth mentioningi am not an expo hunter but i noticed so many swag hunters around :) you had to be quick! to make it more challenging, you are provided a \u201Cquest list\u201D, which includes several missions to accomplish. there were many exclusive events and rare items too, such as: randomly given vouchers closed sessions (executive briefings, luncheons, ...) apn ambassador\/aws hero meet & greets there was so much swag to pick from, some of which could only be accessed by certain people or randomly, which was interesting. microservice extractor for .net finally, i would like to mention a new tool: microservice extractor for .net. this is important for us, because we provided so much feedback to the aws product team on this, and applied it on several .net projects, to report how it works and create issues out of it. we were announced as the launch partner for this tool which was kinda a surprise for us. (we were not prepared). this tool helps in extracting certain parts of a monolith into distinct apis. i cannot say it splits the monolith, because \u201Csplitting the monolith '' has further challenges such as changing data models and creating separate databases. the tool is a promising step forward for the companies who are stuck in a monolith which affects their development efficiency. more information for that tool can be found in prasad rao \u2018s and tom moore \u2018s post. re:play it will not be fair to conclude this post without mentioning the closing party: re:play. there were several retro game machines at the party, which people like my age were into :) dj zedd was awesome! after playing all those retro games, having his zelda mix was amazing! in this post, i wanted to share the under-mentioned aws re:invent experiences, from the perspective of an aws ambassador. if you have any questions regarding anything related to re:invent, don't hesitate to comment below. but don\u2019t forget: what happens in vegas, stays in vegas!"
},
{
"title":"Using Amazon\u2019s Kubernetes Distribution Everywhere with Amazon EKS Distro",
"body":"Using Kubernetes on Public Cloud is easy. Especially if you are using Managed Services like Amazon EKS (Elastic Kubernetes Service). EKS is one of the most popular and most used Kubernetes Distribution. When using EKS, you don\u2019t need to manage the Control Plane nodes, etcd nodes or any other control plane components. This simplicity allows you to focus just on your applications. But in r...",
"post_url":"https://www.kloia.com/blog/using-amazons-kubernetes-distribution-everywhere-with-amazon-eks-distro",
"author":"Emin Alemdar",
"publish_date":"30-<span>Nov<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/emin-alemdar",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/kloia-180.jpeg",
"topics":{ "kubernetes":"Kubernetes","eks":"EKS","amazon":"Amazon","eks-distro":"EKS Distro" },
"search":"09 <span>feb</span>, 2023using amazon\u2019s kubernetes distribution everywhere with amazon eks distro kubernetes,eks,amazon,eks distro emin alemdar using kubernetes on public cloud is easy. especially if you are using managed services like amazon eks (elastic kubernetes service). eks is one of the most popular and most used kubernetes distribution. when using eks, you don\u2019t need to manage the control plane nodes, etcd nodes or any other control plane components. this simplicity allows you to focus just on your applications. but in real life scenarios, you sometimes need to run kubernetes clusters in on-premises environments. maybe because of the regulation restrictions, compliance requirements or you may need the lowest latency when accessing your clusters or applications. there are so many kubernetes distributions out there. most of them are cncf certified. but that means you need to choose from many options, try and implement one that suits your needs. you need to check the security parts of the distribution or try to find a suitable tool for deployment. in other words, we all need a standardization. in december 2020, aws announced the eks distro. eks distro is a kubernetes distribution based on and used by managed amazon eks that allows you to deploy secure and reliable kubernetes clusters in any environment. with eks distro, you can use the same tooling and the same versions of kubernetes and its dependencies with eks. you don\u2019t need to worry about security patching the distribution too because with every version of the eks distro you will get the latest patchings as well and eks distro follows the same eks process to verify kubernetes versions. that means you are always using a reliable and tested kubernetes distribution in your environment. eks distro is an open source project that lives on github. you can check out the repo from this link: https:\/\/github.com\/aws\/eks-distro\/ you can install eks distro on bare-metal servers, virtual machines in your own data centers or even other public cloud provider environments as well. unlike eks, when using eks distro, you have to manage all the control plane nodes, etcd nodes and the control plane components yourselves. that brings some extra operational burdens but without the need of thinking about security or reliability of the kubernetes distro that you are using is a huge benefit. eks deployment options comparison table as you can see from the screenshot above each eks deployment option has its own features. on the right column there are options and features of eks distro. as i\u2019ve mentioned before when using eks distro you need to have your own infrastructure and you need to manage the control plane. also you can use different 3rd party cni plugins according to your needs. biggest difference is that unlike eks anywhere, there are no enterprise support offerings from aws with eks distro. the project is on github and supported by the community. when you have any problems or when you want to contribute to the projects you can file an issue or find solutions from the previous issues on the repository. let\u2019s see eks distro in action! when installing eks distro, you can choose a launch partner\u2019s installation options or you can use familiar community options like kubeadm or or kops. i will demonstrate the installation of eks distro with kubeadm in this blog post. first of all, for the installation with kubeadm, you need an rpm-based linux system. i am using a centos system for this demonstration. 
i have installed docker 19.03 version, disabled swap and disabled selinux on the machine. i will install kubelet, kubectl and kubeadm with the commands below on the machine. i will install the 1.19 version of the kubernetes in this demonstration. sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes cd \/usr\/bin sudo wget https:\/\/distro.eks.amazonaws.com\/kubernetes-1-19\/releases\/4\/artifacts\/kubernetes\/v1.19.8\/bin\/linux\/amd64\/kubelet; \\ sudo wget https:\/\/distro.eks.amazonaws.com\/kubernetes-1-19\/releases\/4\/artifacts\/kubernetes\/v1.19.8\/bin\/linux\/amd64\/kubeadm; \\ sudo wget https:\/\/distro.eks.amazonaws.com\/kubernetes-1-19\/releases\/4\/artifacts\/kubernetes\/v1.19.8\/bin\/linux\/amd64\/kubectl sudo chmod +x kubeadm kubectl kubelet sudo systemctl enable kubelet after enabling kubelet service, i am adding some arguments for kubeadm. sudo mkdir \/var\/lib\/kubelet sudo vi \/var\/lib\/kubelet\/kubeadm-flags.env kubelet_kubeadm_args=\"--cgroup-driver=systemd \u2014network-plugin=cni \u2014pod-infra-container-image=public.ecr.aws\/eks-distro\/kubernetes\/pause:3.2\" i will pull the necessary eks distro container images and tag them accordingly. sudo docker pull public.ecr.aws\/eks-distro\/kubernetes\/pause:v1.19.8-eks-1-19-4;\\ sudo docker pull public.ecr.aws\/eks-distro\/coredns\/coredns:v1.8.0-eks-1-19-4;\\ sudo docker pull public.ecr.aws\/eks-distro\/etcd-io\/etcd:v3.4.14-eks-1-19-4;\\ sudo docker tag public.ecr.aws\/eks-distro\/kubernetes\/pause:v1.19.8-eks-1-19-4 public.ecr.aws\/eks-distro\/kubernetes\/pause:3.2;\\ sudo docker tag public.ecr.aws\/eks-distro\/coredns\/coredns:v1.8.0-eks-1-19-4 public.ecr.aws\/eks-distro\/kubernetes\/coredns:1.7.0;\\ sudo docker tag public.ecr.aws\/eks-distro\/etcd-io\/etcd:v3.4.14-eks-1-19-4 public.ecr.aws\/eks-distro\/kubernetes\/etcd:3.4.13-0 i will add some other configurations as well. sudo vi \/etc\/modules-load.d\/k8s.conf br_netfilter sudo vi \/etc\/sysctl.d\/99-k8s.conf net.bridge.bridge-nf-call-iptables = 1 now let\u2019s initialize the cluster! sudo kubeadm init --image-repository public.ecr.aws\/eks-distro\/kubernetes --kubernetes-version v1.19.8-eks-1-19-4 this output is mostly the same as the usual kubeadm init command output. as you can see from the screenshot, the output has the kubeadm join command for the worker nodes or the configuration for accessing the cluster with the kubeconfig file. by the way, let me do that and access my kubernetes cluster installed with eks distro. sudo mkdir -p $home\/.kube sudo cp -i \/etc\/kubernetes\/admin.conf $home\/.kube\/config sudo chown $(id -u):$(id -g) $home\/.kube\/config let\u2019s run kubectl get nodes command and see the output. as you can see, i am able to connect the cluster and see the kubectl get nodes command output but node is in notready status. the reason is i need to add a pod network addon and install a cni plugin to the cluster. i will use calico cni for this demonstration. sudo curl https:\/\/docs.projectcalico.org\/manifests\/calico.yaml -o kubectl apply -f calico.yaml after installing the calico cni, my master node is now in ready state. \u00A0 i have configured the worker nodes with the same prerequisites like installing docker and disabling swap. i will pull and tag the necessary container image for the kubernetes cluster as well with these commands. 
sudo docker pull public.ecr.aws\/eks-distro\/kubernetes\/pause:v1.19.8-eks-1-19-4;\\ sudo docker tag public.ecr.aws\/eks-distro\/kubernetes\/pause:v1.19.8-eks-1-19-4 i can now move on with adding a worker node to the cluster. i will use the kubeadm join command from the kubeadm init command output. when i run the kubectl get nodes command, i can see the other node in ready state. as you can see my worker node has now joined the cluster and i can see the pods in the kube-system namespace. my kubernetes cluster is installed with eks distro and ready for deploying application workloads! conclusion having a tested, verified and reliable kubernetes distribution for production workloads is extremely crucial. this is why eks is one of the most used and most popular kubernetes distribution. being able to run the same distribution amazon uses with managed eks service on any infrastructure and platform is a huge advantage. if you have some compliance requirements or regulation restrictions and can not use public cloud platforms you can absolutely give eks distro a try."
},
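A quick sanity check after the kubeadm walkthrough above is to ask the API server which build it is actually running. The sketch below is my addition rather than something from the post (which uses kubectl throughout): a minimal Java check that assumes kubectl proxy is running on its default address 127.0.0.1:8001, and that an EKS Distro control plane reports a gitVersion carrying an "-eks-" suffix, in line with the v1.19.8-eks-1-19-4 artifacts used in the walkthrough.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal check against the Kubernetes /version endpoint, assuming `kubectl proxy`
// is running locally so no TLS or token handling is needed.
public class EksDistroVersionCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:8001/version")) // kubectl proxy default address
                .GET()
                .build();

        String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println(body);

        // Assumption: EKS Distro builds carry an "-eks-" suffix in gitVersion,
        // e.g. v1.19.8-eks-1-19-4, matching the image tags used in the post.
        System.out.println(body.contains("-eks-")
                ? "Control plane reports an EKS Distro build."
                : "gitVersion does not look like an EKS Distro build.");
    }
}
```

If your release uses a different suffix convention, adjust the substring check; the point is only that /version is the cheapest place to confirm which distribution kubeadm installed.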
{
"title":"Way to Microservices: Contract Testing - A Spring\/Pact Implemantation",
"body":"Imagine a system having N services, and all of them are loosely coupled. We know that because of the non-stop business needs, new features are added to the system continuously. Also, in real life, the transition of legacy applications to new architectures is not easy at all. Therefore, we must move each module piece by piece to microservices, leading to a constant deployment necessity. H...",
"post_url":"https://www.kloia.com/blog/way-to-microservices-contract-testing-a-spring/pact-implemantation",
"author":"Baran Gayretli",
"publish_date":"11-<span>Nov<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/barangayretli",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/kloia-178.jpeg",
"topics":{ "kubernetes":"Kubernetes","contract-testing":"Contract Testing" },
"search":"01 <span>aug</span>, 2024way to microservices: contract testing - a spring\/pact implemantation kubernetes,contract testing baran gayretli imagine a system having n services, and all of them are loosely coupled. we know that because of the non-stop business needs, new features are added to the system continuously. also, in real life, the transition of legacy applications to new architectures is not easy at all. therefore, we must move each module piece by piece to microservices, leading to a constant deployment necessity. how can we know which part of the application these microservices will affect and whether the other services that consume these services will break or not? this is where contract testing comes in handy. contract testing is a methodology to ensure that two different systems are compatible and can communicate. it captures the information exchanged between each service, storing them in a contract, which can be used to ensure that both sides satisfy it. contract testing is not like schema testing; it requires both client and provider to agree on the interactions and allows for change over time. what is the difference? the difference between contract testing and other approaches is that its primary goal is to test each system independently from the others. the contract is generated by the code, meaning it is always kept up to date with reality. the advantages of contract tests over integration tests contract tests generally have the opposite properties to e2e integrated tests: contract tests run fast because they don't need to communicate with various systems at a time. contract tests are easier to maintain, so that you don't need to understand the whole domain to write tests. contract tests are easy to debug and fix because the problem is only ever in the component you're testing - so you generally get a line number or a specific api endpoint that is failing. contract tests uncover bugs locally: contract tests can and should run on developer machines before deploying the code. from a business point of view, it is well known that the later a bug is found in a project lifecycle, the more costly it is to fix. contract testing keywords before going further, let\u2019s get familiar with the keywords used for contract testing. a contract is a document that prescribes the expected api request, response, path, query parameters, headers, etc. a consumer is a side of a contract that consumes or uses a given api. it is also referred to as a client. a provider is a side of a contract that provides or simply owns the given api. provider-driven contract testing in a provider-driven approach, provider drives the api evolution, publishing new contracts (and associated test doubles) whenever there\u2019s a significant change. the contracts\u2019 main goal is to ensure that the provider implementation satisfies the request\/response specification completely, regardless of whether his clients use all or just a tiny subset of the data in a response. in simple terms, the provider defines a \u201Cthis is what i do and what i expect you to do\u201D kind of contract for his clients, and it\u2019s up to the consumers to select the one that is interesting. pros the provider-driven approach to contract testing allows the provider to have complete control of the api development to decide the naming and versioning. this is useful if you are developing a provider service with a well-defined business domain because you will be the first one to know the new features and their effect on the api. 
this approach is also convenient for a provider that releases a public api and can\u2019t work closely with its consumers. it enables new clients to understand the capabilities of an api just by looking at the published stubs. cons provider-driven contract testing does not care so much about the consumer perspective. it doesn\u2019t stop the provider from producing a hard-to-use api. it\u2019s easy to misunderstand the expectations of consumers towards your api. another downside of not knowing how your consumers use your api is that when you want to modify it in a breaking way, you must assume that all your clients may need your complete response. or, the hard way, ask around and double-check with each of your clients \u2014 if only you know them and have a chance to reach out to them. consumer-driven contract testing in a consumer-driven approach to contract testing, the consumer drives the changes to the provider\u2019s api. the contracts are based on client integrations with the provider rather than specifying the whole request\/response structure. for a provider, it\u2019s a valuable perspective \u2014 each of its clients defines clearly which fields they use. consumer defines a \u201Cthis is what i exactly need\u201D kind of contract for the provider to satisfy. does it mean the provider is no longer the authority of his api and has to follow all the requirements coming from all the consumers? not necessarily. it does not mean that consumers have the right to decide on everything, including naming or data structures. instead, a pull request can be opened in the consumer\u2019s codebase, which consumers and providers may discuss. as a provider, it\u2019s okay to disagree with the proposal and suggest consumers to make some changes. however, the important thing is the discussion between the consumer and provider. if you are the provider, you get to know your client\u2019s needs and clearly see how your consumer will use your service\u2019s api. you can also have an understanding of how your client perceives your domain. pros using the consumer-driven approach to contract tests results in well-designed apis which are easier to use for the consumers. from the provider perspective, the feedback loop on the quality of your api definition is much shorter as it involves the consumers as early as at the design stage. the provider also knows precisely which endpoints or fields from the response are actively used and required by its consumers. it lowers maintenance costs, i.e., makes api\/field deprecation easier \u2014 no more endless texts between you and your clients to find out if anyone uses the xyz field from your response. as a provider, you can simply run a set of tests against the contracts defined by your consumers and get the answer immediately. cons what are the disadvantages then? first of all, there\u2019s no single view of all the capabilities of an api \u2014 they vary between different clients\u2019 contracts. this may put some additional effort into fully understanding the api\u2019s picture on consumer teams that aim to start using it without any background as opposed to the provider-driven approach. regardless of your choice between provider-driven or consumer-driven contract tests, remember that provider and consumer should create contracts in close cooperation. even the most comprehensive tools cannot take over the discussion between the members of two different teams. 
a well-defined contract that is up-to-date delivers a lot of value to both parties, which makes them both responsible for the contract, no matter whose codebase contains the code describing the contract. consumer-driven contract testing using pact java to demonstrate an example of consumer-driven contracts, i prepared the following microservice application built using spring boot: date provider microservice \u2013 \/provider\/validdate \u2013 validates whether the given date is a valid date or not. age consumer microservice \u2013 \/age-calculate \u2013 returns age of a person based on a given date. starting date-provider microservice, which by default runs in port 8080: mvn spring-boot:run -pl date-provider then start age-consumer microservice, which by default runs in port 8081: mvn spring-boot:run -pl age-consumer since the application is developed using spring boot in java, i will use pact for consumer-driven contract testing. pact provides a dsl for defining contracts. in addition, pact offers good integration with test frameworks such as junit, spock, scalatest, as well as with build tools such as maven and gradle. let\u2019s see examples of consumer and provider tests using pact. consumer testing consumer-driven contract testing begins with a consumer defining the contract. first of all, i have to add this dependency to my project: au.com.dius pact-jvm-consumer-junit5 4.0.9 test then add the below dependency to write java 8 lambda dsl to use with junit to build consumer tests. a lamda dsl for pact is an extension of the pact dsl provided by pact-jvm-consumer. au.com.dius pact-jvm-consumer-java8 4.0.9 test consumer tests start with creating requirements on the mock http server. let\u2019s start with the stub: @pact(consumer = \"ageconsumer\") public requestresponsepact validdatefromprovider(pactdslwithprovider builder) { map headers = new hashmap(); headers.put(\"content-type\", \"application\/json\"); return builder .given(\"valid date received from provider\") .uponreceiving(\"valid date from provider\") .method(\"get\") .querymatchingdate(\"date\", \"1998-02-03\") .path(\"\/provider\/validdate\") .willrespondwith() .headers(headers) .status(200) .body(lambdadsl.newjsonbody((object) -> { object.numbertype(\"year\", 1996); object.numbertype(\"month\", 8); object.numbertype(\"day\", 3); object.booleantype(\"isvaliddate\", true); }).build()) .topact(); } the above code is quite similar to what we do with api mocks using wiremock. we can define the input, which is http get method against the \/provider\/validdate path, and the output is the below json body: { \"year\": 1996, \"month\": 8, \"day\": 3, \"isvaliddate\": true } in the above lambda dsl, i used numbertype and booleantype to generate a matcher that checks the type whereas numbervalue and booleanvalue specify a value in the contract. using matchers reduces tight coupling between consumers and producers. values like 1996, 8 are the dummy values returned by the mock server. the next one is the test: @test @pacttestfor(pactmethod = \"validdatefromprovider\") public void testvaliddatefromprovider(mockserver mockserver) throws ioexception { httpresponse httpresponse = request.get(mockserver.geturl() + \"\/provider\/validdate?date=1998-02-03\") .execute().returnresponse(); assertthat(httpresponse.getstatusline().getstatuscode()).isequalto(200); assertthat(jsonpath.read(httpresponse.getentity().getcontent(), \"$.isvaliddate\").tostring()).isequalto(\"true\"); } @pacttestfor annotation connects the pact method with a test case. 
the last thing i need to add before running the test is @extendwith and @pacttestfor annotation with the name of the provider. @extendwith(pactconsumertestext.class) @pacttestfor(providername = \"dateprovider\", port = \"1234\") public class pactageconsumertest { maven command to execute the consumer test is: mvn -dtest=pactageconsumertest test -pl age-consumer the test will pass and the json file containing a contract will be generated in the target directory (target\/pacts). { \"provider\": { \"name\": \"dateprovider\" }, \"consumer\": { \"name\": \"ageconsumer\" }, \"interactions\": [ { \"description\": \"valid date from provider\", \"request\": { \"method\": \"get\", \"path\": \"\/provider\/validdate\", \"query\": { \"date\": [ \"1998-02-03\" ] }, \"matchingrules\": { \"query\": { \"date\": { \"matchers\": [ { \"match\": \"date\", \"date\": \"1998-02-03\" } ], \"combine\": \"and\" } } }, \"generators\": { \"body\": { \"date\": { \"type\": \"date\", \"format\": \"1998-02-03\" } } } }, \"response\": { \"status\": 200, \"headers\": { \"content-type\": \"application\/json\", \"content-type\": \"application\/json; charset=utf-8\" }, \"body\": { \"month\": 8, \"year\": 1996, \"isvaliddate\": true, \"day\": 3 }, \"matchingrules\": { \"body\": { \"$.year\": { \"matchers\": [ { \"match\": \"number\" } ], \"combine\": \"and\" }, \"$.month\": { \"matchers\": [ { \"match\": \"number\" } ], \"combine\": \"and\" }, \"$.day\": { \"matchers\": [ { \"match\": \"number\" } ], \"combine\": \"and\" }, \"$.isvaliddate\": { \"matchers\": [ { \"match\": \"type\" } ], \"combine\": \"and\" } }, \"header\": { \"content-type\": { \"matchers\": [ { \"match\": \"regex\", \"regex\": \"application\/json(;\\\\s?charset=[\\\\w\\\\-]+)?\" } ], \"combine\": \"and\" } } } }, \"providerstates\": [ { \"name\": \"valid date received from provider\" } ] } ], \"metadata\": { \"pactspecification\": { \"version\": \"3.0.0\" }, \"pact-jvm\": { \"version\": \"4.0.9\" } } } every interaction has a: description provider state \u2013 allows the provider to set up a state. request \u2013 consumer makes a request. response \u2013 expected response from the provider. then the generated pact file is published to the pact broker by the consumer. now it\u2019s time for the producers to verify the contract messages shared via pact broker. verifying the contract in this case, the provider is a simple spring boot application. first, i have to add the following dependency to my project: \u00A0 au.com.dius \u00A0 pact-jvm-provider-junit5 \u00A0 4.0.10 i need to define a way for the provider to access the pact file in a pact broker. @pactbroker annotation takes the hostname and port number of the actual pact broker url. with the @springboottest annotation, spring boot provides a convenient way to start up an application context used in a test. @provider(\"dateprovider\") @consumer(\"ageconsumer\") @pactbroker(host = \"localhost\", port = \"8282\") @springboottest(webenvironment = springboottest.webenvironment.random_port) public class pactageprovidertest { i also have to tell pact where it can expect the provider api. @beforeeach void before(pactverificationcontext context) { context.settarget(new httptesttarget(\"localhost\", port)); } @localserverport private int port; then i will publish all the verification results back to the pact broker by setting the environment variable. 
@beforeall static void enablepublishingpact() { system.setproperty(\"pact.verifier.publishresults\", \"true\"); } we will inform the junit about how to perform the test as follows: @testtemplate @extendwith(pactverificationinvocationcontextprovider.class) void pactverificationtesttemplate(pactverificationcontext context) { context.verifyinteraction(); } @state annotation in the method is used for setting up the system to the state expected by the contract. @state(\"valid date received from provider\") public void validdateprovider() { } maven command to execute the provider test is: mvn -dtest=pactageprovidertest test -pl date-provider any changes made by the provider, like adding a new field or removing an unused field in the contract, will not break consumers\u2019 build as they care only about the parameters or attributes in the existing contract. any changes made by the provider, like removing a used field or renaming it in the contract, will violate the contract and break consumers\u2019 build. adding a new interaction by the consumer generates a new pact file, and the same needs to be verified by the provider in the pact broker as well. there is also a more detailed poc for contract testing using pact that we implemented. feel free to check it out! contract testing using pact"
},
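The consumer-test code in the Pact post above survives only as flattened, lowercased text, so here is a re-typed, self-contained sketch of that consumer side with casing restored (a best guess, since the extracted text is all lowercase). It keeps the post's provider and consumer names, endpoint, and matchers; the package names follow the pact-jvm-consumer-junit5 and pact-jvm-consumer-java8 4.0.x modules listed in the post and may need adjusting for newer Pact versions, and the query matcher is chained after .path() for readability.

```java
import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.LambdaDsl;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.apache.http.HttpResponse;
import org.apache.http.client.fluent.Request;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import static org.assertj.core.api.Assertions.assertThat;

// Consumer-driven contract test: the consumer ("AgeConsumer") states exactly what it
// needs from the provider ("DateProvider"), and Pact writes the contract to target/pacts.
@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "DateProvider", port = "1234")
public class PactAgeConsumerTest {

    @Pact(consumer = "AgeConsumer")
    public RequestResponsePact validDateFromProvider(PactDslWithProvider builder) {
        Map<String, String> headers = new HashMap<>();
        headers.put("Content-Type", "application/json");

        return builder
                .given("valid date received from provider")
                .uponReceiving("valid date from provider")
                .method("GET")
                .path("/provider/validDate")
                .queryMatchingDate("date", "1998-02-03")
                .willRespondWith()
                .headers(headers)
                .status(200)
                // Type matchers (numberType/booleanType) check types only, which keeps
                // the coupling between consumer and provider loose.
                .body(LambdaDsl.newJsonBody(body -> {
                    body.numberType("year", 1996);
                    body.numberType("month", 8);
                    body.numberType("day", 3);
                    body.booleanType("isValidDate", true);
                }).build())
                .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "validDateFromProvider")
    public void testValidDateFromProvider(MockServer mockServer) throws IOException {
        HttpResponse response = Request
                .Get(mockServer.getUrl() + "/provider/validDate?date=1998-02-03")
                .execute().returnResponse();

        assertThat(response.getStatusLine().getStatusCode()).isEqualTo(200);
    }
}
```

Running it with the post's command, mvn -Dtest=PactAgeConsumerTest test -pl age-consumer, writes the JSON contract into target/pacts, ready to publish to the Pact Broker.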
{
"title":"Run Amazon ECS Anywhere!",
"body":"Amazon ECS (Elastic Container Service) is a managed service that allows you to run containers on AWS. This service offers a fast, scalable method for managing container workloads on a managed cluster. You can run, stop, manage containers by creating Task Definitions and manage all your workloads with simple API Calls. Before we begin, those who might be interested in Amazon Elastic Kuber...",
"post_url":"https://www.kloia.com/blog/run-amazon-ecs-anywhere",
"author":"Emin Alemdar",
"publish_date":"31-<span>Oct<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/emin-alemdar",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/run-aws-ecs.jpeg",
"topics":{ "amazon-ecs":"Amazon ECS","elastic-container-service":"Elastic Container Service","amazon":"Amazon" },
"search":"16 <span>dec</span>, 2021run amazon ecs anywhere! amazon ecs,elastic container service,amazon emin alemdar amazon ecs (elastic container service) is a managed service that allows you to run containers on aws. this service offers a fast, scalable method for managing container workloads on a managed cluster. you can run, stop, manage containers by creating task definitions and manage all your workloads with simple api calls. before we begin, those who might be interested in amazon elastic kubernetes service, should go to our amazon eks anywhere blog article. ecs anywhere is a feature in ecs service that brings you the ability to run container workloads on your own environment - whether it\u2019s a bare-metal server or a virtual machine. this feature was preannounced at re:invent last year, and now it is generally available. you can run ecs anywhere in your own datacenter or in a co-location. with ecs anywhere, you need to manage your own infrastructure. that means you have to secure the physical machines, network configurations, power and cooling mechanisms etc. but, running container workloads on your own infrastructure will give you low latency or you can continue to use your existing infrastructure investments. if you have any compliance requirements or local regulations that restrict you to run workloads on public cloud environments, you can fulfill those requirements with ecs anywhere as you won\u2019t move any of your applications or data to a public cloud. if you have any plans to move to the cloud, this feature is the first step of that journey. ecs anywhere is a hybrid option. this means you can run your containers on both on-premises and cloud with a standardized container orchestrator. ecs anywhere eases the operation of managing both environments at the same time. you won\u2019t need expertise in different toolsets. ecs anywhere offers a fully managed control plane. with ecs anywhere, you don\u2019t need to run and operate separate container management software for your on-premises workloads. you can configure your task definitions or container definitions with the familiar ecs interface and orchestrate your workloads for both on-premise and cloud from the same place. you will use the same apis, same cluster management operations, workload scheduling methods, and monitoring options when using ecs on cloud. another benefit of using ecs anywhere is the ability to use the cloud as a secondary expandable infrastructure option for your workloads. you can run the base capacity of your application\u2019s needs on your own infrastructure and whenever you need to scale those applications you can use aws for meeting the load on the peak times. amazon ecs anywhere instances are optimized for running applications that generate outbound traffic or processing data. lack of having a load balancing support makes running applications that generate inbound traffic (like a web service) less efficient. the containers running on ecs anywhere instances must use bridge, host or none network options. also, you need a connection to ecs control plane running on aws. ecs anywhere in action i have prepared a demonstration to show you how you can configure and use ecs anywhere in your environment. i have launched two ubuntu virtual machines running on vmware vsphere. first, i need to create an ecs control plane from aws. when creating an ecs cluster i am choosing the networking only mode. next, i will configure the cluster as usual. 
mostly, i am choosing most of the default values but you can change the default configurations according to your needs. after creating the cluster, there are no instances by default as you can see in the screenshot below. for adding external virtual machines to this cluster, i need to register those instances. before registering the instances, i have created the necessary iam role for those instances. as you can see in the screenshot above, there is an iam role named ecsanywhererole. you can follow the instructions from documentation and create your own roles. on the register external instances page, there is a generated registration command. i am going to run this command on both of my instances. curl --proto \"https\" -o \"\/tmp\/ecs-anywhere-install.sh\" \"https:\/\/amazon-ecs-agent.s3.amazonaws.com\/ecs-anywhere-install-latest.sh\" && bash \/tmp\/ecs-anywhere-install.sh --region \"eu-west-1\" --cluster \"ecs-anywhere\" --activation-id \"59a61e67-1d8e-475e-b92b-aa07c6253b63\" --activation-code \"w5azrcqaypd+ygt9+ayi\" you can see the installation for ecs agent, ssm agent, , an activation id, and an activation code for successful registration in this snippet. of course that id and code are going to be unavailable when you read this blog post. after installing necessary packages and agents, you will see an output similar to the screenshot below. now i can see those external instances from the aws console as well. as you can see, there is some information about instance status, agent connection status and the external instances parameter is true. if you add ec2 instances to this cluster you can see those from here as well with external instances parameter set to false. now finally, i will run an example nginx container on those external instances. i am starting with task definition configuration. i have chosen the external launch type compatibility for this task definition. next i will configure the task definition and the container definition as well. for running the task, i have chosen the external launch type as well. now i can see the running task from the ecs console. i can now login to the instance and see the containers from there as well. as you can see from the screenshot, i have two running containers on that node. ecs agent and the nginx container that we have configured from the ecs console. i can now reach that container. it is running as expected. conclusion using the same apis and operating model of ecs is a huge benefit for running container workloads on your environment. you won\u2019t need to configure and manage separate container orchestration tools on different environments. moreover, if you have compliance requirements or regulatory restrictions, ecs anywhere can be the right solution for you."
},
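Everything in the ECS Anywhere post above goes through the console, but the same EXTERNAL launch type can also be driven from code. The sketch below is my addition, not from the post: it uses the AWS SDK for Java v2 to run one task on the external instances registered to the post's ecs-anywhere cluster in eu-west-1, with a placeholder task definition name standing in for whatever you created with EXTERNAL launch type compatibility.

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.LaunchType;
import software.amazon.awssdk.services.ecs.model.RunTaskRequest;
import software.amazon.awssdk.services.ecs.model.RunTaskResponse;

public class RunTaskOnExternalInstances {
    public static void main(String[] args) {
        // Region and cluster name taken from the post; the task definition is a placeholder.
        try (EcsClient ecs = EcsClient.builder().region(Region.EU_WEST_1).build()) {
            RunTaskRequest request = RunTaskRequest.builder()
                    .cluster("ecs-anywhere")
                    .taskDefinition("nginx-external") // hypothetical family name (latest revision)
                    .launchType(LaunchType.EXTERNAL)  // place the task on registered external instances
                    .count(1)
                    .build();

            RunTaskResponse response = ecs.runTask(request);
            response.tasks().forEach(task ->
                    System.out.println(task.taskArn() + " -> " + task.lastStatus()));
            response.failures().forEach(failure ->
                    System.err.println("Failed to place task: " + failure.reason()));
        }
    }
}
```

The EXTERNAL launch type is what tells the scheduler to place the task on registered external instances instead of EC2 or Fargate capacity; everything else is the same RunTask call you would use in the cloud.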
{
"title":"Selenium 4.0 Released: New Features, Comparison with Previous Versions and More",
"body":"Selenium 4.0 is officially released! It includes new features, improvements. I wanted to share with you some important updates on this release. First of all, you need to upgrade your Selenium version to 4.0 and let\u2019s start with upgrading. Upgrade Before introducing the new features, I want to share how to upgrade your Selenium dependencies. The only thing you need to do is change your de...",
"post_url":"https://www.kloia.com/blog/selenium-4.0-released-new-features-comparison-with-previous-versions-and-more",
"author":"Selcuk Temizsoy",
"publish_date":"14-<span>Oct<\/span>-2021",
"author_url":"https://www.kloia.com/blog/author/selcuk-temizsoy",
"featured_image":"https://f.hubspotusercontent20.net/hubfs/4602321/selenium-4-0.jpeg",
"topics":{ "test-automation":"Test Automation","selenium":"Selenium","java":"java","qa":"QA","selenium-4-0":"Selenium 4.0" },
"search":"09 <span>feb</span>, 2022selenium 4.0 released: new features, comparison with previous versions and more test automation,selenium,java,qa,selenium 4.0 selcuk temizsoy selenium 4.0 is officially released! it includes new features, improvements. i wanted to share with you some important updates on this release. first of all, you need to upgrade your selenium version to 4.0 and let\u2019s start with upgrading. upgrade before introducing the new features, i want to share how to upgrade your selenium dependencies. the only thing you need to do is change your dependency version from 3.x.x to 4.0.0 - that\u2019s it. if you are using maven or gradle, just change your version and install the new libraries. capabilities the first new feature i want to mention is simplified capability setting. in the older versions of selenium, you had to set all capabilities in the desiredcapabilities object for setting remote driver capabilities. in 4.0, you can set them with options directly. so you don\u2019t need to define capabilities individually. here is the older usage of capabilities; and here is the new way of using those capabilities; waits before selenium 4.0, you had to send two parameters to use a wait: time and type of time. but now you can use the duration class and types of this class directly. here is the old way of declaring waits; this usage is deprecated in 4.0; here is the new declaration format: here is a list of all durations supported by duration. all of these are accepted as long variable types. keep in mind that you should import java.time.duration; and not any other duration class. on the other hand, you will be able to set wait times in the browser options section. generally, we set wait time in the hooks, but from now on there is a method in the options for setting waits. here are some examples; relative locators while writing automation scripts, finding a locator of an element can be painful. there can be multiple elements, finding the correct xpath might be challenging, the xpath itself might be complex. in the new feature of the selenium, you can use relative locators to define web elements in relation to other elements, such as below, above, toleftof, torightof or near. it makes your code more readable and friendly. this feature is my favorite one in this release. you can think of it like writing a relative xpath but this feature makes it easier to find locators. this is because finding a unique xpath or css selector can be hard and you may want to use the simple relation between elements. selenium 4.0 uses the javascript function getboundingclientrect() to locate these relative elements, therefore it will give correct results any time. let\u2019s have a look at the example below and see the differentiation of the relative locators feature. let\u2019s try to get the label for the password in the above examples. as you can see, there are multiple labels on the page and the best way to locate this label is getting hold of the password input which has an id and moving to the preceding sibling. instead of writing a complex xpath, you can use the above method and get the requested element easily: output => password toleftof\/torightof\/near these methods have the same logic as above and below. just define one unique element and move to the left or right side of this element. let\u2019s examine this example; above is a table, and, most of the time, handling tables might be harmful. 
you should use indexes heavily or find the parent element with text and move the child or vice versa. but with relative elements, you can make this transition smoother. just locate the requested cell and move left, right directly without caring about the relation between elements. let\u2019s click the delete button next to the given website name with the relative elements. or you can use the near method instead of the torightof. notice how i didn\u2019t check any relationships between elements - child, parent or any other. i have just used the above, below, right or left method directly. for using those methods, there is no rule for having any relation between those elements. it will look above elements directly not inside of the dom but on the screen. so this is the tricky part of these methods. opening new windows, tabs for older versions of selenium, if you want to open a new window or tab, you should have a new driver object and use the window handler method for using this object. with selenium 4.0, you can open a new window, or tab with the switchto method easily. let's take a look at an example:. navigate to the one website, open a new window, and navigate to the other website. or you may want to open a new tab, so you need to change windowtype only; devtools protocol with the selenium 4.0 api, you can use chrome devtools such as network or profiler. i will show you how to set your geolocation with devtools; the code block above sets my geolocation to the given coordinates. it is useful for some cases, for instance, if the app requires it to be in a certain location. selenium grid improvements the older version of the selenium grid was complicated and it was not easy to set up. but the new grid comes with built-in docker support and you can easily run the grid in a container rather than preparing virtual machines. besides, you will no longer need to set up nodes and the hub separately with selenium 4.0. the new grid architecture comes with three modes and you can use any of them; standalone mode fully distributed mode hub and node i have walked through the new features of selenium grid and gave some examples of the new features. i hope you will enjoy the new release. at the moment i am working on a new article for setting up selenium grid. stay on the line and keep learning!"
}
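The Selenium post references example snippets that are not present in this text, so here is a small, self-contained Java sketch of the 4.0 APIs it describes: passing ChromeOptions straight to RemoteWebDriver, Duration-based waits, a relative locator for the label above a password field, and switchTo().newWindow(). The grid URL and the page and element locators are placeholders that mirror the post's examples rather than a real application.

```java
import java.net.URL;
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.WindowType;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import static org.openqa.selenium.support.locators.RelativeLocator.with;

public class Selenium4FeaturesSketch {
    public static void main(String[] args) throws Exception {
        // Capabilities: browser options are passed directly, no DesiredCapabilities needed.
        ChromeOptions options = new ChromeOptions();
        WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444"), options);

        // Waits now take java.time.Duration instead of a number plus a time unit.
        driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(5));
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));

        driver.get("https://example.com/login"); // placeholder page with a password field
        wait.until(ExpectedConditions.presenceOfElementLocated(By.id("password")));

        // Relative locator: the label sitting above the password input, as in the post's example.
        WebElement passwordLabel =
                driver.findElement(with(By.tagName("label")).above(By.id("password")));
        System.out.println(passwordLabel.getText());

        // Opening a new tab (or WindowType.WINDOW) without juggling window handles.
        driver.switchTo().newWindow(WindowType.TAB);
        driver.get("https://kloia.com");

        driver.quit();
    }
}
```

The DevTools geolocation example is left out on purpose: the CDP Emulation classes live in a browser-version-specific package, so the exact import depends on the Chrome version you run against.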
];